AstralStorm 5 years ago

Statistics. Garbage in, garbage out.

This shows why you have to explicitly quantify the statistical power and error model for your study design. Tools are good, but not good enough to do it for you.

Then, calculate an effect size instead of a binary answer. With a properly sized, unbiased sample you will get an answer precise to several digits. Note that for a proper genome study, the sample size for observational studies is in the tens of thousands, barring inbred model studies. (Mice or men, with an easily detectable disease process.)
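To make the power point concrete, here is a minimal Monte-Carlo sketch (in Python; the effect size, alpha, and sample sizes are illustrative assumptions, not taken from any particular study) showing why small samples are nearly blind to small effects:

```python
import math
import random

def power_sim(n, d=0.1, z_crit=1.96, trials=400, seed=1):
    """Monte-Carlo power of a two-sample z-test (known sd = 1)
    for a true effect size of d, with n subjects per group."""
    rng = random.Random(seed)
    se = math.sqrt(2.0 / n)  # standard error of the difference in means
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(d, 1.0) for _ in range(n)]
        z = (sum(b) / n - sum(a) / n) / se
        if abs(z) > z_crit:  # two-sided test at alpha = 0.05
            hits += 1
    return hits / trials

print(power_sim(50))    # small n: barely above the 5% false-positive rate
print(power_sim(5000))  # n in the thousands: power close to 1
```

With d = 0.1 the simulation detects the effect almost every time at n = 5000 per group, but only slightly more often than chance at n = 50, which is the whole problem with underpowered candidate-gene studies.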

  • wallace_f 5 years ago

    It's amazing how convincing statistics are--even for supposed experts--and not just for the reason you elaborate on.

    In 2019, they really are the best way to lie.

    • lm28469 5 years ago

      Science has become a new religion. Instead of saying "shut up, God made it that way" you say "shut up, science says it's that way".

      You can find hundreds of studies to support your arguments, no matter what they are; it's even easier if you don't actually read the studies and stop at the summaries made by the popular media.

      • TheOperator 5 years ago

        Science has certainly become a fantastic way to bullshit people. Combined with how science is funded now, it's created a weird status quo where things that wealthy interest groups want to be true tend to have more studies pointing to the desired conclusion. Something that's true can be less scientifically backed than something that's false, if the truth isn't in people's financial interests.

        I also see a weird phenomenon where people who are wrong, like flat earthers, run DIY scientific experiments. Yet this crowd seems to be thought of as generally anti-science, when they're probably more scientifically curious than the general population. Being "scientific" means subscribing to a certain set of consensus beliefs among scientists. You can not use science yourself at all and still be considered a scientific person, so long as you just believe in certain things.

      • Angostura 5 years ago

        > You can find hundred of studies to support your arguments, no matter what they are.

        I think that's simply incorrect, assuming 'studies' != YouTube videos. I know because I've been involved in internet arguments where I've gone looking for studies and found my own argument is incorrect/poorly supported.

        • jerf 5 years ago

          I'd say it depends on the topic, and on the standard you hold the papers in question to. For instance, I can find you a paper showing almost anything you like in the field of nutrition in terms of what people should eat. It'll even hold up in the abstract. Whether it holds up beyond that can get complicated, though; I've lost count of things like "a study showing X intervention is good in 20 rats over 5 days, if you feed the X rats high-quality X but give the not-X rats low-quality not-X," or n=8 human studies, or n=20 self-reporting studies, or all kinds of stuff like that.

          • fyrabanks 5 years ago

            Can you find me a paper that says eating a Sausage McMuffin every morning won't shorten my lifespan?

            • jerf 5 years ago

              In those exact words no, but if you break down the nutritional content of the Sausage McMuffin, yes, probably. A lot of studies are done on rats where even the rats eating "quality" food in the study are still eating things much lower in quality than a fast food muffin. That doesn't even sound particularly challenging.

              To be honest, if you're trying to make my point sound absurd by exaggeration, you've shot way too low. The mainstream nutritional view would be that a Sausage McMuffin every morning, on its own, isn't going to be a particularly bad thing. You need to be a lot more specific about an overall diet for it to be a problem. You should have asked something like whether I could find a study endorsing eating nothing but lard.

              To which my reply would be that the principle of charity would make it clear I meant studies supporting any semi-realistic nutritional view are available, not that there are studies proving humans can be healthy on a diet of rocks and asbestos. It really isn't to anyone's advantage if I also have to append to my little post a complete discussion of what is and is not within the boundaries of nutritional theories that have been studied.

              I mean... is that... really... what you want...?

              • antidesitter 5 years ago

                > what is and is not within the boundaries of nutritional theories that have been studied

                I don’t think this is clear to the average person at all.

        • coldtea 5 years ago

          Often the studies showing your argument is "incorrect/poorly supported" are poorly supported themselves, and just go uncontested for decades, piling citations.

          And that's in the harder sciences. In the soft sciences (and liberal arts, etc.) it's the Wild West -- whatever gets more grants and is more popular at the moment gets priority and is "proved".

          • wallace_f 5 years ago

            Yea... people have even been hoaxing journals for decades and it goes ignored. This scheme got some attention, probably just because of how hilarious it is that they got a chapter of Mein Kampf published in one of the leading feminist gender-studies journals: https://youtu.be/fvZNXRiAsn4

      • Nasrudith 5 years ago

        I don't see the comparison at all - "science" is not claiming to be the "why" in that context.

        It is saying that it happens and that we have reason to think so. It isn't religion to say that it is stupid to deny that the blue-footed booby exists. Even if it is wrong and asserts that Piltdown Man is real, it is markedly different from religion.

      • ekianjo 5 years ago

        What you refer to is not science, it's pseudoscience. Pretty much any crap gets published these days, review standards are abysmal, and there is hardly any replication.

        Only through replication can you actually claim there is science.

        • lm28469 5 years ago

          The end result is the same. When a big tv channel says "Eating fat is definitely bad because this study says so" nobody will check the actual study and see that it was done on 3 mice for a duration of 10 days with completely unrealistic settings. It's still science, it's just misinterpreted or extrapolated.

          I see it everyday on HN, people _believing_ we'll all migrate to Mars and terraform it by the end of the century which means climate change is a non issue. Or that we'll get fully autonomous cars by 2020 because Elon "Science" Musk said so and people _want_ to _believe_ in it even though absolutely nothing supports it. It really isn't much more than astrological predictions at that point.

          People pick whatever "science" supports their make believe-world and go with it.

          • gpm 5 years ago

            Elon Musk is basically completely unrelated to science. His projects' timetables even more so, if it's even possible to be less related to science.

            • joycian 5 years ago

              Not really true. Tesla is at the forefront of applying machine learning in real-world settings, so it's definitely not unrelated to science, in my opinion. If autonomous driving in 2020 is on the timetable, Karpathy (Head of AI) is probably confident that it is possible. Musk is very aggressive on timelines (though he always wraps them in "it is probable that", which to him is cautious but which translates to "almost certainly" in newspapers), but from what I know he has delivered on most promises (which he sees more as projections of previous trends), albeit a little late.

              • gpm 5 years ago

                There's probably some real science happening behind the scenes, experimenting with ML algorithms (Karpathy is certainly up to it). But the challenge of getting it to work sufficiently well in the real world isn't science, it's engineering. Meeting schedules isn't science, it's management. And so on. "Believing in science" has nothing to do with believing musk will or will not succeed, and I think you would find the vast majority of people who think he will succeed (me included if you don't mind late) don't attribute it to "because science".

                • joycian 5 years ago

                  I don't see the line between science and engineering as clearly as you do, apparently. Is CERN a science or engineering project? Drug design? Genuinely curious what qualifies as science. Seeing some articles, it's not the quality. Applied vs. fundamental also seems like a difficult line to actually draw.

                  Edit: Especially in ML, a large part of the research is done in companies.

                  • gpm 5 years ago

                    Science is about discovering things via experiments and observations about the world, engineering is about making things that work. There is a tiny bit of overlap.

                    CERN is a gigantic engineering project used to do a bit of science. Experimenting with different concrete mixes to find one with a set of qualities is science used to let you do some engineering. OpenAI's dota bots are the sort of thing that might fall in the overlap of both discovering things and making things that work.

                    Maybe more to the point, "believing in science" means "believing that those experiments and observations reveal true facts", which has nothing to do with whether or not we believe Musk will succeed at his self driving car ambitions.

        • coldtea 5 years ago

          >what you refer to is not Science,its pseudoscience. Pretty much any crap gets published these days, review standards are abysmal and there is hardly any replication.

          That's like the argument that USSR was not real communism, etc.

          At some point, science is as science does.

          There's no some holier, better checked, domain of practice. It is what it is, and it sometimes has replication, more often than not it doesn't.

          • rjf72 5 years ago

            One factor you cannot ignore is the exponential increase in the number of 'scientists.'

            In times past, for better and for worse, college was generally reserved for an extremely small, generally over-performing section of society. And of these, a tiny minority would then go on to pursue the post-grad education that would culminate in becoming a scientist. In today's society college has become high school 2.0, and to some degree post-graduate education is going down the same path. For instance, today more than 1 in 8 people have some sort of postgraduate degree. [1] Sourcing that because it just sounds absurd. In other words, more people today have a postgraduate education than the total that went to university in the 70s.

            This has resulted in an exponential increase in the amount of stuff getting published as well as a simultaneous and comparably sharp decrease in the overall quality of what's getting published. So I would actually tend to agree with you. This cynical state of science is generally pretty accurate for the state of what passes as science today, but it was not always this way. 'Science' as a whole is in many ways reflective of the mean, and in the public mind even the lowest common denominator. And both of those have undoubtedly fallen far below what they were in times past.

            [1] - https://en.wikipedia.org/wiki/Educational_attainment_in_the_...

            • wallace_f 5 years ago

              I mostly agree, but the GP is also certainly correct. There is in fact a "holier than thou" science, and it is that which follows the scientific method: reproduced, empirical, fundamental science. Most garbage published in journals today does not meet that criterion, and economists, psychologists, and even sociologists call themselves scientists when they cannot possibly follow the scientific method in nearly every part of what they study.

            • mattkrause 5 years ago

              I have a hard time believing that the issue is a dilution in the “quality” of scientists, but I would agree that ever-increasing competition for funds and jobs has produced some perverse incentives.

              The consequences for publishing something that’s wrong but not obviously indefensible are often pretty low. On average, it probably just languishes, uncited, in a dusty corner of PubMed. It might even pick up a few citations (“but see XYZ et al. 2019”) that help the stupid metrics used to evaluate scientists.

              The consequences of working slowly—-or not publishing at all—- are a lot worse. You get scooped by competitors that cut corners, and there’s not a lot of recognition for “we found pretty much what they did, but did it right.” Your apparent unproductivity gets called out in grant reviews and when job hunting. The increasing pace and career stage limits (no more than X years in grad school, Y as a postdoc, Z to quality for this funding) make it hard to build up a reputation as a slow-but-careful scientist.

              These are not insoluble problems, but they need top-down changes from the folks who “made it” under the current system....

              • rjf72 5 years ago

                The replication crisis that's plaguing much of the social sciences, but especially psychology, did not cherry-pick studies. It started with an effort to replicate studies only from high-impact, well-regarded journals in psychology. [1] It found that 64% of the studies could not be replicated, leading to the curious outcome that if you assumed the literal and exact opposite of what you read in psychology (e.g., what is said to be statistically significant is not), you would tend to be substantially more accurately informed than those who believe the 'science.'

                But more to our discussion, two of the journals from which studies were chosen were Psychological Science - impact factor 6.128, and the Journal of Personality and Social Psychology - impact factor 5.733. The replication success rate for those journals was 38% and 23% respectively. I'm certain you know, but impact factor is the yearly average number of citations for each article published in a journal. A high impact factor is generally anything above about 2. These are among the crème de la crème of psychology, and they're worthless.

                As you mention PubMed, preclinical research is also a field with an absolutely abysmal replication rate. And once again these are not cherry-picked. In an internal replication study, Amgen, one of the world's largest biotech companies, alongside researchers from MD Anderson, one of the world's premier cancer hospitals, was only able to replicate 11% of landmark hematology and oncology papers. [2] Needless to say, those papers, and their now unsupported conclusions, were acted upon in some cases.

                -----

                All that said, I do completely agree with you that the current system of publish-or-perish is playing into this, but your characterization of the current state of bad science is inaccurate: bad science is becoming ubiquitous. However, I'm not as optimistic that there is any clean solution. There are currently about 400 players in the NBA. If you increased that to 4,000, what would you expect to happen to the mean quality and the lowest common denominator? Suddenly somebody who would normally not even make it into the NBA is a first-round pick. And science is a skill like any other that relies on outliers to drive it forward. We now have a system that's mostly just shoveling people through it and outputting 'scientists' for commercial gain. The output of this system is, in my opinion, fundamentally harming the entire system of science and education. And this is a downward spiral, because these individuals of overall lower quality now work as the mentors and advisers for the next generation of scientists, actively 'educating' the current generation of doe-eyed students. This is something that will get worse, not better, over time.

                [1] - https://en.wikipedia.org/wiki/Replication_crisis#Psychology_...

                [2] - https://www.taconic.com/taconic-insights/quality/replication...

                [3] - http://graphics8.nytimes.com/packages/pdf/education/harvarde...

                • AstralStorm 5 years ago

                  Speaking of replication, my personal experience in the very narrow field of audio DSP, which is easy to test: 9 papers were impossible to implement, mostly due to missing key details; 6 more had results that only held for specific test signals (total failure in reality); 3 overstated performance by over 12 dB on real samples. 8 were really good and detailed. Two had the actual test code available, and one had it in printed form. None with the code were any good. :D

                  (IEEE database around 2005 in noise reduction, echo cancellation and speaker separation or detection.)

          • danaris 5 years ago

            No, science is always science. Just because the media portrays certain things as "scientific truth" (or, for that matter, scientifically unsure) doesn't make it so.

            Indeed, even if scientists claim something bogus, that doesn't make it science.

            So...actually, yeah, it's a lot like the argument that the USSR wasn't real communism, any more than the Democratic People's Republic of Korea is democratic. People claim it to be X, other people take that claim as gospel and use it to paint X as terrible, despite the fact that the people making the claim are full of shit.

            • Konnstann 5 years ago

              The argument that the USSR wasn't really communism isn't a semantic argument of "yeah it was a marxist utopia" but rather one of whether we support governments claiming to be communist. We don't have an example of successful communism, while we do have examples of successful science.

              • danaris 5 years ago

                I have literally never seen that argument, and frequently seen the argument that communism (and/or socialism) is Bad, because the USSR was communist, and they were Bad.

                Not that I'm saying you haven't encountered the reverse; I'm quite willing to believe that people who run in other circles (or make other claims) encounter different arguments. But yeah, I see the "we shouldn't want communism/socialism, it killed millions of people under Stalin" argument all the damn time.

          • bgie 5 years ago

            You are applying the 'no true scotsman' fallacy wrong.

            Depending on the definition of X, you CAN say that something is not X

            Only if you define 'science' as 'the thing that people labeled scientists do', can you arrive at your conclusion.

            I would define scientists as 'people practicing the scientific method'.

            • coldtea 5 years ago

              >Depending on the definition of X, you CAN say that something is not X

              That is a good strategy only if you already have a sample of the thing to derive a definition from.

              To create a good definition you should examine reality, and see the thing as it actually behaves, first. Only then, once you have a reality-based definition, can you judge other specimens and use the definition to say whether they are X or not.

              Else, you just impose some idealistic / non-empirical standards upon reality based on an arbitrary (since it's not based on observation) definition.

              The land and the people existed (as a land and as a people) and gave its name to Scotland (and content to the definition), not the inverse. It wasn't someone making up the word first and others then checking whether the people in Scotland fit it.

              >Only if you define 'science' as 'the thing that people labeled scientists do', can you arrive at your conclusion. I would define scientists as 'people practicing the scientific method'.

              In real life, people call themselves and are called by others scientists if they have studied for and are employed as such, whether or not they "practice the scientific method" and even more so, whether or not they practice it properly.

              So defining scientists as 'people practicing the scientific method' (and e.g. excluding people with PhDs who practice it badly, or who chase grants to the detriment of science) is rather the canonical 'no true scotsman' fallacy.

              In that sense, no scientist could ever falsify data or make up a theory and cook its research to support it, or prove something that a company paid them a grant to prove, because "by definition" such a person wouldn't be a scientist.

              • meroes 5 years ago

                The concept of science, which is the empirical study of reality, does not change. There are many concepts that can share the same word - is a Scotsman someone born in Scotland, one who moved there, one who shares Scotland's culture and ideals? There should be different labels for each of these concepts but there aren't.

                The importance, relevance, trust, and reality of science may change, but the underlying concept does not. Nevermind all the other forces trying to co-opt 'science' for their own purposes.

                How many papers and articles describe a purely empirical inquiry into reality and accurately describe all shortcomings and sources of error? 10%? 1%? It matters that our trust in "science" may continue to degrade, but none of that changes the underlying concept/ideal.

            • astazangasta 5 years ago

              The 'scientific method', of course, is that which is done by scientists.

      • bgie 5 years ago

        Please do find me hundreds of studies supporting objects on Earth falling upwards... Newton's gravity is pretty well established within the context it applies to... so in no way can science be compared to religion.

        There is well established science and science-in-development, which can turn out to be wrong when new evidence is discovered. But for religion there is NO such gradient - everything is based on faith, evidence does not even come into play...

        • BearsAreCool 5 years ago

          However, sadly, many people believe in much more questionably "scientifically" proven things, largely because scientific proof appears simple thanks to things like the absolute acceptance of gravity on Earth. Going with the religion analogy, there are many basic facts in religious scriptures that are true, but this does not make the more questionable statements true, such as how exactly the Earth will be destroyed or where you go when you die.

          I completely agree with you that science done properly is not a religion; however, that is simply not the case for most of us, who through lack of care or time will never go beyond seeing a news article about a certain scientific discovery and believing it because of a supposed breakthrough published in a fancy-sounding journal.

        • flyingpineapple 5 years ago

          I agree with what you're saying in general, but there are also problems in play in academia that are broader than just bad statistics. The article tries to convey the nature of this problem, but the "chasing an unreplicable effect" or "science sometimes takes a while to work itself out" framing is just the tip of the iceberg.

          This touches close to home because it's in my area of research, and for years I had many discussions with colleagues about this very same genetic effect and its problems. This SLC6A4 candidate gene research was not just a fluke of incompetence (unless by incompetence you mean much of an entire biomedical field of researchers), and it persisted wildly, with huge amounts of methodological research and money behind it.

          Papers advocating for this type of research (and even more statistically problematic research) were published in Nature, with lots of methodological arguments, by established quantitative experts. This doesn't mean it's correct, just that by all superficial indications, it was solid. You had to question these authority figures, and a body of research, in good journals, the actual nature of the argument, and even then you were branded a naysayer or curmudgeon.

          Even when people started questioning the effect, then you had people start advocating for more complex interactions (as intimated at in the article) that just amounted to unintentional (or intentional) data fishing and p-hacking with a theoretical cover.

          When I started pointing out how problematic this all was to my colleagues, I had some of them outright explain to me that they thought it might be bunk, but it was popular, and if they found something significant, and it landed them a paper in a prestigious journal, why wouldn't they publish it? That is, you gotta publish what's popular because that's how you build an academic career.

          I can't begin to explain all the shady stuff I've seen with the SLC6A4 effect being discussed in this article. Some of it was probably completely unintentional, and some of it probably amounts to conscious p-hacking and fishing.

          The worst part about this, and it's hard to convey, is that yes, science probably works itself out eventually, most of the time. But there's such a focus on popularity, prestige, and fads, regardless of veridicality, and much less on boring rigor and correctness, that entire careers can be made or broken on complete nonsense. The person who catapults a completely empty finding to fad status has a career elevated permanently, to a nice named-chair full-professor position. The person who tries to be rigorous, maybe even disputes the finding or disproves it? It's much less clear what happens to them, and it's often a thankless task. That is, the fad makes a career, and even after the fad is discredited, people shrug and say "oh well, that person just happened to have a good idea that was wrong." The people who do the hard work of replicating it, disproving it? Well, that's not considered interesting or worthwhile to reward.

          Academics is really broken, at least in many fields.

    • thanatropism 5 years ago

      We did go WAY overboard with the data-driven rhetoric. Data was supposed to replace theory going forward.

      What a generational clusterfuck.

  • searine 5 years ago

    >Statistics. Garbage in, garbage out.

    The problem isn't the statistics, and statistics aren't the solution either. It's the diagnosis data.

    How do you quantify whether someone is depressed? How depressed are they? How do you compare two people's depression? It is an incredibly nebulous disease.

    • AstralStorm 5 years ago

      That's part of "garbage in". Just because you can put a number on it does not mean the number is well scaled, validated or in any way meaningful.

thanatropism 5 years ago

I think pop sci rags going wild over papers that claim radical breaks are a part of the problem, even if a small one.

There's simply no reason that people who are not technical enough to read scientific papers would be up to date on the latest cancer research. People have a right to knowledge? Then open up the journals.

  • mattkrause 5 years ago

    It’s on a slightly different level.

    There are strong incentives to find ‘sexy’ results, but this is mostly aimed at getting a paper into Nature, Science, or Cell. Publishing in these journals can have an outsized effect on one’s career, even if the results don’t actually hold up.

    In contrast, there’s not a huge payoff for getting something into Popular Science or the New York Times science section. Publicity is good and can help show the relevance of your research. It’s also fun to show your mom (who still wants you to go to med school), but people tend not to chase it nearly as hard as a “glam” paper.

    • thanatropism 5 years ago

      That's probably a bigger effect.

      People have huge trouble accepting that their wonderful science is a social process.

      • rossdavidh 5 years ago

        Yes, and also that it's a very economic one. If you need to churn out (and get accepted) research in quantity, then "hold on there, do we know that for sure? let's try to replicate that, with a much bigger sample size..." is downright unwelcome.

        Scientists are mostly good people, but they're not angels, and if we put them in a system that rewards the wrong thing, we will get the wrong thing.

        • AstralStorm 5 years ago

          Unfortunately neither economics nor sociology are good enough science branches to even fix themselves. ;)

hmd_imputer 5 years ago

Imagine how many times a valid counter-argument was silenced because of the "are you denying science?" bull* argument.

DuskStar 5 years ago

I love the quote from the Slate Star Codex article:

> ...what bothers me isn’t just that people said 5-HTTLPR mattered and it didn’t. It’s that we built whole imaginary edifices, whole castles in the air on top of this idea of 5-HTTLPR mattering. We “figured out” how 5-HTTLPR exerted its effects, what parts of the brain it was active in, what sorts of things it interacted with, how its effects were enhanced or suppressed by the effects of other imaginary depression genes. This isn’t just an explorer coming back from the Orient and claiming there are unicorns there. It’s the explorer describing the life cycle of unicorns, what unicorns eat, all the different subspecies of unicorn, which cuts of unicorn meat are tastiest, and a blow-by-blow account of a wrestling match between unicorns and Bigfoot.

https://slatestarcodex.com/2019/05/07/5-httlpr-a-pointed-rev...

scribu 5 years ago

The Slatestar Codex blog post linked in the article is a much more entertaining read, I think:

https://slatestarcodex.com/2019/05/07/5-httlpr-a-pointed-rev...

  • matt4077 5 years ago

    Scott is always a good read, and rarely not entertaining. The Atlantic’s piece is a bit less snarky, but has other redeeming factors.

    It is quite obvious when comparing the two that a journalist’s first instinct is always to call a bunch of experts and incorporate their views. It’s also laudable that the author rejects the easy cynicism of accusing these scientists of individual intentional deception, and instead redirects our scorn onto the publish-or-perish dynamic.

searine 5 years ago

As someone who works on depression GWAS, this isn't the end of finding causal genes for depression, it's the beginning.

In the next few years we're going to see a waterfall of new rare variants linked to disease, all of which have a much higher chance of causing functional change.

As WGS comes online for association studies it will both validate and more deeply explore the genetic nature of every disease. It is going to be mindblowing.
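For intuition, the core of a single-variant case/control association test is just a 2x2 contingency-table chi-square on allele counts. A minimal sketch in Python (the counts and sample sizes below are made up for illustration; real pipelines use dedicated tools and correct for population structure, covariates, etc.):

```python
import math

def chi2_p_1df(stat):
    """Survival function of the chi-square distribution with 1 df."""
    return math.erfc(math.sqrt(stat / 2.0))

def allele_assoc(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """Pearson chi-square test of allele counts in cases vs. controls."""
    obs = [[case_alt, case_ref], [ctrl_alt, ctrl_ref]]
    n = case_alt + case_ref + ctrl_alt + ctrl_ref
    rows = [case_alt + case_ref, ctrl_alt + ctrl_ref]
    cols = [case_alt + ctrl_alt, case_ref + ctrl_ref]
    stat = sum((obs[i][j] - rows[i] * cols[j] / n) ** 2
               / (rows[i] * cols[j] / n)
               for i in range(2) for j in range(2))
    return stat, chi2_p_1df(stat)

# Hypothetical counts: 1000 cases (2000 alleles) vs. 2000 controls (4000 alleles)
stat, p = allele_assoc(450, 1550, 780, 3220)
print(stat, p)  # p is below 0.05 but far above the genome-wide 5e-8 threshold
```

The reason genome-wide significance is set at 5e-8 rather than 0.05 is exactly that around a million such tests are run at once; a nominally "significant" p like the one above is routine noise at genome scale.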

  • eggie 5 years ago

    > As WGS comes online for association studies it will both validate and more deeply explore the genetic nature of every disease. It is going to be mindblowing.

    But there is another layer of confusion, because WGS implies resequencing, and resequencing is only as good as your reference genome; it will distort results when the genome you're inferring is too far from the reference (reference bias).

    The real mind bend will come when we have thousands or millions of whole genome de novo assemblies and we compare these to each other to do our GWAS. Only then are we going to have a hope of knowing what is actually causal in a genomic sense. Until then we remain in the land of association.

    In nature, most adaptive variation appears to be large and structural. All the recent studies that have used whole genome assemblies to look at this have found the same thing. I would be surprised if this isn't the case for humans too. If it is, then much of current perspective on GWAS (both based on chips and WGS) will need to be rewritten.

    • asdff 5 years ago

      GWAS results are nothing more than correlations, and by themselves they aren't very remarkable. What is remarkable is when someone does a functional analysis on a variant found in a GWAS and is able to describe the mechanism by which it affects the disease or phenotype. The way you do that isn't more sequencing, but model systems that you can easily manipulate in the lab.

      GWAS also doesn't compare to an arbitrary reference sequence. Some good numbers for a GWAS are >1000 cases and >2000 controls from the same population. You have to match populations, or else all your association study is going to find is the difference between East Asians and Europeans, for example. You need a lot of samples to get enough statistical power to even see these rare variants.
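
      To make the sample-size point concrete, here is a rough, hedged power calculation (my own illustration, not from the thread): a two-proportion z-test under a normal approximation, using the conventional genome-wide significance threshold of 5e-8. The allele frequencies and sample sizes below are made up for illustration.

```python
# Hedged sketch: approximate power of a case-control GWAS to detect an
# allele-frequency difference, via a two-proportion z-test with a normal
# approximation. All numbers are illustrative, not from any real study.
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def z_upper(q):
    """Standard normal quantile via bisection (no SciPy needed)."""
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def gwas_power(p_case, p_ctrl, n_case, n_ctrl, alpha=5e-8):
    """Approximate power to detect p_case != p_ctrl at significance alpha
    (alpha = 5e-8 is the conventional genome-wide threshold)."""
    z_a = z_upper(1.0 - alpha / 2.0)
    p_bar = (p_case * n_case + p_ctrl * n_ctrl) / (n_case + n_ctrl)
    se_null = sqrt(p_bar * (1 - p_bar) * (1.0 / n_case + 1.0 / n_ctrl))
    se_alt = sqrt(p_case * (1 - p_case) / n_case
                  + p_ctrl * (1 - p_ctrl) / n_ctrl)
    diff = abs(p_case - p_ctrl)
    return 1.0 - norm_cdf((z_a * se_null - diff) / se_alt)

# A modest risk-allele frequency shift (0.20 -> 0.25) is nearly invisible
# at 1,000 cases / 2,000 controls, but easy at 10x the sample size.
print(gwas_power(0.25, 0.20, 1000, 2000))
print(gwas_power(0.25, 0.20, 10000, 20000))
```

      The point of the sketch: at genome-wide significance, the ">1000 cases and >2000 controls" rule of thumb only has power against fairly large effects; rare variants with small frequency differences need far bigger cohorts.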

lisper 5 years ago

> Between them, these 18 genes have been the subject of more than 1,000 research papers, on depression alone. And for what? If the new study is right, these genes have nothing to do with depression.

The article doesn't actually say anything about what these 1000 research papers claimed to show. It's entirely possible that most of them were negative results; there's no way to tell from the text of the article. It strongly implies that this is the case, but it doesn't actually say so. This is a significant omission for an article whose thesis is that these 1000 papers constituted a "house of cards."

[UPDATE]

"Sometimes the gene was linked to depression; sometimes it wasn’t. And crucially, the better the methods, the less likely he was to see such a link."

Seems to me like this is science working exactly as it is supposed to (except for the possible suppression of earlier negative results, but that's a known broad problem that has nothing to do with this particular gene study).

  • rossdavidh 5 years ago

    It worked as it was supposed to...eventually. Instead of double-checking the initial results, hundreds of research papers by many research teams looked into questions that would be very important, if the genes in question were connected to depression.

    Sure, it is great that it's not (quite) religious dogma, and it eventually gets revisited. But a couple decades' worth of work and ~1000 research papers, many of which came AFTER the 2005 paper that called the link to depression into question, is by no means what we should shoot for.

    More fundamentally, there was nothing in the system that motivated anyone to replicate the original link. The problem is not that the original link turned out to be spurious, and it's not that anyone did anything unethical. More worrisome: nobody appears to have done anything unethical, yet because of the systemic incentives for research grants, tenure, etc., replication of the foundational result wasn't seriously attempted until many times that much work had been built on something spurious.

raverbashing 5 years ago

It's just naive to pin something as complex as depression on a single variable (but that's Popperian scientificism for you)

Now it might be that the 1st study identified an effect on a (very specific) subpopulation and in the bigger study case that subpopulation is not present or not identifiable. But who knows

Now of course it's not wrong to study a certain gene, but to go "all in" on a very narrow study path is stupid. Now, if in the course of the depression studies they found this gene had everything to do with a different disease, that would be "p-hacking" or something, right? RIGHT?

More statistical power and "rigour" won't lead research anywhere because it's not the studies that were necessarily flawed, it's that the effects might be small, or dependent on a chain of other factors, so unless the effect of a single factor is predominant it might not even matter (as an isolated factor).

  • Nasrudith 5 years ago

    Thinking that a single gene could cause depression isn't fallacious - pinning all depression on it would be.

    But a single gene as a possible cause isn't surprising given other, more extreme single-gene effects on cognition and development - Angelman syndrome is marked by its cheerfulness and niceness.

    If you were seeking such a gene you would search for anomalous histories of suicides in a family tree.

  • searine 5 years ago

    >It's just naive to pin something as complex as depression

    No it's not. Twin-based heritability studies show depression risk has about 10% heritability across all cases and 50% heritability among severe cases. There is a genetic component to the disease.

    That complexity is mediated by dozens of genes, each contributing a percentage to that heritability. Single genes can and will be found which link genotype to phenotype. It is just a matter of time and money.

    • logjammin 5 years ago

      You don't think you're mixing up "heritable" with "genetic" here? In my family, liking the Yankees is highly heritable across generations, but my grandfather would've laughed in your face if you claimed it was genetic.

      • searine 5 years ago

        >You don't think you're mixing up "heritable" with "genetic" here?

        "Heritable" in this context is a term of art. It is a statistic (H squared) that describes the proportion of variation in a phenotype that can be ascribed to genetics rather than environment, without linking it to any known genes or genomic elements. It is a classical technique that was primarily used before we were able to sequence DNA, usually estimated via "twin studies," which quantify inheritance by comparing individuals with the same genetics.

        https://en.wikipedia.org/wiki/Heritability
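
        As a concrete aside (my own hedged sketch, not the commenter's): the classic twin-study estimator is Falconer's formula, which doubles the gap between identical-twin and fraternal-twin correlations. The correlation values below are illustrative, not real depression data.

```python
# Falconer's classic estimator of broad-sense heritability from twin data.
# Identical (MZ) twins share ~100% of their DNA, fraternal (DZ) twins ~50%,
# so doubling the correlation gap roughly isolates the genetic contribution.

def falconer_h2(r_mz, r_dz):
    """Broad-sense heritability estimate: H^2 ~= 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

# Illustrative correlations loosely consistent with ~50% heritability
# (values made up for the example, not from any real twin study):
print(falconer_h2(0.45, 0.20))  # ~0.5
```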

  • raxxorrax 5 years ago

    > that's Popperian scientificism

    I believe his criticisms would be very close to yours. For me, this sits on the border between medicine, anthropology, and the social sciences in general. I wouldn't list hubris as a strength for anyone in these fields looking to underline popular prejudices.

    But aside from that, even if statistics are vigorously checked for correctness, it still leaves a lot of room to get different interpretations.

  • fromthestart 5 years ago

    >It's just naive to pin something as complex as depression on a single variable

    Why? There are other complex diseases with purely genetic causes, e.g. Huntington's Disease; depression isn't necessarily different. Hindsight is 20/20.

    • thanatropism 5 years ago

      Yes, but Huntington's is an explicit cluster of symptoms at diagnosis + a specific prognosis. Uppercase D "Depression" is a diagnosis built around a single symptom, lowercase d "depression". But many people experience depression, both in "core mental illnesses" like bipolar and outside. (Note how bipolar is much better constructed -- indeed the method of differential diagnosis was invented by Kraepelin to differentiate bipolar from schizophrenia)

      If it were up to me, big D Depression would be redefined as "responds to Prozac" or something.

    • mrosett 5 years ago

      Huntington’s is a very precisely defined disease. Depression has a much wider range of manifestations.

pfdietz 5 years ago

What's needed is for the Ig Nobel Prize people to focus on the people who start this sort of crap, not on humorous low-value targets. Really call out the people who contributed negatively to the progress of science.

mindgam3 5 years ago

“Nor... should his work be taken to mean that genes don’t affect depression. They do, and with newer, bigger studies, researchers are finally working out which ones do.”

This part really needs some context/sources. Anyone know which studies he’s talking about?


unityByFreedom 5 years ago

Odd that this article cites Scott Alexander as if that were the name of a real psychiatrist,

> “What bothers me isn’t just that people said [the gene] mattered and it didn’t,” wrote the psychiatrist Scott Alexander in a widely shared blog post.

Scott Alexander is the pen name of a person running a blog who claims to be a psychiatrist, and AFAIK, his authenticity is not proven due to his anonymity.

  • Lazare 5 years ago

    His identity is hardly a deep secret; anyone who really cares can find it out, and verify his credentials.

    (Not that those credentials are really relevant here, no?)

    • unityByFreedom 5 years ago

      The blog actively misleads readers into thinking Scott Alexander is his real name,

      > SSC is the project of Scott Alexander, a psychiatrist on the US West Coast. You can email him at scott[at]slatestarcodex[dot]com. Note that emailing bloggers who say they are psychiatrists is a bad way to deal with your psychiatric emergencies, and you might wish to consider talking to your doctor or going to a hospital instead. [1]

      Journalists citing that blog should note it is an alias, and deeply question whether or not he really is a psychiatrist.

      And no, I don't find it easy to verify his identity, and yes, it is completely relevant here as he's being cited as a real psychiatrist. That's not verifiable without knowing a real name.

      [1] https://slatestarcodex.com/about/

      • Lazare 5 years ago

        > it is completely relevant here as he's being cited as a real psychiatrist. That's not verifiable without knowing a real name.

        We're discussing his criticism of how a ton of "real psychiatrists" got some major things wrong. This criticism stands or falls on its merits, not on his status as a "real psychiatrist". You don't need to be a "real psychiatrist" in order to do this sort of analysis (and, it turns out, a ton of "real psychiatrists" got this wrong in the past).

        This is like reading a cooking blog by someone who says they're an auto mechanic, and demanding proof that they really know how to fix cars before you'll listen to them talk about pie crusts. If they're not claiming special expertise in the area (and he isn't), the credentials don't matter.

        Eg, Scott writes:

        > Border et al focus this infrastructure on 5-HTTLPR and its fellow depression genes, scanning a sample of 600,000+ people and using techniques twenty years more advanced than most of the studies above had access to. They claim to be able to simultaneously test almost every hypothesis ever made about 5-HTTLPR, including “main effects of polymorphisms and genes, interaction effects on both the additive and multiplicative scales and, in G3E analyses, considering multiple indices of environmental exposure (e.g., traumatic events in childhood or adulthood)”. What they find is…nothing.

        I mean, either he's right or he's wrong, and you could go read the paper yourself and find out. Or you can trust other people who have read the paper. Or, I dunno, you could ask one of the authors of the paper if they think Scott's summary is any good:

        > I have never in my career read a synopsis of a paper I've (co-)written that is better than the original paper. Until now. I have no clue who this person is or what this blog is about, but this simply nails every aspect of the issue

        (Source: https://twitter.com/matthewckeller/status/112638089124318822...)

        And again, none of this has anything to do with his status as a psychiatrist. You shouldn't blindly trust a blog post about this topic just because the author is verifiably a psychiatrist, but you shouldn't blindly distrust one just because they are not.

        • DanBC 5 years ago

          Psychiatrist is normally a protected title that requires a professional qualification and a registration, so it's not unreasonable that people are told that this is an alias.

        • unityByFreedom 5 years ago

          > We're discussing his criticism of how a ton of "real psychiatrists" got some major things wrong

          No, you replied to me. My comment was it is strange that this Atlantic article cites him as "the psychiatrist Scott Alexander" because that is a pen name, and it isn't verifiable that he is a psychiatrist due to his anonymity.

          Your refutation of my point is "it doesn't matter that it isn't verifiable", but that was part of my point. You can't just toss it out; that's being intellectually dishonest.

          Any piece of journalism that cites a person should do so only if it can verify the source. When journalists skip this step, at least a portion of their article is fake news.

          > You shouldn't blindly trust a blog post about this topic just because the author is verifiably a psychiatrist, but you shouldn't blindly distrust one just because they are not

          This has nothing to do with my point, which is all about how The Atlantic journalist cites him, and has little to do with whatever his blog says.

          • skybrian 5 years ago

            You're assuming without evidence that Ed Yong didn't verify his credentials. How do you know? Maybe ask?

            • unityByFreedom 5 years ago

              Again, the author cited him as "the psychiatrist Scott Alexander".

              If he did verify credentials, he ought to have mentioned the name is an alias.

              • skybrian 5 years ago

                He could have mentioned it, but I don't understand why you think it's important?

                The authors of the actual scientific study are Richard Border and his co-authors. Scott Alexander is just a well-known blogger who gets credit for writing about it in a vivid way that got people's attention. He's not a primary source, so for the purposes of this article, it doesn't really matter whether he's using an alias or even whether he's a psychiatrist. (It's not a credential in the relevant field anyway.) You can verify the quotes by following the link.

                Calling up other scientists in the field and asking questions about a scientific paper is how science writers verify a science article, and Ed Yong did that.

                • unityByFreedom 5 years ago

                  > I don't understand why you think it's important?

                  Scott Alexander is a pseudonym. Journalists referencing that name should note this, along with the fact that his credentials as a psychiatrist are not publicly verifiable.

          • DataWorker 5 years ago

            Perhaps publish or perish applies to journalism as well. Factchecking costs money and not everyone has ssc’s budget.

            • unityByFreedom 5 years ago

              > Factchecking costs money

              Authors who do not fact-check are writing fiction. That is not journalism.

              > not everyone has ssc’s budget

              What budget? It's one or a handful of people writing a blog. I'm sure the budget of The Atlantic, founded in the 1800s, dwarfs that of SSC.

    • YeGoblynQueenne 5 years ago

      >> His identity is hardly a deep secret

      Well, who is he then?

    • atomical 5 years ago

      I care about reading something written by a professional instead of a Tim Ferriss type. I've found several suspect items in his blog articles. I would like to know his credentials.

hairytrog 5 years ago

This happens when you have a boatload of "educated" people with degrees in psychology/neuroscience who have to do "research." There's something like 100,000 new psych grads each year - and there's only so many coffee shops.

gnoppa 5 years ago

Now please let the same epiphany come about cancer research or personalized medicine based on genes.

  • icegreentea2 5 years ago

    Let's not throw the baby out with the bathwater. No doubt there's a lot of sketchy stuff going on, but the fields are also very wide, with lots of "verticals" going on. Just because one vertical might be built on a shaky foundation doesn't mean all of them are (and the converse is also true).

    • gnoppa 5 years ago

      You might be right that not everything in those fields is bad, but most is. Nevertheless, what is so terrible is that all the money is put into research (gene mutation causing cancer) even though it was proven wrong many years ago. That becomes very clear after reading Thomas Seyfried's work (cell fermentation causing cancer).

      Sadly, academics have incentives to publish papers in prestigious journals that make them sound smart, not to find a simple cure. Likewise, the free market has no incentive to cure people easily and cheaply, only to maximize profits. Hence so much research is absolutely abysmal, wrong, misleading, and harmful. Sadly, the human mind is constructed in such a way that we self-deceive to gain advantages. That is why there is no need for a conspiracy, just the wrong kind of incentive structure.

      • icegreentea2 5 years ago

        I know you're getting downvoted. I don't think things are as "bad" as you think they are, but it's also impossible to deny that our current incentive structure is not ideal and can lead us to bad results.

        That said, with respect to Seyfried's work, I urge you to give something like https://sciencebasedmedicine.org/ketogenic-diets-for-cancer-... a read. Go ahead and skip all the bits about his chosen associations and focus on just the science parts if you would like. I think you'll find that quite a lot of research has been done on cancer metabolism, and that Seyfried's work can at best be considered incomplete.

        If you find the blog's arguments about the lack of clinical trial data to be circular (I guess they kind of are), I don't have any answers for you now, but I suggest you keep an eye out for this guy: https://clinicaltrials.gov/ct2/show/study/NCT01754350

        This is one of the studies in early stages that the blog links to (it's been a few years). They just wrapped up the study a few months ago, and hopefully there will be results posted soon. You can also search for that study number (NCT01754350) or study name (ERGO2) for papers when they do come out (again, expect at least a few more months).

      • asdff 5 years ago

        >You might be right that not everything in those fields is bad, but most is.

        Citation needed.