> Finally, if you want to simply know which Science™ you can trust, I’d recommend finding and following individuals who repeatedly demonstrate competence in statistical methods and scientific interpretation.
So like, the scientists themselves?
> If in doubt, read the study critically yourself.
I cannot believe the author manages to say this with a straight face. “Hey you average person (with maybe a college degree), go read the original academic paper yourself. Doesn’t matter that you don’t have the background, struggle with basic math (much less statistics), can’t evaluate the claims, and don’t know which questions to ask.”
The age of the polymath is long dead; we're living in The Great Endarkenment. You trust your pilot to do their job, you trust your civil engineers with the bridge you drive over, and the mechanical engineers with the controlled explosions happening in your car, but when it comes to cutting-edge scientific articles, here is where you, average Joe, will be able to know better than the experts in the field who specialized in this and do it every day.
> You trust your pilot to do their job
In this case, the "pilot" (the combined media and researcher science communication system) is deliberately steering the plane into the side of a mountain. A coin flip would do a better job. They've burned their credibility to the ground, and you're trying to repair it by invoking other professions that haven't done so.
Okay, I’ll bite. Who exactly is “they”? How have they burned their credibility to the ground? And how does reading scientific papers yourself address this issue if in your telling it was created by people who are no better than a coin flip?
Well, this study and the BBC's reporting on it (misleading title aside) aren't quite worse-than-coin-flip, but the study from my other comment [1] is: not only did they fail to adjust for birth weight, they even cut out data they didn't like [2]. So "they" varies by field, institute, and researcher.
[1] https://news.ycombinator.com/item?id=47431120
[2] https://dailycaller.com/2025/03/31/exclusive-researchers-axe... (every claim the article makes is backed up by attached FOIA'd documents, so you don't have to take the Daily Caller at their word if you don't trust them)
Okay, let’s even grant you both studies at face value based on your description.
Are two studies enough to “burn their credibility to the ground”?
Science is a process, not individual studies. Your Daily Caller article is actually a good example of this: it is a replication study that disproves the original study. This is how science is supposed to work; not by hinging on one individual paper (as influencers and cranks do) but on the sum total of the scientific literature. (Well, in this case you need fewer papers if you can prove obvious mistakes or misconduct.)
The process cannot guarantee that every single paper is True. But, if followed, it guarantees that in time it will self correct.
> Are two studies enough to “burn their credibility to the ground”?
You're right, I've been arguing lazily. To fix that: it's more than two studies. The authors of the study in [1] submitted test studies to different peer-review boards. The methodology was identical; the only variable was whether the purported findings went for or against the liberal worldview (for example, one version found evidence of discrimination against minority groups, another found evidence of "reverse discrimination" against straight white males). Despite equal methodological strength, the studies that went against the liberal worldview were criticized and rejected, while those that went with it were not. [1]
This then shows up as e.g. publication bias in favor of a hypothesis [2].
Now you may read the abstract of the study from [1] and think "95% vs 50% approval rate depending on hypothesis? Well, that's not great, but if hypothesis A is true and B is false, then even if A only gets half as many studies, it should easily prevail as true."
There are two problems with that. One: A won't get half as many studies, but far fewer. Peer-review boards are assembled from, well, peers; on average, they share the biases of the researchers they review. If reviewers will reject studies they dislike and lie about why they rejected them [1, abstract of the study], what do you think the odds are that they would propose such studies in the first place (and then publish them despite getting results they dislike)? It's not one filter but two, and publication bias will drown out the true signal.
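To make the two-filter arithmetic concrete, here's a toy calculation. The 95% vs 50% approval rates echo the submission experiment cited in [1]; the 90/10 proposal split is purely an assumed number for illustration:

```python
# Hypothetical two-filter model: hypothesis A is true but disfavored,
# hypothesis B is false but favored. The proposal split (filter 1) is an
# assumption; the approval rates (filter 2) mirror the cited experiment.
proposed = {"favored": 0.90, "disfavored": 0.10}   # filter 1: what gets proposed
approved = {"favored": 0.95, "disfavored": 0.50}   # filter 2: what survives review

# Share of the literature each side ends up with after both filters
published = {k: proposed[k] * approved[k] for k in proposed}
ratio = published["favored"] / published["disfavored"]
print(published, round(ratio, 1))  # favored studies outnumber disfavored ~17:1
```

Under these assumed numbers, the favored hypothesis ends up with roughly seventeen published studies for every one on the other side, which is the sense in which two stacked filters can drown out a true signal.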
Two: studies that show the right results get promoted more, and if any later show up to debunk them, they mostly get ignored. The debunked study was promoted by CNN [3], USA Today [4], NPR [5], the Washington Post [6], and less-trafficked sites like the World Economic Forum [7] and ScienceNews [8]. It was sent to the Supreme Court in an amicus brief by the American Medical Association and cited in Justice Jackson's dissent [9].
The debunking was, predictably, promoted mostly by right-wing sites, plus the WSJ and the Economist (and The Hill, which published both the initial study and its debunking). I'm not holding my breath for the American Medical Association to send an amicus brief arguing for repealing affirmative action, citing the debunking study.
So if you do what the "Never Trust the Science" article criticizes and follow some respectable publication like NPR, or uncritically parrot "the consensus", you'll just be reproducing the biases of NPR, which cherry-picks which studies to feature, or of the field's peers, the same peers who lied about why they rejected studies that went against the consensus.
That said, in an ideal world (one that still had flawed scientists), the average person (let alone the 50% below average) should probably follow your advice. They won't be able to distinguish quackery and gibberish from real arguments, and they lack the self-honesty to disregard everything once they can no longer follow the arguments.
In practice, the friend-enemy distinction wins out. People see news like "Diversity Statements Required for One-Fifth of Academic Jobs" [10] and "I Cited Their Study, So They Disavowed It" [11] and "White supremacy is a lethal public health issue that predates and contributes to COVID-19" [12], and immediately realize why studies only ever find one type of villain. So when you say
"The process cannot guarantee that every single paper is True. But, if followed, it guarantees that in time it will self correct."
They will think
"The market can stay irrational longer than I can stay solvent."
Science has entered the culture wars [13], and until people believe it's neutral again, trust will be thin.
[1] https://theweek.com/articles/441474/how-academias-liberal-bi... (the study the article references: https://psycnet.apa.org/record/1986-12806-001)
[2] Revisiting the Income Inequality-Crime Puzzle, https://bura.brunel.ac.uk/bitstream/2438/27988/1/FullText.pd...
[3] https://www.cnn.com/2020/08/18/health/black-babies-mortality...
[4] https://www.usatoday.com/story/news/health/2020/08/19/black-...
[5] https://www.npr.org/2020/09/16/913718630/a-key-to-black-infa...
[6] https://www.washingtonpost.com/health/black-baby-death-rate-...
[7] https://www.weforum.org/stories/2020/10/black-babies-in-the-...
[8] https://www.sciencenews.org/article/black-newborn-baby-survi...
[9] https://unherd.com/newsroom/why-did-it-take-four-years-to-de...
[10] https://freebeacon.com/campus/study-diversity-statements-req...
[11] https://manhattan.institute/article/i-cited-their-study-so-t...
[12] https://www.npr.org/sections/coronavirus-live-updates/2020/0...
[13] Or in some cases, like climate science, oil lobbyists have spread the (as far as I can tell false) belief that it's in the culture wars, in order to undermine trust.
> For example, failing to control for obvious confounders in observational data is likely to produce biased results. If we like the direction of this bias, we can do less adjustment for confounders.
For example, the study showing that having a white doctor increased mortality of black babies didn't correct for birth weight - once that was done, the effect disappeared (and media interest waned): https://www.wsj.com/opinion/justice-jacksons-incredible-stat...
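The birth-weight point can be sketched with a toy simulation (all numbers invented, not taken from the actual study): a confounder that drives both mortality and doctor assignment produces a large naive gap that vanishes once you stratify on the confounder:

```python
import numpy as np

# Toy model of the confounding pattern described above: low birth weight
# both raises mortality and makes assignment to doctor group B more likely,
# while the doctor has no effect at all. All rates are made up.
rng = np.random.default_rng(0)
n = 200_000

low_weight = rng.random(n) < 0.3
doctor_b = rng.random(n) < np.where(low_weight, 0.7, 0.3)   # assignment depends on weight
died = rng.random(n) < np.where(low_weight, 0.02, 0.002)    # mortality depends only on weight

# Naive comparison: doctor B looks dangerous
naive_b, naive_a = died[doctor_b].mean(), died[~doctor_b].mean()

# Adjusted comparison: within each birth-weight stratum the gap vanishes
gaps = [
    died[(low_weight == w) & doctor_b].mean() - died[(low_weight == w) & ~doctor_b].mean()
    for w in (True, False)
]
```

In this simulation the naive mortality rate for doctor B is more than double doctor A's, even though the doctor variable does literally nothing; stratifying by birth weight makes the gap disappear, which is the shape of the correction the WSJ piece describes.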
And if a result is surprising to you, you should trust it less and look into it more deeply.
And if you do so, one of two good outcomes will hopefully happen:
1. You find the result is bogus
2. You learn something new and update your internal model of the world.
This ignores a huge blind spot that humans have: confirmation bias.
There is no accurate heuristic that works as a shortcut here.
Something people often don't consider is the limited resources for doing science - time, money, etc.
The positive side to "bias" is intuition. This is where a bias ("I'm pretty sure it'll turn out to work like XYZ, so I'll do this experiment next, rather than getting bogged down in some other area.") massively shortcuts the amount of resources required to come to a scientific conclusion.
During my PhD, I made many such shortcuts, following my nose. If I didn't, and tried to do everything objectively, I'd still be optimising buffers, and other such things.
How about a big vetted database like arxiv of all hypotheses, all proposed experiments to test them, and all experimental results?
Vetted by who?
To be clear I'd be very much in favor of scientific studies and their data having to be publicly available.
But on any controversial area, which is most of the areas anyone cares about, there will be 2+ sides of the issue and any vetting body will be compromised to some degree for one of those sides.
That's the rub, isn't it... who watches the watchmen? In times past, journalism at least had the veil of impartiality, but modern journalism is far more an exercise in editorial activism than a matter of simply answering the six W's of a given story.
I'm not sure it was ever actually much better... and it may just be my pessimistic Gen X nature. But I've personally seen too many misrepresentations of too many studies where the body and available data simply don't match the headlines, or where the numbers are presented deceptively, with effects far less significant than represented.
200% the risk of X... when in sample A of 10,000, 1 had X, and in sample B of the same size, 2 had X. While it's a real relative statistic, the absolute values are all but meaningless in context.
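A minimal sketch of that relative-vs-absolute arithmetic, using the sample sizes from the comment above:

```python
# "200% the risk" on top of a tiny base rate: the relative risk doubles
# while the absolute risk moves by one case in ten thousand.
n = 10_000
cases_a, cases_b = 1, 2

relative_risk = (cases_b / n) / (cases_a / n)   # 2.0, i.e. "200% the risk!"
absolute_increase = cases_b / n - cases_a / n   # 0.0001, i.e. 1 in 10,000
```

Both numbers are correct; the headline just picks the one that sounds dramatic.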
Yellow journalism existed generations before you and me. The institution was always sullied by the worst, and it has always contained some of the most dogged, pestering fact-finders.
It’s not even clear that journalists of the 1960s-1980s were as impartial or brutally honest as we remember. That is most likely a halo effect from a few highly trusted, very visible personalities (e.g., Walter Cronkite), but even they were slow to realize (by about a decade) how much of a morass the Vietnam War was.
It was always about independence, not impartiality. Instead of having a big boss on the top issuing correct opinions, reputable news outlets gave their reporters a lot of freedom in their work. Each reporter had their own biases, and the variation within each outlet was usually greater than the variation between outlets.
Ya, I want it even bigger. All commercial claims should be accessible for your own determination. Fastest, biggest, longest, widest, shortest, most liked, doctor recommended, any empirical claim must have the data used and calculations to make the claim available for examination. Data storage is so cheap now. I don't see it as a dent to anyone's profit.
This would just be impractical. Nothing would ever get done. Too many potential experiments.
arXiv is an open-access preprint server that checks for spam. It is very much not "vetted" lol
This assumes that everyone is currently sufficiently informed to make the same expert observations about how methodology affects bias. That is flatly untrue for the vast majority of the population.
And nobody has enough time or desire (or likely money to subscribe to the journals) to read the details of the papers and grok the nuances. Humans think in simple narratives for a reason.
We shouldn’t have blind faith in science, but we also shouldn’t have to go back to first principles and redo our own version of every experiment. The replication crisis is a thing, and we know about it. P-value hacking is a thing we know about.
The problem described in the article is that we shouldn’t believe headlines or short summaries created by writers who aren’t incentivized to add the nuance. And nobody should believe a headline anyway - in addition to necessarily being lossy, for any for profit organization they are likely written by someone other than the writer and probably A/B tested for clicks.
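As a sketch of why the p-hacking point above matters, here's a toy multiple-comparisons run: 200 two-sample tests on pure noise, using a normal-approximation z-test (my choice of test, not anything from the thread):

```python
import math
import numpy as np

# 200 comparisons where both groups come from the SAME distribution, so any
# "significant" result is a false positive by construction.
rng = np.random.default_rng(42)

def two_sided_p(x, y):
    # Normal-approximation two-sample test (reasonable at n=50 per group).
    z = (x.mean() - y.mean()) / math.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    return math.erfc(abs(z) / math.sqrt(2))

hits = sum(
    two_sided_p(rng.normal(size=50), rng.normal(size=50)) < 0.05
    for _ in range(200)
)
# Expect roughly 200 * 0.05 = 10 "discoveries" despite there being no effect.
```

Run enough analyses and "significant" findings appear for free; report only those, and you have a p-hacked paper.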
You haven't even mentioned how LLMs can absolutely mimic any opinion anywhere at any time!
Trust in Science is about the system that produces it, not any single paper, but that trust is being eroded by the needle-in-a-haystack problem.
So even if you think going to original sources makes you safe, think again.