Wonder if this is what happened to fluoroquinolones. They are likely mitochondrial toxins and some small percentage of patients get permanently harmed by them and sometimes in a severe way. Older versions were taken off the market and the current versions each have multiple black box warnings. Sadly, it seems many doctors aren’t even aware of this.
More generally, whenever you read the percentage of patients noted as having a particular side effect from a medicine, the real percentage is much higher.
> Are you suggesting the study operators are tampering with numbers before publishing?
No, but did you not read the posted article? First, trials don't select participants unbiasedly. Second, many trials are not long enough for the side effects to manifest. Third, I have enough real-world experience.
Real world experience doesn't count on HN health articles. If it wasn't documented by a researcher paid via funding from his industry leaders, or a government official trying to fast track his hiring in the public sector for $800k a year, it basically didn't happen.
And this just reinforces the beliefs of those who are skeptical of medical research. "Trust the science" is all well and good in theory, except when the scientists are telling you a selective, cherry-picked story.
It's understandable that unusual patients are seen as confounding variables in any study, especially those with small numbers of patients. Though I haven't read beyond the abstract, it also makes sense that larger studies (phase 3 or 4) should not exclude such patients, but perhaps could report results in more than one way -- including only those with the primary malady as well as those with common confounding conditions.
Introducing too many secondary conditions in any trial is an invitation for the drug to fail safety and/or efficacy due to increased demands on both. And as we all know, a huge fraction of drugs fail in phase 3 already. Raising the bar further, without great care, will serve neither patients nor business.
Having been an "investigator" in a few phase 3 and 4 trials, I can confirm that all actions involving subjects must strictly follow the protocols governing conduct of the trial. It is extremely intricate and labor-intensive work. But the smallest violations of the rules can invalidate part of, or even the entire, trial.
Most trials have long lists of excluded conditions. As you say, one reason is reducing variability among subjects so effects of the treatment can be determined.
This is especially true when the effects of a new treatment are subtle but still quite important. If subjects with serious comorbidities are included, treatment effects can be obscured by those conditions. For example, if a subject is hospitalized, was that because of the treatment, another condition, or some interaction between the condition and the treatment?
Initial phase 3 studies necessarily have to strive for as "pure" a study population as possible. Later phase 3/4 studies could in principle cautiously add more severe cases and those with specific comorbidities. However there's a sharp limit to how many variations can be systematically studied due to intrinsic cost and complexity.
The reality is that the burden of sorting out the use of treatments in real-world patients falls to clinicians. It's worth noting that the level of support for clinicians reporting their observations has, if anything, declined over the decades. In other words, valuable information is lost in the increasingly bureaucratic and compartmentalized healthcare systems that now dominate the delivery of services.
This could at least be done after release, but I don't think the incentives are there, and collecting the data is incredibly difficult.
It is done; in many countries there are legal requirements to report adverse events whenever they are observed in use.
https://en.wikipedia.org/wiki/Pharmacovigilance#Adverse_even...
That data goes into VAERS and FAERS. Reports are submitted through programs like MedWatch, and the FAERS data is publicly queryable.
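For anyone who wants to poke at it: the FAERS data is exposed programmatically through the openFDA API. A minimal sketch of building a query (the drug name is just an example; field names follow openFDA's drug/event reference):

```python
from urllib.parse import urlencode

# Build an openFDA query that counts the adverse-reaction terms most often
# reported for a given drug in FAERS.
BASE = "https://api.fda.gov/drug/event.json"
params = {
    "search": 'patient.drug.medicinalproduct:"ibuprofen"',  # example drug
    "count": "patient.reaction.reactionmeddrapt.exact",     # tally reaction terms
    "limit": 10,
}
url = f"{BASE}?{urlencode(params)}"
print(url)
# Fetching this URL (with urllib.request, requests, or just a browser) returns
# JSON whose "results" list pairs each reaction term with its report count.
```

Keep in mind FAERS is spontaneous reporting, so counts reflect reporting behavior as much as true incidence.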
It seems like the current situation is doing a disservice to "unusual" patients (who may actually make up the majority of patients).
How do you figure? The absolute SAE rate increases by 2 percentage points; nothing changes about the relative SAE rate. Does it change anything about your choice between different health technologies? No.
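To put numbers on the absolute-vs-relative distinction: the abstract's "+2 percentage points" and "250% increase" together imply a baseline monthly SAE hospitalization rate of about 0.8% (the baseline isn't stated in the abstract; it's inferred here):

```python
# The abstract gives the same effect two ways; back out the implied baseline.
absolute_increase = 0.02   # +2 percentage points per month
relative_increase = 2.50   # "a 250% increase"

baseline = absolute_increase / relative_increase   # 0.8% per month
treated = baseline + absolute_increase             # 2.8% per month
ratio = treated / baseline                         # 3.5x the baseline risk

print(f"baseline: {baseline:.1%}/month, on drug: {treated:.1%}/month, ratio: {ratio:.1f}x")
```

Two treatments measured against the same baseline keep the same relative ordering even if the absolute level was understated, which is the point being made above.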
A relative of mine was in the late stages of cancer and was not able to find ANY trials willing to accept them. The worst of those trials was out of Baltimore (MD, USA): they ghosted my relative after initial in-person consults that required my relative to drive out of state. Despite the patient's repeated and dogged outreach to the trial authors after that point, there was never an explicit "no" that would let the patient move on, just radio silence, and that felt cruel to me.
I've personally been excluded from several depression clinical trials for having suicidal ideations, which makes me wonder just what kind of "depression" they are testing drugs on.
There are a few broad reasons this can happen. One possibility is that they want to know if the treatment causes suicidal ideation, and the effect is often small enough that people more likely to report those symptoms independent of the treatment confound the result. Another is that they don't want to have to deal with the safety protocols that come with screening in participants who have reported any history of suicidality. Another still is that higher likelihood of an active mental health crisis means that it's harder for study coordinators to determine if participants have provided informed consent.
Sometimes studies are specifically for treatment-resistant depression, and I expect those studies are more likely to screen in participants with a history of suicidality, so I would recommend keeping an eye out for those if you would like to participate in clinical trials.
Be strong, brother, there is hope. Antidepressants can be really hard to administer; trials exclude particularly vulnerable people because they need to be protected the most.
Because it would be unbelievably irresponsible to test drugs like that on someone experiencing suicidal ideation. Like, they-should-be-put-in-prison irresponsible.
This is only because society doesn't bear the cost of the natural outcome. If someone with suicidal ideation is excluded from trials on moral grounds and ultimately satisfies those internal cravings, nobody is at fault.
> This is only because society doesn't bear the cost of the natural outcome
Society doesn't bear the cost of someone killing themselves? That can't be what this means, but it's hard for me to read it a different way.
> If someone with suicidal ideation is excluded from trials on moral grounds and ultimately satisfies those internal cravings, nobody is at fault.
If someone with suicidal ideation is included in trials where drugs may INCREASE those ideations and they kill themselves, then the trial is at fault. You're not actually contending that they should be included anyway because they'll probably kill themselves anyway?
The type of depression that makes the sufferer lie about not having suicidal ideations
Or the type of depression that doesn't lead to suicide ideation because depression, in and of itself, is an incredibly broad term and not everyone that is depressed wants to die.
Abstract: "The FDA does not formally regulate representativeness, but if trials under-enroll vulnerable patients, the resulting evidence may understate harm from drugs. We study the relationship between trial participation and the risk of drug-induced adverse events for cancer medications using data from the Surveillance, Epidemiology, and End Results Program linked to Medicare claims. Initiating treatment with a cancer drug increases the risk of hospitalization due to serious adverse events (SAE) by 2 percentage points per month (a 250% increase). Heterogeneity in SAE treatment effects can be predicted by patient's comorbidities, frailty, and demographic characteristics. Patients at the 90th percentile of the risk distribution experience a 2.5 times greater increase in SAEs after treatment initiation compared to patients at the 10th percentile of the risk distribution yet are 4 times less likely to enroll in trials. The predicted SAE treatment effects for the drug's target population are 15% larger than the predicted SAE treatment effects for trial enrollees, corresponding to 1 additional induced SAE hospitalization for every 25 patients per year of treatment. We formalize conditions under which regulating representativeness of SAE risk will lead to more externally valid trials, and we discuss how our results could inform regulatory requirements."
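A back-of-the-envelope check of the abstract's last headline number, assuming the monthly effect simply accumulates over a year of treatment (the paper's model is surely more careful):

```python
monthly_effect_trial = 0.02   # +2 pp/month SAE hospitalization risk (abstract)
excess_fraction = 0.15        # target population effect is "15% larger"

extra_per_month = monthly_effect_trial * excess_fraction   # 0.3 pp/month
extra_per_patient_year = extra_per_month * 12              # ~3.6 pp/year

patients_per_extra_sae = 1 / extra_per_patient_year
print(f"~1 extra SAE hospitalization per {patients_per_extra_sae:.0f} patient-years")
# ~28 patient-years under these crude assumptions, the same order of magnitude
# as the abstract's "1 for every 25 patients per year of treatment".
```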
This seems like an odd criticism.
First off, it ignores the fact that if you include frail patients you'll confound the results of the trial, so there is a good reason for the exclusions.
Second, saying "the rate of SAEs is higher than the rate of treatment effect" is a bit silly considering these are cancer trials: without treatment there is a risk of death, so most people are willing to accept SAEs in order to achieve the treatment effect.
Third, saying "the sickest patients saw the highest increase in SAEs" seems obvious? It's exactly what you'd expect.
First, ignoring frail patients means your trial isn't representative of the wider population, so it shouldn't be accepted for general use - only on people who were well-represented in the trial.
Second, you're ignoring the possibility of other treatment options. It isn't always the binary life-or-death you're making it, so SAEs do matter.
Third, a big part of trials is to discover and develop prevention methods for SAEs. Explicitly ignoring the people most likely to provide data valuable for the general population sounds like a pretty silly approach.
> First, ignoring frail patients means your trial isn't representative of the wider population, so it shouldn't be accepted for general use - only on people who were well-represented in the trial.
Sure, but including frail outliers does not automatically mean you can generalize to the whole population. People can be frail for a wide variety of reasons. Only some of those reasons will matter for a given trial. That means the predictive power varies widely depending on which subpopulation you're looking at, and you'll never be able to enroll enough of some of the subgroups without specifically targeting them.
The results in the posted paper seem valid to me, but the conclusion seems incorrect. This seems like a paper that restates some pretty universal statistical facts and then tries to use them to impose onerous regulations that can't and won't solve the problem. It will improve generalizability for a small fraction of the population, at a high cost.
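The "never enroll enough of some subgroups" problem follows from the abstract's own figures. If high-risk patients were, say, 10% of the target population (an assumed share, not from the paper) and enroll at a quarter the rate ("4 times less likely"), their share of the trial collapses:

```python
pop_share_high_risk = 0.10    # assumed share of the target population
relative_enroll_rate = 0.25   # "4 times less likely to enroll"

# Share of trial enrollees who are high-risk, given differential enrollment.
trial_share = (pop_share_high_risk * relative_enroll_rate) / (
    pop_share_high_risk * relative_enroll_rate
    + (1 - pop_share_high_risk) * 1.0
)
print(f"high-risk share of trial: {trial_share:.1%}")                     # ~2.7%
print(f"expected high-risk enrollees at n=500: {500 * trial_share:.0f}")  # ~14
```

Fourteen-odd enrollees is nowhere near enough to estimate a subgroup-specific SAE rate, which is why targeted enrollment (or a dedicated substudy) is the only realistic fix.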
> Second, you're ignoring the possibility of other treatment options. It isn't always the binary life-or-death you're making it, so SAEs do matter.
Of course they do. It's a good thing we have informed consent.
> Third, a big part of trials is to discover and develop prevention methods for SAEs. Explicitly ignoring the people most likely to provide data valuable for the general population sounds like a pretty silly approach.
If your primary claim is that data from non-frail people is not generalizable to frail people, then how can you claim that data from frail people is generalizable to non-frail people? If the trials for aspirin found that hemophiliacs should get blood clot promoting medications along with it, then should non-hemophiliacs also be taking those medications?
I'm thankful we can extract some amount of useful data from these trials without undue risk. It's always going to be a balancing act, and this article proposes putting a thumb on the scale in a way that reduces the data without even solving the problem it's aiming to address.
> Second, you're ignoring the possibility of other treatment options. It isn't always the binary life-or-death you're making it, so SAEs do matter.
A common reason for a drug (especially a cancer drug) going to trial is that other options have already failed. For example, CAR-T therapies are commonly trialed on relapsed/refractory (R/R) cohorts.
https://www.fda.gov/regulatory-information/search-fda-guidan...
> "In subjects who have early-stage disease and available therapies, the unknown benefits of first-in-human (FIH) CAR T cells may not justify the risks associated with the therapy."
But you’re stating the obvious? It’s not like physicians don’t know trials are designed this way, and for good reasons.
Frail patients confound results. A drug may work great, but you’d never know because your frail patients die for reasons unrelated to the drug.
Second is obvious as well. Doctors know there are treatment alternatives (with the same drawback to trial design).
And I already touched on your third point. The alternative to excluding frail patients is not being able to tell if the drug does anything. In many cases that means the drug isn’t approved.
Excluding frail patients has its drawbacks, but it has benefits as well. This paper acts like the benefits don’t exist.
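The masking effect described above can be quantified with a standard two-proportion sample-size formula (80% power, alpha = 0.05; all the rates here are illustrative, not from the paper). Suppose the drug cuts disease-related events from 30% to 20%, and frail enrollees add an independent 25% background event rate to both arms:

```python
from math import ceil

def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate n per arm for a two-proportion test (80% power, alpha=0.05)."""
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * var / (p1 - p2) ** 2)

# Disease-related event rates, control vs treated (illustrative):
p_control, p_treated = 0.30, 0.20
background = 0.25  # independent event rate contributed by frailty, both arms

def with_background(p, b):
    return 1 - (1 - p) * (1 - b)   # event from either cause

n_clean = n_per_arm(p_control, p_treated)
n_frail = n_per_arm(with_background(p_control, background),
                    with_background(p_treated, background))

print(f"n per arm, no background events: {n_clean}")
print(f"n per arm, frail background:     {n_frail}")
```

The same true drug effect needs more than twice the enrollment to detect once background events are mixed in, which is exactly the trade-off the exclusion criteria are making.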
As a lay person, huh, that's something you don't think about. But even if that were true, I don't see how society, patients, and their caretakers would accept high-risk patients in trials. And wasn't there news recently of the UK phasing out animal trials too?
I can't find the exact MDMA study for PTSD, but after reading its participant rejection criteria, it seemed like few could qualify.
I saw a new psychedelic treatment with ibogaine available in Mexico for $8k. It's still Schedule I in the USA, like MDMA.
It looks like there have been a few MDMA trials for PTSD, even though the FDA denied more widespread testing.
https://www.science.org/content/article/fda-rejected-mdma-as...
Tangentially related, but I was surprised to learn about the lax attitude towards placebos in trials. Classes of drugs have expected side effects, so it's common to use medications with similar effects as placebos. Last I heard, there is no requirement or expectation to document placebos used, and they are often not mentioned in publications.
> Classes of drugs have expected side effects, so it's common to use medications with similar effects as placebos.
This would be called an "active placebo" and would certainly be documented.
It's common to find controlled trials against an existing drug to demonstrate that the new drug performs better in some way, or at least is equivalent with some benefit like lower toxicity or side effects. In this case, using an active comparison against another drug makes sense.
You wouldn't see a placebo-controlled trial that used an active drug but called it placebo, though. Not only would that never get past the study review, it wouldn't even benefit the study operator because it would make their medication look worse.
In some cases, if the active drug produces a very noticeable effect (e.g. psychedelics) then study operators might try to introduce another compound that produces some effect so patients in both arms feel like they've taken something. Niacin was used in the past because it produces a flushing sensation, although it's not perfect. This is all clearly documented, though.
Those are documented, but not necessarily in the paper. You can find the info at clinicaltrials.gov. Check out this current trial of a breast cancer treatment by Merck Sharp & Dohme LLC, for example. For the control arm, they are allowing doctor's choice from a set of alternatives. Assuming the doctors are selecting control treatments to improve the chance of survival, this test compares the new treatment to "the best known treatment for this specific cancer".
https://clinicaltrials.gov/study/NCT07060807#study-plan
You were surprised to learn this because it’s not true.
But how do you even convince high-risk patients to join the trials?
Wouldn't it depend why they're high risk?
If the risk is primarily due to, or made worse by, the disease being treated, wouldn't they want to join the trial?
This covers the trials not being fully representative, but largely neglects why that is the case.
The paper defines a population "at high risk of drug-induced serious adverse events", which presumably means they're also the most likely people to be harmed or killed by the drug trial itself.
A lot of companies essentially cherry pick healthy patients and write insane inclusion/exclusion criteria to rule out anyone except for the ideal participant, which is why more and more research sites are negotiating payment up front for pre-screening and higher screenfail % reimbursement for into their study budgets.
Study design is sometimes optimized so that only the "best", most enticing participants will actually be eligible; I've seen randomization rates as low as 2% to 12%, though 50% is more typical. Some studies also have 100- to 150-day screening periods, separate limited and full screening phases, etc.
Overly restrictive inclusion/exclusion criteria targeting narrowly defined ideal populations hinder enrollment, place a large pre-screening burden on sites, and end with trial results that fail to reflect real-world demographics.
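As a back-of-the-envelope illustration of how stacked exclusion criteria drive screen-fail rates: every prevalence and threshold below is invented for the sketch, not taken from any real protocol.

```python
import random

random.seed(1)

def make_patient():
    # Assumed prevalences for common exclusion triggers (hypothetical).
    return {
        "age": random.randint(18, 90),
        "comorbidity": random.random() < 0.4,
        "on_other_meds": random.random() < 0.5,
        "abnormal_labs": random.random() < 0.3,
    }

def eligible(p):
    # Each clause mirrors a typical exclusion criterion; cutoffs are invented.
    return (18 <= p["age"] <= 65
            and not p["comorbidity"]
            and not p["on_other_meds"]
            and not p["abnormal_labs"])

cohort = [make_patient() for _ in range(10_000)]
passed = sum(eligible(p) for p in cohort)
print(f"screened: {len(cohort)}, eligible: {passed} "
      f"({passed / len(cohort):.0%}); screen-fail rate: "
      f"{1 - passed / len(cohort):.0%}")
```

Even though each criterion individually excludes only a modest fraction, multiplying the pass rates together leaves a small minority of screened patients eligible, which is exactly the burden sites end up absorbing.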
Also, if they're known to be at such a high risk of adverse events, would they even be given the treatments, trial or not?
This problem is actually even worse than the article identifies, because broad definitions of what counts as a "risk" result in broad exclusions.
The most pernicious of these problems is that women--yes, more than half the earth's population--are considered a high-risk group because researchers fear menstrual cycles will affect test results. Until 1993 policy changes, excluding women from trials was the norm. Many trials have not been re-done to include women, and the policies don't cover animal trials, so many rat studies, for example, still do not include female rats--a practice which makes later human trials more dangerous for (human) female participants.
[1] Sort of one citation: https://www.aamc.org/news/why-we-know-so-little-about-women-... There's more than this--I wrote a paper about this in college, but I don't have access to jstor now, so I'm not sure I could find the citations any more.
The exclusion of women from clinical trials is one of those things that makes me really angry. There are many women in my life who've been adversely affected by various medications and essentially palmed off about it, made to feel like they're making it up when there's obviously a problem at hand.
In my opinion it will be one of those things future historians of medicine judge our time harshly for, and rightly so.
Wonder if this is what happened to fluoroquinolones. They are likely mitochondrial toxins and some small percentage of patients get permanently harmed by them and sometimes in a severe way. Older versions were taken off the market and the current versions each have multiple black box warnings. Sadly, it seems many doctors aren’t even aware of this.
I would imagine it's to make life easier for the statisticians downstream.
This was a plot in an early season of ER.
More generally, whenever you read the percentage of patients that are noted as having a particular side effect from a medicine, the real percentage is much higher.
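A toy model of that claim, with entirely hypothetical numbers: if only some fraction of affected patients ever report the effect, the observed rate lands near (true rate × reporting probability), i.e. systematically below the true rate.

```python
import random

random.seed(0)

TRUE_RATE = 0.20    # hypothetical true side-effect rate
REPORT_PROB = 0.5   # hypothetical chance an affected patient reports it
N = 100_000

reported = 0
for _ in range(N):
    affected = random.random() < TRUE_RATE
    if affected and random.random() < REPORT_PROB:
        reported += 1

observed = reported / N
# observed lands near TRUE_RATE * REPORT_PROB, i.e. half the true rate here
print(f"true rate {TRUE_RATE:.0%}, observed (reported) rate {observed:.1%}")
```

This is only the self-reporting mechanism; biased enrollment and short trial durations, discussed elsewhere in this thread, would push observed rates down further.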
> whenever you read the percentage of patients that are noted as having a particular side effect from a medicine, the real percentage is much higher.
The patients self-report their own side effects, then the numbers go into the paper.
Are you suggesting the study operators are tampering with numbers before publishing?
> Are you suggesting the study operators are tampering with numbers before publishing?
No, but did you not read the posted article? Firstly, trials don't select participants unbiasedly. Secondly, many trials are not long enough for the side effects to manifest. Thirdly, I have enough real-world experience.
Real world experience doesn't count on HN health articles. If it wasn't documented by a researcher paid via funding from his industry leaders, or a government official trying to fast track his hiring in the public sector for $800k a year, it basically didn't happen.
This is why I encourage the reporting of any and all side-effects of any treatment to the FDA. Information withheld cannot be collected.
https://www.fda.gov/safety/medwatch-fda-safety-information-a...
And this just reinforces the beliefs of those who are skeptical of medical research. "Trust the science" is all well and good in theory, except when the scientists are telling you a selective, cherry-picked story.
Strange how that line of thinking always winds up in places like "vaccines are bad" or "ivermectin cures COVID".
No relation (except in your winding mind).
It correctly observes that experts are not always right, and often incorrectly responds by turning to loud, persuasive quackery.
See also: women.