Revenue is an incomplete signal of the complexity and waste. It’s just a signal of the money flowing through. A “single payer” system would probably also show a huge revenue number even if the profit was <=0. There’s just a lot of money and a lot of people who are patients.
I don’t disagree that the system requires change and is extremely complex, however.
The real problem is that it's nearly impossible to "scale" healthcare and keep it personalized, and people want personalized healthcare - because that's shown to be more effective healthcare. Doctors can only see a limited number of patients a day, and they need to be paid some compensation commensurate with their skills and efforts. That alone makes it hard for everyone "healthy" to see a doctor often enough and for long enough to get deeply personal care. Most people realistically can pay out of pocket for preventative care. $100-200/yr for an American isn't crazy. Even most drugs are super affordable out of pocket if the profit margins are kept low (which is starting to become available, bits at a time).
The real complexity, of course, is the long tail where a few people get cancer, are in car accidents, or develop other serious conditions that swamp the costs of everything else.
I don't think $100-200 per year for preventative care is enough. I reckon $1000-$10,000 per year, depending on age, is more accurate. You should spend at least $500 per year on nutritional supplements like Vitamin D. Switzerland has a better medical system that's cheaper than our system, but it's still expensive.
Indeed, it may be the case that the middlemen aren't individually all that profitable, but if the money passes through several stages and each one skims off a few percent, you end up with the present situation where health care costs twice as much as it does in any civilized country.
> Gray, Lipner, McDonald, and Vandergrift reported that they are employees of ABIM [American Board of Internal Medicine]. Landon reported receiving consulting fees from ABIM for ongoing work during the conduct of the study.
A study that shows the board test is effective, sponsored by the board?
I might be reassured by more detailed statistics about the analysis. Even for top 25% vs bottom 25% - how much actual variation in score are we talking about? What is the probability that someone scoring in the top 25% actually belongs in the top 25% rather than the bottom 25%? We imagine a big gap but that's not necessarily true. Consider exam scores of 85, 90, 90, 95…
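To make that concrete, here's a tiny sketch (made-up scores, not the study's data) of how small the top-vs-bottom-quartile gap can be when scores cluster tightly:

    import numpy as np

    # Hypothetical, tightly clustered exam scores -- not data from the study.
    scores = np.array([85, 88, 89, 90, 90, 90, 91, 92, 93, 95])

    q1, q3 = np.percentile(scores, [25, 75])
    bottom = scores[scores <= q1]   # "bottom 25%" scorers
    top = scores[scores >= q3]      # "top 25%" scorers

    print(f"bottom-quartile mean: {bottom.mean():.1f}")
    print(f"top-quartile mean:    {top.mean():.1f}")
    print(f"gap: {top.mean() - bottom.mean():.1f} points")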
With the absolute absurdity of the residency process, and the focus entirely on new doctors just after that residency, I have to wonder how much of this just corresponds to whoever's lucky enough to be the kind of high-powered mutant who can survive multiple years of 80- to 100-hour-week schedules designed by a man who was high on cocaine and morphine 24/7 (seriously, look it up, it's true). There are going to be a lot of people who need an extended sabbatical to recover from that before they'll be effective at anything at all, which makes any kind of baseline of test scores really suspect to me.
Does the difference matter in this context, though? Medicine isn't like other professions where it's no big deal to have some fraction of the workforce be bad at their jobs. I'm not so status-quo-biased that I'd support 100 hour residencies, but I'm skeptical of reform proposals that focus on doctors' working conditions rather than patient outcomes. If some filtering process leads to better patient outcomes, I think we should retain it, even if it's quite stressful for the doctors who have to go through it.
Fair point. There's some data showing patient outcomes are worse when managed by overworked residents-in-training, but I think you're referring to outcomes post-residency. i.e. Physicians should squeeze as much training as possible into the allotted years. This is reasonable, especially for surgical specialties where procedural reps are a commodity for trainees.
I'd be more open to this line of reasoning if physicians' salaries had kept pace with inflation over the last 30 years and if we hadn't tacitly accepted a much, much lower standard of training in the form of DNPs, CRNAs and PAs who are now practicing independently in a lot of regions. You can't demand that people make extraordinary sacrifices without extraordinary compensation.
For contrast, most European countries have a much longer post-residency training process that is more humane. Caveat being that students enter medical school directly from high school and don't have student loans.
It's also worth pointing out that in the US a LOT of those 100 hours are not spent in direct patient care. They're spent doing chores ('scut') that are not directly tied to patient care. Think: Calling insurance companies for prior authorization for your supervisor or filling out FMLA paperwork for one of your supervisors' patients. As a resident you don't have the ability to say "no" to these tasks.
> i.e. Physicians should squeeze as much training as possible into the allotted years. This is reasonable, especially for surgical specialties where procedural reps are a commodity for trainees.
It's mixed, though. We don't know how much "squeezing as much training" helps or hinders future performance. We do know that sleep debt hurts retention of new knowledge and skills.
So I'm not positive whether "50% more training, but with not enough sleep during most of the interval" will result in better outcomes.
> I'd be more open to this line of reasoning if physicians' salaries had kept pace with inflation over the last 30 years
Doctors in the US are artificially scarce and artificially expensive compared to the rest of the world. The artificial scarcity of residencies also contributes to the unusually harsh residency work conditions.
Doctors in the United States are paid more than doctors in Norway and Switzerland even though those countries are richer and our doctors aren't better.
Your comment sounds reasonable, but it doesn't allow for nuance.
If a hellish residency improves patient outcomes by 0.1%, at the expense of every single resident suffering twice as much as they need to (and likely leading to some stimulant addictions and deaths among the resident/doctor population), that's not a fair tradeoff.
Medical workers don't exist solely to sacrifice themselves for others; they are human too, and their needs should be weighed as heavily as everyone else's.
As it so happens I think some of the strain of medical residency is related to supply shortages in the health care industry. If it's not crystal clear that working 80+ hours per week is necessary to significantly improve patient outcomes, and it is clear that working 80+ hours per week makes a lot of people choose other careers (limiting supply artificially), then reform here is imperative.
Oh, I'm not saying it does. The person above seemed to be suggesting that we should focus on figuring out the residency conditions that lead to the best patient outcomes rather than on improving the conditions for residents, which suggests they believe worse conditions for residents may be better for patients.
Just to point out the obvious, people doing 80 hrs/week for 2 years (lower end of residency term I believe) are going to have twice as much 'experience' as people doing 40hrs/week for 2 years.
I suspect most of us here know more hours worked doesn't directly correlate with more retention of information and best practices, but that's the thinking.
I'm arguing that even if 80 hrs/week residencies were the optimal amount of pressure to turn our fledgling residents into battle-hardened physicians, if you can get 99% of the effect with 40 hrs/week, maybe do that instead. And again, I'm not even suggesting this is actually the case.
The idea is that the stress and sleep deprivation are not sources of permanent impairment (even though they are), but rather a filter that selects the strongest candidates.
I don't necessarily think the relationship is "worse residency conditions predicts higher board exam scores"? It could be that residents with more time to study or whatever score higher. It could be examinees with scores close to the threshold are accounting for the association. Or maybe it is resiliency. I have no idea.
My general impression is that the evidence overall is really not supportive of harsher residencies in terms of patient outcomes. I also think that rigor does not have to mean masochism or hubris; there seems to be this assumption that any change to residencies would mean dumbing it down or making it easier, as opposed to improving things overall. I'm also a little skeptical of minor tweaks to residency that might have happened somewhere now being representative of a more wholesale restructuring.
The often unacknowledged factor in the background is that hospitals and residency programs are getting free labor from workers who have no realistic way to repudiate their situation. Hospitals are getting physicians whose salaries are paid for by the federal government, and those physicians are essentially unfree to move if they're unhappy. So of course there's going to be an attempt to milk them for everything. It gets whitewashed as "selflessness" and physicians are encouraged to boast about it or something, instead of calling it out as exploitation. No physician wants to make that claim, for a whole host of reasons, even if it is true.
Imagine what would happen if hospitals had to bear the costs of residency training completely, like just about any other healthcare profession, and residents were able to move freely like most employees.
I get despondent about so much in US healthcare. There's so much focus on invoice costs per se, and payment by insurers, and not enough on monopolies in service delivery, and problems with educational structures. Any attempt to address these issues is met with resistance by various groups with conflicts of interest, who aren't called out on these conflicts of interest.
Another thing about residencies constantly on my mind from other settings (institutional tracking hours in the moment versus recalled hours later) as well as personal experiences with residency in the past is that people are notoriously bad about reporting past work hours and conditions, and tend to exaggerate. I'm not saying that anyone in particular is necessarily being dishonest in describing their residency experience, but I suspect there has been drift over time in conditions that reflects a kind of biased memory of things on the part of residency directors. "I worked 120 hours a week" when that wasn't actually the case, or is distorted, then becomes residency policy for the next generation.
Sometimes I feel like the logical conclusion, given the way these discussions sometimes go, is that the only person legally able to practice would be someone with an MD who has completed a residency working 140 hours a week for 6 years, with perfect board exam scores. It just doesn't add up.
> he was able to hide his addiction under a veil of eccentricity and a pyramid of residents
Which means "created an environment to allow himself to be high at work" to me. It's not impossible that he held it off at home, but I don't see why he would.
Also, he's clearly Dr. House; Ctrl-F "Leaving much"
Edit: Well, that's embarrassing. I hadn't realized that the link is to a new 2024 study on IM board scores and patient outcomes. My post is in regards to a 2023 study on USMLE scores and patient outcomes that was pretty widely discussed.
It's 45 minutes so I don't expect people to watch it, but he makes several important points, including:
- This study was performed by USMLE insiders, the only ones with access to this private data. USMLE does not share this data publicly so it's impossible to verify.
- As the USMLE makes millions of dollars from these exams, they have a clear conflict of interest.
- The differences in patient outcome are AT BEST of marginal clinical significance, which the authors of the study even state in the paper.
I did not read the study, but an obvious confounding factor is that doctors with better board scores are hired into better hospitals with better patient populations, and thus have better outcomes.
> The researchers compared outcomes for patients within the same hospitals who were cared for by doctors with different exam scores. This allowed the researchers to eliminate, or at least minimize, the effect of differences in patient populations, hospital resources, and other variations that might influence the odds of patient death or readmission, independent of a doctor’s performance.
This is also pretty much the easiest thing to factor out through mixed effects modeling (among other methods if required). But your statement that higher scoring physicians go to places with healthier patient populations is not correct across all disciplines. Often it can be the opposite: the best physicians go to the major hospitals (usually but not always university affiliated) located in major population centers that draw in the sickest/worst/rarest cases from the surrounding geography.
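For what it's worth, here's a minimal sketch of the within-hospital adjustment idea on synthetic data (the column names, effect sizes, and fixed-effects logit are my assumptions for illustration, not the study's actual model):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n, n_hosp = 10_000, 20

    # Synthetic data: each hospital gets its own baseline mortality (log-odds),
    # and a standardized exam score has a small protective effect on top of it.
    hospital = rng.integers(0, n_hosp, n)
    hosp_baseline = rng.normal(-3.0, 0.5, n_hosp)
    exam_score = rng.normal(0.0, 1.0, n)
    logit_p = hosp_baseline[hospital] - 0.15 * exam_score
    died_7d = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

    df = pd.DataFrame({"died_7d": died_7d,
                       "exam_score": exam_score,
                       "hospital": hospital})

    # Hospital fixed effects absorb between-hospital differences (case mix,
    # resources), so the exam_score coefficient reflects within-hospital
    # comparisons only; a random-intercept "mixed effects" model is the other
    # common way to express the same idea.
    fit = smf.logit("died_7d ~ exam_score + C(hospital)", data=df).fit(disp=0)
    print(fit.params["exam_score"])   # should land roughly around -0.15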
'Board exam performance was powerfully linked to patient risk of dying or hospital readmission. For example, there was an 8 percent reduction in the odds of dying within seven days of hospitalization in patients of physicians who scored in the top 25 percent on the exam, compared to the patients of physicians who scored in the bottom 25 percent on the exam, which was still a passing grade.'
Did they control for hospital quality? I figure the best-credentialed doctors go to the best hospitals, where patients receive a lot of other care aside from the MD.
>The researchers compared outcomes for patients within the same hospitals who were cared for by doctors with different exam scores. This allowed the researchers to eliminate, or at least minimize, the effect of differences in patient populations, hospital resources, and other variations that might influence the odds of patient death or readmission, independent of a doctor’s performance.
Did they also control across types of medicine? If the higher-scoring doctors go into types of care which are more competitive, could those practices have lower patient mortality within 7 days?
For example, maybe burn unit care is high-mortality and low-barrier, compared to sleep medicine which is low-mortality and high-barrier (I don't know how accurate this is, just providing some hypotheticals for clarity)
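Separately, for a sense of scale on the headline result quoted above (an 8 percent reduction in the odds of dying within seven days), here's a rough conversion from relative odds to absolute risk; the baseline 7-day mortality rates below are assumptions for illustration, not figures from the study:

    # Convert an 8% reduction in the odds of death into an absolute risk difference.
    def risk_after_odds_reduction(baseline_risk, odds_reduction=0.08):
        odds = baseline_risk / (1.0 - baseline_risk)
        new_odds = odds * (1.0 - odds_reduction)
        return new_odds / (1.0 + new_odds)

    for baseline in (0.01, 0.03, 0.05):   # assumed baseline 7-day mortality rates
        new_risk = risk_after_odds_reduction(baseline)
        diff = baseline - new_risk
        print(f"baseline {baseline:.0%}: new risk {new_risk:.3%}, "
              f"absolute reduction {diff:.3%} (~1 per {1 / diff:,.0f} patients)")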
We (in the US) have spent years deprioritizing standardized testing for college admissions on the (public) justification that they don't reflect potential for success.
Likewise, there's been an element of testing = racial discrimination in all sorts of fields, such as:
It isn't exactly news that doctors with better test scores are better doctors, but this is additional evidence. The article doesn't touch on race, but very deliberately. To anyone on the inside, the silence is deafening.
In the U.S., med schools have been matriculating many unqualified "underrepresented minority" (black, hispanic, native American, Hawaiian) medical students for a long time. This is unfair to patients and doctors, especially competent brown doctors, because it is now the case that you get a very strong signal about how good a doctor is simply by the color of his or her skin. Which is messed up.
AAMC has the data (https://www.aamc.org/data-reports/students-residents/data/fa... , table A-18). This is after the 2023 Supreme Court decision, so the spreads are a little wider in e.g. 2022 data. MCAT scores range from a minimum of 472 to a max of 528, which is stupid and a deliberate tactic to make the differences between groups seem small. Subtracting 472 from each average score, 2024 average MCAT scores look like this for matriculants:
41.9: Asian
40.2: White
36.9: Hawaiian
34.4: Black
33.9: Hispanic
31.3: American Indian
These are very large differences which you can absolutely expect to show up in doctor performance. Everyone has to pass the same boards during / after med school, but that's just going to cut out some of the worst. Among those who pass, the unqualified minority students who were admitted to med school because of their skin color will still be concentrated at the bottom of the distribution.
Do you know what they call the guy who finished last in his med school class? "Doctor".
Was wondering how long I’d have to scroll for this. The reality is that it’s unhealthy not to be “racist” when selecting health care providers right now due to historical policies like this.
When the right takes swipes at “DEI”, going after bar lowering in medical school is very high on the list of legitimate targets for them to attack. I don’t want to care about the race of my doctor, but do gooders gave me no choice by passing so many bad doctors.
This: https://onlinelibrary.wiley.com/doi/full/10.1002/hsr2.161 says the mean (±SD) USMLE Step 1 score was significantly greater among non-[Black or Hispanic] applicants as compared to URiM applicants (223.7 ± 19.4 vs 216.1 ± 18.4, P < .01, two-sample t-test). This is at a specific medical school.
But more generally...imagine what would have to be true for us to go from BIG differences in g-loaded test performance to small / no differences. Either people fundamentally change somehow (get smarter / dumber), people's test scores systematically differ because they e.g. got better / worse at "tests" or something, independent of their underlying knowledge of the content or abilities, or it's attrition (e.g., very many minority med students wash out, leaving only those who should have been admitted in the first place).
None of those things seem plausible to me. The little glimpse we have from the two studies above is consistent with the obvious thing happening. Things are mostly the same, though I'd bet URM have higher wash-out rates, so differences get attenuated somewhat by the time they're practicing. Of course, URM vs non-URM will sort differently into specialties and geographies so there's that...you'll see bigger or smaller differences depending on how they sorted. A good question, as well, is why the USMLE people don't split reporting by race. I bet one of the reasons is they'd get a lot of flak because there would be big disparities. And good on them (maybe!) because one reason they might care about that is they want to produce good doctors, and watering down their test won't help with that.
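For a rough sense of how big that reported Step 1 gap is in standardized terms, here's a back-of-the-envelope effect size from the published means and SDs (equal group sizes are assumed below, since the group counts aren't quoted here):

    import math

    # Reported USMLE Step 1 means and SDs (non-URiM vs URiM applicants).
    mean_non_urim, sd_non_urim = 223.7, 19.4
    mean_urim, sd_urim = 216.1, 18.4

    # Cohen's d with a pooled SD; equal group sizes are assumed.
    pooled_sd = math.sqrt((sd_non_urim**2 + sd_urim**2) / 2.0)
    d = (mean_non_urim - mean_urim) / pooled_sd
    print(f"Cohen's d ~ {d:.2f}")   # ~0.4 (0.2 is conventionally "small", 0.5 "medium")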
> The article doesn't touch on race, but very deliberately. To anyone on the inside, the silence is deafening.
??? The NPI registry doesn't indicate the race of registered providers, only their sex. Really bizarre to call a limitation of the available data "deliberate".
It's possible to put together multiple data sources. There are certain things everyone reading this will already know. It's like reporting "educational attainment" rather than g or IQ in studies...everyone knows what it implies, you just can't say it. Anyway:
1) Board scores are strongly linked to patient outcomes (this paper)
2) We already know test scores vary strongly with observable characteristics like race
3) It's a very safe bet that board scores vary with race in the same way that MCAT scores vary with race
Therefore,
4) We can have a very good idea of how good a doctor is based on observable characteristics like race
Which is a thing the article immediately, obviously, and loudly implies but of course couldn't say for fear of censorship, losing jobs, etc.
Either you want me to make conclusions based on data or you don't. If you want me to make conclusions based on any of the data you provide, then you must provide all the data necessary to make an end-to-end connection to your claim. You can't use a patchwork of studies and say things like "it's a very safe bet" and "we can have a very good idea" to "put together multiple data sources". That's not science, that's "trust me bro".
Show the actual hard data that correlates board certification exam results and race for this study. As it stands now, we can at best associate this with physician sex.
If I'm using your logic, then, without any evidence whatsoever, I can say that because the correlation between MCAT scores and Step 2 scores is weaker than it is for Step 1, it's obviously a "very safe bet" that there will be little to no correlation for Step 3, and that it will be almost entirely eliminated by the time they take the BCE.
Or I can be rigorous and not make data points up in my head to fit some worldview.
> Which is a thing the article immediately, obviously, and loudly implies but of course couldn't say for fear of censorship, losing jobs, etc.
No, it doesn't, because it can't, because they don't have any information about the races of the physicians in the study.
"Or I can be rigorous and not make data points up in my head to fit some worldview."
It seems clear to me that you're sticking your head in the sand, not me. I'm believing the thing that is dangerous to believe, not you. I believe it because it's obviously true.
"actual hard data" would be best. It would be best if we just had board scores split by race. But we don't. We do, however, have lots of other information that makes it very, very clear that there will be significant disparities by race in USMLE boards in pretty much exactly the same pattern we see in MCAT scores.
Here's the meat of it, you can look to the other comments here for the potatoes:
This: https://www.sciencedirect.com/science/article/abs/pii/S00904... suggests MEDIAN USMLE Step 1 scores for White, Asian, Hispanic/Latino, and Black applicants were 242, 242, 237, and 232. It's urology-specific, and practice-specific, though.
This: https://onlinelibrary.wiley.com/doi/full/10.1002/hsr2.161 says the mean (±SD) USMLE Step 1 score was significantly greater among non-[Black or Hispanic] applicants as compared to URiM applicants (223.7 ± 19.4 vs 216.1 ± 18.4, P < .01, two-sample t-test). This is at a specific medical school.
...and this is just the result of a casual search.
The people who are best at this sort of thing are economists. They are trained to do causal inference based on patchy, far-from-perfect data. It's totally normal to come to a conclusion (even a very strong one!) using a "patchwork of studies". That's just life. You don't usually get "actual hard data". It's very clear what the pattern in the USMLE data would look like. I bet the effect size would be a little attenuated.
Your epistemic stance, which seems to be "Well we don't have perfect, incontrovertible proof, which means we must act like we don't know anything at all!" is unworkable. You don't do this, I don't do this, the world doesn't permit of this. As a rhetorical move, I can see where you're coming from. It gives you license to not think about the hard thing, and to punish those around you who might. But I'd argue that's not a way forward for us as a whole.
This is the data on entrance exams, not exit exams. Is there any data that actually shows minorities that finish med school and pass boards are any less competent?
The entire point of these programs is to make up for the lack of educational access for minorities by giving them a chance to prove themselves by admitting them with lower scores. But if they complete the same program, doesn't that mean they are just as good?
Now, in light of this study, it would be super interesting if this divide holds up in exit exam scores. But until we actually have that data, I'm not sure your claim is valid.
This: https://www.sciencedirect.com/science/article/abs/pii/S00904... suggests MEDIAN USMLE Step 1 scores for White, Asian, Hispanic/Latino, and Black applicants were 242, 242, 237, and 232. It's urology-specific, and practice-specific, though.
This: https://onlinelibrary.wiley.com/doi/full/10.1002/hsr2.161 says the mean (±SD) USMLE Step 1 score was significantly greater among non-[Black or Hispanic] applicants as compared to URiM applicants (223.7 ± 19.4 vs 216.1 ± 18.4, P < .01, two-sample t-test). This is at a specific medical school.
But more generally...imagine what would have to be true for us to go from BIG differences in g-loaded test performance to small / no differences. Either people fundamentally change somehow (get smarter / dumber), people's test scores systematically differ because they e.g. got better / worse at "tests" or something, independent of their underlying knowledge of the content or abilities, or it's attrition (e.g., very many minority med students wash out, leaving only those who should have been admitted in the first place).
None of those things seem plausible to me. The little glimpse we have from the two studies above is consistent with the obvious thing happening. Things are mostly the same, though I'd bet URM have higher wash-out rates, so differences get attenuated somewhat by the time they're practicing. Of course, URM vs non-URM will sort differently into specialties and geographies so there's that...you'll see bigger or smaller differences depending on how they sorted. A good question, as well, is why the USMLE people don't split reporting by race. I bet one of the reasons is they'd get a lot of flak because there would be big disparities. And good on them (maybe!) because one reason they might care about that is they want to produce good doctors, and watering down their test won't help with that.
I don't think smart people make better docs. I'm USMLE 90+ percentile, and not particularly clever. It is, however, important to be clever enough to understand what you read.
Good docs are humble, meticulous and knowledgeable. Stellar docs are excellent communicators.
The study at least proves better test taking strongly predicts outcomes, test scores are correlated with intelligence as countless studies prove. It may be the case that some non-clever people get high test scores. That doesn't dismiss the general conclusion.
No, no contradiction. I said: high USMLE score != smart. GP said: good doctor = smart. Study says: good doc = high USMLE score. As I also said: good doc = understand what you read.
The article says 'board exam' which is quite different from USMLE. So, it's established: I can't read, and I'm not especially clever. It all checks out ! :-)
If you pray to our lord and savior Jesus Christ you wouldn't even need surgery; the spirit will come down and heal you. Big pharma is lying to you, people.
> black applicants were more than 9 times more likely to be admitted to medical school than Asians (56.4% vs. 5.9%), and more than 7 times more likely than whites (56.4% vs. 8.0%)
If the number of Asian applications is 10x the number of spots available, their admittance rate can never be higher than 10%. No “discrimination” required. Same for white applicants.
If you only have 10 black applicants and you accept 5 of them, that's a 50% admittance rate. Which looks huge, and you can scaremonger about how much white and Asian people are unfairly getting sidelined.
Until you see there were 10,000 white applicants with an 8% admittance rate, i.e. 800 people.
800 from 8% vs 5 from 50%.
Again without absolute numbers the percentages can be very deceiving.
Your argument doesn't work because the data already accounts for differences in GPA and MCAT scores. It’s not comparing total applicants—it’s comparing applicants with the same academic qualifications.
If admissions were race-neutral, then students with the same GPA and MCAT score should have similar acceptance rates. But the data shows black and Hispanic applicants get accepted at much higher rates than equally qualified Asian and white applicants.
Your example about total applicants (10 vs. 10,000) doesn’t apply here because the issue isn’t how many people applied, but who gets in when they have the same credentials.
There have been studies suggesting that eliminating the MCAT does little to nothing to the prediction of student performance beyond the second year or so.
My prediction is the correlation is about 0.30-0.40.
As others have pointed out, there are a lot of unmeasured variables not being controlled for in this finding as well.
I'm not surprised board exam scores predict outcomes, I just think there's lots of other variables along that path from one to the other, and even more from MCAT -> board exam.
I'm just suggesting that the observed prioritization of DEI objectives, at the expense of the admission process being completely merit-based, likely results in some additional deaths. I agree that there are multiple factors involved that will predict physician competency, not just those that the DEI policies adversely affect.
I’m all for diversity but that admissions gap is just racism.
You can't have separate entrances for your establishment based on what folks look like; the group you prefer getting better service doesn't make it equality.
It’s not the case that every single black applicant gets admitted before a single white/Asian applicant does. The point is that it’s much, much easier for a black applicant to get admitted.
A black applicant with GPA and MCAT scores in the lowest bucket still has a 56% chance of admission. That’s on par with an Asian applicant who has GPA and MCAT scores in the highest bucket.
So do you think that if the acceptance rate for high MCAT and GPA are below 100%, then the other bars should be zero? i.e, these are the only admissions criteria that should be considered?
It's easy to hide data behind percentages and say 94% of the blacks who had a certain GPA were admitted. Look at the raw numbers. Study after study has shown improved care and better outcomes for minority patients when treated by black physicians, which indicates we have to have numbers of black and hispanic physicians proportional to their share of the population. If whites and asians disproportionately apply to medical schools, their admission rates are going to look different. The systemic advantage afforded to affluent kids by being brought up for 18+ years by highly educated parents is not a level playing field.
This is a study my wife wrote regarding this exact scenario, trying to see if patients think they’re getting better care if they’re similar to the doctor (and team) treating them!
Where they randomly assign black male patients to white or black doctors, IIRC, and patients get advice on preventative care. Outcomes for black patients are better because they are more willing to take black doctors' advice. Obviously, newborns in the first study, so it's about doctor competence straight-up.
Most patients are unable to communicate their symptoms accurately enough. Which is why you need to see them in person, talk with them, and examine them. Not saying a robot couldn't perform, but certainly not a simple chatbot. Despite what some papers say.
No one is hiring based on test scores though? The bar to even get into med school is so insanely high that most people able to get in and become doctors were already upper-middle or high SES. The only point in the entire process where "DEI" matters is feeder programs for underprivileged students, the type of people who can't afford to pay for MCAT tutors etc.
I married my wife shortly before she started med school.
Scores are basically the entire name of the game. Sure you’re not hired into your attending job based on scores, but med school and residency are largely based on scores.
Resident physician hiring is strongly based on test scores, specifically the USMLE Step 1. It's true that scores in the board exams the OP discusses aren't super relevant to hiring, though.
Obviously I don’t mean “hire” in the narrow sense. We shouldn’t admit people to medical school based on DEI any more than we should hire them after medical school based on DEI.
Incompetency comes in all types, there’s no need to assume anything. In fact, you should be especially careful if your doctor is [your favorite type of person], that’s when you know your cognitive biases are working against your better judgments.
We have too few doctors already; we should set a bar for qualifications and let anyone over the bar become a doctor. The DEI bogeyman didn't do any harm here, since the current system requires you both to get over the bar AND to be randomly selected for one of N arbitrary spots.
They don't use different bars to become a doctor. Once you're in medical school, everyone passes the same tests and goes through the same process.
Show me evidence that doctors of a certain race are allowed to have lower test scores than other races in order to pass all of the requirements to become a doctor. I don't care if they give anyone a leg up to get into medical school; we've already agreed that anyone who can pass the stringent process to become a doctor should be handed a "Dr." for their name and sent out into the world, so if there weren't any artificial barriers to having unlimited doctors, then it wouldn't matter who got into medical school or how, as long as they passed and became doctors eventually.
We should want more doctors, not argue about who shall become a doctor. All this fighting about the DEI boogeyman is allowing rich pricks to pick our pockets and steal our national resources for themselves.
Can you please stop breaking the site guidelines? We've asked you many times and you've still been doing it repeatedly, such as here and https://news.ycombinator.com/item?id=43062203.
I appreciate that. I was thinking of ones like these:
"Don't be snarky."
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
"Converse curiously; don't cross-examine."
Most people (me included) underestimate the amount of provocation in their own comments (but feel it keenly in others'). To avoid running afoul of this dynamic, it's best to err on the side of making sure that you're following the rules. A sort of safety factor if you will.
One bad health department head will kill way more people than one bad doctor, and you would never guess who's Secretary of Health today, and how uniquely unqualified he is to do that job (or any other job, but that one in particular).
It's wild how the movement that purports to be pushing back on 'unqualified hires' is full of people who can't tell their ass from their elbow. They hold others to a standard that they wouldn't ever dream of meeting.
I understand that morons can be elected, and that's up to the voters, but there's no excuse for political positions that get appointed.
Doctors aren't machines, they're humans. I have not yet read the full paper, only the article, but I already see something really big and important to look out for. When I read the full thing, the question I'll be asking is "what's the likelihood that the self-esteem of doctors was directly intervened on by the exam taking process itself." How do you control for the loss in confidence that learning of your test performance gives you? How are we certain that learning your score on the board exam doesn't make you more conservative (or riskier) with how you treat patients as a psychological effect?
This appears to be an observational result, so I'm genuinely perplexed by the reception here. I genuinely thought this comment shows a healthy amount of curiosity and asking important questions. Asking "what control group did this study use?" is usually well-received here.
Yeah but the patient is just a biological machine. This machine can easily be divided into organs and apportioned among specialists. The machine is easily understood by a corpus of research and laboratory experimentation.
Many inputs can be placed in the machine by physicians, and the outputs are known. The biological machines can easily be isolated from environment, or monitored with high technology, and assigned numbers in databases to be processed in data centers.
Value is extracted from the biological machines mostly from government and 3rd party sources, so there is no real need to rely on machines having a means or will of their own.
There is no compelling reason to treat humans any different from automobiles for the purposes of medicine and medical treatment. In fact humans are less genetically diverse than motor vehicles, and a new model year will always produce a bumper crop of lemons to work on.
The common misconception of someone with a hard science education.
> Many inputs can be placed in the machine by physicians, and the outputs are known. The biological machines can easily be isolated from environment, or monitored with high technology, and assigned numbers in databases to be processed in data centers.
We aren't even close to that level of understanding.
And still, the model works. Lives are saved. We might save many more with a fully integrated non-simplified approach, but it’s not necessary to keep seeing growth in positive outcomes.
I worked for the "Father of Robotic Surgery" and once in a company-wide meeting he said "The general public would be pretty happy with the average surgeon's results, but they would be horrified by the below-average surgeon's results". Their goal was to bridge that gap with robotics.
It's not just that there are low performers, but that systems aren't always built to send them to do something else quickly.
When my mom needed her second urgent brain surgery, one of the nurses, a friend of a friend, warned us that one of the surgeons on staff that night had dreadful results with basically any procedure, and that if we were stuck with him, we should refuse the surgery until another one was able to come in. If the floor nurses know, the head of neurology knows, and yet they were still OK letting him operate.
Any functioning system just has to eject surgeons that get bad outcomes. It's not as if they have to stop practicing medicine; they could move to something a little less dangerous.
The problem is we don't exactly have a surplus of surgeons. The choice could be between a below average surgeon and no surgeon.
Part of this is the AMA cartel artificially limiting the supply of residencies.
I agree with the sentiment. The specifics of the implementation are hard.
When it comes to bad doctors, I think a lot of people in the medical field would agree "you know one when you see one." But when you're talking about putting professional restrictions on someone, you need objective criteria.
The naive way, counting up bad outcomes, leads to a system where surgeons are incentivized to decline any case that looks technically difficult or where the patient has lots of preexisting conditions that put them at risk for complications after the surgery. We already see this to a degree in transplant surgery, where outcomes are followed closely to avoid wasting organs.
That said, I think true incompetence is pretty rare. I can't think of a single doctor I've worked with professionally where I'd be concerned if I found out they were taking care of one of my immediate family.
I want surgeons to decline any case that looks technically difficult. A better surgeon should handle those cases. I want them to decline cases where the patient has lots of preexisting conditions that put them at risk for complications after the surgery, too. I'm worried about surgeons who never decline cases, who are eager to cut and maximize profits. We need more malpractice lawsuits so that surgeons become more willing to decline cases.
> I want surgeons to decline any case that looks technically difficult. A better surgeon should handle those cases.
Already happens all the time, although it's at the discretion of the first surgeon. Again, it's difficult to formulate objective criteria for when a surgeon should forward a case on to someone else.
I don't believe that surgeons obviously over-operate.
In my career I've seen:
- A case where the surgical team declined to operate for endocarditis with congestive heart failure, despite the fact that the society guidelines recommend surgery in that scenario
- A case where the surgical team declined to operate for a spinal cord injury that left a patient paralyzed from the neck down (and dependent on machines to breathe due to the paralysis affecting their diaphragm)
- A case where the surgical team declined to operate on an abscess even after a patient's bloodstream infection failed to clear after two weeks of the strongest IV antibiotics
In the first and third cases, the disease turned terminal after our surgical team declined to operate. In the second example the patient opted to die rather than live the rest of their life on a ventilator, and I was left with the responsibility of arranging hospice for the patient.
I'll admit that these are extreme cases, but my point is the patients and family members in those cases likely had a very different view about whether surgeons should decline high risk surgeries as often as they do, let alone more often.
I don’t. I want the surgeon to clearly state their concerns and the risks, but the patient to decide if it’s worth the risk. After all, it’s his/her life.
I'm not a surgeon myself, but when I was in medical school the program director of our local general surgery residency told me that in terms of hand skills 90% of surgeons are more or less average, 10% are masters, and 10% are horrific. (So basically a bell curve with very thin tails.) He also said the correlation between test scores and surgical hand skills was pretty weak.
How much of surgery is based on dexterity vs knowledge/attention-to-detail? I sort of assumed that most operations are basic plumbing (A connects to B) while there are a few specialized domains that require exquisite deftness.
The question is too general. Depends a lot on the kind of surgery you're doing. I guess the answer you're looking for is that anyone could be a surgeon, but not for all kinds of surgery. Also 'basic plumbing' with no room for error is not an easy thing at all.
It's more in that if you improperly connect a to b someone dies, and it's not just a snap fit like a pipe. You need to do things like suture two blood vessels together, using pliers (not the technical term), inside a dark box lit up by a tube, while looking at an upside down TV image of what's going on.
It doesn’t have to be upside down; you can just rotate the camera to any angle you like.
I'm just relaying what my friend who is studying to become a doctor told me, but by his account there's a wealth of techniques for each procedure or even parts of it, like tying up the dangling bits after kidney removal.
Ultimately it boils down to what a given surgeon practiced in their career.
Most complex or endoscopic surgeries are (dependent on dexterity)
>correlation between test scores and surgical hand skills was pretty weak
Even if hand skills are orthogonal to test scores, test scores could still be highly correlated with outcomes.
That adds up to 110%…
He's a surgeon, not a mathematician.
Lol literal first four words of the comment said he's not a surgeon
He’s a brick layer, not a linguist.
Which must make it the most MD comment I've seen in a long time. It's hilarious!
That's just the normal level of performance people expect of their doctors, 24 hours a day, 8 days a week.
> (So basically a bell curve with very thin tails.)
Is this a concept that exists? I thought "thin tailed" and "fat tailed" were defined by contrast to a normal distribution.
I don't think that "Bell curve" should be interpreted strictly as a Gaussian in this context, but more literally as a curve with a shape resembling a bell.
Kurtosis[0] is a term I came across when dealing with random vibration analysis, but I understand very little of it. The idea of "moments"[1] of various orders to describe the shape of a distribution is interesting in general. It sounds analogous to Fourier series, describing a shape/graph with a series of values.
[0]https://en.m.wikipedia.org/wiki/Kurtosis
[1]https://en.m.wikipedia.org/wiki/Moment_(mathematics)
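For instance, a quick sketch with scipy (arbitrary sample size and seed) shows the sign of the excess kurtosis for a few familiar distributions:

    import numpy as np
    from scipy.stats import kurtosis   # fisher=True by default, i.e. excess kurtosis

    rng = np.random.default_rng(0)
    n = 1_000_000

    samples = {
        "normal  (expected  0.0)": rng.normal(size=n),
        "uniform (expected -1.2)": rng.uniform(-1.0, 1.0, size=n),
        "laplace (expected +3.0)": rng.laplace(size=n),
    }
    for name, x in samples.items():
        print(f"{name}: excess kurtosis ~ {kurtosis(x):.2f}")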
That says exactly the same thing I just said:
> Excess kurtosis, typically compared to a value of 0, characterizes the “tailedness” of a distribution. A univariate normal distribution has an excess kurtosis of 0. Negative excess kurtosis indicates a platykurtic distribution, which doesn’t necessarily have a flat top but produces fewer or less extreme outliers than the normal distribution. For instance, the uniform distribution (ie one that is uniformly finite over some bound and zero elsewhere) is platykurtic. On the other hand, positive excess kurtosis signifies a leptokurtic distribution. The Laplace distribution, for example, has tails that decay more slowly than a Gaussian, resulting in more outliers.
(Emphasis added.)
You can't have a bell curve with thin tails, because a bell curve has standard tails by definition.
I wasn't trying to correct you. Just responding to your comment asking if this is "a concept that exists".
I'm pretty sure, yes? Cauchy distribution and student-t have fatter tails than a standard normal distribution.
From a doc, it was just a figure of speech. Most docs have no idea what a statistical distribution is.
Like bridging it in this sense?
https://en.wikipedia.org/wiki/Brooklyn_Bridge#Culture
Medicine is now absurdly complex; it's far more than a person can possibly learn, especially if they're trying to stay up to date with modern research. The more you can memorise correctly and pattern match, the better. Many patients are failed in the current system, most not fatally, but their lives are damaged, and it's not uncommon for more complex diseases to have 90% of sufferers never getting a diagnosis until they die from the disease.
Something has to change drastically in how medicine is organised because it's not working in its current iteration as the difficulty goes up and up.
Things are changing to accommodate the increasing complexity, same way as ever: specialization. There are now subsubspecialties, and 'cardiologist' or 'nephrologist' have become incomplete qualifiers. It may not look like that from the pov of outsiders, but medicine is becoming more and more secure by the day. Things were much worse before.
But now you have the problem of being too specialized - I've seen many specialists who think a problem lies within their specialty when it does not. And how do you deal with problems that are multi-disciplinary (problems that involve multiple organ systems) when you have an army of specialists who are each fighting for their own fiefdoms? "When all you have is a hammer, everything looks like a nail" comes to mind.
Well, the model is migrating to one of hyperspecialists collaborating together. Problem is, this isn't compatible with private practice where you're operating mostly alone, and this results in what you describe. The model has to evolve, yes. Good news is, it is in fact evolving (slowly). We can't evolve faster than science anyway, and while medical science is evolving much faster than it used to, we're far from the exponential acceleration we've seen in other domains, e.g. computers.
if something's wrong, I just don't go to the doctor anymore...
Since you're still alive, it seems the wrong things aren't severe enough to kill you. Good for you!
Based on hours of past experience, the solution to this particular problem seems to be to give all the doctors a cane and a bottle of Vicodin.
It's the environment that compounds the complexity. Go down the list of largest companies by revenue in the US and 8 of the top 20 are related to "health". Are they running hospitals? Are they pharma companies? No.
They run pharmacy benefits management, health insurance and drug distribution.
The estimate is that $4-5 trillion flows through these firms, which is larger than the GDP of India. So this gigantic structure has emerged that doesn't really make too much profit, btw (very similar to Amazon platform economics), but is layer upon layer upon layer of cash flow passing through middlemen.
Drastic change requires new ideas about what to do about all these middlemen who shape the environment on top of which everything exists.
The biggest problem with the US health system? Complexity.
It's impossible to fix overly complex systems.
Simplify, simplify, simplify, and then the fixes become trivial.
In the US case, that means banning most of the middle-layers.
Alas, independent middle layers have long been the US solution to avoiding monopolies. This is the whole reason car manufacturers can't sell directly to consumers, and micro breweries can't sell to consumers except for on-site purchases. Breweries in particular have to sell to distributors, who sell to stores.
Banning the middle layers here (absent other changes) just means that the companies that replace their spots in the top 20 will be vertically integrated conglomerates that manufacture, distribute, prescribe and provide insurance (i.e. payment plans) for pharmaceutical drugs.
> Banning the middle layers here (absent other changes) just means that the companies that replace their spots in the top 20 will be vertically integrated conglomerates that manufacture, distribute, prescribe and provide insurance (i.e. payment plans) for pharmaceutical drugs.
Except these companies are already vertically integrated, to a large degree. All the biggest insurers have their own in-house PBMs.
CVS (the parent company of Aetna) has Caremark.
Cigna has Express Scripts.
Anthem (fine, Elevance) has CarelonRx.
UnitedHealth Group has Optum.
Revenue is an incomplete signal of the complexity and waste. It’s just a signal of the money flowing through. A “single payer” system would probably also show a huge revenue number even if the profit was <=0. There’s just a lot of money and a lot of people who are patients.
I don’t disagree that the system requires change and is extremely complex, however.
The real problem is that it’s nearly impossible to “scale” healthcare and keep it personalized, and people want personalized healthcare, because that’s shown to be more effective healthcare. Doctors can only see a limited number of patients a day, and they need to be paid some compensation commensurate with their skills and efforts. That alone makes it hard for everyone “healthy” to see a doctor often enough, and for long enough, to get deeply personal care. Most people can realistically pay out of pocket for preventative care; $100-200/yr for an American isn’t crazy. Even most drugs are super affordable out of pocket if the profit margins are kept low (which is starting to become available, bits at a time).
The real complexity, of course, is the long tail, where a few people get cancer, get into car accidents, or develop other serious conditions that swamp the costs of everything else.
I don't think $100-200 per year for preventative care is enough. I reckon $1000-$10,000 per year, depending on age, is more accurate. You should spend at least $500 per year on nutritional supplements like Vitamin D. Switzerland has a better medical system that's cheaper than our system, but it's still expensive.
Indeed, it may be the case that the middlemen aren't individually all that profitable, but if the money passes through several stages and each one skims off a few percent, you end up with the present situation where health care costs twice as much as it does in any civilized country.
> Gray, Lipner, McDonald, and Vandergrift reported that they are employees of ABIM [American Board of Internal Medicine]. Landon reported receiving consulting fees from ABIM for ongoing work during the conduct of the study.
A study that shows the board test is effective, sponsored by the board?
I might be reassured by more detailed statistics about the analysis. Even top 25% vs bottom 25% - how much actual variation in score are we talking about? What is the probability that someone scoring in the top 25% is actually in the top/bottom 25%? We imagine a big gap but that’s not necessarily true. Consider exam scores of 85 90 90 95…
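As a minimal sketch of that concern, with entirely made-up numbers: if an observed score is latent skill plus exam noise, observed quartile membership only loosely tracks true quartile membership.

    # Made-up numbers: observed score = latent skill + exam noise.
    # How often is an observed top-quartile scorer truly top quartile?
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    skill = rng.normal(0, 1, n)             # hypothetical latent ability
    observed = skill + rng.normal(0, 1, n)  # noisy exam measurement

    true_top = skill >= np.quantile(skill, 0.75)
    obs_top = observed >= np.quantile(observed, 0.75)

    # Prints a fraction well below 1 (roughly 0.6 with this much noise)
    print((true_top & obs_top).sum() / obs_top.sum())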
Reminds me of the old joke:
What do you call the person that came bottom of their class in Med School?
Doctor.
With the absolute absurdity of the residency process, and the focus entirely on new doctors just after that residency, I have to wonder how much of this just corresponds to whoever's lucky enough to be the kind of high-powered mutant who can survive multiple years of 80- to 100-hour week schedules designed by a man who was high on cocaine and morphine 24/7 (seriously, look it up, it's true). There are going to be a lot of people who need an extended sabbatical to recover from that before they'll be effective at anything at all, which makes any kind of baseline of test scores really suspect to me.
Yeah I wondered how much of this is accounted for by some general resiliency thing or circumstances during residency or something along those lines.
Does the difference matter in this context, though? Medicine isn't like other professions where it's no big deal to have some fraction of the workforce be bad at their jobs. I'm not so status-quo-biased that I'd support 100 hour residencies, but I'm skeptical of reform proposals that focus on doctors' working conditions rather than patient outcomes. If some filtering process leads to better patient outcomes, I think we should retain it, even if it's quite stressful for the doctors who have to go through it.
Fair point. There's some data showing patient outcomes are worse when managed by overworked residents-in-training, but I think you're referring to outcomes post-residency. i.e. Physicians should squeeze as much training as possible into the allotted years. This is reasonable, especially for surgical specialties where procedural reps are a commodity for trainees.
I'd be more open to this line of reasoning if physicians' salaries had kept pace with inflation over the last 30 years and if we hadn't tacitly accepted a much, much lower standard of training in the form of DNPs, CRNAs and PAs who are now practicing independently in a lot of regions. You can't demand that people make extraordinary sacrifices without extraordinary compensation.
For contrast, most European countries have a much longer post-residency training process that is more humane. Caveat being that students enter medical school directly from high school and don't have student loans.
It's also worth pointing out that in the US a LOT of those 100 hours are not spent in direct patient care. They're spent doing chores ('scut') that are not directly tied to patient care. Think: Calling insurance companies for prior authorization for your supervisor or filling out FMLA paperwork for one of your supervisors' patients. As a resident you don't have the ability to say "no" to these tasks.
> i.e. Physicians should squeeze as much training as possible into the allotted years. This is reasonable, especially for surgical specialties where procedural reps are a commodity for trainees.
It's mixed, though. We don't know how much "squeezing as much training" helps or hinders future performance. We do know that sleep debt hurts retention of new knowledge and skills.
So I'm not positive whether "50% more training, but with not enough sleep during most of the interval" will result in better outcomes.
> I'd be more open to this line of reasoning if physicians' salaries had kept pace with inflation over the last 30 years
Doctors in the US are artificially scarce and artificially expensive compared to the rest of the world. The artificial scarcity of residencies also contributes to the unusually harsh residency work conditions.
Doctors in the United States are paid more than doctors in Norway and Switzerland even though those countries are richer and our doctors aren't better.
Your comment sounds reasonable, but it doesn't allow for nuance.
If a hellish residency improves patient outcomes by 0.1%, at the expense of every single resident suffering twice as much as they need to (and likely leading to some stimulant addictions and deaths among the resident/doctor population), that's not a fair tradeoff.
Medical workers don't exist solely to sacrifice themselves for others; they are humans also and their needs should be weighed as important like everyone else's.
As it so happens I think some of the strain of medical residency is related to supply shortages in the health care industry. If it's not crystal clear that working 80+ hours per week is necessary to significantly improve patient outcomes, and it is clear that working 80+ hours per week makes a lot of people choose other careers (limiting supply artificially), then reform here is imperative.
Am I missing something here? How could a hellish residency—with all the stress and sleep deprivation that implies—possibly improve patient outcome?
Apart from the bad real-time cognitive effects, long-term memory retention is dependent on regular, sustained sleep.
Oh, I'm not saying it does. The person above seemed to be suggesting that we should focus on figuring out the residency conditions that lead to the best patient outcomes, rather than improving the conditions for residents, which suggests they believe worse conditions for residents may be better for patients.
Just to point out the obvious, people doing 80 hrs/week for 2 years (lower end of residency term I believe) are going to have twice as much 'experience' as people doing 40hrs/week for 2 years.
I suspect most of us here know more hours worked doesn't directly correlate with more retention of information and best practices, but that's the thinking.
I'm arguing that even if 80 hrs/week residencies were the optimal amount of pressure to turn our fledgling residents into battle-hardened physicians, if you can get 99% of the effect with 40 hrs/week, maybe do that instead. And again, I'm not even suggesting this is actually the case.
One of the guys that founded the modern medical education system was a coke head:
https://magazine.columbia.edu/article/cocaine-addict-who-cha...
The Incas used coca leaves to get more work out of their people.
I guess medical residency is kind of like a hazing ritual. Today's doctors are like, "I went through it, why can't you?"
The idea is that the stress and sleep deprivation are not sources of permanent impairment (even though they are), but rather a filter that selects the strongest candidates.
I don't necessarily think the relationship is "worse residency conditions predicts higher board exam scores"? It could be that residents with more time to study or whatever score higher. It could be examinees with scores close to the threshold are accounting for the association. Or maybe it is resiliency. I have no idea.
My general impression is that the evidence overall is really not supportive of harsher residencies in terms of patient outcomes. I also think that rigor does not have to mean masochism or hubris; there seems to be this assumption that any change to residencies would mean dumbing it down or making it easier, as opposed to improving things overall. I'm also a little skeptical of minor tweaks to residency that might have happened somewhere now being representative of a more wholesale restructuring.
The often unacknowledged factor in the background is that hospitals and residency locations are getting free labor, with no real chance for the workers to repudiate their situation. Hospitals are getting physicians whose salaries are paid for by the federal government, and those physicians are essentially unfree to move if they're unhappy. So of course there's going to be an attempt to milk them for everything. It gets whitewashed as "selflessness", and physicians are encouraged to boast about it or something, instead of calling it out as exploitation. No physician wants to make that claim, for a whole host of reasons, even if it is true.
Imagine what would happen if hospitals had to bear the costs of residency training completely, like just about any other healthcare profession, and residents were able to move freely like most employees.
I get despondent about so much in US healthcare. There's so much focus on invoice costs per se, and payment by insurers, and not enough on monopolies in service delivery, and problems with educational structures. Any attempt to address these issues is met with resistance by various groups with conflicts of interest, who aren't called out on these conflicts of interest.
Another thing constantly on my mind about residencies, both from other settings (institutions tracking hours in the moment versus hours recalled later) and from personal experience with residency in the past, is that people are notoriously bad at reporting past work hours and conditions, and tend to exaggerate. I'm not saying that anyone in particular is necessarily being dishonest in describing their residency experience, but I suspect there has been drift over time in conditions that reflects a kind of biased memory of things on the part of residency directors. "I worked 120 hours a week," when that wasn't actually the case, or is distorted, then becomes residency policy for the next generation.
Sometimes I feel like the logical conclusion, given the way these discussions go, is that the only person legally able to practice should be someone with an MD who has completed a residency working 140 hours a week for 6 years, with perfect board exam scores. It just doesn't add up.
Well how much of it is just initiation rituals and accidents of history? How fast do effective new practices propagate throughout the industry?
Please tell me more about the man who was high on cocaine and morphine 24/7.
Probably referring to this?
https://pmc.ncbi.nlm.nih.gov/articles/PMC7828946/
This doesn’t show that he was “high on cocaine and morphine 24/7” as the relevant commenter suggested; just that he struggled with addiction.
It does say
> he was able to hide his addiction under a veil of eccentricity and a pyramid of residents
Which means "created an environment to allow himself to be high at work" to me. It's not impossible that he held it off at home, but I don't see why he would.
Also, he's clearly Dr. House; Ctrl-F "Leaving much"
Edit: Well, that's embarrassing. I hadn't realized that the link is to a new 2024 study on IM board scores and patient outcomes. My post is in regards to a 2023 study on USMLE scores and patient outcomes that was pretty widely discussed.
Healthcare worker here. Sheriffofsodium did a great video poking holes at this study: https://youtu.be/JKS9Y-nCnKs?si=VPsUNSoepltbg4Hu
It's 45 minutes so I don't expect people to watch it, but he makes several important points, including:
- This study was performed by USMLE insiders, the only ones with access to this private data. USMLE does not share this data publicly so it's impossible to verify.
- As the USMLE makes millions of dollars from these exams, they have a clear conflict of interest.
- The differences in patient outcome are AT BEST of marginal clinical significance, which the authors of the study even state in the paper.
There is better scientific evidence that female surgeons have better patient outcomes on average: https://pubmed.ncbi.nlm.nih.gov/37647075/
the OP is referring to a different study about Board not USMLE
I did not read the study. An obvious confounding factor is that doctors with better board scores are hired into better hospitals with better patient populations, and thus have better outcomes.
This was controlled for as stated in the article:
> The researchers compared outcomes for patients within the same hospitals who were cared for by doctors with different exam scores. This allowed the researchers to eliminate, or at least minimize, the effect of differences in patient populations, hospital resources, and other variations that might influence the odds of patient death or readmission, independent of a doctor’s performance.
The people who ran the study also thought of this and controlled for it.
This is also pretty much the easiest thing to factor out through mixed effects modeling (among other methods if required). But your statement that higher-scoring physicians go to places with healthier patient populations is not correct across all disciplines. Often it can be the opposite: the best physicians go to the major hospitals (usually but not always university affiliated) located in major population centers that draw in the sickest/worst/rarest cases from the surrounding geography.
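As a rough sketch of what that kind of adjustment can look like (hypothetical column names and data file, not the study's actual code; a fixed-effects logit is the closest stand-in for "comparing within the same hospital", and the mixed model is shown only because it was mentioned above):

    # Sketch only: factoring out hospital-level differences.
    # "died_7d", "board_score", "hospital_id" and the CSV are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("outcomes.csv")  # one row per hospitalization (hypothetical)

    # Hospital fixed effects: the score coefficient comes from comparing
    # physicians within the same hospital.
    fe = smf.logit("died_7d ~ board_score + C(hospital_id)", data=df).fit()

    # Random intercept per hospital (a linear mixed model, purely illustrative;
    # a binary outcome would normally get a logistic or GEE variant instead).
    me = smf.mixedlm("died_7d ~ board_score", data=df, groups=df["hospital_id"]).fit()

    print(fe.params["board_score"], me.params["board_score"])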
'Board exam performance was powerfully linked to patient risk of dying or hospital readmission. For example, there was an 8 percent reduction in the odds of dying within seven days of hospitalization in patients of physicians who scored in the top 25 percent on the exam, compared to the patients of physicians who scored in the bottom 25 percent on the exam, which was still a passing grade.'
Controlled for hospital quality? I figure the best-credentialed doctors go to the best hospitals, where patients receive a lot of other care aside from the MD.
From the article:
>The researchers compared outcomes for patients within the same hospitals who were cared for by doctors with different exam scores. This allowed the researchers to eliminate, or at least minimize, the effect of differences in patient populations, hospital resources, and other variations that might influence the odds of patient death or readmission, independent of a doctor’s performance.
This is the most important question in the thread.
Did they also control across types of medicine? If the higher-scoring doctors go into types of care which are more competitive, could those practices have lower patient mortality within 7 days?
For example, maybe burn unit care is high-mortality and low-barrier, compared to sleep medicine which is low-mortality and high-barrier (I don't know how accurate this is, just providing some hypotheticals for clarity)
Is this surprising?
High exam scores are an indication of discipline and good prioritisation - factors that evidently reflect on the physician's professional performance.
We (in the US) have spent years deprioritizing standardized testing for college admissions on the (public) justification that such tests don't reflect potential for success.
Likewise, there's been an element of testing = racial discrimination in all sorts of fields, such as:
https://fairtest.org/article/legal-attack-biased-firefighter...
And:
https://www.theguardian.com/world/2009/jun/29/connecticut-fi...
The fact that this study alleges a direct link between exam scores and performance is itself bucking the zeitgeist.
It's evidence that those exams are doing something right.
Whether it's surprising or not, it's up to you. But it's something that should be measured once in a while.
It isn't exactly news that doctors with better test scores are better doctors, but this is additional evidence. The article doesn't touch on race, but very deliberately. To anyone on the inside, the silence is deafening.
In the U.S., med schools have been matriculating many unqualified "underrepresented minority" (Black, Hispanic, Native American, Hawaiian) medical students for a long time. This is unfair to patients and doctors, especially competent brown doctors, because it is now the case that you get a very strong signal about how good a doctor is simply by the color of his or her skin. Which is messed up.
AAMC has the data (https://www.aamc.org/data-reports/students-residents/data/fa... , table A-18). This is after the 2023 Supreme Court decision, so the spreads are a little wider in e.g. 2022 data. MCAT scores range from a minimum of 472 to a max of 528, which is stupid and a deliberate tactic to make the differences between groups seem small. Subtracting 472 from each average score, 2024 average MCAT scores look like this for matriculants:
41.9: Asian
40.2: White
36.9: Hawaiian
34.4: Black
33.9: Hispanic
31.3: American Indian
These are very large differences which you can absolutely expect to show up in doctor performance. Everyone has to pass the same boards during / after med school, but that's just going to cut out some of the worst. Among those who pass, the unqualified minority students who were admitted to med school because of their skin color will still be concentrated at the bottom of the distribution.
Do you know what they call the guy who finished last in his med school class? "Doctor".
Was wondering how long I’d have to scroll for this. The reality is that it’s unhealthy not to be “racist” when selecting health care providers right now due to historical policies like this.
When the right takes swipes at “DEI”, going after bar lowering in medical school is very high on the list of legitimate targets for them to attack. I don’t want to care about the race of my doctor, but do gooders gave me no choice by passing so many bad doctors.
Did they pass bad doctors? The first post referenced entrance exams but cited no data about those who actually complete their medical training.
You could probably back out at least some bounds from the data here: https://www.aamc.org/data-reports/students-residents/report/...
This: https://www.sciencedirect.com/science/article/abs/pii/S00904... Suggests MEDIAN USMLE step 1 scores for White, Asian, Hispanic/Latino, and Black applicants were 242, 242, 237, and 232. It's urology specific, and practice specific, though.
This: https://onlinelibrary.wiley.com/doi/full/10.1002/hsr2.161 Says The mean (±SD) USMLE step 1 score was significantly greater among non-[Black or Hispanic] applicants as compared to URiM applicants (223.7 ± 19.4 vs 216.1 ± 18.4, P < .01, two-sample t-test). This is at a specific medical school.
But more generally...imagine what would have to be true for us to go from BIG differences in g-loaded test performance to small / no differences. Either people fundamentally change somehow (get smarter / dumber), people's test scores systematically differ because they e.g. got better / worse at "tests" or something, independent of their underlying knowledge of the content or abilities, or it's attrition (e.g., very many minority med students wash out, leaving only those who should have been admitted in the first place).
None of those things seem plausible to me. The little glimpse we have from the two studies above is consistent with the obvious thing happening. Things are mostly the same, though I'd bet URM have higher wash-out rates, so differences get attenuated somewhat by the time they're practicing. Of course, URM vs non-URM will sort differently into specialties and geographies so there's that...you'll see bigger or smaller differences depending on how they sorted. A good question, as well, is why the USMLE people don't split reporting by race. I bet one of the reasons is they'd get a lot of flak because there would be big disparities. And good on them (maybe!) because one reason they might care about that is they want to produce good doctors, and watering down their test won't help with that.
> The article doesn't touch on race, but very deliberately. To anyone on the inside, the silence is deafening.
??? The NPI registry doesn't indicate the race of registered providers, only their sex. Really bizarre to call a limitation of the available data "deliberate".
It's possible to put together multiple data sources. There are certain things everyone reading this will already know. It's like reporting "educational attainment" rather than g or IQ in studies...everyone knows what it implies, you just can't say it. Anyway:
1) Board scores are strongly linked to patient outcomes (this paper)
2) We already know test scores vary strongly with observable characteristics like race
3) It's a very safe bet that board scores vary with race in the same way that MCAT scores vary with race
Therefore,
4) We can have a very good idea of how good a doctor is based on observable characteristics like race
Which is a thing the article immediately, obviously, and loudly implies but of course couldn't say for fear of censorship, losing jobs, etc.
Either you want me to make conclusions based on data or you don't. If you want me to make conclusions based on any of the data you provide, then you must provide all the data necessary to make an end-to-end connection to your claim. You can't use a patchwork of studies and say things like "it's a very safe bet" and "we can have a very good idea" to "put together multiple data sources". That's not science, that's "trust me bro".
Show the actual hard data that correlates board certification exam results and race for this study. As it stands now, we can at best associate this with physician sex.
If I'm using your logic, then, without any evidence whatsoever, I can say that obviously because the correlation between MCAT scores and Step 2 scores is weakened compared to Step 1, then it's a "very safe bet" that there will be little to no correlation for Step 3 and almost entirely eliminated by the time they take the BCE.
Or I can be rigorous and not make data points up in my head to fit some worldview.
> Which is a thing the article immediately, obviously, and loudly implies but of course couldn't say for fear of censorship, losing jobs, etc.
No, it doesn't, because it can't, because they don't have any information about the races of the physicians in the study.
I appreciate you engaging.
"Or I can be rigorous and not make data points up in my head to fit some worldview."
It seems clear to me that you're sticking your head in the sand, not me. I'm believing the thing that is dangerous to believe, not you. I believe it because it's obviously true.
"actual hard data" would be best. It would be best if we just had board scores split by race. But we don't. We do, however, have lots of other information that makes it very, very clear that there will be significant disparities by race in USMLE boards in pretty much exactly the same pattern we see in MCAT scores.
Here's the meat of it, you can look to the other comments here for the potatoes:
This: https://www.sciencedirect.com/science/article/abs/pii/S00904... Suggests MEDIAN USMLE step 1 scores for White, Asian, Hispanic/Latino, and Black applicants were 242, 242, 237, and 232. It's urology specific, and practice specific, though.
This: https://onlinelibrary.wiley.com/doi/full/10.1002/hsr2.161 Says The mean (±SD) USMLE step 1 score was significantly greater among non-[Black or Hispanic] applicants as compared to URiM applicants (223.7 ± 19.4 vs 216.1 ± 18.4, P < .01, two-sample t-test). This is at a specific medical school.
...and this is just the result of a casual search.
The people who are best at this sort of thing are economists. They are trained to do causal inference based on patchy, far-from-perfect data. It's totally normal to come to a conclusion (even a very strong one!) using a "patchwork of studies". That's just life. You don't usually get "actual hard data". It's very clear what the pattern in the USMLE data would look like. I bet the effect size would be a little attenuated.
Your epistemic stance, which seems to be "Well we don't have perfect, incontrovertible proof, which means we must act like we don't know anything at all!" is unworkable. You don't do this, I don't do this, the world doesn't permit of this. As a rhetorical move, I can see where you're coming from. It gives you license to not think about the hard thing, and to punish those around you who might. But I'd argue that's not a way forward for us as a whole.
This is the data on entrance exams, not exit exams. Is there any data that actually shows minorities that finish med school and pass boards are any less competent?
The entire point of these programs is to make up for the lack of educational access for minorities by giving them a chance to prove themselves by admitting them with lower scores. But if they complete the same program, doesn't that mean they are just as good?
Now, in light of this study, it would be super interesting if this divide holds up in exit exam scores. But until we actually have that data, I'm not sure your claim is valid.
As above:
You could probably back out at least some bounds from the data here: https://www.aamc.org/data-reports/students-residents/report/...
This: https://www.sciencedirect.com/science/article/abs/pii/S00904... Suggests MEDIAN USMLE step 1 scores for White, Asian, Hispanic/Latino, and Black applicants were 242, 242, 237, and 232. It's urology specific, and practice specific, though.
This: https://onlinelibrary.wiley.com/doi/full/10.1002/hsr2.161 Says The mean (±SD) USMLE step 1 score was significantly greater among non-[Black or Hispanic] applicants as compared to URiM applicants (223.7 ± 19.4 vs 216.1 ± 18.4, P < .01, two-sample t-test). This is at a specific medical school.
But more generally...imagine what would have to be true for us to go from BIG differences in g-loaded test performance to small / no differences. Either people fundamentally change somehow (get smarter / dumber), people's test scores systematically differ because they e.g. got better / worse at "tests" or something, independent of their underlying knowledge of the content or abilities, or it's attrition (e.g., very many minority med students wash out, leaving only those who should have been admitted in the first place).
None of those things seem plausible to me. The little glimpse we have from the two studies above is consistent with the obvious thing happening. Things are mostly the same, though I'd bet URM have higher wash-out rates, so differences get attenuated somewhat by the time they're practicing. Of course, URM vs non-URM will sort differently into specialties and geographies so there's that...you'll see bigger or smaller differences depending on how they sorted. A good question, as well, is why the USMLE people don't split reporting by race. I bet one of the reasons is they'd get a lot of flak because there would be big disparities. And good on them (maybe!) because one reason they might care about that is they want to produce good doctors, and watering down their test won't help with that.
I wonder how much of this is simply: smart people do better on tests, and smart people make better doctors.
I don't think smart people make better docs. I'm USMLE 90+ percentile, and not particularly clever. It is, however, important to be clever enough to understand what you read.
Good docs are humble, meticulous and knowledgeable. Stellar docs are excellent communicators.
The study at least shows that better test-taking strongly predicts outcomes, and test scores are correlated with intelligence, as countless studies demonstrate. It may be the case that some non-clever people get high test scores. That doesn't dismiss the general conclusion.
No, no contradiction. I said: high USMLE score != smart. GP said: good doctor = smart. Study says: good doc = high USMLE score. As I also said: good doc = understand what you read.
The article says 'board exam' which is quite different from USMLE. So, it's established: I can't read, and I'm not especially clever. It all checks out ! :-)
If you pray to our lord and savior Jesus Christ you wouldn't even need surgery; the spirit will come down and heal you. Big pharma is lying to you, people.
Jesus is the only medicine.
Reducing standards to meet DEI requirements is therefore a killer practice:
https://www.aei.org/carpe-diem/new-chart-illustrates-graphic...
I’m skeptical of the interpretations. All we have are percentages, without knowing the size of each group.
Among other things, it reeks of Simpson’s Paradox.
https://en.m.wikipedia.org/wiki/Simpson's_paradox
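For anyone unfamiliar with the term, here's a minimal sketch of the general phenomenon with entirely made-up numbers (unrelated to the AEI chart): each subgroup can favor A while the pooled totals favor B, purely because the group sizes are lopsided.

    # Made-up numbers: A beats B within every subgroup, yet B "wins" overall,
    # because the subgroup sizes are lopsided. Classic Simpson's paradox.
    groups = {
        #              A: (successes, trials)  B: (successes, trials)
        "easy cases": ((90, 100),              (800, 1000)),
        "hard cases": ((300, 1000),            (20, 100)),
    }

    totals = {"A": [0, 0], "B": [0, 0]}
    for name, (a, b) in groups.items():
        print(f"{name}: A={a[0] / a[1]:.0%}  B={b[0] / b[1]:.0%}")
        for key, (s, t) in (("A", a), ("B", b)):
            totals[key][0] += s
            totals[key][1] += t

    print(f"pooled    : A={totals['A'][0] / totals['A'][1]:.0%}"
          f"  B={totals['B'][0] / totals['B'][1]:.0%}")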
What relevance would the group size have to any of this, and how would this possibly be a result of the Simpson's Paradox?
> black applicants were more than 9 times more likely to be admitted to medical school than Asians (56.4% vs. 5.9%), and more than 7 times more likely than whites (56.4% vs. 8.0%)
If the number of Asian applications is 10x the number of spots available, their admittance rate can never be higher than 10%. No “discrimination” required. Same for white applicants.
If you only have 10 black applicants and you accept 5 of them, that’s a 50% admittance rate. Which looks huge, and you can scaremonger about how much white and Asian people are unfairly getting sidelined.
Until you see there were 10,000 white applicants with an 8% admittance rate, i.e. 800 people.
800 from 8% vs 5 from 50%.
Again without absolute numbers the percentages can be very deceiving.
Your argument doesn't work because the data already accounts for differences in GPA and MCAT scores. It’s not comparing total applicants—it’s comparing applicants with the same academic qualifications.
If admissions were race-neutral, then students with the same GPA and MCAT score should have similar acceptance rates. But the data shows black and Hispanic applicants get accepted at much higher rates than equally qualified Asian and white applicants.
Your example about total applicants (10 vs. 10,000) doesn’t apply here because the issue isn’t how many people applied, but who gets in when they have the same credentials.
MCAT != board exam, for one thing.
There have been studies suggesting that eliminating the MCAT does little to nothing to the ability to predict student performance beyond the second year or so.
I would be willing to place money on there being a very high correlation between MCAT results and board exam results.
My prediction is the correlation is about 0.30-0.40.
As others have pointed out, there are a lot of unmeasured variables not being controlled for in this finding as well.
I'm not surprised board exam scores predict outcomes, I just think there's lots of other variables along that path from one to the other, and even more from MCAT -> board exam.
I'm just suggesting that the observed prioritization of DEI objectives, at the expense of a fully merit-based admissions process, likely results in some additional deaths. I agree that there are multiple factors involved that predict physician competency, not just those that the DEI policies adversely affect.
I’m all for diversity but that admissions gap is just racism.
You can’t have separate entrances for your establishment based on what folks look like, the group you prefer getting better service doesn’t make it equality.
I know right? Black applicants with high MCAT scores were rejected in favor of white applicants with low MCAT scores! Just unbelievable.
Unbelievable because it’s the opposite of what the link shows?
96% acceptance rate for black candidates with high MCAT scores, but a nonzero acceptance rate for white candidates with low scores.
Maybe there are other factors, and they're correlated with the buckets being used here?
It’s not the case that every single black applicant gets admitted before a single white/Asian applicant does. The point is that it’s much, much easier for a black applicant to get admitted.
A black applicant with GPA and MCAT scores in the lowest bucket still has a 56% chance of admission. That’s on par with an Asian applicant who has GPA and MCAT scores in the highest bucket.
So do you think that if the acceptance rate for high MCAT and GPA are below 100%, then the other bars should be zero? i.e, these are the only admissions criteria that should be considered?
It's easy to hide data behind percentages and say 94% of the black applicants who had a certain GPA were admitted. Look at the raw numbers. Study after study has shown improved care for colored patients and better outcomes when treated by black physicians, which indicates we need the numbers of black and Hispanic physicians to be proportional to their share of the population. If whites and Asians disproportionately apply to medical schools, their admission rates are going to look different. The systemic advantage afforded to affluent kids by being brought up for 18+ years by highly educated parents is not a level playing field.
This is a study my wife wrote regarding this exact scenario, trying to see if patients think they’re getting better care if they’re similar to the doctor (and team) treating them!
Forgot the link: https://pubmed.ncbi.nlm.nih.gov/37801560/
This study has nothing to do with the claim being made by the grandparent comment.
"study after study have shown improved care for colored patients and outcome better when treated by black physicians"
This is false. You're probably getting this idea second hand from this study: https://www.pnas.org/doi/abs/10.1073/pnas.1913405117
Probably because it was famously misused by SC Justice Ketanji Brown Jackson, who got it wildly wrong https://statmodeling.stat.columbia.edu/2024/06/14/statistics...
Anyway, that study is bogus: https://www.pnas.org/doi/abs/10.1073/pnas.2415159121
The only evidence for your claim that I know of is an NBER paper https://www.nber.org/bah/2018no4/does-doctor-race-affect-hea...
Where they randomly assigned black male patients to white or black doctors, IIRC, and the patients got advice on preventative care. Outcomes for black patients were better because they were more willing to take black doctors' advice. The first study, obviously, was about newborns, so there it's about doctor competence straight-up.
A chatbot can also score very highly on these tests. What do you think the survival rate of ChatGPT’s patients will be?
Probably pretty high in diagnosis at least?
Most patients are unable to communicate their symptoms accurately enough. Which is why you need to see them in person, talk with them, and examine them. Not saying a robot couldn't perform, but certainly not a simple chatbot. Despite what some papers say.
No one is hiring based on test scores though? The bar to even get into med school is so insanely high that most people able to get in and become doctors were already upper-middle or high SES. The only point in the entire process where "DEI" matters is feeder programs for underprivileged students, the type of people who can't afford to pay for MCAT tutors etc.
I married my wife shortly before she started med school.
Scores are basically the entire name of the game. Sure you’re not hired into your attending job based on scores, but med school and residency are largely based on scores.
Resident physician hiring is strongly based on test scores, specifically the USMLE Step 1. It's true that scores in the board exams the OP discusses aren't super relevant to hiring, though.
It is step 2 now, given that step 1 is now pass/fail. But yes step 2 is the single most important factor in residency match
It very much depends on the specialty, too.
Obviously I don’t mean “hire” in the narrow sense. We shouldn’t admit people to medical school based on DEI any more than we should hire them after medical school based on DEI.
Incompetency comes in all types, there’s no need to assume anything. In fact, you should be especially careful if your doctor is [your favorite type of person], that’s when you know your cognitive biases are working against your better judgments.
We have too few doctors already; we should set a bar for qualifications and let anyone over the bar become a doctor. The DEI bogeyman didn’t do any harm here, since the current system requires you both to get over the bar AND to be randomly selected for one of N arbitrary spots.
> we should set a bar for qualifications and let anyone over the bar become a doctor
Absolutely. Let’s stop using different bars for different races.
They don't use different bars to become a doctor. Once you're in medical school, everyone passes the same tests and goes through the same process.
Show me evidence that doctors of a certain race are allowed to have lower test scores than other races in order to pass all of the requirements to become a doctor. I don't care if they give anyone a leg up to get into medical school; we've already agreed that anyone who can pass the stringent process to become a doctor should be handed a "Dr." for their name and sent out into the world. So if there weren't any artificial barriers to having unlimited doctors, then it wouldn't matter who got into medical school or how, as long as they passed and became doctors eventually.
We should want more doctors, not argue about who shall become a doctor. All this fighting about the DEI boogeyman is allowing rich pricks to pick our pockets and steal our national resources for themselves.
[flagged]
Can you please stop breaking the site guidelines? We've asked you many times and you've still been doing it repeatedly, such as here and https://news.ycombinator.com/item?id=43062203.
This is not cool:
https://news.ycombinator.com/item?id=42661453 (Jan 2025)
https://news.ycombinator.com/item?id=42526674 (Dec 2024)
https://news.ycombinator.com/item?id=38225621 (Nov 2023)
https://news.ycombinator.com/item?id=37358816 (Sept 2023)
https://news.ycombinator.com/item?id=36994995 (Aug 2023)
https://news.ycombinator.com/item?id=35646889 (April 2023)
I don't want to ban you, but if you keep this up we're going to have to. If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules, that would be good.
It’s not at all clear to me that this comment breaks the guidelines.
I appreciate that. I was thinking of ones like these:
"Don't be snarky."
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
"Converse curiously; don't cross-examine."
Most people (me included) underestimate the amount of provocation in their own comments (but feel it keenly in others'). To avoid running afoul of this dynamic, it's best to err on the side of making sure that you're following the rules. A sort of safety factor if you will.
> Maybe in fields that matter
Fields that matter? Like, say, politics?
One bad health department head will kill way more people than one bad doctor, and you would never guess who's Secretary of Health today, and how uniquely unqualified he is to do that job (or any other job, but that one in particular).
It's wild how the movement that purports to be pushing back on 'unqualified hires' is full of people who can't tell their ass from their elbow. They hold others to a standard that they wouldn't ever dream of meeting.
I understand that morons can be elected, and that's up to the voters, but there's no excuse for political positions that get appointed.
Doctors aren't machines; they're humans. I have not yet read the full paper, only the article, but I already see something really big and important to look out for. When I read the full thing, the question I'll be asking is: what's the likelihood that the self-esteem of doctors was directly intervened on by the exam-taking process itself? How do you control for the loss in confidence that learning of your test performance gives you? How are we certain that learning your score on the board exam doesn't make you more conservative (or riskier) with how you treat patients, as a psychological effect?
This appears to be an observational result, so I'm genuinely perplexed by the reception here. I genuinely thought this comment shows a healthy amount of curiosity and asking important questions. Asking "what control group did this study use?" is usually well-received here.
Soon they will be!
Yeah but the patient is just a biological machine. This machine can easily be divided into organs and apportioned among specialists. The machine is easily understood by a corpus of research and laboratory experimentation.
Many inputs can be placed in the machine by physicians, and the outputs are known. The biological machines can easily be isolated from environment, or monitored with high technology, and assigned numbers in databases to be processed in data centers.
Value is extracted from the biological machines mostly from government and 3rd party sources, so there is no real need to rely on machines having a means or will of their own.
There is no compelling reason to treat humans any differently from automobiles for the purposes of medicine and medical treatment. In fact, humans are less genetically diverse than motor vehicles, and a new model year will always produce a bumper crop of lemons to work on.
The common misconception of someone with a hard science education.
> Many inputs can be placed in the machine by physicians, and the outputs are known. The biological machines can easily be isolated from environment, or monitored with high technology, and assigned numbers in databases to be processed in data centers.
We aren't even close to that level of understanding.
And still, the model works. Lives are saved. We might save many more with a fully integrated, non-simplified approach, but that isn’t necessary in order to keep seeing growth in positive outcomes.
loss of confidence? lol what?