aabajian 2 days ago

I'm an interventional radiologist with a master's in computer science. People outside radiology don't get why AI hasn't taken over.

Can AI read diagnostic images better than a radiologist? Almost certainly the answer is (or will be) yes.

Will radiologists be replaced? Almost certainly the answer is no.

Why not? Medical risk. Unless the law changes, a radiologist will have to sign off on each imaging report. So say you have an AI that reads images primarily and writes pristine reports. The bottleneck will still be the time it takes for the radiologist to look at the images and validate the automated report. Today, radiologists read very quickly, with private practice rads averaging maybe 60-100 studies per day (XRs, ultrasounds, MRIs, CTs, nuclear medicine studies, mammograms, etc). This is near the limit of what a human being can reasonably do. Yes, there will be slight gains from not having to dictate anything, but still having to validate everything takes nearly as much time.

Now, I'm sure there's a cavalier radiologist out there who would just click "sign, sign, sign..." but you know there's a malpractice attorney just waiting for that lawsuit.

  • kbos87 2 days ago

    This is like saying that self-driving cars won't ever become a thing because someone needs to be behind the wheel to take the blame. The article cites AI systems that the FDA has already cleared to operate without a physician's validation.

    • tw04 2 days ago

      > This is like saying that self-driving cars won't ever become a thing because someone behind the wheel needs to be to blame.

      Which is literally the case so far. No manufacturer has shown any willingness to take on the liability of self driving at any scale to date. Waymo has what? 700 cars on the road with the finances and lawyers of Google backing it.

      Let me know when the bean counters sign off on fleets in the millions of vehicles.

      • varenc 2 days ago

        > Waymo has what? 700 cars on the road ...

        They have over 2000 on the road and are growing: https://techcrunch.com/2025/08/31/techcrunch-mobility-a-new-...

        Of course, there are 200M+ personal vehicles registered in the US.

        • kevin_thibedeau a day ago

          Operating in carefully selected urban locations with no severe weather. They are nowhere close to general purpose FSD.

          • newyankee a day ago

            Self-driving cars in the USA in 2025 are like solar PV in China in 2010: it will take a while, but give them time to learn, adapt and expand.

            • darkwater a day ago

              And where were solar panels in China in 2000? Because self-driving cars on public roads in the USA have been a WIP for 10 years at least.

        • thr0w__4w4y 20 hours ago

          Yes and I would swear that 1700 of those 2000 must be in Westwood (near UCLA in Los Angeles). I was stopped for a couple minutes waiting for a friend to come out and I counted 7 Waymos driving past me in 60 seconds. Truth be told they seemed to be driving better than the meatbags around them.

      • hnaccount_rng 2 days ago

        You also have Mercedes taking responsibility for their traffic-jam-on-highways autopilot. But yeah. It's those two examples so far (not sure what exactly the state of Tesla is. But.. yeah, not going to spend the time to find out either)

    • CSSer 2 days ago

      I'm curious how many people would want a second opinion (from a human) if they're presented with a bad discovery from a radiological exam and are then told it was fully automated.

      I have to admit if my life were on the line I might be that Karen.

      • rogerrogerr 2 days ago

        A bad discovery probably means your exam will be read by someone qualified, like the surgeon/doctor tasked with correcting it.

        False negatives are far more problematic.

        • CSSer 2 days ago

          Ah, you're right. Something else I'm curious about with these systems is how they'll affect difficulty level. If AI handles the majority of easy cases, and radiologists are already at capacity, will they crack if the only cases they evaluate are moderately to extraordinarily difficult?

          • malfist a day ago

            The problem is, you don't know beforehand if it's a hard case or not.

            A hard-to-spot tumor is an easy high-confidence negative result from an AI.

        • borroka a day ago

          But since we don't know where those false negatives are, we want radiologists.

          I remember a funny question that my non-technical colleagues asked me during the presentation of some ML predictions. They asked me, “How wrong is this prediction?” And I replied that if I knew, I would have made the prediction correct. Errors are estimated on a test data set, either overall or broken down by groups.

          So far, technological advances have supported medical professionals rather than substituted for them: they have allowed medical professionals to do more and better.

        • gervwyk a day ago

          I'm willing to bet everyone here has a relative or friend who at some point got a false negative from a doctor.. just like everyone knows drivers who have caused accidents.. The core problem is how to go about centralizing liability.. or not.

        • close04 a day ago

          From my experience the best person to read these images is the medical imaging expert. The doctor who treats the underlying issue is qualified but it's not their core competence. They'll check of course but I don't think they generally have a strong basis to override the imaging expert.

          If it's something serious enough a patient getting bad news will probably want a second opinion no matter who gave them the first one.

      • captainkrtek 2 days ago

        I'd be more concerned about the false negative. My report says nothing found? Sounds great, do I bother getting a 2nd opinion?

        • jayknight 2 days ago

          You pay extra for a doctor's opinion. Probably not covered by insurance.

          • ares623 2 days ago

            That's horrific. You pay insurance to have ChatGPT make the diagnosis. But you still need to pay out of pocket anyway. Because of that, I am 100% confident this will become reality. It is too good to pass up.

            • Workaccount2 21 hours ago

              People will flock to "AI medical" insurance that costs $50/mo and lets you see whatever AI specialist you want whenever you want.

              • captainkrtek 2 hours ago

                I think a problem here is the sycophantic nature of these models. If I'm a hypochondriac, and I have some new-onset symptoms, and I prompt some LLM about what I'm feeling and what I suspect, I worry it'll likely positively reinforce the diagnosis I'm seeking.

            • falcor84 a day ago

              Early intervention is generally significantly cheaper, so insurers have an interest in doing sufficiently good diagnosis to avoid unnecessary late and costly interventions.

            • CSSer 2 days ago

              I mean, we already have deductibles and out-of-pocket maximums. If anything, this kind of policy could align with that because it's prophylactic. We can ensure we maximize the amount we retrieve from you before care kicks in this way. Yeah, it tracks.

          • sokoloff a day ago

            It sounds fairly reasonable to me to have to pay to get a second opinion for a negative finding on a screening. (That's off-axis from whether an AI should be able to provide the initial negative finding.)

            If we don't allow this, I think we're more likely to find that the initial screening will be denied as not medically indicated than we are to find insurance companies covering two screenings when the first is negative. And I think we're better off with the increased routine screenings for a lot of conditions.

      • mike_ivanov 2 days ago

        Since when is self-care being a Karen?

        • CSSer 2 days ago

          It's not. I was trying to evoke a world where it's become so commonplace that you're a nuisance if you're one of those people who questions it.

          • incone123 a day ago

            Need to work on the comedic delivery in written form because you just came off as leaning on a stereotype

        • GLdRH 2 days ago

          "Cancer? Me? I'd like to speak to your manager!"

          • ponector a day ago

            In reality it's always a good decision to seek a second independent assessment in case of diagnosis of severe illness.

            People make mistakes all the time; you don't want to be the one affected by their mistake.

    • alexpotato a day ago

      This is essentially what's happened with airliners.

      Planes can land themselves with zero human intervention in all kinds of weather conditions and operating environments. In fact, there was a documentary where the plane landed so precisely that you could hear the tires hitting the runway centerline markings as it landed and then taxied.

      Yet we STILL have pilots as a "last line of defense" in case something goes wrong.

      • frenchman_in_ny a day ago

        No - planes cannot "land themselves with zero human intervention" (...). A CAT III autoland on commercial airliners requires a ton of manual setting of systems and certificated aircraft and runways in order to "land themselves" [0][1].

        I'm not fully up to speed on the Autonomi / Garmin Autoland implementation found today on Cirrus and other aircraft -- but it's not for "everyday" use for landings.

        [0] https://pilotinstitute.com/can-an-airplane-land-itself/

        [1] https://askthepilot.com/questionanswers/automation-myths/

        • rkomorn a day ago

          Not only that, but they are even less capable of taking off on their own (see the work done by Airbus' ATTOL project [0] for some of the more recent successes).

          So I'm not sure what "planes can land on their own" gets us anyway, even if the autopilot on modern airliners can do an awful lot on its own (including following flight plans in ways that are more advanced than before).

          The Garmin Autoland basically announces "my pilot is incapacitated and the plane is going to land itself at <insert a nearby runway>" without asking for landing clearance (which is very cool in and of itself but nowhere near what anyone would consider autonomous).

          [0] https://www.youtube.com/watch?v=9TIBeso4abU (among other things, but this video is arguably the most fun one)

          Edit: and yes maybe the "pilots are basically superfluous now" misconception is a pet peeve for me (and I'm guessing parent as well)

          • psunavy03 a day ago

            Taking off on their own is one thing. Being able to properly handle a high-speed abort is another, given that is one of the most dangerous emergency procedures in aviation.

            • rkomorn a day ago

              Agreed. I had to actually reject a takeoff in a C172 on a somewhat short runway and that was already enough stress.

              • psunavy03 16 hours ago

                Having flown military jets . . . I'm thankful I only ever had to high-speed abort in the simulator. It's sporty, even with a tailhook and long-field arresting gear. The nightmare scenario was a dual high-speed abort during a formation takeoff. First one to the arresting gear loses, and has to pass it up for the one behind.

                There's no other regime of flight where you're asking the aircraft to go from "I want to do this" to "I want to do the exact opposite of that" in a matter of seconds, and the physics is not in your favor.

          • namibj a day ago

            How's that not autonomous? The landing is fully automated. The clearance/talking isn't, but we know that's about the easiest part to automate; it's just that the incentives aren't quite there.

            • rkomorn a day ago

              It's not autonomous because it is rote automation.

              It does not have logic to deal with unforeseen situations (with some exceptions of handling collision avoidance advisories). Automating ATC, clearance, etc, is also not currently realistic (let alone "the easiest part") because ATC doesn't know what an airliner's constraints may be in terms of fuel capacity, company procedures for the aircraft, etc, so it can't just remotely instruct it to say "fly this route / hold for this long / etc".

              Heck, even the current autolands need the pilot to control the aircraft when the speed drops low enough that the rudder is no longer effective because the nose gear is usually not autopilot-controllable (which is a TIL for me). So that means the aircraft can't vacate the runway, let alone taxi to the gate.

              I think airliners and modern autopilot and flight computers are amazing systems but they are just not "autonomous" by any stretch.

              Edit: oh, sorry, maybe you were only asking about the Garmin Autoland not being autonomous, not airliner autoland. Most of this still applies, though.

              • frenchman_in_ny a day ago

                There's still a human in the loop with Garmin Autoland -- someone has to press the button. If you're flying solo and become incapacitated, the plane isn't going to land itself.

                • rkomorn a day ago

                  Right. None of this works without humans. :)

      • victorbjorklund a day ago

        One difference there would be that the cost of the pilots is tiny vs the rest that goes into a flight. But I would bet that the cost of the doctor is a bigger % of the process of getting an x-ray.

    • trueismywork a day ago

      At the end of the day, there's a decision that needs to be made, and decisions have consequences. And in our current society, there is only one way we know of to make sure that the decision is taken with sufficient humanity: by putting a human in charge of, and responsible for, making that decision.

    • aprilthird2021 a day ago

      The FDA can clear whatever they want. A malpractice lawyer WILL sue and WILL win whenever an AI mistake slips through and no human was in the loop to fix the issue.

      It's the same way that we can save time and money if we just don't wash our hands when cooking food. Sure it's true. But someone WILL get sick and we WILL get in trouble for it

      • fkyoureadthedoc a day ago

        What's the difference in the lawsuit scenario if a doctor messes up? If the AI is the same or better error rate than a human, then insurance for it should be cheaper. If there's no regulatory blocks, then I don't see how it doesn't ultimately just become a cost comparison.

        • lurk2 a day ago

          > What's the difference in the lawsuit scenario if a doctor messes up?

          Scale. Doctors and taxi drivers represent several points of limited liability, whereas an AI would be treating (and thus liable for) all patients. If a hospital treats one hundred patients with ten doctors, and one doctor is negligent, then his patients might sue him; some patients seeing other doctors might sue the hospital if they see his hiring as indicative of broader institutional neglect, but they’d have to prove this in a lawsuit. If this happened with a software-based classifier being used at every major hospital, you’re talking about a class action lawsuit including every possible person who was ever misdiagnosed by the software; it’s a much more obvious candidate for a class action because the software company has more money and it was the same thing happening every time, whereas a doctor’s neglect or incompetence is not necessarily indicative of broader neglect or incompetence at an institutional level.

          > If there's no regulatory blocks, then I don't see how it doesn't ultimately just become a cost comparison.

          To make a fair comparison you’d have to look at how many more people are getting successful interventions due to the AI decreasing the cost of diagnosis.

      • the_real_cher a day ago

        yeah but at some point the technology will be sufficient and it will be cheaper to pay the rare $2 million malpractice suit than a team of $500,000/yr radiologists

        there's an MBA salivating over that presentation

    • hliyan a day ago

      Very questionable reasoning: using a traffic analogy to argue against medical reality.

    • UltraSane 2 days ago

      Tesla still hasn't accepted liability for crashes caused by FSD. They in fact fight any such claims in court very vigorously.

      • otterley a day ago

        They have settled out of court in every single case. None has gone to trial. This suggests that the company is afraid not only of the amount of damages that could be awarded by a jury, but also legal precedent that holds them or other manufacturers liable for injuries caused by FSD failures.

      • avh02 a day ago

        Tesla isn't the north star here

    • constantcrying a day ago

      Medicine does not work like traffic. There is no reason for a human to care whether the other car is being driven by a machine.

      Medicine is existential. The job of a doctor is not to look at data, give a diagnosis and leave. A crucial function of practicing doctors is communication and human interaction with their patients.

      When your life is on the line (and frankly, even if it isn't), you do not want to talk to an LLM. At minimum you expect that another human can explain to you what is wrong with you and what options there are for you.

      • victorbjorklund a day ago

        You often don't speak to the radiologist anyway. Lots of radiologists work remotely and don't meet and speak with every patient.

      • philipallstar a day ago

        There's some sort of category error here. Not every doctor is that type of doctor. A radiologist could be a remote interpretation service staffed by humans or by AI, just as sending off blood for a blood test is done in a laboratory.

      • FireBeyond a day ago

        > There is no reason for a human to care whether the other car is being driven by a machine.

        What? If I don't trust the machine or the software running it, absolutely I do, if I have to share the road with that car, as its mistakes are quite capable of killing me.

        (Yes, I can die in other accidents too. But saying "there's no reason for me to care if the cars around me are filled with people sleeping while FSD tries to solve driving" is not accurate.)

      • ACCount37 a day ago

        So, you need a moral support human? Like a big plushie, but more alive?

        • kashunstva a day ago

          You know, for most humans, empathy is a thing; all the more so when facing known or suspected health situations. Good on those who have transcended that need. I guess.

        • constantcrying a day ago

          What is the point of the snark? If you are going to find out that you are dying within a year, do you want to get that as an E-Mail?

          • ACCount37 a day ago

            The point is: I don't see "emotional support" as a vital part of the job of a radiologist.

  • stavros a day ago

    I'm not going to comment on whether AI is better than human radiologists or not, but if it is, what will happen is this:

    Radiologists will validate the results but either find themselves clicking "approve, approve, approve" all day, or disagree and find they were wrong (since our hypothesis is that the AI is better than a human). Eventually, this will be common knowledge in the field, hospitals will decide to save on costs and just skip the humans altogether, lobby, and get the law changed.

  • Workaccount2 a day ago

    What about the patient that doesn't want to pay $6,000 to go from 99.9% accuracy to 99.95% accuracy?

    • newyankee a day ago

      This is exactly the tradeoff that works in healthcare of poor countries, mostly because the alternative is no healthcare

    • kccqzy a day ago

      I don't think the legal framework even allows the patient to make that trade off. Can a patient choose 99.9% accuracy instead of 99.95% accuracy and also waive the right to a malpractice lawsuit?

    • ninetyninenine a day ago

      You know the crazy thing about this? For this application I think it’s similar to spam. AI can easily be trained to be better than a human.

      And it’s definitely not a 0.05 percent difference. AI will perform better by a long shot.

      Two reasons for this.

      1. The AI is trained on better data. If the radiologist makes a mistake, that mistake is identified later, and the training data can be flagged.

      2. No human indeterminism. AI doesn't get stressed or tired. This alone, even without 1. above, will make AI beat humans.

      Let’s say 1. was applied but that only applies for consistent mistakes that humans make. Consistent mistakes are eventually flagged and shows up as a pattern in training data and the AI can learn it even though humans themselves never actually notice the pattern. Humans just know that the radiologists opinion was wrong because a different outcome happened, we don’t even have to know why it was wrong and many times we can’t know… just flagging the data is enough for the AI to ingest the pattern.

      Inconsistent mistakes comes from number 2. If humans make mistakes that are due to stress the training data reflecting those mistakes will be minuscule in size and also random without pattern. The average majority case of the training data will smooth these issues out and the model will remain consistent. Right? A marker that follows a certain pattern shows up 60 times in the data but one time it’s marked incorrectly because of human error… this will be smoothed out.

      Overall it will be a statistical anomaly that defies intuition. Similar to how flying in planes is safer than driving. ML models in radiology and spam will beat humans.

      I think we are under this delusion that all humans are better than ML but this is simply not true. You can thank LLMs for spreading this wrong intuition.

    • hbd-investor a day ago

      I think it's the other way around: AI would certainly have better accuracy than a human, since AI can see things pixel by pixel.

      You can take a 4k photo of anything, change one pixel to pure white and a human wouldn't be able to find this pixel by looking at the picture with their eyes. A machine on the other hand would be able to do it immediately and effortlessly.

      Machine vision is literally superhuman. For example, military camo can easily fool human eyes, but a machine can see through it clear as day, because it can tell the difference between

      Black (hex #000000, RGB 0, 0, 0, CMYK 0, 0, 0, 100)

      and

      Jet Black (hex #343434, RGB 52, 52, 52, CMYK 0, 0, 0, 80)
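
      To make it concrete, here's a toy sketch (numpy assumed; the frame size, coordinates, and pixel values are all made up):

        import numpy as np

        # 4K grayscale frame where every pixel is below 200, with one pixel
        # nudged to pure white. Invisible to the eye, trivial for a machine.
        img = np.random.randint(0, 200, size=(2160, 3840), dtype=np.uint8)
        img[1234, 567] = 255

        # One vectorized pass finds it immediately.
        ys, xs = np.where(img == img.max())
        print(list(zip(ys.tolist(), xs.tolist())))  # [(1234, 567)]

        # Same for the two "identical" blacks: 0x00 != 0x34, end of story.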

  • seesthruya a day ago

    I'm a diagnostic radiologist with 20 years clinical experience, and I have been programming computers since 1979. I need to challenge one of your core assumptions.

    > Can AI read diagnostic images better than a radiologist? Almost certainly the answer is (or will be) yes.

    I'm sorry, but I disagree, and I think you are making a wild assumption here. I am up to date on the latest AI products in radiology, use several of them, and none of them are even in the ballpark on this. The vast majority are non-contributory.

    It is my strong belief that there is an almost infinite variation in both human anatomy and pathology. Given this variation, I believe that in order for your above assumption to be correct, the development of "AGI" will need to happen.

    When I interpret a study I am not just matching patterns of pixels on the screen with my memory. I am thinking, puzzling, gathering and synthesizing new information. Every day I see something I have never seen before, and maybe no one has ever seen before. Things that can't and don't exist in a training data set.

    I'm on the back end of my career now and I am financially secure. I mention that because people will assume I'm a greedy and ignorant Luddite doctor trying to protect my way of life. On the contrary, if someone developed a good replacement for what I do, I would gladly lay down my microphone and move on.

    But I don't think we are there yet, in fact I don't think we're even close.

    • sokoloff a day ago

      Can a human reliably and carefully study imaging from screening tests for hours on end (think of a future world where whole-body MRI scanning for asymptomatic people becomes affordable and routine thanks to AI processing) and not miss subtle anomalies?

      I can easily imagine that humans are better at really digging deeply and reasoning carefully about anomalies that they notice.

      I doubt they're nearly as good as computers at detecting subtle changes on screens where 99% of images have nothing worrisome and the priors are "nothing is suspicious".

      I don't want to equate radiologists with TSA screeners, but the false negative rate for TSA screening of carryon bags is incredibly high. I think there's an analog here about the ability of humans to maintain sustained focus on tedious tasks.

      • bonsai_spool a day ago

        > Can a human reliably carefully study for hours on end imaging from screening tests

        This is actually very common in radiology, where some positions have shifts of 8-12 hours and one isn't done until all the studies on the list have been read.

        > think of a future world where whole-body MRI scanning for asymptomatic people becomes affordable and routine thanks to AI processing) and not miss subtle anomalies?

        The bottleneck in MRI is not reading but instead the very long acquisition times paired with the unavailability of the expensive machinery.

        If we charitably assume that you're thinking of CT scans, some studies on indiscriminate imaging indicate that most findings will be false positives:

        https://pmc.ncbi.nlm.nih.gov/articles/PMC6850647/

    • hliyan a day ago

      Do any of these models know how to say "I don't know"? This is one of my biggest worries about these models.

    • fkyoureadthedoc a day ago

      > When I interpret a study I am not just matching patterns of pixels on the screen with my memory.

      Seems like an over simplification, but let's say it's just true. Wouldn't you rather spend your time on novel problems that you haven't seen before? Some ML system identifies easy/common ones that it has high confidence in, leaving the interesting ones for you?

      • seesthruya a day ago

        Yes, that would be ideal, if we could build such a system. I think we cannot with current tech.

    • aabajian a day ago

      Your belief is held by many, many radiologists. One thing I like to highlight is that LLMs and LVMs are much more advanced than any model in the past. In particular, they do not require specific training data to contain a diagnosis. They don't even require specific modality data to make inferences.

      Think about how you learned anatomy. You probably looked at Netter drawings or Grey's long before you ever saw a CT or MRI. You probably knew the English word "laceration" before you saw a liver lac. You probably knew what a ground glass bathroom window looked like before the term was used to describe lung findings.

      LLMs/LVMs ingest a huge amount of training data, more than humans can appreciate, and learn connections between that data. I can ask these models to render an elephant in outer space with a hematoma on its snout in the style of a CT scan. Surely, there is no such image in the training set, yet the model knows what I want from the enormous number of associations in its network.

      Also, the word "finite" has a very specific definition in mathematics. It's a natural human fallacy to equate very large with infinite. And the variation in images is finite. Given a 16-bit, 512 x 512 x 100 slice CT scan, you're looking at 2^16 * 26214400 possible images. Very large, but still finite.

      Of course, the reality is way, way smaller. As a human, you can't even look at the entire grayscale spectrum. We just say, < -500 Hounsfield units (HU), that's air, -200 < fat < 0, bone/metal > 100, etc. A gifted radiologist can maybe distinguish 100 different tissue types based on the HU. So, instead of 2^16 pixel values, you have...100. That's 100 * 26214400 = 262,440,000 possible CT scans. That's a realistic upper-limit on how many different CT scans there could possibly be. So, let's pre-draft 260 million reports and just pick the one that fits best at inference time. The amount you'd have to change would be miniscule.

      • epcoa 3 hours ago

        Maybe I’m misunderstanding what you’re calculating, but this math seems wildly off. I sincerely don’t understand what alternative numerical point is being made.

        > Given a 16-bit, 512 x 512 x 100 slice CT scan, you're looking at 2^16 * 26214400

        That should be 65536^(512*512), i.e. 65536 multiplied by itself 262,144 times, for each image. An enormous number. Whether or not you assume replacement (duplicates) is moot.

        > That's 100 * 26214400 = 262,440,000

        There are 100^(512*512) 512x512 100-level grayscale images alone, i.e. 100 to the 262,144th power: 100 multiplied by itself 262,144 times. Again, how are you paring a massive combinatoric space down to a reasonable 262 million?
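
        For a sense of scale, a quick Python sketch of the arithmetic:

          import math

          # Distinct 16-bit, 512 x 512 x 100-slice volumes:
          # 65536 choices per voxel, for 512*512*100 voxels.
          voxels = 512 * 512 * 100
          digits = int(voxels * math.log10(2 ** 16)) + 1
          print(f"(2^16)^{voxels:,} has about {digits:,} decimal digits")
          # About 126 million digits, versus the 10-digit figure above.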

      • seesthruya a day ago

        Hi aabajian, thanks for replying!

        I might quibble with your math a little. Most CTs have more than 100 images, in fact as you know stroke protocols have thousands. And many scans are reconstructed with different kernels, i.e. soft tissue, bone, lung. So maybe your number is a little low.

        Still your point is a good one, that there is probably a finite number of imaging presentations possible. Let's pre-dictate them all! That's a lot of RVUs, where do I sign up ;-)

        Now, consider this point. Two identical scans can have different "correct" interpretations.

        How is that possible? To simplify things, consider an x-ray of a pediatric wrist. Is it fractured? Well, that depends. Where does it hurt? How old are they? What happened? What does the other wrist look like? Where did they grow up?

        This may seem like an artificial example, but I promise you it is not. There can be identical x-rays, and one is fractured and one is not.

        So add this example to the training data set. Now do this for hundreds or thousands of other "corner cases". Does that head CT show acute blood, or is that just a small focus of gyriform dystrophic calcification? Etc.

        I guess my point is, you may end up being right. But I don't think we are particularly close, and LLMs might not get us there.

        • themantalope a day ago

          Haha, I’m also an IR with AI research experience.

          My view is much more in line with yours and this interpretation.

          Another point - I think many people (including other clinicians) have a sense that radiology is a practice of clear cut findings and descriptions, when in practice it’s anything but.

          At another level beyond the imaging appearance and clinical interpretation is the fact that our reports are also interpreted at a professional and “political” level.

          I can imagine a busy neurosurgeon running a good practice calling the hospital CEO to discuss unforgiving interpretations of post op scans from the AI bot……

          • seesthruya a day ago

            > I can imagine a busy neurosurgeon running a good practice calling the hospital CEO to discuss unforgiving interpretations of post op scans from the AI bot……

            I have fielded these phone calls, lol, and would absolutely love to see ChatGPT handle this.

    • Workaccount2 21 hours ago

      Johns Hopkins has an in-house AI unit where they train their own AIs to do imaging analysis. In fact, this center made the rounds a few months ago in an NYT story about AI in radiology.

      What was left out was that these "cutting edge" AI imaging models were old-school CNNs from the mid-2010s, running on local computers. It seems the idea of using transformers (the architecture behind LLMs) is only now being explored.

      In that sense, we still do not know what a purpose-built "ChatGPT of radiology" would be capable of, but if we use the data point of comparing AI from 2015 to AI of 2025, the step up in ability is enormous.

    • ACCount37 a day ago

      "Latest products" and "state of the art" are two very, very different classes of systems. If anything medical has reached the state of a "product", you can safely assume that it's somewhere between 5 and 50 years behind what's being attempted in the labs.

      And in AI tech, even "5 years ago" is a different era.

      In year 2025, we have those massive multimodal reasoning LLMs that can crossreference data from different images, text and more. If the kind of effort and expertise that went into general purpose GPT-5 went into a more specialized medical AI, where would its capabilities top out?

    • dataflow a day ago

      > Every day I see something I have never seen before, and maybe no one has ever seen before.

      Do you have any typical examples of this you could try to explain to us laymen, so we get a feel for what this looks like? I feel like it's hard for laymen to imagine how you could be seeing new things outside a pattern every day (or week).

    • polski-g a day ago

      AI can detect a Black person vs a White person via their chest x-rays. Radiologists say there is no difference. Turns out they're wrong. https://www.nibib.nih.gov/news-events/newsroom/study-finds-a...

      That being said, there are no radiologists available to hire at any price: https://x.com/ScottTruhlar/status/1951370887577706915

      • seesthruya a day ago

        THERE ARE NO RADIOLOGISTS AVAILABLE TO HIRE AT ANY PRICE!!!

        True, and very frustrating. Imaging volume is going parabolic and we cannot keep up! I am offering full partnership on day one with no buy-in for new hires. My group is in the top 1% of radiology income. I can't find anyone to hire, I can only steal people from other groups.

  • cogman10 2 days ago

    Doesn't most of the stuff a radiologist does get double-checked anyway by the doctor that orders the scan in the first place? I guess not for a more typical screening scan like a mammogram. However, for anything else like a CT, MRI, X-ray, etc., I expect the doctor/NP that ordered it in the first place will want to take a look at the image itself and not just the report on the image.

    • n8henrie 2 days ago

      As an ER doc I look at a lot of my own studies, because I'm often using my interpretation to guide real-time management (making decisions that can't wait for a radiologist). I've gotten much better over time, and I would speculate that I'm one of the better doctors in my small hospital at reading my own X-rays, CTs, and ultrasounds.

      I am nowhere near as good as our worst radiologist (who is, frankly... not great). It's not even close.

      • seesthruya a day ago

        As a working diagnostic radiologist in a busy private practice serving several hospitals, this has been my experience as well.

        We have some excellent ER physicians, and several who are very good at looking at their own xrays. They also have the benefit of directly examining the patient, "it hurts HERE", while I am in my basement. Several times a year they catch something I miss!

        But when it comes to the hard stuff, and particularly cross-sectional imaging, they are simply not trained for it.

      • sharkweek 2 days ago

        I’m fascinated. What makes a great radiologist so much better than the average?

        • incone123 a day ago

          Calling the edge cases correctly, I would think.

          I hurt my arm a while back and the ER guy didn't spot the radial head fracture, but the specialist did. No big deal since the treatment was the same either way.

        • lostlogin a day ago

          I'm not the OP; I'm an MR tech.

          I'd put techs up against non-radiology-trained physicians in terms of identifying pathology. However, techs aren't anywhere near the ability of a radiologist.

          Persuading junior techs not to scan each other and decide the diagnosis themselves is a recurring problem, and it comes up too often.

          These techs are trained and are good. I have too many stories about things techs have missed which a radiologist has immediately spotted.

        • 71bw a day ago

          You're specifically trained to look at the scans, not to do 75 other things as well and only use scans to aid whatever else you're doing.

    • nutjob2 2 days ago

      A primary physician (or NP) isn't in a position to validate the judgement of a specialist. Even if they had the training and skill (doubtful), responsibility goes up, not down. It's all a question of who is liable when things go wrong.

    • Spooky23 2 days ago

      Not meaningfully. Beyond basics like a large tumor, a bone break, etc., there's a lot to it.

      • coderatlarge 2 days ago

        my PCP doesn’t even have the tools to view an MRI, even though they're part of a hospital system.

        • Spooky23 a day ago

          That’s an issue with that practice. I had the tools to view MRIs on my laptop.

  • teleforce a day ago

    >People outside radiology don't get why AI hasn't taken over

    AI will probably never take over. What we really need is AI working in tandem with radiologists and complementing their work, to help with their busy schedules (or the limited number of radiologists).

    The OP title could also be changed to "Demand for human cardiologists is at an all-time high" and still be true.

    For example, in CVD detection a cardiologist needs to diagnose the patient properly, and if the patient is not happy with the diagnosis he can get a second opinion from another cardiologist. But the number of cardiologists is very limited, even more limited than radiologists.

    For most countries in the world, there are only several hundred to several thousand registered cardiologists per country, a cardiologist-to-population ratio of about 1:100,000.

    People expect cardiologists to go through their ECG readings, but reading ECGs is very cumbersome. Say you have the 5 minutes of ECG signal that guidelines set as the minimum requirement for AFib detection. The standard ECG is 12-lead, resulting in 12 x 5 x 60 = 3,600 beats even for that minimum 5-minute duration (assuming 1 minute of ECG equals 60 beats). Then of course we have Holter ECG, with typical 24-hour readings that increase the duration considerably, which is why almost all Holter reading is now automated. But current automated ECG detection has very low accuracy, because the accuracy of the detection methods (statistics/AI/ML) is bounded by the beat-detection algorithm, for example the venerable Pan-Tompkins for the fiducial time-domain approach [1] (sketched below).

    Cardiologists would rather spend their time on more interesting activities, like teaching future cardiologists, performing expensive procedures like ICD or pacemaker implantation, or taking their once-in-a-blue-moon holidays, instead of reading monotonous patients' ECGs.

    I think this is why ECG reading automation with AI/ML is necessary to complement the cardiologist, but the trick is to push the sensitivity side of accuracy very high, preferably to 100%, so that missed potential patients are minimized in the expert-and-cardiologist-in-the-loop exercise.

    [1] Pan–Tompkins algorithm:

    https://en.wikipedia.org/wiki/Pan%E2%80%93Tompkins_algorithm
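
    For a flavor of the fiducial time-domain approach, here is a rough Pan-Tompkins-style sketch (illustrative only; the sampling rate, band edges, and thresholds are assumptions, and real implementations use adaptive thresholds):

      import numpy as np
      from scipy.signal import butter, filtfilt, find_peaks

      def detect_beats(ecg, fs=360):
          # 1. Band-pass ~5-15 Hz to isolate QRS energy.
          b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
          filtered = filtfilt(b, a, ecg)
          # 2. Differentiate, square, and integrate over a ~150 ms window.
          window = int(0.15 * fs)
          integrated = np.convolve(np.diff(filtered) ** 2,
                                   np.ones(window) / window, mode="same")
          # 3. Pick peaks with a 200 ms refractory period.
          peaks, _ = find_peaks(integrated, height=integrated.mean(),
                                distance=int(0.2 * fs))
          return peaks  # sample indices of detected beats

      # The 3,600-beat workload above, per study (twelve_leads hypothetical):
      # sum(len(detect_beats(lead)) for lead in twelve_leads)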

    • eMPee584 a day ago

      As in.. for small durations of "never" ? ..

    • the_real_cher a day ago

      This seems like a task an A.I. would be really good at or even just a standard algorithm.

  • Wololooo a day ago

    Did the need rise through the use of silicon X-ray detectors, which improved the handling of images and reduced the time needed to get imaging done, making it faster, cheaper and less cumbersome and thereby increasing the number of requests for X-ray imaging?

  • missedthecue a day ago

    So you're telling me the reason an extremely expensive yet totally redundant cost in the healthcare infrastructure will remain in place is because of regulatory capture?

    You're probably right.

  • newyankee 2 days ago

    But this indicates a lack of incentives to reduce healthcare costs by optimisation. If AI can do something well enough, and AI + humans surpass humans, leading to cost reductions / increased throughput, this should be reflected in the workflows.

    I feel that human processes have inertia and, for lack of a better word, gatekeepers feel that new, novel approaches should be adopted slowly, which is why we are not seeing the impact yet. Once a country with the right incentive structure (e.g. China) can show that it can outperform and help improve the overall experience, I am sure things will change.

    While 10 years of progress is a lot in ML/AI, in more traditional fields it is probably a blip against this institutional inertia, which will change generation by generation. All that is needed is an external actor to take the risk and show a step-change improvement. Having experienced healthcare in the US, I feel people are simply scared to take on bold challenges.

    • dmbche 2 days ago

      Three things explain this. First, while models beat humans on benchmarks, the standardized tests designed to measure AI performance, they struggle to replicate this performance in hospital conditions. Most tools can only diagnose abnormalities that are common in training data, and models often don’t work as well outside of their test conditions. Second, attempts to give models more tasks have run into legal hurdles: regulators and medical insurers so far are reluctant to approve or cover fully autonomous radiology models. Third, even when they do diagnose accurately, models replace only a small share of a radiologist’s job. Human radiologists spend a minority of their time on diagnostics and the majority on other activities, like talking to patients and fellow clinicians

      From the article

      • Marazan a day ago

        Another key extract from the article

        > The performance of a tool can drop as much as 20 percentage points when it is tested out of sample, on data from other hospitals. In one study, a pneumonia detection model trained on chest X-rays from a single hospital performed substantially worse when tested at a different hospital.

        That screams of overfitting to the training data.

        • SirHumphrey a day ago

          Because that is literally what's happening. I did a bit of work developing some radiological models, and the sample ratio of healthy to malignant is usually 4 to 1. Then you modify the error function so that it makes malignants more significant (sketched below); you are quite often working with datasets as small as 500 images, so an 80/20 training/validation split leaves you with 80 examples of malignant. This means that as soon as you take a realistic sample, where one specific condition appears in maybe 1/100 or 1/1000 cases, the false positives make your model practically useless.

          Of course SOTA models are much better, but getting medical data is quite difficult and expensive, so there is not a lot of it.
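
          To spell out the error-function modification, a minimal PyTorch sketch (the weight and batch values are made up):

            import torch
            import torch.nn as nn

            # 4:1 healthy-to-malignant training data: weight the rare positive
            # class so a missed malignant costs 4x as much in the loss.
            loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([4.0]))

            logits = torch.randn(32, 1)  # stand-in for model outputs on a batch
            labels = torch.randint(0, 2, (32, 1)).float()  # 1 = malignant
            loss = loss_fn(logits, labels)

            # The catch: a model tuned this way on ~500 images still faces
            # 1/100 or 1/1000 prevalence in the clinic, where false positives
            # swamp the true ones.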

    • Spooky23 2 days ago

      Remember, the AI doesn’t create anything, so you add risk potentially to the patient outcome and perhaps make advancement more difficult.

      My late wife had to have a stent placed in a vein in her brain to relieve cranial pressure. We had to travel to New York for an interventional radiologist and team to fish a 7 inch stent and balloon from her thigh up.

      At the time, we had to travel to NYC, and the doctor was one of a half dozen who could do the procedure in the US. Who’s going to train the future physician the skills needed to develop the procedure?

      For stuff like this, I feel like AI is potentially going to erase certain human knowledge.

      • chii 2 days ago

        > Who’s going to train the future physician the skills needed to develop the procedure?

        i would presume that AI taking over won't erase the physical work, which would mean existing training regimes will continue to exist.

        Until one day, an AI robot is capable of performing such a procedure, which would then mean the human job becomes obsolete. Like a horse-drawn coach driver - that "job" is gone today, but nobody misses it.

        • sungam a day ago

          Performing the procedure requires a high level of skill in interpreting scans (angiograms) in real time.

        • Spooky23 a day ago

          Yeah there’s no more drivers out there, bro. Lol.

    • rtpg 2 days ago

      The assumption is that more productive AI + humans leads to cost reductions.

      But if everyone involved has a profit motive, you end up cutting at those cost reductions. "We'll save you 100 bucks, so give us 50", done at the AI model level, the AI model repackager, the software suite that the hospital is using, the system integrators that manage the software suite installation for the hospital, the reseller of the integrator's services through some consultancy firm, etc etc.

      There are so many layers involved, and each layer is so used to trying to take a slice, and we're talking about a good level of individualization in places that aren't fully public a la NHS, that the "vultures" (so to speak) are all there ready to take their cut.

      Maybe anathema to say on this site, but de-agglomeration really seems to have killed just trying to make things better for the love of the game.

      • rayiner 2 days ago

        Nobody has a profit motive since doctors get their bills paid per procedure and health insurers have a profit cap.

        • rtpg 2 days ago

          Consider that the profit cap is a percentage, so increased costs in fact increase the amount of profits to be scooped up: if profit is capped at, say, 15% of premiums, then $1B in premiums allows $150M of profit while $2B allows $300M. So health insurers that would like to see more cash are incentivized to have costs increase!

          I also think that the profit cap percentage is not something that applies across the board to every single player in the healthcare space.

        • tptacek 17 hours ago

          Wait, explain. The insurer thing, I get: they're capped. The doctors seem definitely to have a profit motive!

    • erentz 2 days ago

      From real world experience as a patient that has had a lot go wrong over the last decade. The problem isn’t lack of automation, it’s structural issues affecting cost.

      Just as one example, a chest CT would’ve cost $450 if done cash. It cost an insurer over $1200 done via insurance. And that was after multiple appeals and reviews involving time from people at the insurance company and the provider's office, including the doctor himself. The low-hanging fruit in American healthcare costs is stuff like that.

      • john01dav a day ago

        Calling that "low hanging fruit" isn't accurate, because entrenched and powerful interests benefit from it being kept that way. That extra $750 is valuable to the capitalist that gets it. The jobs to process those appeals and reviews are valuable to the employees who do them. Deleting all of this overnight will fuck these people over to varying degrees, and it could even have macroeconomic implications.

        With that said, although it will not be easy, this shit needs to change. Health care in the United States is unacceptably expensive and of poorer quality than it needs to be.

    • dkarl 2 days ago

      Risks in traditional medicine are standardized by standardized training and credentialing. We haven't established ways to evaluate the risks of transferring diagnostic responsibility to AIs.

      > All that is needed is an external actor to take the risk and show a step change improvement

      Who's going to benefit? Doctors might prioritize the security of their livelihood over access to care. Capital will certainly prioritize the bottom line over life and death[0].

      The cynical take is that for the time being, doctors will hold back progress, until capital finds a way to pay them off. Then capital will control AI and control diagnosis, letting them decide who is sick and what kind of care they need.

      The optimistic take is that doctors maintain control but embrace AI and use it to increase the standard of care, but like you point out, the pace of that might be generational instead of keeping pace with technological progress.

      [0] https://www.nbcnews.com/news/us-news/death-rates-rose-hospit...

      • spongebobstoes 2 days ago

        Having paid $300 for a 10 minute doctor visit, in which I was confidently diagnosed incorrectly, it will not take much for me to minimize my doctor visits and take care into my own hands whenever possible.

        I will benefit from medical AI. There will soon come a point where I will pay a premium for my medical care to be reviewed by an AI, not the other way around.

        • prime_ursid a day ago

          If you’d trust generative AI over a physician, go in wide-eyed knowing that you’re still placing your trust in some group of people. You just don’t have an individual to blame if something goes wrong, but rather the entire supply chain that brings the model and its inference. Every link in that chain can shrug their shoulders and point to someone else.

          This may be acceptable to you as an individual, but it’s not to me.

        • dkarl 2 days ago

          You might pay for a great AI diagnosis, but what matters is the diagnosis recognized by whoever pays for care. If you depend on insurance to pay for care, you're at the mercy of whatever AI they recognize. If you depend on a socialized medical care plan, you're at the mercy of whatever AI is approved by them.

          Paying for AI diagnosis on your own will only be helpful if you can shoulder the costs of treatment on your own.

          • murukesh_s 2 days ago

            At least you can dodge a false diagnosis, which is important, especially when it can cause irreversible damage to your body.

            • dns_snek a day ago

              Under the assumption that AI has perfect accuracy. Perhaps you dodged the correct diagnosis and get to die 6 months later due to the lack of treatment. Might as well flip a coin.

              • ACCount37 a day ago

                Doesn't have to be "perfect accuracy". It just has to beat the accuracy of the doctor you would have gone to otherwise.

                Which is often a very, very low bar.

                What do you call a doctor who was last in his class in medical school? A doctor.

                • dns_snek a day ago

                  > Doesn't have to be "perfect accuracy". It just has to beat the accuracy of the doctor you would have gone to otherwise.

                  They made an absolute statement claiming that AI will "at least" let them dodge false diagnoses, which implies a diagnostic false-positive rate of ~0%. Otherwise how can you possibly be so confident that you "dodged" anything? You still need a second opinion (or third).

                  If a doctor diagnosed you with cancer and AI said that you're healthy, would you conclude that the diagnosis was false and skip treatment? It's easy to make frivolous statements like these when your life isn't on the line.

                  > What do you call a doctor who was last in his class in medical school? A doctor.

                  How original, they must've passed medical school, certification, and years of specialization by pure luck.

                  Do you ask to see every doctor's report card before deciding to go with the AI or do you just assume they're all idiots?

                • hobs a day ago

                  And what's the bar for people making machine learning algos? What do you call a random person off the street? A programmer.

    • lumost 2 days ago

      Part of the challenge is that machines are significantly different. The radiologist’s statement that an object measured from two different machines is the same and has not changed in size is in large part judgement. Building a model which can replicate this judgement likely involves building a model which can solve all common computer vision tasks, has the full medical knowledge of an expert radiologist, and has been painstakingly calibrated against thousands of real radiologists in hospital conditions.

    • scheme271 2 days ago

      The article points out that the AI + humans approach gives poorer results. Humans end up deferring to or just accepting the AI output without double checking. So corner cases, and situations where the AI doesn't do well just end up going through the system.

      • zippyman55 2 days ago

        This is what I worry about - when someone gets a little lazy and leans too heavily on the tool. Perhaps their skills diminish over time. It seems AI could be used to review results after an analysis. That would be ok to me, but not before.

    • numpad0 2 days ago

    Or maybe artifacts justify prices less than the number of souls bothered does. Robotic medical diagnosis could save costs, but it could suppress customers' appetite too, in which case, like you said, commercial healthcare providers would not be incentivized to offer it.

    • conartist6 2 days ago

      "AI" literally could not care if you live or die.

      That's more than a problem of inertia

    • hnaccount_rng 2 days ago

      I think the one thing we will find out with the AI/Chatbot/LLM boom is: Most economic activity is already reasonably close to a local optimum. Either you find a way to change the whole process (and thereby eliminate steps completely) or you won't gain much.

      That's true for AI-slop-in-the-media (most of the internet was already lowest effort garbage, which just got that tiny bit cheaper) and probably also in medicine (a slight increase in false negatives will be much, much more expensive than speeding up doctors by 50% for image interpretation). Once you get to the point where some other doctor is willing (and able) to take on the responsibility of that radiologist, then you can eliminate that kind of doctor (but still not her work. Just the additional human-human communication)

    • aprilthird2021 a day ago

      > If AI can do something well enough , and AI + humans surpass humans leading to costs reductions/ increased throughput this should be reflected in the workflows.

      But it doesn't lead to increased throughput because there needs to be human validation when people's lives are on the line.

      Planes fly themselves these days; it doesn't increase the "throughput" or eliminate the need for a qualified pilot (and even a copilot!)

    • BolexNOLA 2 days ago

      If we were serious about reducing healthcare cost by optimization then we would be banning private equity from acquiring hospitals.

      • hattmall 2 days ago

        What is there to indicate "we" or anyone is serious about reducing healthcare costs? The only thing that will reduce costs is competitive pressure. The last major healthcare reform in the US was incredibly anti-competitive and designed with a goal of significantly raising costs while transferring those costs to the government. How could healthcare costs ever go down when the ONLY way for insurers to increase profits is for costs to go up, since their profit is capped at a percentage of expenses?

        • opo a day ago

          >...The only thing that will reduce costs is competitive pressure.

          Unfortunately, just yesterday there were a surprising number of people who seemed to argue that increased competition would at best have no effect, and at worst would actually increase prices:

          https://news.ycombinator.com/item?id=45372442

        • BolexNOLA 2 days ago

          > What is there to indicate "we" or anyone is serious about reducing healthcare costs?

          I agree, we clearly aren’t. That’s my point.

    • orochimaaru 2 days ago

    I mean, the company providing the AI is free to take on the malpractice liability (and carry the insurance) itself. If that happens then there is definitely a chance.

    If, statistically, their error rate is better than or around what a human achieves, then their insurance cost is a function of how many radiologists they intend to replace.

  • borroka a day ago

    However, it is also because, in matters of life or death, as a diagnosis from a radiologist can be, we often seek a second opinion, perhaps even a third.

    But we don't ask an "algorithm" for a second opinion; we want a person, in front of us, telling us what is going on.

    AI is and will be used in the foreseeable future as a tool by radiologists, but radiologists, at least for some more years, will keep their jobs.

  • epcoa 2 days ago

    For real though, how close are we to a product that takes an order for an ED or inpatient CT A/P, protocols it, reads the images and the chart, and spits out a dictated report, without any human intervention, that ends up usable as-is even 90% of the time?

    • layoric 2 days ago

      Right, the last 10% will be expensive or you accept a potential 10% failure rate.

      • epcoa 2 days ago

        Maybe I should have said 5%. 90% was a made-up threshold. How close are we to even a basic "level 5": the ED doc puts in an order with indication "concern for sepsis, lol", the rad tech does their thing, and a finished read appears, with no additional human involved except maybe a review, and not even 50% of the time is any addendum needed?

  • scythe 2 days ago

    I'm moderately amused that as an interventional radiologist, you didn't bother to mention that IRs do actual procedures and don't just WFH. When I was doing my DxMP residency there was a joke among the radiology residents that IRs had slotted into the cushiest field of medicine and then flopped the landing by choosing the only subfield that requires physical work.

    • aabajian 2 days ago

      Well I do enjoy procedures. As for diagnostics, it’s very different when you come from a CS background.

      On a basic level, software exists to expedite repetitive human tasks. Diagnostic radiology is an extremely repetitive human task. When I read diagnostics, there’s a voice in the back of my head saying, “I should be writing code to automate this rather than dictating it myself.”

  • wiz21c a day ago

    Provided enough political will (and you know that this can be correlated to many factors, like lobbying), laws can be changed.

  • cyrillite a day ago

    I am actively researching this friction and others like it. I would love it if you happened to have recommendations for literature that 3rd parties can use to corroborate your experience (I’ve found some, but this is harder to uncover than I expected as I’m not in the field)

  • OptionOfT 2 days ago

    Is there a risk that radiologists miss stuff because they get a pre-written report by AI that pushes them in a certain direction?

    • thyristan a day ago

      Maybe, but there is already that risk of some influence from other doctors, patients, nurses and general circumstances.

      When an X-Ray is ordered, there is usually a suspected diagnosis in the order like "suspect sprain, pls exclude fracture", "suspect lung cancer". Patients will complain about symptoms or give the impression of a certain illness. Things like that already place a bias on the evaluation a radiologist does, but they are trained to look past that and be objective. No idea how often they succeed.

  • pgreenwood a day ago

    Add to that that the demand for imaging is not fixed. Even if imaging somehow became a lot cheaper to do with AI, we would likely just get more imaging done instead of having fewer radiologists.

  • mettamage 2 days ago

    So you studied like 8 years of med school [1] and 2 years of CS? Damn! That’s a lot.

    [1] I don’t know the US system so it’s just a guess

    • aabajian 2 days ago

      4 years undergrad (CS, math, bio)

      4 years med school

      2 years computer science

      6 years of residency (intern year, 4 years of DR, 1 year of IR)

      16 years...

      • arethuza a day ago

        Why do you need the first 4 years of undergrad? In places like the UK you can go straight into studying medicine from secondary school at age ~18.

        • chromatin a day ago

          The belief is -- and it is one that I share -- that this makes for more well rounded, human physicians.

          Additionally, a greater depth of thinking leads to better diagnosticians, and physician-scientists as well (IMO).

          Now, all of this is predicated on the traditional model of the University education, not the glorified jobs training program that it has slowly become.

          • hnfong a day ago

            Cynically, it's also a way for the US system to gatekeep "poor" people from entering professions like medicine and law because of the extra tuition fees (and opportunity time-cost) needed to complete their studies.

            • chromatin a day ago

              I am a natural skeptic, but in this case I think it is just an accident of history how different systems developed.

              FWIW, although this is not well known, many medical schools offer combined BA/MD degrees, ranging from 4-8 years:

              https://students-residents.aamc.org/medical-school-admission...

              When I went 20 years ago, my school did not require a bachelor's degree and would admit exceptional students after 2 years of undergraduate coursework. However, I think this has now gone away everywhere due to AAMC criteria.

              • FireBeyond a day ago

                In Australia, Medicine was/is typically an undergrad degree.

                In the mid-90s my school started offering a Bachelor of Biomedical Science which was targeted at two audiences - people who wanted to go into Medicine from a research, not clinical perspective, and people who wanted to practice medicine in the US (specifically because it was so laborious for people to get credentialed in the US with a foreign medical degree, that people were starting to say "I will do my pre-med in Australia, and then just go to a US medical school").

          • FireBeyond a day ago

            When I was in Australia and applying to study medicine (late 90s):

            Course acceptance is initially driven by academic performance, and ranked scoring.

            To get into Medicine at Monash and Melbourne Universities, you'd need a TER (Tertiary Entrance Ranking) of 99.8 (i.e. top 0.2% of students). This number was derived by course demand and capacity.

            But, during my time, Monash was known for having a supplementary interview process with panel and individual interviews - the interview group was composed of faculty, practicing physicians not affiliated with the university, psychologists, and lay community members - specifically with the goal of looking for those well-rounded individuals.

            It should also be noted that though "undergrad", there's little difference in the roadmap. Indeed when I was applying, the MBBS degree (Bachelor of Medicine and Surgery) was a six-year undergrad (soon revised to five), with similar post grad residency and other requirements for licensure and unrestricted practice.

    • seesthruya a day ago

      Me:

      4 years undergrad (major and minor not important; met the pre-med requirements)

      2 years grad school (got a master's degree, not required, but I was having fun)

      4 years medical school

      5 years radiology residency

  • doctorpangloss 2 days ago

    > Unless the law changes...

    That's it?

    I don't know. Doesn't sound like a very big obstacle to me. But I don't think AI will replace radiologists even if there were a law that said, "blah blah blah, automated reports, can't be sued, blah blah." I personally think the consulting work they do is really valuable and very difficult to automate; we would have to be in an AGI world before radiologists get replaced, which seems unlikely.

    The bigger picture is that we are pretty much obligated to treat people medically, which is a good thing, so there is a lot more interest in automating healthcare than say, law, where spending isn't really compulsory.

    • Muromec 2 days ago

      > I don't know. Doesn't sound like a very big obstacle to me

      A lot of things are one law amendment away from happening and they aren't happening. This could well become another mask mandate, which, while being reasonable in itself, rubs people the wrong way just enough to become a sacred issue.

      • marcosdumay 2 days ago

        Very few things the general public wants stay just one law amendment away for long. And almost all of those that do are that way for the benefit of powerful people.

  • smrtinsert 2 days ago

    At some point medical equipment is certified in some way for use. Could the same happen for imaging AIs?

    • scheme271 2 days ago

      The article mentions a system for diabetic retinopathy diagnosis that is certified and has liability coverage. It sounds like it's the only one where that occurs. For everything else, malpractice insurance explicitly excludes any AI assisted diagnosis.

      • epcoa 2 days ago

        Malpractice insurance tends to exclude the diabetic retinopathy one too... the vendor has to provide insurance.

    • speakfreely 2 days ago

      But the equipment is operated by a person, and the diagnostic report has to be signed off by a person, who has a malpractice insurance policy for personal injury attorneys to go after.

      The system is designed in a nanny-state fashion: there's no way to release practitioners from liability in exchange for less expensive treatments. I doubt this will change until healthcare pricing hits an extremely expensive breaking point.

  • notmyjob a day ago

    “Unless the law changes”

    Famous last words.

kklisura 2 days ago

When Tesla demoed (via video) self-driving in 2016 with the claim "The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself," and then unveiled the Semi in 2017, I tweeted about it and honestly thought that the trucking industry was changed forever and that it didn't make sense to be starting in the trucking industry. It's almost the end of 2025 and either nothing or just a small part of it panned out.

I think we have all become hyper-optimistic about technology. We want this tech to work and we want it to change the world in some fundamental way, but things are moving either very slowly or not at all.

  • christianqchung 2 days ago

    Look at Waymo, not Robotaxi. Waymo is essentially the self driving vision I had as a kid, and ridership is growing exponentially as they expand. It's also very safe if you believe their statistics[0]. I think there's a saying about overestimating stuff in the short term and underestimating stuff in the long term that seems to apply here, though the radiologist narrative was definitely wrong.

    [0] https://waymo.com/safety/impact/

    • hnav 2 days ago

      Even though the gulf between Waymo and the next runner up is huge, it too isn't quite ready for primetime IMO. Waymos still suffer from erratic behavior at pickup/dropoff, around pedestrians, badly marked roads and generally jam on the brakes at the first sign of any ambiguity. As much as I appreciate the safety-first approach (table stakes really, they'd get their license pulled if they ever caused a fatality) I am frequently frustrated as both a cyclist and driver whenever I have to share a lane with a Waymo. The equivalent of a Waymo radiologist would be a model that has a high false-positive and infinitesimal false-negative rate which would act as a first line of screening and reduce the burden on humans.

      • code_biologist 2 days ago

        I've seen a lot of young people (teens especially) cross active streets or cross in front of Waymos on scooters knowing that they'll stop. I try not to do anything too egregious, but I myself have begun using Waymo's conservative behavior as a good way to merge into ultra high density traffic when I'm in a car, or to cross busy streets when they only have a "yield to pedestrian" crosswalk rather than a full crosswalk. The way you blip a Waymo to pay attention and yield is beginning to move into the intersection, lol.

        I always wonder if honking at a Waymo does anything. A Waymo stopped for a (very slow) pickup on a very busy one lane street near me, and it could have pulled out of traffic if it had gone about 100 feet further. The 50-ish year old lady behind it laid on her horn for about 30 seconds. Surreal experience, and I'm still not sure if her honking made a difference.

        I like Waymos though. Uber is in trouble.

        • hnav 2 days ago

          Simultaneously, Waymo is adopting more human-like behavior like creeping at red lights and cutting in front of timid drivers as it jockeys for position.

          I still think that Google isn't capable of scaling a rideshare program because it sucks at interfacing with customers. I suspect that Uber's long-term strategy of "take the money out of investors' and drivers' pockets to capture the market until automation gets there" might still come to fruition (see Austin and Atlanta), just perhaps not with Uber's ownership of the technology.

          On the other hand Google has been hard at work trying to make its way into cars via Android automotive so I totally see it resigning to just providing a reference sensor-suite and a car "Operating System" to manufacturers who want a turnkey smart-car with L3 self-driving

          • potato3732842 2 days ago

            >Simultaneously, Waymo is adopting more human-like behavior like creeping at red lights and cutting in front of timid drivers as it jockeys for position.

            So before it was a 16yo in a driver's ed car. Now it's an 18yo with a license.

            I'm gonna be so proud of them when it does something flagrantly illegal but any "decent driver who gets it" would have done in context.

      • nebula8804 2 days ago

        I honestly don't think we will have a clear answer to this question anytime soon. People will be in their camps and that's that.

        Just to clarify, have you ridden in a Waymo? It didn't seem entirely clear if you just experienced living with Waymo or have ridden in it.

        I tried it a few times in LA. What an amazing, magical experience. I do agree with most of your assertions. It is just a super careful driver, but it does not have the full common sense that a driver in a hectic city like LA has. Sometimes you gotta be more 'human', and that means having the intuition to discard the rules in the heat of the moment (ex. being conscious of how cyclists think instead of just blindly following the rules carefully; this is cultural, and computers don't do 'culture').

        • hnav 2 days ago

          Waymo has replaced my (infrequent) use of Uber/Lyft in 80% of cases ever since they opened to the public via waitlist. The product is pretty good most of the time, I just think the odd long-tail behaviors become a guarantee as you scale up.

        • nutjob2 2 days ago

          You have to consider that the AVs have their every move recorded. Even a human wouldn't drive more aggressively under those circumstances.

          Probably what will happen in the longer term is that rules of the road will be slightly different for AVs to allow for their different performance.

      • bsder 2 days ago

        > Waymos still suffer from erratic behavior at pickup/dropoff, around pedestrians, badly marked roads and generally jam on the brakes at the first sign of any ambiguity.

        As do most of the ridesharing drivers I interact with nowadays, sadly.

        The difference is that Waymo has a trajectory that is getting better while human rideshare drivers have a trajectory that is getting worse.

        • captainkrtek 2 days ago

          Society accepts that humans make mistakes and considers it unavoidable, but there exists a much higher bar expected of computers/automation/etc. Even if a Waymo is objectively safer in terms of incidents per mile driven, one fatality makes headlines and adds scrutiny about "was it avoidable?", whereas with humans we just shrug.

          I think the theme of this extends to all areas where we are placing technology to make decisions, but also where no human is accountable for the decision.

          • JumpCrisscross 2 days ago

            > there exists a much higher bar expected of computers/automation/etc. Even if a Waymo is objectively safer in terms of incidents per mile driven, one fatality makes headlines and adds scrutiny about "was it avoidable?"

            This doesn’t seem to be happening. One, there are shockingly few fatalities. Two, we’ve sort of accepted the tradeoff.

          • bsder 2 days ago

            > Society accepts that humans make mistakes and considers it unavoidable, but there exists a much higher bar expected of computers/automation/etc.

            There are a horde of bicyclists and pedestrians who disagree with you and are hoping that automated cars take over because humans are so terrible.

            There are a horde of insurance companies who disagree with you and are waiting to throw money to prove their point.

            When automated driving gets objectively better than humans, there will be a bunch of groups who actively benefit and will help push it forward.

          • nebula8804 2 days ago

            Society only cares about the individual and no one else. If Uber/Lyft continue to enshittify with drivers driving garbage broken down cars, drivers with no standards (ie. having just smoked weed) and ever rising rates, eventually people will prefer the Waymos.

    • steveklabnik 2 days ago

      I am a long time skeptic of self-driving cars. However, Waymo has changed that for me.

      I spend a lot of time as a pedestrian in Austin, and they are far safer than your usual Austin driver, and they also follow the law more often.

      I always accept them when I call an Uber as well, and it's been a similar experience as a passenger.

      I kinda hate what the Tesla stuff has done, because it makes it easier to dismiss those who are moving more slowly and focusing on safety and trust.

      • calvinmorrison 2 days ago

        Yeah we don't need to compare robots to the best driver or human, just the average, for an improvement.

        However, just as railroad safety is expensive and heavily regulated, self-driving car companies face the same issue.

        Decentralized driving decentralizes risk.

        So when I have my _own_ robot to do it, it'll be easy and cheap.

        • dns_snek a day ago

          > Yeah we don't need to compare robots to the best driver or human, just the average, for an improvement.

          Sure, in theory. In practice, nobody is going to give up control on the basis that the machine is "slightly better than average". Those who consider the safety data when making their decision will demand a system that's just as good as the best human drivers in most aspects.

          And speaking of Waymo, let's not forget that they only operate in a handful of places. Their safety data doesn't generalize outside of those areas.

          • SketchySeaBeast a day ago

            > And speaking of Waymo, let's not forget that they only operate in a handful of places. Their safety data doesn't generalize outside of those areas.

            Yeah, I'm curious in seeing how they function in environments that get snow.

    • pkdpic 2 days ago

      I agree with both comments here. I wonder what the plausibility of fully autonomous trucking is in the next 10-30 years...

      Is there any saying that exists about overestimating stuff in the near term and long term but underestimating stuff in the midterm? Ie flying car dreams in the 50s etc.

      • basisword 2 days ago

        I remember Bill Gates said: "We overestimate what we can do in one year and underestimate what we can do in ten years".

        • jampekka 2 days ago

          Not Musk. He promised full autonomy within 3 years about 10 years ago.

          https://en.wikipedia.org/wiki/List_of_predictions_for_autono...

          • nebula8804 2 days ago

            Musk and Gates have very different philosophies.

            Gates seems more calm and collected having gone through the trauma of almost losing his empire.

            Musk is a loose cannon who has never suffered the consequences of his actions (unlike early Gates and Jobs), so he sometimes gets things right but will eventually crash and burn, not having had the fortune of failing and maturing early in his career (he is now past the midpoint of his career with not enough buffer to recover).

            They are both dangerous in their own ways.

      • 1718627440 2 days ago

        If it were about the cost of employees, you could ship it by rail. That simply isn't the reason.

      • omnicognate 2 days ago

        > ... but underestimating stuff in the midterm? Ie flying car dreams in the 50s etc.

        We still don't have flying cars 70 years later, and they don't look any more imminent than they did then. I think the lesson there is more "not every dream eventually gets made a reality".

    • m0llusk 2 days ago

      Waymo is very impressive, but also demonstrates the limitations of these systems. Waymo vehicles are still getting caught performing unsafe driving maneuvers, they get stuck in alleys in numbers, and responders have trouble getting them to acknowledge restricted areas. I am very supportive of this technology, but also highly skeptical as long as these vehicles are directly causing problems for me personally. Driving is more than a technical challenge; it involves social communication skills that automated vehicles do not yet have.

    • qnleigh a day ago

      I've seen a similar quote attributed to Bill Gates:

      "We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten."

      I think about this quote a lot these days, especially while reading Hacker News. On one hand, AI doesn't seem to be having the productivity and economic impacts that were predicted, but on the other, LLMs are getting gold medals at the Math Olympiads. It's like the ground is shifting beneath our feet, but it's still too slow to be perceptible.

    • trebligdivad 2 days ago

      Waymo still has the ability to remotely deal with locations where the AI has problems; I'd love to know what percentage of trips need that now. Having that escape hatch, together with only operating in tested areas, makes their job a LOT easier. (Not that it's bad - it's a great thing and I wish for it here!)

    • basisword 2 days ago

      It's limited to a few specific markets though. My bet is they aren't going to be able to roll it out widely easily. Probably need to do years of tests in each location to figure out the nuances of the places.

      • christianqchung 2 days ago

        Yeah, I have no idea if Waymo will ever be a rural thing honestly, mostly for economic reasons. I'm skeptical it would get serious suburban usage this decade too. But for major cities where less than 80% of people own cars, test time doesn't seem to be making a difference. They've been expanding in Austin and Atlanta, seemingly with less prep time than Phoenix and San Fran.

        • red_rech 2 days ago

          Atlanta seems to be a bit contradictory to some of your other thoughts.

          The city itself is relatively small. A vast majority of area population lives distributed across the MSA, and it can create hellish traffic. I remember growing up thinking 1+ hour commutes were just a fact of life for everyone commuting from the suburbs.

          Not sure what car ownership looks like, and I haven’t been in years, but I’d imagine it’s still much more than just 20%

          • christianqchung a day ago

            > Not sure what car ownership looks like, and I haven’t been in years, but I’d imagine it’s still much more than just 20%

            I said "less than 80% car ownership", not "80% do not own a car". Technically these are not mutually exclusive but I think you read it as the second one. I haven't really found much analysis about how public transit interfaces with self driving cars honestly.

          • dingnuts 2 days ago

            Austin is also a car city, everyone has a car there. Public transit in Austin is a joke, and Waymo can't get on the highway so it's only useful for getting back to your hotel from Rainey Street, and maybe back to your dorm from the Drag, but nobody is using Waymo to commute from Round Rock

        • shagie 2 days ago

          They keep expanding in places where it doesn't snow.

          They've got testing facilities in Detroit ( https://mcity.umich.edu/what-we-do/mcity-test-facility/ ) ... but I want to see it work while it is snowing or after it has snowed in the upper midwest.

          https://youtu.be/YvcfpO1k1fc?si=hONzbMEv22jvTLFS - has suggestions that they're starting testing.

          If AI driving only works in California, New Mexico, Arizona, and Texas... that's not terribly useful for the rest of the country.

          • everforward 2 days ago

            If I were in charge of Waymo, I'd roll out in snowy places last. The odds of a "couldn't be avoided" accident are much higher in snow/ice. I'd want an abundance of safety data from other places to show that the cars are still safe, and that it was the snow rather than the tech that caused any accident.

          • iamdelirium 2 days ago

            They're testing in Denver and NYC, so it's coming.

          • nebula8804 2 days ago

            Define the rest of the country?

            If you mean rural areas, that's 1/7 of the population and ~10% of GDP. They can be tossed aside like they are in other avenues.

        • dyauspitr 2 days ago

          I could see it taking off in the suburbs/rural areas if they start having a franchise model when it’s more mature.

      • foota 2 days ago

        I saw this timeline a while ago: https://www.reddit.com/r/waymo/s/mSm0E3yYTY that shows their timeline in each city. Shows Atlanta at just over a year. I think once they've handled similar cities it gets easier and easier to add new ones.

    • newyankee 2 days ago

      Honestly, once an island city-state (like Singapore) or some other small nation-state adopts self-driving-only within its limits and shows that it is much easier when all cars are self-driving, I think the opposition to the change will slowly fade.

      Rain, snow, etc. are still challenges, but it needs a bold bet by a place that wants to show how futuristic it is. The components are in place (Waymo cars); what is needed is a labor cost high enough to justify the adoption.

    • pfdietz 2 days ago

      > a saying about overestimating stuff in the short term and underestimating stuff in the long term

      This is exactly what came to my mind also.

  • innanet-worker 2 days ago

    Well, part of the reason you may have felt misled by that video is that it was staged, so I wouldn't feel that bad.

    https://www.reuters.com/technology/tesla-video-promoting-sel...

    For me, I have been riding in Waymos for the last year and have been very pleased with the results. I think we WANT this technology to move faster, but some of the challenges at the edges take a lot of time and resources to solve; they are not fundamentally unsolvable, though.

    • lomase 2 days ago

      Waymo is a 21-year-old company that only operates in a small part of the US after $10 billion in funding.

      • dingnuts 2 days ago

        it's also widely believed that the cars are remotely operated, not autonomous.

        they are likely semi autonomous, which is still cool, but I wish they'd be honest about it

        • YeGoblynQueenne 2 days ago

          They are:

          Much like phone-a-friend, when the Waymo vehicle encounters a particular situation on the road, the autonomous driver can reach out to a human fleet response agent for additional information to contextualize its environment. The Waymo Driver does not rely solely on the inputs it receives from the fleet response agent and it is in control of the vehicle at all times. As the Waymo Driver waits for input from fleet response, and even after receiving it, the Waymo Driver continues using available information to inform its decisions. This is important because, given the dynamic conditions on the road, the environment around the car can change, which either remedies the situation or influences how the Waymo Driver should proceed. In fact, the vast majority of such situations are resolved, without assistance, by the Waymo Driver.

          https://waymo.com/blog/2024/05/fleet-response/

          Although I think they overstate the extent to which the Waymo Driver is capable of independent decisions. So, honest, ish, I guess.

        • treespace8 2 days ago

          Learning that the Amazon Go store was powered by hundreds of people watching video, because the AI could not handle it, was a real eye-opener for me.

          Is this why Waymo is slow to expand, not enough remote drivers?

          Maybe that is where we need to be focused, better remote driving?

          • mandevil 2 days ago

            Waymo does not believe that remote drivers are responsive enough to operate safely. Fleet response agents communicate with the self-driving system and can set waypoints etc. for the navigation system, but the inherent delays make remote driving unsafe, is what the Waymo people say publicly at least.

            The reason that Waymo is slow to expand is that they have to carefully and extensively LiDAR-map every single road of their operating area before they can open up service there. Then while operating they simply run a difference algorithm between what each LiDAR sees at the moment and the truth data they have stored, and boom, anything that can potentially move pops right out. It works, it just takes a lot of prep, and a lot of people to keep on top of things too. For example, while my kid's school was doing construction they refused to drop off in the parking lot, but when the construction ended they became willing. So there must be a human who monitors construction zones across the metro area and marks areas as off-limits on their internal maps.
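
            A toy sketch of that difference idea on a 2D occupancy grid (numpy; this is my reading of the approach, not Waymo's actual pipeline):

                import numpy as np

                # Prior LiDAR "truth" map of the static world.
                truth_map = np.zeros((100, 100), dtype=bool)
                truth_map[40:60, 10:12] = True      # e.g. a mapped wall

                # Live scan: static world plus something new in the scene.
                live_scan = truth_map.copy()
                live_scan[50, 70] = True            # a pedestrian, say

                # Occupied now but empty in the map -> candidate movers.
                movers = live_scan & ~truth_map
                print(np.argwhere(movers))          # [[50 70]]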

          • thewebguyd 2 days ago

            > Maybe that is where we need to be focused, better remote driving?

            I think maybe we can and should focus on both. Better remote driving can be extended into other equipment operations as well - remote control of excavators and other construction equipment. Imagine road construction, or building projects, being able to be done remotely while we wait for better automation to develop.

            • mulmen 2 days ago

              This is an interesting idea. What are the expected benefits? Off the top of my head:

              * Saves on commute or travel time.

              * Job sites no longer need to provide housing for workers.

              * Allows the vehicles to stay in operation continuously, currently they shut down for breaks.

              * With automation multiple vehicles could be operated at once.

              The biggest benefits seem to be in resource extraction but I believe the vehicles there are already highly automated. At least the haul trucks.

  • whatever1 2 days ago

    No it’s just machine learning was always awesome for the 98% of the cases. We got fooled that we can easily deal with the remaining 2%.

    • jgeada 2 days ago

      It is the usual complexity rule of software: solving 80% of the problem is usually pretty easy and takes only about 50% of the estimated effort; it is the remaining 20% that takes up another 90% of the estimated effort (thus the usual schedule overruns).

      The interesting thing is that there are problems for which this rule applies recursively. Of the remaining 20%, most of it is easier than the remaining 20% of what is left.

      Most software ships without dealing with that remaining 20%, and largely that is OK; it is not OK for safety critical systems though.

  • dreamcompiler 2 days ago

    Where we're too optimistic is with technology that demos impressively, but which has 10,000 potentially-fatal edge cases. Self-driving cars and radiology interpretation are both in this category.

    When there are relatively few dangerous edge cases, technology often works better than we expect. TikTok's recommendation algorithm and Shazam are in this category.

  • CodesInChaos 2 days ago

    > thought that the trucking industry was changed forever

    What I find really crazy is that most trains are still driven by humans.

    • blueside 2 days ago

      only 2 people (engineer and conductor) for an entire train that is over a mile long seems about right to me though

    • BurningFrog 2 days ago

      Much of that is about union power more than tech maturity.

    • 1718627440 2 days ago

      Most of the work is actually in oversight and in getting the train to run when parts fail. When running millions of machines 24/7 there is always a failing part. Also, understanding gesticulating humans and running wildlife is not yet (fully) automatable.

      • mulmen 2 days ago

        How does a human conductor stop a train from hitting a deer? Do they spot it from 3 miles away?

        The THSR I rode solved the wildlife problems with a big windshield wiper. Not sure what else there is to do. It’s a train.

        • 1718627440 2 days ago

          The go-to is to scare the animal or human away with the horn.

        • dboreham 2 days ago

          At least one train has crashed with fatalities due to hitting a cow.

          • BenjiWiebe a day ago

            Can you provide a reference?

            That's difficult to believe. Was this a diesel locomotive pulling a freight train or was it something smaller/lighter?

            • 1718627440 2 hours ago

              Not your correspondent, but trains are quite easy to derail, because they slide over the rail with little orthogonal restraint, and because otherwise there would be far worse crashes.

              The cow might not have caused the fatalities directly, but a derailment can, and a fast train crashing unbound through the landscape has a lot of kinetic energy.

    • tra3 2 days ago

      I think it's a matter of scale. There are way more truck drivers than locomotive engineers.

  • jsight 2 days ago

    A lot of people in the industry really underestimated the difficulty in getting self driving cars be effective. It is relatively easy to get a staged demo together, but getting a trustworthy product out there is really hard.

    We've seen this with all of the players, with many dropping out due to the challenges.

    Having said that, there are several that are fielded right now, with varying degrees of autonomy. Obviously Waymo has been operating in small-ish geofences for a while, but they have managed >200% annual growth readily. Zoox just started offering fully autonomous rides in Vegas.

    And even Tesla is offering a service, albeit with safety monitors/drivers. Tesla Semi isn't autonomous at all, but appears ready to go into volume production next year too.

    Your prediction will look a lot better by 2030.

  • Gareth321 a day ago

    Things happen slowly, then all at once. Many people think ChatGPT appeared out of nowhere a couple of years ago. In reality it had been steadily improving for 8 years. Before then, language models were built on word embeddings like Word2Vec. Before then, Yoshua Bengio and colleagues proposed the first neural probabilistic language model, introducing distributed word representations (precursors to embeddings). Before then, statistical NLP took hold, with n-gram models, hidden Markov models, and later phrase-based machine translation. Before that, work on natural language processing (NLP) began with symbolic AI and rule-based systems (e.g., ELIZA, 1966).

    These are all stepping stones, and eventually the technology is mature enough to productise. You would be shocked by how good Tesla FSD is right now. It can easily take you on a cross country trip with almost zero human interactions.

  • phkahler 2 days ago

    I realized long ago that full unattended self driving requires AGI. I think Elon finally figured that out. So now LLMs are going to evolve into AGI any moment. Um, no. Tesla (and others) have effectively been working on AGI for 10 years with no luck.

    • zulban 2 days ago

      > I realized long ago that full unattended self driving requires AGI.

      Yikes.

      I recommend you take some introductory courses on AI and theory of computation.

      • GOD_Over_Djinn 2 days ago

        You should either elaborate on your argument, or at least provide further reading that clarifies your point of contention. This kind of low effort nerd-sniping contributes nothing.

        • zulban 2 days ago

          Responding to ridiculous uncited wild comments doesn't require a phd thesis paper, my friend.

        • jeremyjh 2 days ago

          GP's statement is completely unsupported, the burden is on them.

          • gf000 2 days ago

            It's a commonly brought up saying, and I don't think it's too far from the truth.

            Driving under every condition requires a very deep understanding of the world. Sure, you can get to like 60% with simple robot-vacuum logic, and to like 90% with what e.g. Waymo does. But the remaining 10% is crazy complex.

            What about a plastic bag floating around on a highway? The car can see it, but is it an obstacle to avoid? Should it slam the brakes? And there are a bunch of other extreme examples (what about a hilly road on a Greek island where people just honk to notify the other side that they are coming, without seeing them?)

      • dboreham 2 days ago

        That comment isn't going to age well.

    • nutjob2 2 days ago

      > I realized long ago that full unattended self driving requires AGI.

      You can do 99% of it without AGI, but you do need it for the last 1%.

      Unfortunately, the same is true for AGI.

    • bsder 2 days ago

      > I realized long ago that full unattended self driving requires AGI.

      Not even close.

      The vast majority of people have a small number of local routes completely memorized and do station keeping in between on the big freeways.

      You can see this when signage changes on some local route and absolute chaos ensues until all the locals re-memorize the route.

      Once Waymo has memorized all those local routes (admittedly a big task), it's done.

    • tekno45 2 days ago

      So waymo has AGI?

      • gf000 2 days ago

        They deliberately (and smartly) set their working limits to what they can solve - known city, always decent weather conditions. And they still added a way for a remote operator to solve certain situations.

        So no, they don't have AGI, and there is a long way to go to reach "working under every condition everywhere" levels of self-driving.

  • philwelch 2 days ago

    For trucking I think self driving can be, in the short term, an opportunity for owner-operators. An owner-operator of a conventional truck can only drive one truck at a time, but you could have multiple self driving trucks in a convoy led by a truck manned by the owner-operator. And there might be an even greater opportunity for this in Europe thanks to the low capacity of European freight rail compared to North America.

    • mandevil 2 days ago

      I used to think this sort of thing too. Then a few years ago I worked with a SWE who had experience in the trucking industry. His take was that most trucking companies are too small in scale to benefit from this. The median trucking operation is basically run by the owner's wife in a notebook or spreadsheet, and so their ability to get the benefits of leader/follower mileage like that just doesn't exist. He thought that maybe the very largest operators - Walmart and Amazon - could benefit from this, but no one else.

      This was why he went into industrial robotics instead, where it was clear that the finances could work out today.

      • philwelch 2 days ago

        Yeah, I guess the addressable market of “truck owners who can afford to buy another truck but not hire another driver” might be smaller than I thought.

    • rcpt 2 days ago

      Trucks are harder. The weight changes a lot, they are off grid for huge stretches, mistakes are more consequential.

  • ponector a day ago

    >> it didn't make sense to be starting in the trucking industry

    Still true: working conditions are harsh, the schedule as well; responsibilities and fines are high, but the pay is not.

  • 1vuio0pswjnm7 2 days ago

    "I think we all have become hyper-optimistic on technology. We want this tech to work and we want it to change the world in some fundamental way, but either things are moving very slowly or not at all."

    Who is "we"? The people who hype "AI"?

  • goatlover 2 days ago

    It's also like nobody learns from the previous hype cycles: short-term overly optimistic predictions, followed by disillusionment, and then long-term benefits which deliver on some of the early promises.

    For some reason, enthusiasts always think this time is different.

  • dyauspitr 2 days ago

    Waymo has worked out. I’ve taken one so many times now I don’t even think about it. If Waymo can pull this off in NYC I believe it will absolutely be capable of long distance trucking not that far in the future.

    • sarchertech 2 days ago

      Trucks are orders of magnitude more dangerous. I wouldn’t be surprised if Waymo is decades away from being able to operate a long haul truck on the open interstate.

      • fragmede 2 days ago

        Given that Aurora Innovation is running driverless semi-trucks on commercial routes between Dallas and Houston in Texas as of right now, that would be surprising, yes, but for different reasons.

        • sarchertech a day ago

          Aurora also runs with a CDL-licensed safety driver in the driver's seat, and they operate only on very carefully planned routes in restricted conditions.

  • tictacttoe 2 days ago

    Meanwhile, it’s my feeling that technology is moving insanely fast but people are just impatient. You move the bar and the expectations move with it. I think part of the problem is that the market rewards execs who set expectations beyond reality. If the market was better at rewarding outcomes not promises, you’d see more reasonable product pitches.

    • achierius 2 days ago

      How have expectations moved on self driving cars? Yes, we're finally getting there, but adoption is still tiny relative to the population and the cars that work best (Waymo) are still humongously expensive + not available for consumer purchase.

  • moate 2 days ago

    This story (the demand for Radiologists) really shows a very important thing about AI: It's great when it has training data, and bad at weird edge cases.

    Gee, seems like about the worst fucking thing in the world for diagnostics if you ask me, but what do I know, my degree is in sandwiches and pudding.

  • thenaturalist 2 days ago

    This is such a stereotypical SF / US based perspective.

    Easy to forget the rest of the world does not and never has ticked this way.

    Don't get me wrong, optimism and thinking of the future are great qualities we direly need in this world on the one hand.

    On the other, you can't outsmart physics.

    We've conquered the purely digital realm in the past 20 years.

    We're already in the early years of the next phase, where the digital will become ever more multi-modal and make more inroads into the physical world.

    So many people bring an old mindset to a new context, where the margin of error, the cost of mistakes, or optimizing the last 20% of a process is just so vastly different than a bit of HTML, JS, and backend infra.

  • reaperducer 2 days ago

    > It's almost the end of 2025 and either nothing or just a small part of it panned out.

    The truck part seems closer than the car part.

    There are several driverless semis running between Dallas, Houston, and San Antonio every day. Fully driverless. No human in the cab at all.

    Though, trucking is an easier to solve problem since the routes are known, the roads are wide, and in the event of a closure, someone can navigate the detour remotely.

  • GoatInGrey 2 days ago

    The universe has a way of being disappointing. This isn't to say that life is terrible and we should have no optimism. Rather, things generally work out for the better, but usually not in the way we'd prefer them to.

  • squigz 2 days ago

    Fundamental change does indeed happen very slowly. But it does happen.

  • DarkNova6 2 days ago

    It's not about optimism. It is well established in the industry that Tesla's hardware stack gives them 98% accuracy at the very most. But those voices are drowned out by the marketing bravado.

    In Musk's case it has worked out. His lies have earned him a fortune, and now he asks Tesla to pay him out with a casual $1 trillion paycheck.

    • robotresearcher 2 days ago

      What does ‘accuracy’ mean here?

      • DarkNova6 a day ago

        To correctly assess the state of the world. Since Tesla exclusively uses visual sensors, they are massively limited in how accurate, or safe, the system can ever be.

        But hey, costs are lower that way.

pjdesno 2 days ago

The best story I heard about machine learning and radiology was when folks were racing to try to detect COVID in lung X-rays.

As I recall, one group had fairly good success, but eventually someone figured out that their data set had images from a low-COVID hospital and a high-COVID hospital, and the lettering on the images used different fonts. The ML model was detecting the font, not the COVID.

[a bit of googling later...]

Here's a link to what I think was the debunking study: https://www.nature.com/articles/s42256-021-00338-7

If you're not at a university, try searching for "AI for radiographic COVID-19 detection selects shortcuts over signal" and you'll probably be able to find an open-access copy.
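
The shortcut failure is easy to reproduce. A hedged toy demo (scikit-learn, synthetic data): one "pixel" plays the role of the font, correlating with the label in training but absent at a new hospital:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    X = rng.normal(size=(n, 64))      # 8x8 "images", flattened; pure noise
    y = rng.integers(0, 2, size=n)    # labels
    X[:, 0] += 3 * y                  # pixel 0 = confound tied to the label

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.score(X, y))            # ~0.93: driven almost entirely by the shortcut

    # A "hospital" without the marker: the shortcut vanishes, and so does the skill.
    X2 = rng.normal(size=(n, 64))
    y2 = rng.integers(0, 2, size=n)
    print(clf.score(X2, y2))          # ~0.5, chance level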

  • zahlman 2 days ago

    I remember a claim that someone was trying to use an ML model to detect COVID by analyzing the sound of the patient coughing.

    I couldn't for the life of me understand how this was supposed to work. If the coughing of COVID patients (as opposed to patients with other respiratory illnesses) actually differs in a statistically meaningful way (and why did they suppose that it would? Phlegm is phlegm, surely), surely a human listener would have been able to figure it out easily.

    • lblume 2 days ago

      That doesn't really follow. NN models have been able to pick up on noisier and more subtle patterns than humans for a long time, so this type of research is definitely worth a shot in my opinion. The pattern might also not be noticeable to a human at all, e.g. "this linear combination of frequency values in Fourier space exceeds a specific threshold".
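
      For concreteness, a minimal sketch of a detector of that form (numpy; the weights and threshold are placeholders that would come from training):

          import numpy as np

          def cough_score(audio, weights, threshold):
              # Linear combination of FFT magnitude bins vs. a threshold.
              spectrum = np.abs(np.fft.rfft(audio))
              return float(weights @ spectrum[:len(weights)]) > threshold

          audio = np.random.randn(16000)     # 1 s of stand-in 16 kHz audio
          weights = np.random.randn(128)     # learned in reality; random here
          print(cough_score(audio, weights, 5.0))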

  • CamperBob2 2 days ago

    Anecdotes like this are informative as far as they go, but they don't say anything at all about the technique itself. Like your story about the fonts used for labeling, essentially all of the drawbacks cited by the article come down to inadequate or inappropriate training methods and data. Fix that, which will not be hard from a purely-technical standpoint, and you will indeed be able to replace radiologists.

    Sorry, but in the absence of general limiting principles that rule out such a scenario, that's how it's going to shake out. Visual models are too good at exactly this type of work.

    • jmhmd 2 days ago

      The issue is that in medicine, much like with automobiles, unexpected failure modes may be catastrophic for individual people. "Fixing" failure modes like the one in the comment above is not difficult from a technical standpoint, that's true, but you can only fix one once you've identified it, and at that point you may have a dead person/people. That's why AI in medicine and self-driving cars are so unlike AI for programming or writing, and move comparatively at a snail's pace.

      • CamperBob2 2 days ago

        Yet self-driving cars are already competitive with human drivers, safety-wise, given responsible engineering and deployment practices.

        Like medicine, self-driving is more of a seemingly-unsolvable political problem than a seemingly-unsolvable technical one. It's not entirely clear how we'll get there from here, but it will be solved. Would you put money on humans still driving themselves around 25-50 years from now? I wouldn't.

        These stories about AI failures are similar to calling for banning radiation therapy machines because of the Therac-25. We can point and laugh at things like the labeling screwup that pjdesno mentioned -- and we should! -- but such cases are not a sound basis for policymaking.

        • sarchertech 2 days ago

          > Yet self-driving cars are already competitive with human drivers, safety-wise, given responsible engineering and deployment practices.

          Are they? Self driving cars only operate in a much safer subset of conditions that humans do. They have remote operators who will take over if a situation arises outside of the normal operating parameters. That or they will just pull over and stop.

        • lomase 2 days ago

          Tesla told everybody 10 years ago that self-driving cars were a reality.

          Waymo claims to have it. Some Hacker News commenters too; I've started to believe those are Waymo employees or stock owners.

          Apart from that, I know nobody who has ever used or even seen a self-driving car.

          Self-driving cars are not a thing, so you can't say they are more reliable than humans.

          • CamperBob2 a day ago

            I've never been in a self-driving car myself, but your position verges on moon-landing denial. They most certainly do exist, and have for a while.

            Yes, they still need human backup on occasion, usually to deal with illegal situations caused by other humans. That's definitely the hard part, since it can't be handwaved away as a "simple" technical problem.

            AI in radiology faces no such challenges, other than legal and ethical access to training data and clinical trials. Which admittedly can't be handwaved away either.

    • 1718627440 2 days ago

      If it weren't the font, it might be anomalies in the image acquisition or even in the encoder software. You can never really be sure what exactly the ML is detecting.

      • osrec 2 days ago

        Exactly. A marginally higher image ISO at one location vs a lower ISO at another could potentially have a similar effect, and it would be quite difficult to detect.

      • Avalaxy 2 days ago

        Why not? That's what Grad-CAM is for, right?
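
        Roughly: Grad-CAM weights the last conv layer's activations by the pooled gradients of the class score, showing where the model "looked". A condensed PyTorch sketch (the model and layer here are placeholders):

            import torch
            import torch.nn.functional as F
            from torchvision.models import resnet18

            model = resnet18(weights=None).eval()   # stand-in network
            acts, grads = {}, {}
            layer = model.layer4                    # last conv block

            layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
            layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

            x = torch.randn(1, 3, 224, 224)         # stand-in image
            model(x)[0].max().backward()            # gradient of top class score

            w = grads["v"].mean(dim=(2, 3), keepdim=True)  # pooled gradients
            cam = F.relu((w * acts["v"]).sum(dim=1))       # coarse relevance heatmap
            print(cam.shape)                               # torch.Size([1, 7, 7])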

        • 1718627440 2 days ago

          What if the ML draws its conclusion from exactly the right pixels, but the cause is a rasterization issue?

      • CamperBob2 2 days ago

        You can give it the same tests the human radiologists take in school.

        They do take tests, don't they?

        They don't all score 100% every time, do they?

        • 1718627440 2 days ago

          The point here is that the radiologist knows which light patterns are sensible to draw conclusions from and which are not, because the radiologist has a concept of real-world 3D objects.

          • CamperBob2 2 days ago

            Sure. It's just not a valid point. Even if it's valid today, it won't be by next week.

djoldman 2 days ago

> Three things explain this. First,... Second, attempts to give models more tasks have run into legal hurdles: regulators and medical insurers so far are reluctant to approve or cover fully autonomous radiology models. Third, even when they do diagnose accurately, models replace only a small share of a radiologist’s job. Human radiologists spend a minority of their time on diagnostics and the majority on other activities, like talking to patients and fellow clinicians.

Everything else besides the above in TFA is extraneous. Machine learning models could have absolute perfect performance at zero cost, and the above would make it so that radiologists are not going to be "replaced" by ML models anytime soon.

  • roncesvalles 2 days ago

    I only came to this thread to say that this is completely untrue:

    >Human radiologists spend a minority of their time on diagnostics and the majority on other activities, like talking to patients and fellow clinicians.

    The vast majority of radiologists do nothing other than: come in (or increasingly, stay at home), sit down at a computer, consume a series of medical images while dictating their findings, and then go home.

    If there existed some oracle AI that can always accurately diagnose findings from medical images, this job literally doesn't need to exist. It's the equivalent of a person staring at CCTV footage to keep count of how many people are in a room.

    • luma 2 days ago

      Agreed, I'm not sure where the OP from TFA is working but around here, radiologists have all been bought out and rolled into Radiology As A Service organizations. They work from home or at an office, never at a clinic, and have zero interactions with the patient. They perform diagnosis on whatever modality is presented and electronically file their work into their EMR. I work with a couple such orgs on remote access and am familiar with others, it might just be a selection bias on my side but TFA does not reflect my first-hand experience in this area.

      • quadragenarian 2 days ago

        Interesting - living near a large city, all of the radiologists I know work for hospitals, spending more of their day in the hospital reading room versus home, including performing procedures, even as diagnostic radiologists.

        I think it may be selection bias.

      • hn_throwaway_99 2 days ago

        > They work from home or at an office, never at a clinic, and have zero interactions with the patient.

        Generalizing this to all radiologists is just as wrong as the original article saying that radiologists don't spend the majority of their time reading images. Yes, some diagnostic radiologists can purely read and interpret images and file their results electronically (often remotely through PACS systems). But the vast majority of radiology clinics where I live have a radiologist on-site, and as one example, results for suspicious mammograms where I live in Texas are always given by a radiologist.

        And as the other comment said, many radiologists who spend the majority of their time reading images also perform a number of procedures (e.g. stereotactic biopsies).

      • red_rech a day ago

        Holy shit why did I waste my time in tech.

        I could have just gone to med school and never deal with layoffs, RTO, etc.

    • sarchertech 2 days ago

      My wife is an ER doctor. I asked her and she said she talks to the radiologists all the time.

      I also recently had surgery and the surgeon talked to the radiologist to discuss my MRI before operating.

      • roncesvalles a day ago

        I'd clarify if her "all the time" means a couple of times a week. For 99.9% of cases an ER doctor would just read what the radiologist wrote in the document.

        It's sort of like saying "sometimes a cab driver talks to passengers and suggests a nice restaurant nearby, so you can't automate it away with a self-driving cab."

        • sarchertech a day ago

          She said that all the time means more than 1 out of 100 reads but less than 5. It also takes longer for them to discuss a read than it does for them to do the read.

          She also said that she frequently talks to them before ordering scans, to consult on what imaging she's going to order.

          > It's sort of like saying "sometimes a cab driver talks to passengers and suggests a nice restaurant nearby, so you can't automate it away with a self-driving cab."

          It’s more like if 3/100 kids who took a robot taxi died, suffered injury, had to undergo unnecessary invasive testing, or were unnecessarily admitted to the hospital.

        • FireBeyond a day ago

          Not an ER physician, but as a paramedic that spent a lot of time in the ER, it depends. Code 3 trauma/medical calls would generally have portable XR brought to the ER room, waiting for our arrival with the patient. In those cases, the XR is taken in the room, not in the DI (diagnostic imaging) wing, and generally the interaction flow will be "XR sent by wifi to radiologist elsewhere, who will then call the ER room and review the imaging live, or very quickly thereafter (i.e. minutes)", because of the emergent need, versus waiting for report dictation/transcription.

    • daxfohl 2 days ago

      Are these the ones making 500K? Sounds like more of an assistant's job than an MD's.

      • quadragenarian 2 days ago

        Radiologists are often the ones who are the "brains" of medical diagnosis. The primary care or ER physician gets the patient scanned, and the radiologist scrolls through hundreds if not thousands of images, building a mental model of the insides of the patient's body and then based on the tens of thousands of cases they've reviewed in the past, as well as deep and intimate human anatomical knowledge, attempts to synthesize a medical diagnosis. A human's life and wellness can hinge on an accurate diagnosis from a radiologist.

        Does that sound like an assistant's job?

        • daxfohl 2 days ago

          Makes sense. Knowing nothing about it, I was picturing a tech sitting at home looking at pictures saying "yup, there's a spot", "nope, no spot here".

          • 1718627440 2 days ago

            For this job a decade of study would be a bit wasteful though.

            • daxfohl a day ago

              Right, which is why I asked.

    • justlikereddit 2 days ago

      >consume a series of medical images while dictating their findings, and then go home.

      In the same fashion, a construction worker just shows up, "performs a series of construction tasks", then goes home. We just need to make a machine that performs "construction tasks" and we can build cities, railways and road networks for nothing but the cost of the materials!

      Perhaps this minor degree of oversimplification is why the demise of radiologists has been so frequently predicted?

    • dmbche 2 days ago

      Saw radiologists at a recent visit in a hospital.

      Do you have some kind of source? This seems unlikely.

  • addcommitpush 2 days ago

    If the models had absolutely perfect performance at zero cost, you would not need a radiologist.

    The current "workflow" is primary care physician (or specialist) -> radiology tech that actually does the measurement thing -> radiologist for interpretation/diagnosis -> primary care physician (or specialist) for treatment.

    If you have perfect diagnosis, it could be primary care physician (or specialist) -> radiology tech -> ML model for interpretation -> primary care physician (or specialist).

    • MengerSponge 2 days ago

      If we're talking utopian visions, we can do better than dreaming of transforming unstructured data into actionable business insights. Let's talk about what is meaningfully possible: Who assumes legal liability? The ML vendor?

      PCPs don't have the training and aren't paid enough for that exposure.

    • bilbo0s 2 days ago

      Nope.

      To understand why, you would really need to read the average PCP's malpractice policy closely.

      The policy for a specialist would be even more strict.

      You would need to change insurance policies before your workflow was even possible from a liability perspective.

      Basically, the insurer wants "a throat to choke", so to speak. Handing up a model to them isn't going to cut it any more than handing up Hitachi's awesome new whiz-bang proton therapy machine would. They want their pound of flesh.

      • philwelch 2 days ago

        Let’s suppose I go to the doctor and get tested for HIV. There isn’t a specialist staring at my blood through a microscope looking for HIV viruses, they put my blood in a machine and the machine tells them, positive or negative. There is a false positive rate and a false negative rate for the test. There’s no fundamental reason you couldn’t put a CT scan into a machine the same way.

        • matheusmoreira 2 days ago

          Pretty much everything has false positives and false negatives. Everything can be reduced to this.

          Human radiologists have them. They can miss things: false negative. They can misdiagnose things: false positive.

          Interviews have them. A person can do well, be hired, and turn out to be a bad employee: false positive. A person who would have been a good employee can do badly due to situational factors and not get hired: false negative.

          The justice system has them. An innocent person can be judged guilty: false positive. A guilty person can be judged innocent: false negative.

          All policy decisions are about balancing out the false negatives against the false positives.

          Medical practice is generally obsessed with stamping out false negatives: sucks to be you if you're the doctor who straight up missed something. False positives are avoided as much as possible by defensive wording that avoids outright affirming things. You never say the patient has the disease, you merely suggest that this finding could mean that the patient has the disease.

          Hiring is expensive and firing even more so depending on jurisdiction, so corporations want to minimize false positives as much as humanly possible. If they ever hire anyone, they want to be sure it's absolutely the right person for them. They don't really care that they might miss out on good people.

          There are all sorts of political groups trying to tip the balance of justice in favor of false negatives or false positives. Some would rather see the guilty go free than watch a single innocent be punished by mistake. Others don't care about innocents at all. I could cite some but it'd no doubt lead to controversy.
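
          To make that tradeoff concrete, here's a toy sketch (every number is made up): with a fixed scoring system, moving the decision threshold only exchanges false negatives for false positives; it never eliminates both.

              import numpy as np
              rng = np.random.default_rng(1)

              n = 10_000
              has_disease = rng.random(n) < 0.1  # hypothetical 10% prevalence
              # noisy "evidence score": sick cases tend to score higher
              score = np.where(has_disease, 0.7, 0.3) + rng.normal(0, 0.2, n)

              for threshold in (0.3, 0.5, 0.7):
                  flagged = score >= threshold
                  fp = int(np.sum(flagged & ~has_disease))  # false alarms
                  fn = int(np.sum(~flagged & has_disease))  # missed cases
                  print(f"threshold {threshold}: FP={fp} FN={fn}")

          Lowering the threshold stamps out false negatives at the cost of many more false positives, and vice versa. Where to sit on that curve is exactly the policy decision described above.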

      • addcommitpush 2 days ago

        In that scenario, the "throat to choke" would be the primary care physician. We won't think of it as an "ML radiologist"; we'll just think of it as getting some kind of physical test done and bringing it to the doctor for interpretation.

        If you're getting a blood test, the pipeline might be primary care physician -> lab with a nurse to draw blood and machines to measure blood stuff -> primary care physician to interpret the test results. There is no blood-test-ologist (hematologist?) step, unlike radiology.

        Anyway, "there's going to be radiologists around for insurance reasons only but they don't bring anything else to patient care" is a very different proposition from "there's going to be radiologists around for insurance reasons _and_ because the job is mostly talking to patients and fellow clinicians".

      • bigfudge 2 days ago

        Doesn't this become the developer? Or perhaps a specialist insurer who develops the expertise and experience to indemnify them?

        • bilbo0s 2 days ago

          Oh that could indeed happen in that hypothetical timeline. But in that timeline the developer would be paying the malpractice premium.

          And it would be the developer's throat that gets choked when something goes awry.

          I'm betting developers will want to take on neither the cost of insurance, nor the increased risk of liability.

      • tnel77 2 days ago

        They didn’t say there wouldn’t need to be changes related to insurance. They obviously mean that, changes included, a perfect model would lead to their described workflow (or something similar).

        HackerNews is often so quick to reply with a “well actually” that it misses the overall point.

  • stonemetal12 2 days ago

    >Human radiologists spend a minority of their time on diagnostics and the majority on other activities, like talking to patients and fellow clinicians.

    How often do they talk to patients? In all the times I have had an x-ray, I have never talked to a radiologist. Fellow clinicians? Train the x-ray tech up a bit more.

    If the moat is "talking to people", that is a moat that doesn't need an MD, or at least not a fully specialized MD. ML could kill the radiologist MD; "radiologist" could become the job title of a nurse or x-ray tech specialized in talking to people about the output.

    • bilbo0s 2 days ago

      > Train the x-ray tech up a bit more.

      That's fine. But then the x-ray tech becomes the radiologist, and that becomes the point in the workflow where the insurer digs out the malpractice premiums.

      In essence, your x-ray techs would become remarkably expensive. Someone is talking to the clinicians about the results. That person, whatever you call them, is going to be paying the premiums.

    • sarchertech 2 days ago

      I don’t think they talk to patients all that often but my wife is an ER doctor and she says she talks to them all the time.

  • vel0city 2 days ago

    As a patient I don't think I've ever even talked to a radiologist who actually analyzed my imaging. Most of the time my family or I have had imaging done, the imaging is handled by a tech who just knows how to operate the machines, while the actual diagnostic work gets farmed out to remote radiologists who type up an analysis. I don't think the other doctors I actually see ever directly talk to those radiologists either.

    Is this uncommon in the rest of the US?

    • theOGognf 2 days ago

      No, that is the norm. Radiologists speak with their colleagues the most, and with patients rarely.

    • stevenbedrick 2 days ago

      It really depends on the specifics of the clinical situation; for a lot of outpatient radiology scenarios the patient and radiologist don't directly interact, but things can be different in an inpatient setting and then of course there are surgical and interventional radiology scenarios.

lomase 2 days ago

In 2016, Geoffrey Hinton – computer scientist and Turing Award winner – declared that ‘people should stop training radiologists now’.

If we had followed every AI evangelist's suggestion, the world would have collapsed.

  • jonas21 2 days ago

    People love to bring this up, and it was a silly thing to say -- particularly since he didn't seem to understand that radiologists only spend a small part of their time reading scans.

    But he said it in the context of a Q&A session that happened to be recorded. Unless you're a skilled politician who can give answers without actually saying anything, you're going to say silly things once in a while in unscripted settings.

    Besides that, I'd hardly call Geoffrey Hinton an AI evangelist. He's more on the AI doomer side of the fence.

    • catoc 2 days ago

      No, this was not an off-hand remark. He made a whole story comparing the profession to the coyote from Road Runner: “they’ve already run off the cliff but don’t even realize it”. It was callous, and showed a total ignorance of the fact that medicine might be more than pixel classification.

    • nick__m 2 days ago

      Radiologists, here, mostly sit at home, read scans and dictate reports. They rarely talk to other doctors, and talking to a patient is beyond them. They are some of the specialists with the best salaries.

      With interventional radiologists and radio-oncologists it's different, but we're talking about radiologists here...

      • duchef 2 days ago

        I'm a radiologist and spend 50% of my time either talking to patients or other clinicians.

        • nick__m 2 days ago

          You practice in Québec? If so I am quite surprised, because my wife had a lot of scans and we never met a radiologist who wasn't a radio-oncologist. And her oncologist never talked with the radiologists either. The communication between them was always through written requests and reports. And the situation is similar between her neurologist and the radiologists.

          By the way, even if I sound dismissive, I have great respect for the skills required by your profession. Reading an MRI is really hard even with the radiologist's report in hand, and to my untrained eyes it's impossible without it!

          And since you talk to patients frequently, I have even greater respect for you as a radiologist.

      • sarchertech 2 days ago

        My wife’s an ER doctor and she talks to radiologists all the time.

        I also recently had surgery and the surgeon consulted with the radiologist that read my MRI before operating.

        • nick__m 2 days ago

          Then it's an organizational problem (or choice) in the specific hospital where my wife is treated/followed, and I apologize to all radiologists who actually talk to people in a professional capacity!

          Or maybe it's related to socialized healthcare, because in the article there is a breakdown of the time spent by a radiologist in Vancouver, and talking to patients isn't part of it.

    • billisonline 2 days ago

      I would argue an "AI doomer" is a negatively charged type of evangelist. What the doomer and the positive evangelist have in common is a massive overestimation of (current-gen) AI's capabilities.

    • lomase 2 days ago

      Many of us have changed opinions after seeing how the tech does not scale.

      At the time? I would say he was an AI evangelist.

      • queuebert 2 days ago

        The tech scales, but accessing the training data is a real problem. It's not like scraping the whole internet. And most of it is unlabeled.

        • lanstin 2 days ago

          I think in general this lack affects almost all areas of human endeavor. All my speech teaching my kids how to think clearly, or teaching young software engineers how to build software in some giant-ass bureaucracy, or how to debug some tricky problem: none of that sort of discovering truth one step at a time, or teaching new stuff, is in blogs or anywhere outside the moment.

          When I do write something up, it is usually very finalized at that time; the process of getting to that point is not recorded.

          The models maybe need more naturalistic data and more data from working things out.

        • lomase 2 days ago

          If you need more data to scale, and there is no data, it literally can't scale.

          Scale is not always about throughput. You can be constrained by many things; in this case, data.

          • queuebert a day ago

            I was unclear. There is data, but it is expensive to access, so the value proposition is often not there without some beneficent entity.

    • GoatInGrey 2 days ago

      It's the power of confidence and credentials in action. Which is why you should, when possible, look at the underlying logic and not just the conclusion derived from it. As this catches a lot of fluff that would otherwise be Trojan-Horsed into your worldview.

  • lnenad 2 days ago

    Let's assume the last person to enter radiology training started then, and the training lasts 5 years. At the end of their training the year is 2021 and they are around 31. That means they will practice medicine for circa 30 years, which puts the calendar at around 2051. I'd wager we'd get there within 25 years, so I think his opinion still has a good chance of being correct.

    • lm28469 2 days ago

      And if it doesn't work out?

      People can't tell what they'll eat next Sunday, but they'll predict AGI and the singularity in 25 years. It's comfy because 25 years seems like a lot of time; it isn't.

      https://en.wikipedia.org/wiki/List_of_predictions_for_autono...

      > I'd wager in 25 years we'd get there so I think his opinion still has a large percentage of being correct.

      What percent, and which maths and facts let you calculate it? The only percent you can be sure about is that it's 100% wishful thinking.

      • lnenad a day ago

        I mean, it's an opinion (mine), so maybe feel free to disagree with me without going overboard.

        > It's comfy because 25 years seems like a lot of time, it isn't.

        I don't know how old you are but life 25 years ago from a tech perspective was *very* different.

        • ruszki a day ago

          Different, but nobody could predict what would happen next. We know now how different it was, but back then we didn’t know how it would be different now. There were people/companies who were right, and more who weren’t. I had good predictions and bad predictions. I didn’t understand why people didn’t already use their phones the way they use smartphones now. You could do everything that you can do now (except things which were discovered since then, mainly ML stuff). Browse the internet (it was always interesting how few people knew what WAP was), listen to music, read books, play games, run random apps (there was way more freedom regarding this back then by default, people just didn’t know it). But still, we needed smartphones. That was the thing which crossed the line for normies, and for most of them only more than 5 years after the iPhone was released. My prediction of convergence would have failed without the modern smartphone, which I couldn’t foresee. It was pure luck. We needed a breakthrough.

          That doesn’t mean that you can’t predict anything with high certainty. You just don’t know whether the status quo will be disturbed. When you need a status quo disturbance for your prediction, you’re in the pure-luck category. When your prediction requires a lack of status quo changes, then your prediction is safer. And of course, the shorter the term, the better. When ChatGPT came out, Cursor and Claude Code could be predicted; I predicted them, because no change in the status quo was required and it was a short-term prediction. But if there had been a new breakthrough, then those wouldn’t have been created. When they predicted fully self-driving cars, or fewer people checking X-rays, you needed a status quo change: legal first, and in the case of general, fully self-driving cars, even technical breakthroughs. Good luck with that.

    • sarchertech 2 days ago

      Let’s say we do manage to develop a model that can replace radiologists in 20 years, but we stop training them today. What happens 15 years from now when we don’t have nearly enough radiologists?

      • lnenad a day ago

        I'm not saying it's a good idea to think like that, I'm just saying I'd wager he's right on thinking that AI will be in a good position in 20+ years.

    • nenenejej 2 days ago

      Radiologists can surely retrain to do something else adjacent? It's not like they'll suddenly be an 18-year-old with no degree trying to find a job.

      • GoatInGrey 2 days ago

        Why do we assume that radiologists would have literally 0% involvement in the radiology workflow?

        I could see the assumption that one radiologist supervises a group of automated radiology machines (like a worker in an automated factory). Maybe assume that they'd be relegated to an auditing role. But that they'd go completely extinct? There's no evidence, even historically, of a service being consumed with zero human intervention.

        • robotresearcher 2 days ago

          Alarm clocks. Elevators. ATMs. Laundry. Chess opponent. Watching a movie. …

    • timeon 2 days ago

      > I'd wager

      Maybe don't?

    • padjo 2 days ago

      So all the current radiologists are going to live until 2051?

      • nenenejej 2 days ago

        Even Marie Curie would have.

    • lomase 2 days ago

      Why did you write 2021? It clearly says 2016.

      I mean, if you change the data to fit your argument, you will always make it look correct.

      Let's assume we stop in 2016 like he said; where do we get the 1,000 radiologists the US needs each year?

      • stonemetal12 2 days ago

        > the training lasts 5 years. At the end of their training the year is 2021

        The training lasts 5 years, and 2021 - 5 = 2016. If they stopped accepting people into the radiology program but let the people already in it finish, then you would stop getting new radiologists in 2021.

        • nick__m 2 days ago

          Training is a lot longer than that in Québec. Radiology is a specialty, so they must first do their 5 years in medicine, followed by a 5-year diagnostic radiology residency program. And it's frequently followed by a 2-year fellowship.

          So 5 + 5 + [0,2] is [10,12] years of training.

        • sarchertech 2 days ago

          Residents are working doctors, so we’d start losing useful work the year we stop taking new residents.

        • lomase 2 days ago

          'people should stop training radiologists now'

          That sentence and what you wrote are not 100% the same.

  • nextworddev 2 days ago

    Look, if we were okay with tolerating less regulation in medicine and dismantled the AMA, Hinton would have been proven right by now and everyone would have been happier.

  • DonsDiscountGas a day ago

    Definitely an aggressive timeline but it seems like the biggest barrier to AI taking over radiology will be legal. Spending years training for a job which only continues to exist because of government fiat, which could change at any time, seems like a risky choice.

  • nenenejej 2 days ago

    Too damn hard to predict the future! We live in an age where 20 years is unseeable for a lot of things.

KnuthIsGod a day ago

I sent a lady today to a radiologist for a core biopsy of a likely malignancy.

And a man to a radiologist for a lumbar perineural injection.

And a person to a radiologist for a subacromial bursa injection.

And a month ago I sent a woman to a radiologist to have adenomyosis embolised.

Also talked to a patient today who I will probably send to a radiologist to have a postnephrectomy urinary leak embolised.

Is an LLM going to do that?

There is another issue.

If AI commoditises a skill, competent people with options will just shift to another skill while offloading the commoditised skill to someone else.

Due to automated ECG interpretation built into every machine, reimbursement has plummeted. So I have let my ECG interpretation skills rust while focusing on my neurology and movement disorder skills. They are fun... I also did part of a master's in AI decades ago (Prolog, Lisp, good times, machine vision, good times...)

So now if someone needs an ECG, I am probably going to send them to a cardiologist who will do an ECG, Holter, Echo, Stress Echo etc. Income for the nice friendly cardiologist; extra cost and time for the patient and the health system.

I can imagine, like food deserts, entire AI deserts in medicine that nobody wants to work in. A bit like geriatrics, rural medicine and psychiatry these days.

  • ACCount37 a day ago

    The goal of the healthcare system isn't to make sure that the doctors get paid big bucks. It is, allegedly, to heal people.

    Automating as much of that as possible and making healthcare more accessible should be pursued. Just like automated ECG interpretation made basic ECG more accessible.

  • cowsandmilk a day ago

    Interventional radiology is clearly different and requires more training than plain diagnostic radiology, which is reading images.

  • FireBeyond a day ago

    > Due to automated ECG interpretation built into every machine

    Oof - I hope the tools you're using as a physician are better than in the field as a paramedic.

    I have never met a Lifepak (or Zoll) that doesn't interpret anything but the most textbook sinus rhythm in pristine conditions as "ABNORMAL ECG - EVALUATION NEEDED".
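
    To illustrate why these built-in interpreters over-flag, here's a toy rule-based check (emphatically not Lifepak's or Zoll's actual algorithm, just a sketch of the failure mode): anything outside a narrow, textbook definition of sinus rhythm gets the scary banner, and noisy field data falls outside it constantly.

        def interpret(rr_intervals_ms):
            """Naive sinus-rhythm check from R-R intervals in milliseconds."""
            rates = [60000 / rr for rr in rr_intervals_ms]
            mean_rate = sum(rates) / len(rates)
            regular = max(rr_intervals_ms) - min(rr_intervals_ms) < 120
            if 60 <= mean_rate <= 100 and regular:
                return "NORMAL SINUS RHYTHM"
            return "ABNORMAL ECG - EVALUATION NEEDED"

        print(interpret([850, 860, 840, 855]))   # clean capture -> normal
        print(interpret([850, 1020, 700, 905]))  # motion artifact -> flagged

    Same patient, same heart; the second strip only differs by the kind of jitter you get in a moving ambulance, and it still trips the alarm.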

maz1b 2 days ago

As a doctor and full stack engineer, I would never go into radiology or seek further training in it. (obviously)

AI is going to augment radiologists first, and eventually, it will start to replace them. And existing radiologists will transition into stuff like interventional radiology or whatever new areas will come into the picture in the future.

  • jmhmd 2 days ago

    As a radiologist and full stack engineer, I’m not particularly worried about the profession going away. Changing, yes, but not more so than other medical or non-medical careers.

  • ProllyInfamous 2 days ago

    >AI is going to augment radiologists first, and eventually, it will start to replace them.

    I am a medical school drop-out — in my limited capacity, I concur, Doctor.

    My dentist's AI has already designed a new mouth for me, implants and all ("I'm only doing 1% of the finish-work: whatever the patient says doesn't feel quite right yet" - my DMD). He then CNCs in-house on his $xxx,xxx 4-axis.

    IMHO: Many classes of physicians are going to be reduced to nothing more than malpractice-insurance-paying business owners, MD/DO. The liability-holders, good doctor.

    In alignment with last week's H-1B discussion, it's interesting to note that ~30% of US physician residency "slots" (<$60k USD salary) are filled by these foreign visa-holders (so: +$100k cost per applicant, amortized over a few years of training, each).

  • donnfelker 2 days ago

    There's a number of you (engineer + doctor), though quite rare. I have a few friends who are engineers as well as doctors. You're like unicorns in your field. The Neo and Morpheus of the medical industry - you can see and understand things that most people can't in your typical field (medicine). Kudos to you!

    • thevillagechief 2 days ago

      This was actually my dream career path when I was younger. Unfortunately there's just no way I could have afforded the time and resources to pursue both, and I'd never heard of Biomedical Engineering where I grew up.

  • catoc 2 days ago

    As a doctor and full stack engineer you'd have a perfect future ahead of you in radiology - the profession will not go away, but it will need doctors who can bridge the full medical-tech range.

  • kstrauser 2 days ago

    What’s your take on pharmacists? To my naive eyes, that seems like a certainty for replacement. What extra value does human judgement bring to their work?

    • mandevil 2 days ago

      My wife is a clinical pharmacist at a hospital. I am a SWE working on AI/ML related stuff. We've talked about this a lot. She thinks that the current generation of software is not a replacement for what she does now, and finds the alerts they provide mostly annoying. The last time this came up, she gave me two examples:

      A) The night before, a woman in her 40's came in to the ER suffering a major psychological breakdown of some kind (she was vague to protect patient privacy). The Dr prescribed a major sedative, and the software alerted that they didn't have a negative pregnancy test because this drug is not approved for pregnant women and so should not be given. However, in my wife's clinical judgement- honed by years of training, reading papers, going to conferences, actual work experience and just talking to colleagues- the risk to a (potential) fetus from the drug was less than the risk to a (potential) fetus from mom going through an untreated mental health episode and so she approved the drug and overrode the alert.

      B) A prescriber had earlier in that week written a script for Tylenol to be administered "PR" (per rectum) rather than "PRN" (pro re nata, i.e. as needed). PR Tylenol is a perfectly valid thing that is sometimes the correct choice, and was stocked by the hospital for that reason. But my wife recognized that this wasn't one of the cases where that was necessary, and called the nurse to call the prescriber to get it changed so the nurse wouldn't have to give the patient a Tylenol suppository. This time there were no alerts, no flags from the software; it was just her looking at it and saying "in my clinical judgement, this isn't the right administration for this situation, and will make things worse".

      So someone- with expensively trained (and probably licensed) judgement- will still need to look over the results of this AI pharmacist and have the power to override its decisions. And that means that they will need to have enough time per case to build a mental model of the situation in their brain, figure out what is happening, and override if necessary. And it needs to be someone different from the person filling out the Rx, for Swiss cheese model of safety reasons.

      Congratulations, we've just described a pharmacist.

      • philwelch 2 days ago

        > And it needs to be someone different from the person filling out the Rx, for Swiss cheese model of safety reasons.

        This is something I question. If you go to a specialist, and the specialist judges that you need surgery, he can just schedule and perform the surgery himself. There’s no other medical professional whose sole job is to second-guess his clinical judgment. If you want that, you can always get a second opinion. I have a hard time buying the argument that prescription drugs always need that second level of gatekeeping when surgery doesn’t.

        • mandevil 2 days ago

          So, the main reason for the historical separation (in the European tradition) between doctor and pharmacist was profit motive: you didn't want the person prescribing to have a financial stake in the treatment, else they would prescribe very expensive medicine in all cases. And surgeons in particular do have a profit motive (they are paid per service), and it is well known within the broader medical community that surgeons will almost always choose to cut. We largely gate-keep this with the primary care physician providing a recommendation to the specialist. The PCP says "this may be something worth treating with surgery" when they recommend you go see a specialist rather than prescribing something themselves, and then the surgeon confirms (almost always).

          That pharmacists also provide a safety check is a more modern benefit, due to their extensive training and ability to see all of the drugs that you are on (while a specialist only knows what they have prescribed). And surgeons also have a team to double-check them while they are operating, to confirm that they are doing the surgery on the correct side of the body, etc. Because these safety checks are incredibly important, and we don't want to lose them.

    • cko 2 days ago

      I am a pharmacist who dabbles in web dev. We should be easy to replace, because all of our work checking pill images and drug interactions is actually already automated, or the software already tells us everything.

      If every doctor agreed to prescribe electronically (instead of calling it in or writing it down) using one single standard / platform / vendor, and all pharmacy software also used the same platform / standard, then our jobs would definitely be redundant.

      I worked at a hospital where doctors, pharmacists and nurses basically all use the same software, and most of the time we click approve, approve, approve without any data entry.

      Of course we also make IVs and compounds by hand, but that's a small part of our job.

    • skadamou 2 days ago

      I'm not a doc or a pharmacist (though I am in med school), and I'm sure there are areas where AI could do some of a pharmacist's job. But on the outpatient side they do things like answering questions for patients and helping them interpret instructions that I don't think we want AI to do... or at least I really doubt an AI's ability to gauge how well someone is understanding instructions and adjust how it explains something based on that assessment... On the inpatient side, I have seen pharmacists help physicians grapple with the pros and cons of certain treatments and make judgement calls about dosing that I think it would be hard to trust an AI to do, because there is no "right" answer really. It's about balancing trade-offs.

      IDK, these are just limitations - people that really believe in AI will tell you there is basically nothing it can't do... eventually. I guess it's just a matter of how long you want to wait for eventually to come.

    • ralusek 2 days ago

      I work on a kiosk (MedifriendRx) which, to some degree "replaces" pharmacists and pharmacy staff.

      The kiosk is placed inside of a clinic/hospital setting, and rather than driving to the pharmacy, you pick up your medications at the kiosk.

      Pharmacists are currently still very involved in the process, but it's not necessarily for any technical reason. For example, new prescriptions are (by most states' boards of pharmacies) required to have a consultation between a pharmacist and a patient. So the kiosk has to facilitate a video call with a pharmacist using our portal. Mind you, this means the pharmacist could work from home, or could queue up tons of consultations back to back in a way that would allow one pharmacist to do the work of 5-10 working at a pharmacy, but they're still required in the mix.

      Another thing we need to do for regulatory purposes is when we're indexing the medication in the kiosk, the kiosk has to capture images of the bottles as they're stocked. After the kiosk applies a patient label, we then have to take another round of images. Once this happens, this will populate in the pharmacist portal, and a pharmacist is required to take a look at both sets of images and approve or reject the container. Again, they're able to do this all very quickly and remotely, but they're still required by law to do this.

      TL;DR I make an automated dispensing kiosk that could "replace" pharmacists, but for the time being, they're legally required to be involved at multiple steps in the process. To what degree this is a transitory period while technology establishes a reputation for itself as reliable, and to what degree this is simply a persistent fixture of "cover your ass" that will continue indefinitely, I cannot say.
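
      For the curious, the checkpoint structure looks roughly like this (a minimal sketch; the class and field names are invented for illustration, not our actual code):

          from dataclasses import dataclass, field
          from enum import Enum, auto

          class Status(Enum):
              STOCKED = auto()    # bottle imaged on intake
              LABELED = auto()    # patient label applied, re-imaged
              APPROVED = auto()   # remote pharmacist sign-off
              REJECTED = auto()

          @dataclass
          class Container:
              rx_number: str
              stocking_images: list = field(default_factory=list)
              labeled_images: list = field(default_factory=list)
              status: Status = Status.STOCKED

          def pharmacist_review(c: Container, ok: bool) -> None:
              """The step state boards still require a human to perform."""
              assert c.stocking_images and c.labeled_images, "need both image sets"
              c.status = Status.APPROVED if ok else Status.REJECTED

      Everything around that review step is automatable; the review itself is the legally load-bearing part.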

      • tartoran 2 days ago

        Pharmacists are not going to be replaced; their jobs, like most other jobs touched by AI, will evolve and possibly shrink in demand, but they won't completely disappear. AI is a tool that some professional has to use, after all.

  • cfu28 2 days ago

    I feel like I keep running into your comments on HN. There are dozens of us!

  • Seattle3503 2 days ago

    I could see that as more radiology AI tools become available to non-radiologist medical providers, they might choose to leverage the quick feedback those tools provide and not wait for a radiologist to weigh in, even if they could gain something from the radiologist. They could make a decision while the patient is still in the room with them.

  • seesthruya a day ago

    If you believe this is true, why stop at radiology? Couldn't the same be said for every other non-surgical specialty?

    • chromatin a day ago

      Partially true, and the answer to that is runway -- it will be a very long time before all the other specialties are fully augmented. With respect to "non-surgical" you may be underestimating the number and variety of procedures performed by non-surgeons (e.g. Internal Medicine physicians) -- thyroid biopsy, bronchoscopy, endoscopic retrograde cholangiopancreatography, liquid nitrogen ablation of skin lesion, bone marrow aspiration, etc.

      The other answer is that AI will not hold your hand in the ICU, or share with you how their own mother felt on the same chemo regimen that you are being prescribed.

jasonhong 2 days ago

In May earlier this year, the New York Times had a similar article about AI not replacing radiologists: https://archive.is/cw1Zt

It has similar insights, and good comments from doctors and from Hinton:

“It can augment, assist and quantify, but I am not in a place where I give up interpretive conclusions to the technology.”

“Five years from now, it will be malpractice not to use A.I.,” he said. “But it will be humans and A.I. working together.”

Dr. Hinton agrees. In retrospect, he believes he spoke too broadly in 2016, he said in an email. He didn’t make clear that he was speaking purely about image analysis, and was wrong on timing but not the direction, he added.

palmotea 2 days ago

What we need is a mandate for AI transformation of Radiology: Radiologists must be required to use AI every day on X% of scans, their productivity must double with the use of AI or they'll get fired, etc. To quote CEOs everywhere: AI is a transformative technology unlike any we've ever seen in our careers, and we must embrace it in a desperate FOMO way, anything else is unacceptable.

  • lm28469 2 days ago

    I can't even tell if it's sarcasm anymore

    • GoatInGrey 2 days ago

      > we must embrace it in a desperate FOMO way

      It's clearly satire with the little jabs like this.

simonw 2 days ago

I wouldn't trust a non-radiologist to safely interpret the results of an AI model for radiology, no matter how well that model performs in benchmarks.

Similar to how a model that can do "PhD-level research" is of little use to me if I don't have my own PhD in the topic area it's researching for me, because how am I supposed to analyze a 20 page research report and figure out if it's credible or not?

  • tanaros 2 days ago

    The notion of “PhD-level research” is too vague to be useful anyways. Is it equivalent to a preprint, a poster, a workshop paper, a conference paper, a journal submission, or a book? Is it expected to pass peer review in a prestigious venue, a mid-tier venue, or simply any venue at all?

    There’s wildly varying levels of quality among these options, even though they could all reasonably be called “PhD-level research.”

    • matthewdgreen 2 days ago

      I'm a professor who trains PhDs in cryptography, and I can say that it genuinely does have knowledge equivalent to a PhD student. Unfortunately I've never gotten it to produce a novel result. And occasionally it does frightening stuff, like swapping the + and * in a polynomial evaluation when I ask it to format a LaTeX algorithm.

  • naasking 2 days ago

    Why, ask another deep research model to critique it of course! ;-)

ModernMech 2 days ago

Every use of AI has its own version of the "person with 10 fingers" problem that AI image generation faces and can't seem to solve. For programmers, it's code that calls made-up libraries and invents language semantics. In prose, it's completely incoherent narratives that forget where they are going halfway through. For lawyers, it's made-up case law and citations. Same for scientists: made-up authorities, papers and results.

AI art is getting better, but it's still very easy for me to quickly distinguish an AI result from everything else, because I can visually inspect the artifacts, and they're usually not very subtle.

I'm not a radiologist, but I would imagine AI does the same thing here: flagging things as cancer that aren't, and missing things that are, so it takes an expert to separate the false positives and false negatives from the true findings. So we're back at square one, except the expertise has shifted from interpreting the image to interpreting the image and also interpreting the AI.

  • red369 2 days ago

    All of the examples you gave (which I agree with, btw!) are generative AI, whereas I assume radiology would benefit more from the machine learning (ML) type of AI: image in -> black-box ML decides whether it matches a pattern -> verdict out.

    I suppose first of all: is that generally agreed? People aren't expecting an LLM to give a radiology opinion, the same way that you can feed a PDF or an image into ChatGPT and ask it something about it, are they?

    I'm interested whether most people here have a higher opinion of ML than of the generative AIs, in terms of giving a reliably useful output. Or do a lot of you think that these also just create so much checking it would be easier to just have a human do the original work?

    I think it's probably worth excluding self-driving from my above question, since that is a particularly difficult area to agree anything on.
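
    For clarity, the discriminative pipeline I mean is just this (a minimal sketch; the model file, preprocessing and threshold are all hypothetical stand-ins):

        import torch
        from torchvision import transforms
        from PIL import Image

        model = torch.jit.load("chest_xray_model.pt")  # hypothetical trained net
        model.eval()

        preprocess = transforms.Compose([
            transforms.Grayscale(num_output_channels=3),
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

        x = preprocess(Image.open("study_0001.png")).unsqueeze(0)  # image in
        with torch.no_grad():
            prob = torch.sigmoid(model(x)).item()                  # black box

        print("flag for review" if prob >= 0.5 else "no finding")  # verdict out

    There is no text generation anywhere in that loop, which is why its failure modes look different from an LLM's hallucinations.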

  • KittenInABox 2 days ago

    > AI art is getting better but still it's very easy for me to quickly distinguish AI result from everything else, because I can visually inspect the artifacts and it's usually not very subtle.

    I actually disagree in that it's not easy for me at all to quickly distinguish AI images from everything else. But I think we might differ what we mean by "quickly". I can quickly distinguish AI if I am looking. But if I'm mindlessly doomscrolling I cannot always distinguish 'random art of an attractive busty woman in generic fantasy armor that a streamer I follow shared' as AI. I cannot always distinguish 'reply-guy profile picture that's like a couple dozen pixels in dimensions' as AI. I also cannot always tell if someone is using a filter if I'm looking for maybe 5 seconds tops while I scroll.

    • GoatInGrey 2 days ago

      AI art is easy to pick out when no effort was made to deviate from the default style that the models use. Where the person put in a basic prompt of the desired contents ("man freezing on a bed") and calls it a day. When some craftsmanship is applied to make it more original, that's when it gets progressively harder to catch it at first glance. Though I'd argue that it's more transformative and thus warrants less criticism than the lazy usage.

      As a related aside, I've started seeing businesses clearly using ChatGPT for their logos. You can tell from the style and how much random detail there is contrasted with the fact that it's a small boba tea shop with two employees. I am still trying to organize my thoughts on that one.

      Edit:

      Example: https://cloudfront-us-east-1.images.arcpublishing.com/brookf...

hn_throwaway_99 2 days ago

As someone with a family member in radiology, I thought an important thing was missing from the article and comments I've seen here:

> In 2025, American diagnostic radiology residency programs offered a record 1,208 positions across all radiology specialties, a four percent increase from 2024, and the field’s vacancy rates are at all-time highs.

One reason I hear (anecdotally) that vacancy rates are so high is that fewer top quality people are going into radiology. That is, when med students choose a specialty, they're not just choosing for now, but they need to choose a specialty that will be around in 35-40 years. Many med students see the writing on the wall and are reluctant to invest a huge amount of blood, sweat, tears and money into a residency when tech may potentially short circuit their career eventually.

So what you see is that even though AI is not there yet (I'd really highlight this from the article: "First, while models beat humans on benchmarks, the standardized tests designed to measure AI performance, they struggle to replicate this performance in hospital conditions." For the programmers in the room, it's like AI that can solve all the leetcode problems but then falls over in a moderately complicated real situation), there is a shortage of radiologists now because med students are worried about what will happen in 10/15/20 years.

throw954387543 a day ago

I find that SWEs and tech staff usually think: if it takes my job, that means everything else can be automated and we no longer need people to do the automating. This and other articles show me that most professions have more than just a skill component to them, unlike SWE. The more I read, the more I think software and tech will be the only relatively high-paid jobs disrupted significantly by AI in the short/medium term (there are industries at lower pay that are disrupted too). SWE's main moat was skill difficulty, which AI can overcome.

This article suggests tech/software is probably the first major job to be disrupted significantly, before other industries. There is an assumption among tech workers that you need a tech person to employ the AI to do the automation, meaning they are the last to go. I think that assumption is questionable if AI gets good enough, especially if you can get spec writers/QAs/BAs/etc. to do the automation of other industries once the regulation/liability side is worked out per industry. I'm hearing and seeing more than just rumors of AI tooling that mirrors whole software-developer workflows being trialed in large tech firms now; SWE is the lowest, juiciest hanging fruit.

I still assert that the next industry to feel the most pain is SWEs and tech workers themselves. In an AI world, skills and expertise are no longer moats for your job security and ability to provide for yourself; the remaining moats are regulation, lack of data, physical-world interaction, liability, and locality. Most professions have some of the above.

Anecdotally, in my local social circle, as an SWE I'm now seen as the person with the least desirable job from a social status and security perspective, a massive change from 5 years ago. People would rather be a "truck driver" or, in this case, a "radiologist". I hope I'm wrong, of course, for my own personal sake.

  • random3 a day ago

    I think you haven’t seen what most white collar jobs entail. Briefly: it’s stuff software has been automating, incompletely and somewhat badly, for the past 70 years. If you automate x% of software developers, it means you have already automated at least the same percentage of every white collar job out there. Exceptions are regulated sectors, like healthcare.

    This said, interpreting images is not an image problem; it’s a human-body reasoning problem. If you can’t have AI that replaces any engineer, I’d assume replacing a doctor will be just as unlikely. The healthcare bar is much higher: what works in 80% of coding scenarios may be good enough for software, but it’s not good enough for life-critical decisions.

    So likely we’re not seeing any impact on jobs from AI in the relevant health sectors. Now, if your friends think that the rest of the paper pushing won’t be affected, or that their jobs entail some unique people skills, they’re in for a big surprise.

    • throw954387543 a day ago

      I've been in the industry long enough to see the jobs you talk about in the office. Many don't require a "profession", and yes, these are up for automation unless there is some other moat (e.g. locality, or the personal touch for sales staff). The "Software Engineer" is now grouped with such people: general process office workers, once their differentiating factor (i.e. skill difficulty) is trivialized by AI. The fact that it is paid more than those general jobs just makes it a more enticing target.

      Most engineers, even accountants, any profession with a title that required some study, usually have the moat of liability and/or locality. SWEs don't really have this in general: a unique job that, while requiring a degree at many high-tech orgs, will be the first to go. As you said, 80% is enough for many domains here. Any other engineering profession (e.g. electrical, civil) has other moats that mean it won't be as disrupted.

      Most of the people I talk to about this studied for general professions, or work trades and physical jobs. SWE is especially affected, particularly at the higher end where study was required, because for the same "effort" as a CS/engineering degree you could have been in any other profession with more protection from AI (bootcamps aside). AI may make the CS/SWE university pathway redundant: ironic if most college/uni jobs stay safe except in the industry that birthed the AI in the first place.

ALittleLight 2 days ago

The only part of this article I believe is the legal and bureaucratic burdens part.

"Human radiologists spend a minority of their time on diagnostics and the majority on other activities, like talking to patients and fellow clinicians"

I've had the misfortune of dealing with a radiologist or two this year. They spent 10-20 minutes talking about the imaging and the results with me. What they said was very superficial and they didn't have answers to several of the questions I asked.

I went over the images and pathology reports with ChatGPT and it was much better informed, did have answers for my questions, and had additional questions I should have been asking. I've used ChatGPT's information on the rare occasions when doctors deign to speak with me and it's always been right. Me, repeating conclusions and observations ChatGPT made, to my doctors, has twice changed the course of my treatment this year, and the doctors have never said anything I've learned from ChatGPT is wrong. By contrast, my doctors are often wrong, forgetful, or mistaken. I trust ChatGPT way more than them.

Good image recognition models probably are much better than human radiologists already, and certainly could be vastly better. One obstacle this post mentions (AI models "struggle to replicate this performance in hospital conditions") is purely a choice. If HMOs trained models on real data then this would no longer be the case, if it even is now, which I doubt.

I think it's pretty clearly doctors, and their various bureaucratic and legal allies, defending their legal monopoly so they can provide worse and slower healthcare at higher prices, so they continue to make money, at the small cost of the sick getting worse and dying.

rossdavidh a day ago

This all sounds very familiar; I recall IBM Watson (remember the last AI hype cycle?) was going to replace radiologists, and in the end IBM Watson didn't even save IBM's bottom line.

Not saying nothing will come of it, but there is a definite pattern to AI hype cycles, and radiology seems to be one of the recurring points.

https://en.wikipedia.org/wiki/AI_winter

theOGognf 2 days ago

This article is pretty good. My current work is transitioning CV models in a large, local hospital system to a more unified deployment system, and much of the content aligns with conversations we have with providers, operations, etc.

I think the part that says models will reduce time to complete tasks and allow providers to focus on other tasks is on point in particular. For one CV task, we're only saving on average <30 min of work per study, so it isn't a massive savings from a provider's perspective. But scaled across the whole hospital, it's huge savings.
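
To see why small per-study savings matter at fleet scale, here's the back-of-envelope (every number below is a made-up illustration, not our hospital's actual volume):

    minutes_saved_per_study = 20   # "on average <30 min" per study
    studies_per_day = 300          # hypothetical volume for one CV task
    working_days_per_year = 250

    hours_per_year = (minutes_saved_per_study * studies_per_day
                      * working_days_per_year) / 60
    print(f"~{hours_per_year:,.0f} clinician-hours saved per year")  # ~25,000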

  • asadotzler 2 days ago

    >reduce time to complete tasks and allow providers to focus on other tasks

    Or, far more likely, to cut costs and increase profits.

bparsons 2 days ago

The thing with medical services is that there is never enough.

If you are rich and care about your health (especially as you move past age 40), you probably have use for a physiotherapist, a nutritionist, a therapist, regular blood analysis, comprehensive cardio screening, comprehensive cancer screening, etc. Arguably, there is no limit to the amount of medical services that people could use if they were cheap and accessible enough.

Even if AI tools add 1-2% on the diagnostic side every year, it will take a very, very long time to catch up to demand.

ViktorRay 2 days ago

One thing people aren't talking about is liability.

At the end of the day if the radiologist makes an error the radiologist gets sued.

If AI replaces the radiologist then it is OpenAI or some other AI company that will get sued each and every time the AI model makes a mistake. No AI company wants to be on the hook for that.

So what will happen? Simple. AI will always remain just a tool to assist doctors. But there will always be a disclaimer attached to the output saying that ultimately the radiologist should use his or her judgement. And then the liability would remain with the human not the AI company.

Maybe AI will "replace" radiologists in very poor countries where people may not have had access to radiologists in the first place. In some places in the world it is cheap to get an xray but still can be expensive to pay someone to interpret it. But in the United States the fear of malpractice will mean radiologists never go away.

EDIT: I know the article mentions liability but it mentions it as just one reason among many. My contention is that liability will be the fundamental reason radiologists are never replaced regardless of how good the AI systems get. This applies to other specialities too.

  • charcircuit 2 days ago

    >At the end of the day if the radiologist makes an error the radiologist gets sued.

    Are you sure? Who would want to be a radiologist then when a single false negative could bankrupt you? I think it's more likely that as long as they make a best effort at trying to classify correctly then they would be fine.

    • bonoboTP 2 days ago

      Doctors have malpractice insurance and other kinds of insurance for that. They won't go bankrupt in reality.

      • charcircuit 2 days ago

        By that logic you would get malpractice insurance for the AI to similarly offload the risk.

        • bonoboTP 2 days ago

          Yeah, I mean it's analogous to car insurance for self driving cars. People, including lawyers, insurers and courts are just averse to it intuitively. I'm not saying they are wrong or right, but it's how it is.

          I believe medical AI will probably take hold first in poorer countries where the existing care is too bad or unaffordable; then, as it proves itself there, it may slowly find its way to richer countries.

          But probably lobbying will be strong against it, just as you can't get cheap generic medications made in India if you live in the US.

joelthelion a day ago

As someone who works in a related field, I think Hinton's original assessment is mostly correct (although we will always need a few radiologists around).

However, the timelines are far too optimistic. It takes time to refine the technology for day to day use, adapt mindsets and processes, and update regulations as needed. But it will come, it just needs more time.

It's the same story as self-driving vehicles or programming. AI will have an impact. It just takes time.

donnfelker 2 days ago

Oh yes it is. I have worked on projects where highly trained, specialized doctors have helped train the models (or trained them themselves) to catch random, very-difficult-to-notice conditions via radiology. Some of these systems are deployed at different hospitals and medical facilities around the country. The radiologist still does their job, but for some odd, random, hard-to-notice conditions, AI is a literal life saver. For example, pancreas divisum, an abnormality in which the pancreatic ducts fail to fuse, can cause all kinds of insane issues. But it's not something most people know about or look for. AI can pick that up in a second. It can then alert the radiologist to an abnormality, and they can then verify. It's enhancing the capabilities of radiologists.
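
The plumbing for that kind of alerting is conceptually simple. A minimal sketch (the detector names, scores, and interface are all invented for illustration, not any deployed system's API):

    from typing import Callable, Dict, List

    # study bytes -> probability that a specific finding is present
    Detector = Callable[[bytes], float]

    REGISTRY: Dict[str, Detector] = {
        "pancreas_divisum": lambda study: 0.97,   # stand-ins for real models
        "pneumothorax": lambda study: 0.03,
        "pulmonary_embolism": lambda study: 0.10,
    }

    def screen(study: bytes, threshold: float = 0.9) -> List[str]:
        """Run every specialist detector; return findings worth an alert."""
        return [name for name, detect in REGISTRY.items()
                if detect(study) >= threshold]

    print(screen(b"...dicom bytes..."))  # -> ['pancreas_divisum']

The radiologist still signs off on every flag; the models just make sure the rare stuff gets a second look.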

  • seesthruya a day ago

    > Some of these systems are deployed at different hospitals and medical facilities around the country. The radiologist still does their job, but for some odd, random, hard-to-notice conditions, AI is a literal life saver

    I would be very interested if you could provide specific examples.

  • random9749832 2 days ago

    >Oh yes it is.

    >It's enhacing the capabilties of radiologists.

    So it is not replacing radiologists?

    • matwood 2 days ago

      I guess we have to define 'replace' then. If we need fewer radiologists, does that count as replacing? IDK.

      It seems that with AI in particular, many operate with 0/1 thinking in that it can only be useless or take over the world with nothing in between.

    • bluGill 2 days ago

      Not yet. Time will tell but they have a long way to go if they ever do. They are useful tools now.

  • Mehvix 2 days ago

    What's the end game here, have a slew of finetuned models for these varying edge cases?

justlikereddit 2 days ago

As the prediction of radiologists going the way of the dodo sprang from improvements in image recognition, why don't we see premature and hysterically hyped predictions of psychiatrists being unemployed due to language models?

Their workday consists of conversations, questions and reading. Something LLMs more than excel at doing, tirelessly and in huge volumes.

And if radiologists are still the top bet due to image recognition being so much hotter, then why not add dermatologists to the extinction roster? They only ever look at regular light images, it should be a lower hanging fruit.

(I'm aware of the nuances that make automation of these work roles hard, I'm just trying to shine some light on the mystery of radiologists being perceived as the perennial easy target)

samweb3 2 days ago

It's interesting to see people fighting so hard to preserve these jobs. Do people want to work that badly? If a magic wand could do everything radiologists can do, would we embrace it or invent reasons to occupy 40+ hours a week of time anyway? If a magic wand might be on the horizon, shouldn't we all be fighting to find it, and even finding ways to tweak our behaviors to maximize the amount of free time that could be generated?

  • xboxnolifes 2 days ago

    This isn't going to generate free time. Its going to generate homelessness and increasing wealth inequality.

    Our current economic system does not support improved productivity leading to less working (with equal wealth) for the working class.

  • kulshan 2 days ago

    People enjoy the comfort of consistent food and housing. People also enjoy serving their community. Working helps provide both. So for many folks, sacrificing that security and comfort to get to a horizon of greater leisure time is scary, especially when you have to take it on faith that AI is a magic wand changing your world for the better. Is that supported by the evidence? It's quite the leap in belief and life change. Hesitancy seems appropriate to me.

  • Simulacra 2 days ago

    It's not that so much as that no one wants to lose their job to innovation. Just look at typewriter repairmen, TV and radio repairmen, even taxi drivers at one point. One day AI and automation will make many jobs redundant, so rather than resisting the march of technology, prepare for it and find a way to work alongside innovation, not against it.

  • simianwords 2 days ago

    People like the promised stability that comes with certain jobs. Some jobs are sold as "study this, get the GPA, apply to this university and do these things and you will get a stable job at the end". AI plans to disrupt this path.

  • snowwrestler 2 days ago

    It is precisely the attraction of the vision that makes people fight so hard to preserve these jobs.

    Because we know how well the jobs address a need, and we also know how many times throughout history we have been promised magic wands that never quite showed up.

    And guess who is best equipped to measure the actual level of “magic”? Experts like radiologists. We need them the most along the way, not the least.

    If a magic wand actually shows up, it will be obvious to everyone and we’ll all adopt it voluntarily. Just like thousands of innovations in history.

  • trenchpilgrim 2 days ago

    Problem: Food, rent, utilities and healthcare cost money.

danielodievich 2 days ago

One of my best friends is a recently retired neuroradiologist. He told me his old practice is begging him to come back for whatever money he wants. He isn't interested. Good for him to have made enough to not care anymore.

Joker_vD 2 days ago

So instead of having to train and employ radiologists, we will train and employ radiologists and pay for AI inference on top. Excuse me, but how is this beneficial in any way? It's strictly more expensive, the result is of the same quality, and productivity is the same?

  • chrisgd 2 days ago

    A lot of the tech in the space is workflow-related right now. The AI triages to the top of the worklist those cases deemed clinically significant, potentially saving time. I can't tell you it isn't simply responding to the referring doctor's urgent stamp, though.

  • theOGognf 2 days ago

    Not sure about other hospital systems, but the one I work at is developing CV systems to help fill workforce gaps in places where there aren't as many trained professionals, or even the resources to train them

  • meken 2 days ago

    From the article:

    > Some products can reorder radiologist worklists to prioritize critical cases, suggest next steps for care teams, or generate structured draft reports that fit into hospital record systems.

jmhmd 2 days ago

While a lot of this rings true, I think the analysis is skewed toward academic radiology. In private practice, everything is optimized for throughput, so the idea that most rads spend less than half of their time reading studies is, I think, probably way off.

ineedasername 2 days ago

>"they struggle to replicate this performance in hospital conditions"

Are there systematic reasons why radiologists in hospitals are inaccurately assessing the AI's output? If the AI models are better than humans on novel test data, then the thing that has changed in the hospital setting relative to the AI-human testing environment is not the AI; it is the human, operating under less controlled conditions, with additional pressures, workloads, etc. Perhaps the AIs aren't performing as poorly as thought, and perhaps this is why they performed better to begin with. Otherwise: production ML systems that fall this far below their test-set performance are generally not held in as high regard as these models are. Some degradation is expected, but "struggle to replicate" implies more.

>"Most tools can only diagnose abnormalities that are common in training data"

Well yes, training on novel examples is one thing; training on something categorically different is another thing altogether. There are also thresholds of detection: detecting nothing, detecting with lower confidence, an unknown anomaly, a false positive, etc. How much of the "inaccuracy" isn't wrong, but simply something that is amended or expanded upon when reviewed? Some details here would be useful.

I'm highly skeptical when generalized statements leave out the directly relevant information they refer to. The few sources provided don't cover model accuracy at all, and the primary factor cited as problematic with AI review, a lack of diversity in study composition for women, ethnic variation, and children, links to a meta-study that was not at all related to the composition of models and their training data sets.

The article begins as what appears to be a criticism of AI accuracy, with the thinness outlined above, but then quickly moves on to "but that's not what radiologists do anyway," providing a categorical % breakdown of time spent in which Personal/Meetings/Meals plus some mixture of the other categories combine to form at least a third that could be categorized as "time where the human isn't necessary if images are being interpreted by models."

I'm not saying there aren't points here, but overall it simply sounds like the hand-wavy meandering of someone trying to gatekeep a profession whose services could be massively more utilized with more automation, and sure, perhaps at even higher quality with more radiologists to boot. But perfect is the enemy of the good on that score, with enormous costs and delays in service in the meantime.

  • jmhmd 2 days ago

    Poorer performance in real hospital settings has more to do with the introduction of new/unexpected/poor-quality data (i.e. real-world data) that the model was not trained on or optimized for. The models still do very well generally, but often do not hit the performance submitted to the FDA or claimed in marketing materials. This does not mean they aren't useful.

    Clinical AI also has to balance accuracy with workflow efficiency. It may be technically most accurate for a model to report every potential abnormality with an associated level of certainty, but this may inundate the radiologist with spurious findings that must be reviewed and rejected, slowing her down without adding clinical value. More data is not always better.
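
    To make that tradeoff concrete, here is a toy Python sketch with made-up scores and labels (nothing from a real product): raising the alert threshold cuts the spurious flags a reader must reject, but it also cuts sensitivity.

      import numpy as np

      rng = np.random.default_rng(0)

      # Made-up model scores: 1,000 normal studies and 20 with the target finding.
      normal_scores = rng.beta(2, 5, size=1000)  # normals tend to score low
      cancer_scores = rng.beta(5, 2, size=20)    # true positives tend to score high
      scores = np.concatenate([normal_scores, cancer_scores])
      labels = np.concatenate([np.zeros(1000), np.ones(20)])

      for threshold in (0.3, 0.5, 0.7):
          flagged = scores >= threshold
          sensitivity = flagged[labels == 1].mean()       # share of true cases caught
          false_alarms = int(flagged[labels == 0].sum())  # spurious flags to review
          print(f"threshold {threshold}: sensitivity {sensitivity:.2f}, "
                f"false alarms {false_alarms} per 1000 normals")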

    In order for the model to have high enough certainty to strike the right balance of sensitivity and specificity, many, many training examples are needed, and with some rarer entities that is difficult. It also inherently reduces the value of a model if it is only expected to identify its target disease 3 times a year.

    That’s not to say advances in AI won’t overcome these problems, just that they haven’t, yet.

  • nomel 2 days ago

    For anomaly systems like this, is it effective to invert the problem by not including the ailment/problem in the training data, then looking for a "confused" signal rather than an "x% probability of ailment" type signal?

    • ineedasername 2 days ago

      On that, I'm not sure. My area of ML and data science practice is, thankfully, not so high-stakes. There's a method of anomaly detection called a one-class SVM (Support Vector Machine) that is pretty much this: train on normal, flag on "wtf is this, you never trained me on this 01010##" <-- not actual ML model output or medical jargon. But I'm not sure that's what's most effective here. My gut instinct in first approaching the task would be to throw a bunch of models at it, mixed-methods, with the one-class SVM as a fallback. But I'm also way out of my depth on medical diagnostics ML, so that's just a generalist's guess.
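
      For what it's worth, a minimal sketch of that fallback using scikit-learn's OneClassSVM, on toy 2-D features (nothing medical about them):

        import numpy as np
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(42)

        # Train only on "normal" examples (toy 2-D stand-ins for image features).
        normal_train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

        # nu caps the fraction of training points the model may treat as outliers.
        model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal_train)

        # predict() returns +1 for "looks like the training data" and -1 for
        # "never saw anything like this", i.e. the "confused" signal asked about.
        new_cases = np.array([[0.1, -0.3],   # plausibly normal
                              [6.0, 6.0]])   # far outside the training distribution
        print(model.predict(new_cases))      # e.g. [ 1 -1]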

charv 2 days ago

I find the radiologist use case an illuminating one for the adoption of AI across business today. My takeaway is that when the tools get better, radiologists aren't replaced, but take up other important tasks that sometimes become secondary when unassisted reads are the primary goal.

  In particular, doctors appear to defer excessively to assistive AI tools in clinical settings in a way that they do not in lab settings. They did this even with much more primitive tools than we have today... The gap was largest when computer aids failed to recognize the malignancy itself; many doctors seemed to treat an absence of prompts as reassurance that a film was clean
Reminds me of the "slop" discussions happening right now. When the tools seem good but aren't, we develop a reliance that lets false negatives slip by, e.g. text that clearly "feels" written by a GPT model.

catigula 2 days ago

Can radiology results be instantly fed into pass/fail metrics like code can via tests?

Programming is the first job AI will replace. The rest come later.

rapatel0 2 days ago

I lived this previously. The author is missing some important context.

Spray-and-Pray Algorithms

After AlexNet, dozens of companies rushed into medical imaging. They grabbed whatever data they could find, trained a model, then pushed it through the FDA’s broken clearance process. Most of these products failed in practice because they were junk. In mammography, only 2–3 companies actually built clinically useful products.

Products actually have to be useful.

There were two products in the space: CAD and triage. CAD is basically an overlay on the screen as you read the case. Rads hated it because it was distracting, and because the feature-engineering-based CAD of the '80s and '90s was demonstrated to be a failure. Users basically ignored CADs.

Triage is when you prioritize cases (cancers to the top of the stack). This has little to no value: when you have a stack of 50 cases you have to read today, why do you care about the order? There were some niche use cases, but it was largely pointless. It could actually be detrimental: the algorithm would put the easy cancer cases on top, so the user would spend less time on the rest of the stack (where the harder cases ended up).
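
Mechanically, triage was close to trivial. A hypothetical sketch (made-up case IDs and scores) of what reordering a worklist amounts to:

  # Hypothetical worklist triage: sort today's stack by model suspicion score.
  worklist = [
      {"case_id": "A101", "suspicion": 0.12},
      {"case_id": "A102", "suspicion": 0.91},  # the "easy cancer" floats to the top
      {"case_id": "A103", "suspicion": 0.47},
  ]
  worklist.sort(key=lambda case: case["suspicion"], reverse=True)
  print([case["case_id"] for case in worklist])  # ['A102', 'A103', 'A101']

The stack is the same size afterward; only the order changes, which is exactly the problem described above.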

*Side note:* did you know that using CAD was a billable extra to insurance? Even though it was proven not to work, it remained reimbursable for years, up until a few years ago.

Poor Validation Standards

Models collapsed in the real world because the FDA process is designed for drugs and hardware, not adaptive software. Validation typically means ~300 "golden" cases, labeled by 3 radiologists with majority-vote arbitration. If 3 rads say it's cancer, it's cancer. If they disagree, it's "not a good case for the study." This filtering excludes the hard cases (where readers disagree), which are exactly what models need to handle in the real world. Instead of 500K noisy real-world studies, you validate on a sanitized dataset, and companies learned how to "cheat" by overfitting to these toy datasets. You can explain this to regulators endlessly, but the bureaucracy only accepts the previously blessed process. Note: the previous process was defined by CAD, a product cleared in the '80s and shown to fail miserably in clinical use. The validation standard behind that grand historical regulatory failure is the one you MUST use today for any device that looks like a CAD in mammography.
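
A toy sketch (hypothetical cases and labels, not any actual regulatory code) of the arbitration described above, where disagreement silently drops a case from the golden set:

  # Each validation candidate gets three radiologist reads.
  cases = {
      "case_01": ["cancer", "cancer", "cancer"],  # unanimous: kept as cancer
      "case_02": ["normal", "normal", "normal"],  # unanimous: kept as normal
      "case_03": ["cancer", "normal", "cancer"],  # readers disagree: dropped
  }

  golden_set = {}
  for case_id, reads in cases.items():
      if len(set(reads)) == 1:  # keep only cases with full agreement
          golden_set[case_id] = reads[0]
      # Discordant cases, the hard ones models face in the wild, never make it in.

  print(golden_set)  # {'case_01': 'cancer', 'case_02': 'normal'}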

Politics Over Outcomes

We ran the largest multi-site (15 sites) prospective trial in the space. Results: ~50% reduction in radiologist workload, an increased cancer detection rate, and 10x lower cost per study. We even caught cancers missed in the standard workflow. Clinics still resisted adoption, because admitting to missed cancers looked bad for their reputation. Bureaucratic EU healthcare systems preferred to avoid the embarrassment even though it was entirely internal.

I'll leave you with one particularly salient story. I was speaking to the head of a large US hospital IT/ops organization. We had a 30-minute conversation about how to keep our software's decisions out of the EMR/PACS so that they could avoid litigation risk. Not once did we talk about patient impact. Not once...

Despite all that, our system caught cancers that would have been missed. Last I checked at least 104 women had their cancers detected by our software and are still walking around. That’s the real win, even if politics buried the broader impact.

porridgeraisin 2 days ago

AI _is_ replacing radiologists. Where AI stands for "An Indian". Search for teleradiology, on Google and on HN too.

  • quadragenarian 2 days ago

    You do realize that in order to interpret imaging for a US based patient, any physician needs to have a US medical license?

    • pessimizer 2 days ago

      Any physician with a medical license in the US can sign off on an Indian physician's work. Or hundreds of them.

      • quadragenarian 2 days ago

        This already happens, for example with nighthawk coverage, where a non-US radiologist provides a wet read and then a US rad signs off. The US rad always bears the liability, however.

    • Am4TIfIsER0ppos 2 days ago

      States give out driver's licenses and trucking licenses; they'll give out medical licenses soon too.

feverzsj 2 days ago

Building a national remote radiology service would be much more cost-effective and accurate than these unreliable AIs.

1vuio0pswjnm7 2 days ago

Original HN title: "AI isn't replacing radiologists"

JoeAltmaier 2 days ago

My anecdote (not data): I took a friend to the emergency room on Sunday, after a bad crackup on a bicycle. Big bruise and swelling above the left eye socket and on the chin. Both hands sore and tender, with stabbing pain when any pressure was put on them.

The radiologist said No Fracture! So treat for concussion and release. Two days later, back with unendurable head pain. Reexamine: oh, sorry, yes, fractures. Morphine.

Wtf? Are the radiologists so overworked they can only glance at a test? Or was an AI at fault? I'll never know.

  • lostlogin a day ago

    At busy times it’s often someone junior and under pressure reading the imaging.

    Was it CT or X-rays?

    X-rays are garbage compared to CT for facial fractures in my (radiography) experience.

    Having taken a lot of both: if it’s abnormal on X-rays you do a CT. If the pain or swelling is huge but the X-ray normal, you do a CT. So why bother with X-rays?

  • vkou 2 days ago

    Could have just been one who sucks at their job. Half the people doing any job are below average at it.

whydoineedthis 2 days ago

I personally think we are in the pre-broadband era of AI. That is, what is currently being built will feed some AI dot-com-1.0-style bubble burst, but afterward there will be advancements in its distribution model that allow it to thrive and infiltrate our society at a core level. Right now it's a bunch of dreamers building the pets.com that no one asked for, but once the competition is shaken out, there will definitely be some clear winners, and maybe an Amazon.com of sorts for AI that people will adopt.

Simulacra 2 days ago

So... is it technically possible?

ninetyninenine a day ago

> Most tools can only diagnose abnormalities that are common in training data, and models often don’t work as well outside of their test conditions.

I don't understand this. All data in existence is fodder for training. Barring the privacy issue, which the article implies is orthogonal, training data and actual data are the same set of things.

Test conditions should be identical to real conditions. What in the world is the article saying? What actual differences are there?

The only issue I can think of is when there is LLM-like language logic in reading the results of a scan. For example, the radiologist looks at 30 sections of the image, they all have different relationships with each other, and those relationships end up influencing the outcome. But I doubt it's like that; radiology should be much simpler than learning a foreign language.

rendx 2 days ago

Huh? Since when does the word 'radiologist' not already imply being human? Did I miss a memo? The original title "AI isn't replacing radiologists" makes more sense, or why not "Demand for radiologists is at an all-time high".

renewiltord 2 days ago

The obvious answer is regulation and legal risk. It's the same reason retail pharmacists still have jobs despite currently being slow, poor-quality vending machines.

jiggawatts 2 days ago

The demand for horses was also at an all time high when the Model T Ford was introduced.

aussieguy1234 2 days ago

At the end of the day, a human is still needed to do the actual scans.