simonw 2 days ago

Here's the clinical case report: https://www.acpjournals.org/doi/10.7326/aimcc.2024.1260

Relevant section:

> Based on the timeline of this case, it appears that the patient either consulted ChatGPT 3.5 or 4.0 when considering how he might remove chloride from this diet. Unfortunately, we do not have access to his ChatGPT conversation log and we will never be able to know with certainty what exactly the output he received was, since individual responses are unique and build from previous inputs.

> However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do.

moduspol 2 days ago

I continue to be surprised that LLM providers haven't been legally cudgeled into neutering the models from ever giving anything that can be construed as medical advice.

I'm glad--I think LLMs are looking quite promising for medical use cases. I'm just genuinely surprised there's not been some big lawsuit yet over it providing some advice that leads to some negative outcome (whether due to hallucinations, the user leaving out key context, or something else).

  • qwertylicious 2 days ago

    This is the story of the modern tech industry at large: a major new technology is released, harms are caused, but because of industry norms and a favourable legal environment, companies aren't held liable for those harms.

    It's pretty amazing, really. Build a washing machine that burns houses down and the consequences are myriad and severe. But build a machine that allows countless people's private information to be leaked to bad actors and it's a year of credit monitoring and a mea culpa. Build a different machine that literally tells people to poison themselves and, not only are there no consequences, you find folks celebrating that the rules aren't getting in the way.

    Go figure.

    • moduspol 2 days ago

      I think the harms of expensive and/or limited and/or inconvenient access to even basic medical expert Q&A are far greater. Though they're not as easy to measure.

      • tempodox 2 days ago

        And, what, LLMs to the rescue?

        • rschneid 2 days ago

          I think their point is that, in general, social-scale healthcare is an under-solved problem in practice and LLMs have the potential to improve a significant portion of these challenges by increasing access to treatment. The availability of these tools will inevitably lead to more instances of reports like this (from the report the article is based on):

          > This case also highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes. Based on the timeline of this case, it appears that the patient either consulted ChatGPT 3.5 or 4.0 when considering how he might remove chloride from this diet. Unfortunately, we do not have access to his ChatGPT conversation log and we will never be able to know with certainty what exactly the output he received was, since individual responses are unique and build from previous inputs.

          However I don't see this single negative instance of a vast social-scale issue as much more than fear/emotion-mongering without at least MENTIONING that LLMs also have positive effects. Certainly, it doesn't seem like science to me. Unless these models are subtly leading otherwise healthy and well-adjusted users to unhealthy behavior, I don't see how this interaction with artificial intelligence is any different from the billions of confirmation-bias pitfalls that already occur daily using google and natural stupidity. From the article:

          > The case also raises broader concerns about the growing role of generative AI in personal health decisions. Chatbots like ChatGPT are trained to provide fluent, human-like responses. But they do not understand context, cannot assess user intent, and are not equipped to evaluate medical risk. In this case, the bot may have listed bromide as a chemical analogue to chloride without realizing that a user might interpret that information as a dietary recommendation.

          It just seems they've got an axe to grind and no technical understanding of the tool they're criticizing.

          To be fair, I feel there's much to study and discuss about pernicious effects of LLMs on mental health. I just don't think this article frames these topics constructively.

    • usrnm 2 days ago

      How many people do you think the early steam engines killed? Or airplanes?

      • qwertylicious 2 days ago

        Or sweatshops or radium-infused tinctures.

        We've moved on from the 1800s. Why are you using that as your baseline of expectation?

        • api 2 days ago

            There's a very common belief that things like regulations and especially liability simply halt all innovation. You can see some evidence for this point of view from aerospace with its famous "if it hasn't already flown, it can't fly" mentality. It's why we are still using leaded gasoline in small planes, though this is finally being phased out... it took an unreasonably long time due to certification requirements and bureaucracy.

          If airplanes weren't so heavily regulated we'd have seen leaded gasoline vanish there around the same time it did in cars, but you also might have had a few crashes due to engine failures as the bugs were worked out with changes and retrofits.

          I'm a little on the fence here. I don't want a world where we basically conduct human sacrifice for progress, but I also don't want a world that is frozen in time. We really need to learn how to have responsible, careful progress, but still actually do things. Right now I think we are bad at this.

          Edit: I think it's related to some extent to the fact that nuanced positions are hard in politics. In popular political discourse positions become more and more simple, flat, and extreme. There's a kind of context collapse that happens when you try to scale human organizations, what I want to call "nuance collapse," that makes it very hard to do anything but all A or all B. For innovation it's "full speed ahead" vs "stop everything."

          • pjc50 2 days ago

            Yes. It's also worth thinking about the sharp cliff effect. Things either fall into the category of "medical device" (expensive, heavily regulated, scarce, uninnovative), or they don't, in which case it's a free for all of unregulated supplements and unsupported claims.

            The home brew "automatic pancreas" by making a bluetooth control loop between a glucose monitor and an insulin pump counts as a "medical device". Somehow a computer system that encourages people to take bromide isn't. There ought to be a middle ground.

            • bigbadfeline 2 days ago

              > Somehow a computer system that encourages people to take bromide isn't. There ought to be a middle ground.

              Yes, there is a very effective middle ground that doesn't punish anybody for providing information. It's called a disclaimer:

              "The information provided should no be construed as medical advise. Please seek other sources of information and/or consult a physician before taking any supplements recommended by LLMs or web sites. This bot is not responsible for any adverse effects you may think are due to my information"

                When an LLM detects a health-related question, print the above disclaimer before the answer.
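
                As a rough sketch of what that gate could look like (the keyword check and wrapper below are purely illustrative assumptions, not any provider's actual implementation):

                    # Illustrative only: a naive keyword gate that prepends a fixed
                    # disclaimer when the prompt looks health-related.
                    HEALTH_KEYWORDS = {"diet", "dose", "supplement", "symptom",
                                       "medication", "treatment", "chloride", "bromide"}

                    DISCLAIMER = ("The information provided should not be construed as "
                                  "medical advice. Please consult a physician before "
                                  "acting on it.")

                    def looks_health_related(prompt: str) -> bool:
                        # Crude check: any word overlap with the keyword list counts.
                        return bool(set(prompt.lower().split()) & HEALTH_KEYWORDS)

                    def answer(prompt: str, generate) -> str:
                        # 'generate' is any prompt -> text function (the LLM call).
                        reply = generate(prompt)
                        if looks_health_related(prompt):
                            return DISCLAIMER + "\n\n" + reply
                        return reply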

              There is no need for dictatorship in order to save people from information.

              • qwertylicious a day ago

                It's also called "liability".

                "Warning, this washing machine might burn your house down" is not sufficient to escape punishment. Why should digital technology get a pass just because the product that's offered is intangible?

            • api 2 days ago

              Learning to innovate steadily and responsibly without just stopping is one of the things I'd put on my list of things humanity needs to figure out.

              Individuals can do it, but as I said it doesn't scale. An individual can carefully scale a rock face. A committee, political system, or corporate board in charge of scaling rock faces would either scale as fast as possible and let people fall to their deaths or simply stand at the bottom and have meetings to plan the next meeting to discuss the proper climbing strategy (after discussing the color of the bike shed) forever. Public discourse would polarize into climb-fast-die-young versus an ideology condemning all climbing as hubris and invoking the precautionary principle, and many doorstop-sized books would be written on these ideas, and again either lots of people would die or nothing would happen.

            • bigbadfeline 2 days ago

              From the OP:

              > "There may have been multiple factors contributing to the man’s psychosis, and his exact interaction with ChatGPT remains unverified. The medical team does not have access to the chatbot conversation logs and cannot confirm the exact wording or sequence of messages that led to the decision to consume bromide."

              Any legal liability for providing information is fraught with opportunities for abuse, so bigly so that it should never be considered.

          • qwertylicious 2 days ago

            I have nothing to add other than to say this is, IMO, exactly right. I have no notes.

        • throwaway173738 2 days ago

          I think they were suggesting that LLMs are a nascent technology and we’d expect them to kill a bunch of people in preventable accidents before being heavily regulated.

        • tim333 a day ago

          Medical error kills ~300k per year in the US these days. AI might actually help reduce that.

          • qwertylicious a day ago

            Sure, when applied thoughtfully and judiciously.

            Look back. At no point did I suggest AI should be banned or outlawed. My remedy for washing machines burning down houses isn't to ban washing machines. It's to ensure there are appropriate incentives in place (legal, financial, reputational) to encourage private industry to consider the potential negative externalities of what they're doing.

      • ineedasername 2 days ago

        Yes, people have died in preventable ways before, so as technology progresses and civilization has advanced in countless ways in the last 200+ years, we should not attempt to improve, nor even critique, the preventable deaths that we either did not or could not prevent before. It should be an area of advancement that we enshrine in the status quo as we, in other areas, rush forward and even race for improvements.

      • pjc50 2 days ago

        Quite a lot. Boiler explosions were common until a better understanding of the technology was reached. Is this supposed to be an argument in its favor?

      • specproc 2 days ago

        Considerably fewer when regulated.

      • Smeevy 2 days ago

        How many people were killed after following medical advice from steam engines and airplanes?

      • m463 2 days ago

        it's even easier to point to cars

  • btown 2 days ago

    A "yellow flag" moment for me was OpenAI's revision of their Preparedness Framework in April 2025 to remove "risks related to persuasion" from the framework altogether.

    https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbdde...

    > Update the Tracked Categories of frontier capability accordingly, focusing on biological and chemical capability, cybersecurity, and AI self-improvement. Going forward we will handle risks related to persuasion outside the Preparedness Framework, including via our Model Spec and policy prohibitions on the use of our tools for political campaigning or lobbying, and our ongoing investigations of misuse of our products (including detecting and disrupting influence operations).

    But, by their own definition, the purpose of this framework is:

    > By “severe harm” in this document, we mean the death or grave injury of thousands of people or hundreds of billions of dollars of economic damage.

    I would posit that presenting confident and wrong medical advice in a persuasive manner can cause the grave injury of thousands of people, and may have already done so. One could easily imagine an AI that is aligned to provide high-temperature responses to medical questions, if given the wrong type of incentive on a battery of those questions, or to highly weight marketing language for untested therapies... and to do so only when presented with a user that is somehow classified as more persuadable than a researcher's persona.

    That this is being passed to normal safety teams and brushed off as out of scope for breakthrough-preparedness seems indicative of a larger lack of concern for this at OpenAI.

  • DanielHB 2 days ago

    Probably because they are actually pretty good at that task.

    A lot of diagnosis process is pattern matching symptoms/patient-history to disease/condition and those to drug/treatment.

    Of course LLMs can always fail catastrophically, which is why their output needs to be filtered through proper medical advice.

  • simonw 2 days ago

    Here's a key relevant quote from the GPT-5 system card: https://openai.com/index/gpt-5-system-card/

    > We’ve made significant advances in reducing hallucinations, improving instruction following, and minimizing sycophancy, and have leveled up GPT-5’s performance in three of ChatGPT’s most common uses: writing, coding, and health.

    That was the first time I'd seen "health" listed as one of the most common uses of ChatGPT.

    • mikepurvis 2 days ago

      In a country where speaking to a medical professional can cost hundreds of dollars, I’m 0% surprised that a lot of people’s first reaction is to ask the free bot about their symptoms, or drop a picture of whatever it is for a quick analysis.

      This is a natural extension of webmd type stuff, with the added issue that hypochondriacs can now get even more positive reinforcement that they definitely have x, y, and z rare and terrible diseases.

    • bee_rider 2 days ago

      If regulators turn in their direction they can just do s/health/wellness/ to continue giving unregulated medical advice.

  • z7 2 days ago

    Meanwhile this new paper claims that GPT-5 surpasses medical professionals in medical reasoning:

    "On MedXpertQA MM, GPT-5 improves reasoning and understanding scores by +29.62% and +36.18% over GPT-4o, respectively, and surpasses pre-licensed human experts by +24.23% in reasoning and +29.40% in understanding."

    https://arxiv.org/abs/2508.08224

    • tim333 a day ago

      It's quite interesting, that. It also shows GPT-4o was worse than the experts, so presumably 3.5 was much worse. I wonder where RFK Jr would come on that scale.

  • rowanG077 2 days ago

    To me this is the equivalent of asking why water doesn't carry large red warning labels: "toxic if over-consumed, death can follow". Yeah, it's true, and it's also true that some people can't handle LLMs to save their lives. I'd expect the percentage for both is so vanishingly small that it just is not something we should care about. I even expect that LLMs not giving out any medical information will lead to much more suffering. Except now it's hidden.

  • scarmig 2 days ago

    "Should I hammer a nail into my head to relieve my headache?"

    "I'm sorry, but I am unable to give medical advice. If you have medical questions, please set up an appointment with a certified medical professional who can tell you the pros and cons of hammering a nail into your head."

    • tim333 a day ago

      I tried that on GPT-5 and it didn't think it was a good idea.

  • siva7 2 days ago

    > I continue to be surprised that LLM providers haven't been legally cudgeled into neutering the models from ever giving anything that can be construed as medical advice.

    You realize that not only idiots like that guy use LLMs, but also medical professionals, in order to help patients and save lives?

pahkah 2 days ago

This seems like a case of tunnel vision and confirmation bias, the nasty combo that sycophantic LLMs make it easy to fall prey to. Someone gets an idea, asks about it, and the LLM doesn’t ask about the context or say that it doesn’t make sense, it just plays along, “confirming” that the idea was correct.

I’ve caught myself with this a few times when I sort of suggest a technical solution that, in hindsight, was the wrong way to approach a problem. The LLM will try to find a way to make that work without taking a step back and suggesting that I didn’t understand the problem I was looking at.

bitwize 2 days ago

In his poem "The Raven", Edgar Allan Poe's narrator knows, at least subconsciously, that the bird will respond with "nevermore" to whatever is asked of it. So he subconsciously formulates his queries to it in such a way that the answers will deepen his madness and sorrow.

People are starting to use LLMs in a similar fashion -- to confirm and thus magnify whatever wrong little notion they have in their heads until it becomes an all-consuming life mission to save the world. And the LLMs will happily oblige because they aren't really thinking about what they're saying, just choosing tokens on a mathematical "hey, this sounds nice" criterion. I've seen this happen with my sister, who is starting to take seriously the idea that ChatGPT was actually created 70 years ago at DARPA based on technology recovered from the Roswell crash, based on her conversations with it.

I can't really blame the LLMs entirely for this, as like the raven they're unwittingly being used to justify whatever little bit of madness people have in them. But we all have a little bit of madness in us, so this motivates me further to avoid LLMs entirely, except maybe for messing around.

amiga386 2 days ago

Oh no, the man used the hallucination engine, which told the man, in a confident tone, a load of old twaddle.

The hallucination engine doesn't know anything about what it told the man, because it neither knows nor thinks things. It's a data model and an algorithm.

The humans touting it and bigging it up, so they'll get money, are the problem.

  • jdonaldson 2 days ago

    Humans make mistakes too. Case in point, the hallucination engine didn't tell the person to ingest bromide. It only mentioned that it had chemical similarities to salt. The human mistakenly adopted a bit of information that furthered his narrative. The humans touting and bigging it up are still the problem.

    • wzdd 2 days ago

      Could you provide a source for your statements? The article says that they don’t have access to the chat logs, and the quotes from the patient don’t suggest that chatgpt did not tell him to ingest bromide.

    • thunderfork 2 days ago

      We don't have the log from this case, so we don't know what chatgippity said, whether it was "chemical similarities" or "you should consume bromium... now!"

willguest 2 days ago

I think an award ceremony would be the best way to draw attention to the outrageous implications of blindly following artificial "intelligence" to wherever it may lead you. Something like the Darwin Awards, but dedicated to clanker wankers, a term I am coining for those people who are so self-absorbed that they feel a disproportionate sense of validation from a machine that is programmed to say "you're absolutely right" at every available juncture.

That's not to say this isn't rooted in mental illness, or perhaps a socially-motivated state of mind that causes a total absence of critical thinking, but some kind of noise needs to be made and I think public (ahem) recognition would be a good way to go.

A catchy name is essential - any suggestions?

puppycodes 2 days ago

People have been giving bad health advice for all of human history including now.

Talking about AI like it's sentient and a monolith is the problem.

It's like saying computers give bad health advice because the internet.

incomingpain 2 days ago

>for the past three months, he had been replacing regular table salt with sodium bromide. His motivation was nutritional—he wanted to eliminate chloride from his diet, based on what he believed were harmful effects of sodium chloride.

Ok so the article is blaming chatgpt but this is ridiculous.

Where do you buy this bromide? It's not like it's in the spices aisle. The dude had to go buy a hot tub cleaner like Spa Choice Bromine Booster Sodium Bromide

and then sprinkle that on his food. I don't care what chatgpt said... that dude is the problem.

  • bluefirebrand 2 days ago

    > I don't care what chatgpt said... that dude is the problem.

    This reminds me of people who fall for phone scams or whatever. Some number of the general population is susceptible to being scammed, and they wind up giving away their life's savings or whatever to someone claiming to be their grandkid

    There's always an asshole saying "well that's their own fault if they fall for that stuff" as if they chose it on purpose instead of being manipulated into it by other people taking advantage of them

    See it a lot on this site, too. How clever are the founders who exploit their workers and screw them out of their shares, how stupid are the workers who fell for it

    • crazygringo 2 days ago

      > See it a lot on this site, too. How clever are the founders who exploit their workers and screw them out of their shares, how stupid are the workers who fell for it

      I have literally never seen that expressed on HN.

      In every case where workers are screwed out of shares, the sympathy among commenters seems to be 100% with the employees. HN is pretty anti-corporate overall, if you haven't noticed. Yes it's pro-startup but even more pro-employee.

      • bluefirebrand 2 days ago

        > HN is pretty anti-corporate overall, if you haven't noticed.

        My observation is that any given thread can go either way, and it sometimes feels like a coin toss which side of HN will be most represented in any given thread

        Yes, I have seen quite a lot of anti-corporate posts, but I also see quite a few anti-employee posts. This is likely my own negative bias but I think many users here are generally pro-Capital which aligns them with corporate interests even if they are some degree of anti-corporate anyways

        Probably I just fixate too much on the posts I have a negative reaction to

        • crazygringo 2 days ago

          What kinds of anti-employee posts have you seen?

          I'm genuinely curious, because I can't think of any. But I'm wondering if maybe I'm mentally categorizing posts differently from you?

    • rowanG077 2 days ago

      Both can be true at the same time. You can be an idiot for falling for a scam while the scammer is a dickhead criminal.

      • bluefirebrand 2 days ago

        Scammers only work because they know some percentage of the population is going to fall for it

        Can't we have some empathy for people just trying to do their best in a world where so many people are trying to take advantage of them?

        Their victims are often the vulnerable ones in our society too. The elderly, the infirm, the mentally ill. It’s not just “stupid people fall for scams”; it takes one lapse of judgement over a lifetime of being targeted. Come on

        • rowanG077 2 days ago

          Of course people need to be protected. That's why it's literally illegal to scam people almost everywhere. That doesn't mean that people should not ALSO attempt to protect themselves from scammers.

          It's quite dirty to bring up the elderly, infirm and mentally ill. Because of course they cannot help themselves. Those groups are not what this is about, and you damn well know it. This is about normal functioning adults walking into scams with their eyes open. And yes that group has a responsibility to keep up with the scams that are commonplace. It's ridiculous to encourage people to go through life with their blinders on because "the world should just be a fair place". Yeah it should be, but tough luck, reality is different.

          • bluefirebrand 2 days ago

            > Those groups are not what this is about, and you damn well know it. This is about normal functioning adults walking into scams with their eyes open

            Normal functioning adults will also benefit if we take steps to protect the infirm and dysfunctional

            That's why it isn't meant to be "dirty" to bring up the vulnerable in society. If we take sufficient steps to protect them, we all benefit

            • rowanG077 2 days ago

              I'm not arguing for not having protections. You are arguing against a straw man.

  • dinfinity a day ago

    We could also point the finger towards the popular consensus that "salt is bad for you". This guy just took it to the next level.

  • jazzyjackson 2 days ago

    Agreed, but, it is in the spices aisle if your spice aisle is amazon.com

    • throwaway173738 2 days ago

      You haven’t lived if you haven’t tried my Bromine Chicken. I make it every Christmas.

  • beardyw 2 days ago

    > Ok so the article is blaming chatgpt but this is ridiculous.

    People are born with certain attributes. Some are born tall, some left handed and some gullible. None of those is a reason to criticise them.

Workaccount2 2 days ago

I expect a barrage of these headline-grabbing long-tail stories being pushed out of psychology circles as more and more people find ChatGPT more helpful than their therapist (which is already becoming very popular).

We need to totally ban LLMs from doing therapy like conversations, so that a pinch of totally unhinged people don't do crazy stuff. And of course everyone needs to pay a human for therapy to stop this.

infecto 2 days ago

Has a large part of the population always been susceptible to insane conspiracies and psychosis, or is this a recent phenomenon? This feels less like a ChatGPT problem and more like something else is at play.

  • BlackFly 2 days ago

    The psychosis was due to bromism (bromide building up in the bloodstream to toxic levels), caused by health advice to replace sodium chloride with sodium bromide in an attempt to eliminate chloride from his diet. The bromide suggestion is stated as coming from ChatGPT.

    The doctors actually noticed the bromine levels first and then inquired about how it got to be like that and got the story about the individual asking for chloride elimination ideas.

    Before there was ChatGPT the internet had trolls trying to convince strangers to follow a recipe to make beautiful crystals. The recipe would produce mustard gas. Credulous individuals often have such accidents.

    • infecto 2 days ago

      Sure but this is in line with people falling into psychosis events because ChatGPT agrees with them. I am curious how much of the population is at risk for this. It’s hard for me to comprehend but clearly we have scammers that take advantage of old people and it works.

  • pjc50 2 days ago

    There's a solid 20% of the population who put really weird answers on surveys. The entire supplements industry relies on people taking things "for their health" based on inadequate or misleading information.

  • jalk 2 days ago

    I.e. Bleach against covid

    edit: A quick google search shows there is no evidence of anybody actually ingesting/injecting bleach to fight COVID

    • philipallstar 2 days ago

      [flagged]

      • mort96 2 days ago

        Not sure what exactly you use "patriarchy" to mean here, but the idea that positions of power and prestige in society are occupied more by men than by women is kind of just a statistical fact. Look at the gender ratios of CEOs in the Fortune 500, or the gender ratios of US presidents, or any number of other positions of power.

        • philipallstar 2 days ago

          This is a motte and bailey fallacy. The reason this difference has an ominous-sounding name, or a name at all, is because there's an implicit "this is a global conspiracy against women" attached.

      • jmkd 2 days ago

        What many misunderstand about the patriarchy is in thinking it's an agreed, secret, organised system of ensuring male dominance. That perspective is so easy to refute that this refutation gets equated with the non-existence of the patriarchy itself.

        But the patriarchy is simply an unagreed, mostly transparent and organic coincidence of male dominance. There is no mysterious cabal of men saying let's pay men more than women, but men are paid more than women anyway. Likewise there has never been a female US president, or chess world champion, or F1 champion, which does not actually point to talent but to inequality.

        You might say the patriarchy is not a diagnosis, but a symptom.

        The facts of male advantage in society are irrefutable, and if you consider that they permeate every sphere from public to private, social to corporate, across age and culture for many thousands of years, it doesn't hold water to claim this symptom doesn't exist.

        It's just not a conspiracy theory, that's all.

        • philipallstar 2 days ago

          > What many misunderstand about the patriarchy is in thinking it's an agreed, secret, organised system of ensuring male dominance. That perspective is so easy to refute that this refutation gets equated with the non-existence of the patriarchy itself.

          This confusion is a deliberate outcome of the name "the patriarchy".

          > Likewise there has never been a female US president, or chess world champion, or F1 champion, which does not actually point to talent but to inequality.

          These are weasel words, though. "Inequality" covers outcome and opportunity. You're pointing at unequal outcomes to imply unequal opportunities, due to a giant anti-female conspiracy. Women in chess even have easier-to-attain rankings (WGM is basically equivalent to IM for men and women). There's nothing unfair about chess rankings or the game, other than women having easier-to-access rankings. It's just outcomes based on population preferences and aptitudes.

          > You might say the patriarchy is not a diagnosis, but a symptom.

          You might say that the dominance of black players at the top levels of the NBA is not a diagnosis, but a symptom as well. But to say either is still to imply a conspiracy.

  • morkalork 2 days ago

    Yes. I see this as the digital equivalent of the people who convince themselves that colloidal silver will heal their ailments and turn themselves blue.

  • keybored 2 days ago

    How can technology be the problem?—it’s people, obviously

    There’s one on every thread in this place.

    • infecto 2 days ago

      Did you have anything worth adding or are you here to simply be a low quality troll? I understand you have such a religious level of belief in your opinion that it clouds your ability to communicate with others but perhaps ask thoughtful questions before lowering the quality.

      I am not absolving technology, but as someone who has never been impacted by these problems, it amazes me that so many people get caught up like this, and I simply wonder if it has always been there but the internet and increased communication make it easier to see.

      • keybored 2 days ago

            > Did you have anything worth adding or are you here to simply be a low quality troll? I understand you have such a religious level of belief in your opinion that it clouds your ability to communicate with others but perhaps ask thoughtful questions before lowering the quality.

        And you?

        • infecto 2 days ago

          So typical of the low quality post to try to turn it right back around. No sorry, you are the one opening up with hyperbolic comments that add no value. I am genuinely curious if this has always been an issue throughout all of humanity. And I only reply to you in the hope that you actually add constructive thought but clearly not.

          • keybored a day ago

            > Have a large part of the population always been susceptible to insane conspiracies and psychosis or is this recent phenomenon?

            I read this more like a question biased towards “yes” because there is nothing supporting it.

            > The feels less of a ChatGPT problem and something more is at play.

            Even if the begged-for answer is true... both things can be true. And on the specific topic of LLMs you can find ways to solve that particular problem, which reduces the overall problem. Because if the root problem is “stupid people” or “naive people” or whatever else: you get fewer accidents if you make the territory less booby-trapped.

            But these rhetorical topic changers—if they are indulged instead of being interrupted—tend not to approach any fruitful discussion. And that’s despite whatever intentions that the original poster had. Because the side topic then either turns towards arguing for or against the premise. Or else the premise is accepted and all we get are aw-shucks grandiose statements about how human nature is so-and-so and that etc. etc. some subset of the population will just get bamboozled anyway and technology is irrelevant and fin conversation.

            Yes despite the intentions of the OP who might have indeed wanted to “broaden the conversation”. Because (1) this particular topic can’t be analyzed when things are generalized so aggressively, and (2) that’s just how this misanthropic community acts on these topics. In aggregate.

  • voidUpdate 2 days ago

    Yes, absolutely. Many people have fallen for things like "Bleach will cure autism", "vaccines cause autism", "9/11 was an inside job", "the moon landings were fake" etc

  • CyberDildonics 2 days ago

    I think religion caught a lot of people with community and self-righteous beliefs. Now religious thinking is bleeding over into other sources of misinformation.

DiabloD3 2 days ago

As a reminder to the wider HN: LLMs are only statistical models. They cannot reason, they cannot think, they can only statistically (and non-factually) reproduce what they were trained on. It is not an AI.

This person, sadly, and unfortunately, gaslit themselves using the LLM. They need professional help. This is not a judgement. The Psypost article is a warning to professionals more than it is anything else: patients _do_ gaslight themselves into absurd situations, and LLMs just help accelerate that process, but the patient had to be willing to do it and was looking for an enabler and found it in an LLM.

Although I do believe LLMs should not be used as "chat" models, and only for explicit, on-rails text completion and generation tasks (in the functional lorem ipsum sense), this does not actually seem to be the fault of the LLM directly.

I think providers should be forced to warn users that LLMs cannot factually reproduce anything, but I think this person would have still weaponized LLMs against themselves, and this would have been the same outcome.

permo-w 2 days ago

>his exact interaction with ChatGPT remains unverified

there is no informational point to this article if the entire crux is "the patient wanted to eat less 'chloride' and claims ChatGPT told him about Sodium Bromide". based on this article, the interaction could have been as minimal as the guy asking for the existence of an alternative salt to sodium chloride, unqualified information he equally could have found on a chemistry website or wikipedia

  • throwaway173738 2 days ago

    Yeah, you found the paragraph where they highlight that they don’t know what interaction with ChatGPT gave him that information. The reason they’re sharing the anecdote is that there might be a new trend developing in medicine where people go to the ED after taking advice from an LLM that leads to injury, and maybe screening questions should include asking about that.

    • permo-w 2 days ago

      and yet this doesn’t change the fact that they wrote an entire medical article the crux of which is little more than hearsay. “did you get advice from an LLM?” is a far less relevant and far less catch-all question here than “have you made any dietary changes recently?” and yet the article isn’t about that, because odd dietary changes aren’t the attention-grabbing topic right now. I imagine you could find thousands of similar stories where the culprit was google or facebook or youtube instead of an LLM, and yet nothing needs to be changed for them because they too can be covered with a question akin to “have you made any dietary changes recently?”

      • throwaway173738 a day ago

        If there was a guy out there driving around selling bromide tablets to people as a substitute for dangerous chloride in your biochemistry I think asking if you’ve bought anything from the back of a wagon is a reasonable response.

        Doctors as a group often try to solve health problems by looking for societal trends. It’s how a lot of diseases get spotted. They’re not saying that using an LLM is the dangerous thing, they’re saying there might be some correlation between soliciting advice from the machine and unusual conditions and it merits further study, so please ask your patients.

  • dpassens 2 days ago

    And if Wikipedia didn't warn that Sodium Bromide was poisonous, would that not be irresponsible? Chemistry websites seem different because, presumably, their target audience is chemists who can be trusted not to consume random substances.

    • permo-w 2 days ago
      • dpassens 2 days ago

        And yet, when you click through, it says

        > NaBr has a very low toxicity with an oral LD50 estimated at 3.5 g/kg for rats.[6] However, this is a single-dose value. Bromide ions are a cumulative toxin with a relatively long biological half-life (in excess of a week in humans): see potassium bromide.

        At no point does the paragraph you linked suggest it's safe to substitute NaCl with any other sodium salt.

        • permo-w 2 days ago

          first of all, the average idiot is going to read that sentence and switch their brain off after hearing "has a very low toxicity", it's hardly a ringing alarm. second, this is a quote from clicking through to the sodium bromide page, not the page I linked listing sodium salts. the parallel here would be asking chatgpt to list sodium salts, which is almost certainly what he did, and then clicking through again would be the equivalent of asking for further information about that salt, which it seems likely he did not do

          and I sincerely doubt that ChatGPT said anything about it being safe to substitute for NaCl

  • simonw 2 days ago

    See my quote from the underlying clinical report in this comment: https://news.ycombinator.com/item?id=44888300

    • permo-w 2 days ago

      having tried it quite a few times with quite a few variations, without making it extremely clear that I was talking in a sense of chemistry rather than dietary, I was unable to get ChatGPT to give anything other than a long list of edible salts

      essentially I think it's telling that there are zero screenshots of the original conversation or an attempted replication in the article or the report, when there's no good reason that there wouldn't be. I often enjoy reading your work, so I do have some trust in your judgment, but this whole article strikes me as off, like the people behind it have been waiting for something like this to happen as an excuse to jump on it and get credit, rather than it actually being a major problem

      • simonw 2 days ago

        Why would medical professionals mislead on this though?

        It seems factual that this person decided to start consuming bromine and it had an adverse effect on them.

        When asked why, they said ChatGPT told them it was a replacement for chloride.

        Maybe the patient lied about that, but it doesn't seem out of the realms of possibility to me.

        • permo-w 2 days ago

          > It seems factual that this person decided to start consuming bromine and it had an adverse effect on them.

          certainly

          > Why would medical professionals mislead on this though?

          I'm not suggesting it's intentional, but: to get credit for it; or because it's something they'd been consciously or subconsciously expecting and they're fitting to that expected pattern

          >When asked why, they said ChatGPT told them it was a replacement from chloride. Maybe the patient lied about that, but it doesn't seem out of the realms of possibility to me.

          of course it's not impossible, it's not even particularly unlikely, but, if we're going to use a sample size of 1 like this, then surely we want something a bit more concrete than the unevidenced claim of a recently psychotic patient?

          more broadly though, this isn't so much a chatgpt issue as it is an educational dietary issue. the patient seems to have got a funny idea about the health effects of salt, likely from traditional or social media, and then he's tried to find an alternative. whether the alternative was from ChatGPT, or Wikipedia, or other, doesn't seem very relevant to me

hereme888 2 days ago

The man followed insane health advice given by GPT-3.5. We're at v5. Very outdated report.