>The people running these companies literally talk about the danger that the things they are building might take over the world and kill every human on the planet. GPT-5, meanwhile, still can’t tell you how many b’s there are in the word “blueberry.”
Is that true? When I ask "how many letters b in the word blueberry?", I get the response: "2 — in blueberry, the letter b appears twice (positions 1 and 5)."
Do I get the correct result because I happen to ask the question in a particular way? Or is there a distribution of answers and I just happened to draw the correct one? Or did they fix it? Or is the reporting misleading?
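One way to test the "distribution of answers" hypothesis is to sample the same prompt repeatedly and tally the replies. A minimal sketch, assuming the official `openai` Python client and `"gpt-5"` as the model identifier (both assumptions on my part; the tallies in the comments are illustrative, not real results):

```python
# Sample one fixed prompt N times and tally the answers, to see whether
# the model is stable or drawing from a distribution of answers.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PROMPT = "how many letters b in the word blueberry?"

answers = Counter()
for _ in range(20):
    resp = client.chat.completions.create(
        model="gpt-5",  # assumed model identifier
        messages=[{"role": "user", "content": PROMPT}],
    )
    # Tally just the leading word of each reply, so trivially different
    # wordings of the same numeric answer are counted together.
    lead = resp.choices[0].message.content.strip().split()[0]
    answers[lead] += 1

print(answers)                 # e.g. Counter({'2': 18, '3': 2}) -- illustrative
print("blueberry".count("b"))  # ground truth: 2
```

If every sample comes back "2", then prompt wording (or a fix) is the likelier explanation; if the counts are split, you may simply have drawn a lucky sample.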
It may be down to how you phrased your prompt ... word order and format seem to matter a lot in how well these LLMs can answer.
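One commonly cited reason wording and format matter here: the model never sees letters at all, only token IDs, and different phrasings split into different token sequences. A rough illustration using `tiktoken`'s `o200k_base` encoding as a stand-in (GPT-5's actual tokenizer may differ; that part is an assumption):

```python
# Show how the same word reaches the model as opaque token pieces,
# and how formatting changes the token sequence entirely.
import tiktoken

enc = tiktoken.get_encoding("o200k_base")  # stand-in encoding
for text in ("blueberry", '"blueberry"', "b l u e b e r r y"):
    tokens = enc.encode(text)
    pieces = [enc.decode([t]) for t in tokens]
    print(f"{text!r}: {len(tokens)} token(s) -> {pieces}")
```

Spelling the word out letter by letter tends to hand the model roughly one token per letter, which is likely why that phrasing gets counted correctly more often.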