OJFord 6 days ago

Completely agree (https://news.ycombinator.com/item?id=45627451) - LLMs are like the human-understood output of a hypothetical AGI. 'We' haven't cracked the knowledge & reasoning 'general intelligence' piece yet, imo - the bit that would hypothetically sit before the LLM, feeding it the information to convey to the human. I think that's going to turn out to be a different piece of the puzzle.