ccvannorman an hour ago

What a wonderful text. Easy to read, concise, clear, interesting -- and above all, important.

I would add context for 2025 about the fundamental limits this places on what (modern) AI is in principle capable of. Perhaps some "non-computable" features would need to be hard-coded into AI so that it could at least better approximate the kinds of incomputable problems we might ask it to solve?
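
To make "approximate" concrete, here's a rough sketch of the usual workaround: replace the undecidable question ("does this ever halt?") with a bounded, decidable one ("does it halt within N steps?"). The helper names here are mine, purely for illustration:

    # Toy sketch: halting is undecidable in general, but a step-bounded
    # version is trivially decidable -- run for at most max_steps and
    # report what you saw.

    def halts_within(make_computation, max_steps):
        """True  -> the computation halted within the budget.
        False -> inconclusive ("didn't halt YET", not "never halts")."""
        computation = make_computation()   # a generator; each yield = one step
        for _ in range(max_steps):
            try:
                next(computation)
            except StopIteration:
                return True                # finished within the budget
        return False                       # budget exhausted, no answer

    def collatz_from_27():                 # halts (after 111 steps)
        n = 27
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            yield

    def loop_forever():                    # never halts
        while True:
            yield

    print(halts_within(collatz_from_27, 1000))  # True
    print(halts_within(loop_forever, 1000))     # False (inconclusive)

Anything built this way only ever gives one-sided answers, which is maybe what "better approximate" would have to mean in practice.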

Also, a search of the text for "conscious" does not yield anything, which is probably a good thing. The text also reminds me of questions like: "What does it mean to be conscious?", "How are human brains able to reason about (things like) incomputability, which in some sense computers as we currently understand them could never do?", and "What, beyond pure mathematics, does a brain have or need in order to be conscious enough to reason about such things?"

  • justinpombrio an hour ago

    Gödel's Incompleteness Theorem places a limit on what you can prove within a formal system. Neither humans nor LLMs are formal systems, so it says nothing about them.

    Someone's going to think that, since you can formally model computer programs (and thus formally model how an LLM runs), Gödel's Incompleteness Theorem applies to computer programs and to LLMs. But that's irrelevant: being model-able doesn't make something a formal system!

    "Formal system" has a very technical meaning: it has a set of consistent axioms, and the ability to enumerate formal proofs using those axioms. For example, Zermelo-Fraenkel set theory is a formal system [1]. It has nine formal axioms, which you can see listed in its Wikipedia article. Utterly different kind of thing than humans or LLMs. Comparing an LLM to ZFC is like comparing a particular watermelon to the number 4.

    [1] https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_t...

    • jltsiren 2 minutes ago

      This is unnecessary nitpicking. If a system can be formally modeled, then anything it "proves" is also provable within that formal model; so assuming such a system can prove something that cannot be proven within any formal system lets you derive a logical contradiction of your choice. If a specific theorem doesn't show this directly, a simple corollary will.

      If you want to escape the limits of computability, you have to assume that the physical Church–Turing thesis is false: that reality allows mechanisms which cannot be modeled formally or simulated on any currently plausible computer.
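
      For the computability half of that, the standard diagonal argument is short enough to write down. `halts` below is hypothetical by construction; the point of the sketch is that no correct implementation of it can exist:

          def halts(fn, arg):
              """Hypothetical oracle: True iff fn(arg) halts."""
              raise NotImplementedError   # assume, for contradiction, this existed

          def diag(fn):
              if halts(fn, fn):           # if fn(fn) would halt...
                  while True:             # ...loop forever instead
                      pass
              # ...otherwise halt immediately

          # Does diag(diag) halt?
          #   halts(diag, diag) == True  => diag(diag) loops forever.
          #   halts(diag, diag) == False => diag(diag) halts.
          # Both cases contradict the oracle, so no such `halts` exists,
          # and anything that can be formally modeled inherits that limit.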