We lost Vernor Vinge last year.
https://en.m.wikipedia.org/wiki/Vernor_Vinge
We didn’t make it to the singularity but the race is on. Hope he realized we finally had a bit of a breakthrough.
I learned about the Zone of Thought series through another commenter here who mentioned the concept of a "programmer at arms" and "software archeology" and pointed to his second novel, A Deepness in the Sky.
I loved the whole series and now recommend them to folks looking for something wild and fun in the sci-fi realm.
RIP
Maybe it's a recursive loop? Humans create AI, humanity dies out, the AI creates a simulation to recreate its creators, the humans build AI, and so on...
It should be clear already that LLMs aren't a path to intelligence since they don't contain or exhibit any.
Knowing this, there is no approaching "singularity" of advanced machine intelligence.
If you're so sure about that then you should be able to provide a rigorous, testable definition of intelligence, and a proof of why that definition specifically and nothing less is what must be met for machines to become exponentially self-improving.
It's the opposite. I can provide a rigorous description of how LLMs are built and operate (and have done so)[0]. There is no intelligence, however defined, in that process. It is the ultimate in rote behavior.
Furthermore, LLMs are not learning or self-improving in any way. They are fed training data, corrections, and prompts and produce their answers solely through predictions, not as a result of learning.
[0] https://levelup.gitconnected.com/something-from-nothing-d755...
> There is no intelligence, however defined
Ok, I choose to define intelligence as having the ability to transform vectors of tokens with matrices. There, LLMs are now intelligent and what you just said is incorrect, because this is one possible definition of intelligence.
Seriously, nobody can argue against what you are saying if you refuse to elaborate on what exactly you mean. Provide the definition of "intelligence" you are using.
This recent post by another author is a good explanation that I agree with: https://news.ycombinator.com/item?id=44423206
That is some of the most nonsensical philosophical rambling I've ever seen. There are many problems with it but the main one is that it assumes a priori that humans are intelligent and machines are not, and then uses circular reasoning to justify that.
> it is the human being reading intelligence into the behavior of LLMs. There is none in the LLM.
There is no justification for why it must be a human and only a human that can "read intelligence into" something.
I give up. I don't think you even know what you're claiming.
He was spot on