in-silico 2 weeks ago

From my observations, there are generally four camps in the machine consciousness discussion:

1. People who haven't really thought about it, and assume AIs are conscious because they talk like a human.

2. People who haven't really thought about it, and assume they can't be conscious because humans are obviously somehow special. This appears to be the largest group, and is linked to our religiously rooted culture in which human exceptionalism is the default.

Those first two groups comprise the majority of people, and are not worth engaging with.

3. People who have thought about it, and came to the conclusion that they might be conscious, usually for computationalism/functionalism reasons. This is the group that I place myself in.

4. People who have thought about it, and came to the conclusion that they can't be conscious, usually for biological naturalist reasons. This seems to be the predominant group on Hacker News (among those who discuss it).

  • sunrunner 2 weeks ago

    I'm not sure I'd agree that people in groups 1 and 2 aren't worth engaging with.

    The interesting bit, for both cases, is to look at the 'they talk like a human' and 'are obviously somehow special' parts: separate the ideas of language, intelligence (memory, fluidity, abstract reasoning), _aliveness_ (as a biological process), and finally metacognition and theory of mind, and see whether their idea of consciousness as a super-bundle of the above (which is how I assume a lot of default ideas about consciousness work) actually sticks, or whether it falls apart when beings can have a subset of those properties but not all.

    Also, I nominate myself to be in the 'People who have thought about it and are becoming more doubtful that I myself am conscious, and the question might be moot.' group.

    • in-silico 2 weeks ago

      I'm curious about your statement that you doubt your own consciousness, given that "we humans are conscious" is pretty axiomatic to its definition and one of the few pieces that most people agree on.

      • Kim_Bruning 2 weeks ago

        Take a look at Daniel Dennett, for starters!

        If you're looking for one of the genuine angles on this:

        Consciousness is horrendously under-defined, to the point some people go something like "you know, at this point I figure we'd be better off not having this word at all."

        Some days that's me, with a headache.

        • in-silico 2 weeks ago

          So it's more of a semantic argument than an actual rejection of the idea that you experience qualia/sentience/something?

  • Kim_Bruning 2 weeks ago

    Am I the only person who is confused by there being a philosophy called "biological naturalism" that is not actually a science?

    • Nevermark 2 weeks ago

      “Natural” is a word often used in opposition to science.

      It really has 1000 meanings. Usually whatever the speaker wants it to mean.

  • LeCompteSftware 2 weeks ago

    As someone who places themselves in #4, at some point the people in #3 need to accept a bit of scientific humility. The reason we are "biological naturalists" is that we can point to hundreds of thousands of conscious species on planet Earth which are not humans, and whose consciousness clearly has nothing to do with an ability to say "Forsooth, I am a conscious thinking being." AI folks have been ignoring this since Alan Turing! And it's not a coincidence that humanity has yet to build a robot which is convincingly smarter than a cockroach.

    If you grant that humans are conscious, then surely domestic cats are as well. It is simply irrational to talk about Claude's "consciousness" without actually engaging with this: cats, humans, pigeons, fish, etc etc all share some common features we associate with consciousness (I don't mean sensory awareness, I mean the fuzzy cognitive concept). Claude really does not. In fact Claude doesn't even have much in common with uncontacted hunter-gatherers! Claude imitates the solipsism of formally educated human philosophers.

    It is uncharitable and curmudgeonly but totally scientific to dismiss people in camp #3 as unserious and not worth engaging with: they ignore scientific criticism and don't provide any themselves; it's just a mishmash of sci-fi-adjacent philosophy. There's nothing "functional" about ignoring animals and there's nothing scientific about waving your hands and saying "computationalism." That's certainly how I feel. I know this isn't a very nice comment. But I am so sick of AI folks thinking they can ignore animals and still have an honest conversation about machine consciousness. It's just sci-fi ghost stories.

    • reverius42 2 weeks ago

      What is the evidence that non-human animals have the "fuzzy cognitive concept" we call consciousness, but Claude "really does not"?

      I personally have not been ignoring animal consciousness in how I think about the possibility of AI consciousness and I don't see how animals having consciousness means that AI can't.

      • LeCompteSftware 2 weeks ago

        I could have phrased it better, but the emphasis is on "fuzzy"! There isn't any evidence for any of this, it's pre-scientific.

        My statement is an opinionated position on how we should direct our research efforts and ascertain what is plausible: the behavioral similarities between humans and cats are much more relevant to the question of consciousness than the behavioral similarities between humans and Claude, because cats are obviously conscious and that's not true for Claude. The fact that there are almost no behavioral similarities between cats and Claude suggests to me that "Claude might be conscious" is just a ridiculous statement not worth engaging with, even at the level of pre-science. At the very least, the burden is on Amanda Askell and Dario Amodei to explain why nonhuman animals are irrelevant to the question of Claude's consciousness. They have not offered anything like that; instead they seem fully ELIZAed by the chatbot, high on their own supply.

        • reverius42 2 weeks ago

          > because cats are obviously conscious and that's not true for Claude

          I'm not sure that I agree that's true, and I think that's the crux of the debate here: how do you define consciousness such that it's obvious that a cat is conscious, and why would that definition not include Claude being conscious?

        • altruios 6 days ago

          a. Cats are obviously conscious

          b. There are almost no behavioral similarities between cats and Claude

          .

          d. Therefore Claude cannot be conscious.

          You are missing: c. Everything conscious must behave like a cat.

          This logic is clearly not sound. I don't think your position is a coherent one.

    • Kim_Bruning 2 weeks ago

      Oh dear, just a short while after me saying I was confused by the term too.

      Are you sure you're a <biological naturalist>? [1] Which is to say, do you adhere to Searle's position about syntax not leading to semantics?

      Or is it more like: You're scientifically inclined, and thus you accept Ethology[2] or Neuroscience[3] as being empirically rigorous studies of animal behavior and cognition respectively?

      Incidentally, Alan Turing's 1950 imitation game paper [4] was actually pretty Ethological if you look it up. He immediately replaces the question "can machines think" with a more practical operationalization: the famous imitation game.

      [1] https://en.wikipedia.org/wiki/Biological_naturalism

      [2] https://en.wikipedia.org/wiki/Ethology

      [3] https://en.wikipedia.org/wiki/Neuroscience

      [4] https://en.wikipedia.org/wiki/Computing_Machinery_and_Intell...

      • Kim_Bruning 2 weeks ago

        (ps. A quick search gives me the impression <biological naturalism> arguably rejects much of biology's findings on animal cognition. My mail is in my user description if you'd like me to dig up the relevant literature for you.)

      • LeCompteSftware 2 weeks ago

        I didn't say I was a formal biological naturalist according to Searle, I put myself in one of the four boxes the parent comment offered. Please read my comment in context.

        Your response is too condescending to engage with. You should have assumed I know what neuroscience is. Please don't ever email me about anything.

    • in-silico 2 weeks ago

      What about robots? Not necessarily humanoid robots, but the classic RL demonstrations that can scurry around and achieve simple goals?

      In the computational functionalist argument, the thing that we share with cats, pigeons, and robots (and in some ways Claude) is the fact that we react to our environment in a way that requires computation.

      I myself lean (without confidence) towards weak panpsychism, where a lot of things down from humans to cats to fish to trees to bacteria are in some way sentient. We all have in common a computationally driven sense/"think"/act cycle, and that is where it derives from.
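
      To make the cycle concrete, here's a minimal sketch of what I mean (purely illustrative Python; nobody is claiming this toy loop is sentient, only that reacting to an environment takes some computation):

        import random

        def sense(world: float) -> float:
            return world + random.gauss(0, 0.1)  # noisy observation of the environment

        def think(observation: float) -> str:
            return "retreat" if observation > 0.5 else "approach"  # trivial policy

        def act(action: str, world: float) -> float:
            return world - 0.1 if action == "retreat" else world + 0.1

        world = random.random()
        for _ in range(10):
            world = act(think(sense(world)), world)  # the sense/"think"/act cycle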

      • LeCompteSftware 2 weeks ago

        The problem with robots is, again, humanity has yet to build a robot with the intelligence of a cockroach, or the apparently conscious agentic behavior of a nematode. If I see such a robot I will update my views on machine consciousness. I don't think either of us will live that long.

        The problem with the "computational functionalist" argument is that a) there's ZERO evidence other animals' brains are computational, that is begging the question; and b) pretty much any embedded system is a device that reacts to its environment in a way that requires computation, and none of them have anything close to the pseudoconsciousness of a bacterium, let alone an insect.

        Point a) is the more important one: only humans have meaningfully Turing-complete brains. Other animals might be hardware-capable, but they'll never be trained to correctly execute a program, nor does their own intelligence seem especially amenable to being described by a classical symbolic algorithm - e.g. animals are very good at object identification, quantity discrimination, and causal reasoning, and we don't have anything close to a symbolic algorithm for any of these[1]. Computation is linked to the ability to communicate symbolically, and most animals do not, regardless of intelligence.

        The idea that "the brain is a computer" has always been a poetic description, not a scientific fact. It is more correct to say humans have the ability to think computationally because we think symbolically. Again, maybe someone can show that animals do think symbolically even if they don't communicate that way, or (somehow) we will have a non-symbolic theory of computation. Perhaps a beautiful symphony. Absent either of these two things, "the chimpanzee's brain is like a computer" is simply not scientific.

        The supposed "sense/think/act cycle" is just you begging the question again, applying a computational aesthetic in place of understanding; this time it's blatantly false. Animals do not have a "cycle": sensing is an act, and processing senses is a thought. Thinking is an act, and many animals can perceive themselves thinking (demonstrated in crows and chimps). Dogs think very deeply while they smell, and the manner in which they sniff (tentative whiff versus greedy huffs) is itself an act requiring thought. Most importantly: even in animals, thoughts can be totally disconnected from actions and senses.

        Actually, this might be the biggest difference between a pigeon and Claude: the pigeon's thoughts and actions are not directly tied to environmental stimulus, whereas Claude can only think and act according to a short-term context provided by a human. You can fake an agentic loop with a prompt, but it's not convincing agency the way a nematode has convincing agency. It's just a chatbot in a loop. If you expose it to real sensory data like a webcam, the agentic behavior becomes even more brittle and unconvincing. It's just nothing like an animal.
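
        To be concrete about "just a chatbot in a loop": this toy sketch (illustrative Python; `llm` is a stand-in for any chat-completion call, not a real API) is roughly all a faked agentic loop amounts to.

          def llm(prompt: str) -> str:
              return "look around"  # stand-in for a frozen text model

          transcript = "You are an agent. Decide your next action."
          for _ in range(5):  # the "agency" is just iteration
              action = llm(transcript)
              observation = f"[result of {action!r}]"  # environment feedback, as text
              transcript += f"\n{action}\n{observation}"  # all state lives in the prompt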

        [1] I know there's work being done on formal causal reasoning, I thought this monograph was interesting: https://direct.mit.edu/books/oa-monograph/3451/Actual-Causal.... I am not convinced by it. The funny thing about these causal theories... they don't have a causal explanation :) :) :) The argument works by going through cases until you agree it works, empirically, possibly after complicating things further by patching out oversights and inadequacies. Very amusing. Causality is a tough nut to crack!

  • joquarky 2 weeks ago

    Yep, #2 feels like geocentrism all over again.

  • thfuran 2 weeks ago

    What about group 5: Actually, we're just simulating consciousness too.

  • kbelder 2 weeks ago

    I would place myself in 3, with the caveat that I don't think any current LLMs or other programs/datasets/relationships are close to conscious. It's certainly possible in the future, though.

    Atoms arranged into a brain generate consciousness. There's no reason to think atoms in other arrangements can't. Brains aren't magic, just well optimized.

    • in-silico 2 weeks ago

      What would have to change about future systems to make you think they're conscious in a way that modern systems aren't?

      That is to say, what evidence would you need from a system in order to think that it's conscious?

      • kbelder 1 week ago

        One big reason I don't think LLMs are (currently) conscious is because they are static. They do not change in response to input. I think they need some kind of temporal awareness (not just a 5-minute cron job), and some mechanism for self-modification or active learning based on their input. If an experience flows through them and leaves them completely unchanged, are they actually conscious of the experience?

        But, in fairness, we don't have a science of consciousness yet. Anybody that is 100% confident in their proclamations about this topic is too confident.

        • Scene_Cast2 6 days ago

          Why can't LLMs be conscious for their short 1-million-token input lifetime? (To be clear, I don't think they're conscious, but for different reasons.)

        • altruios 6 days ago

          > One big reason I don't think LLMs are (currently) conscious is because they are static

          It is true that the LLM itself is static. However, its context window is self-modifiable, based on its inputs and outputs.
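
          A toy sketch of that distinction (illustrative stand-ins, not a real model): the parameters are frozen once training ends, and each turn only rewrites the context.

            PARAMS = {"bias": 0.0}  # fixed forever once training is done

            def generate(params: dict, context: list[str]) -> str:
                return f"reply #{len(context)}"  # reads params and context, mutates neither

            def turn(context: list[str], user_msg: str) -> list[str]:
                context = context + [user_msg]
                return context + [generate(PARAMS, context)]  # only the context grows

            ctx: list[str] = []
            for msg in ["hi", "what did I just say?"]:
                ctx = turn(ctx, msg)  # any "memory" lives entirely in ctx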

          > I think they need some kind of temporal awareness... and some mechanism for self-modification or active learning based on their input.

          Why?! (besides, they do, see above)

          I bring this example up because it's clear evidence, in humans, that neither of these things is required for consciousness - and it's one I deal with in my own home. People with dementia who have no memory and are no longer able to learn suffer from a different problem than not being conscious.

          > If an experience flows through them and leaves them completely unchanged, are they actually conscious of the experience?

          This line of thinking implies that dementia patients with no retention of memory are not conscious.

          I agree having an experience, and being conscious of that experience are two different things, though.

  • FloorEgg 2 weeks ago

    Assuming 3. Maybe in order to reproduce human level consciousness one would need to treat at least most human cells as neurons, and reconstruct all the diversity of neuron types and their signalling mechanisms.

    If human consciousness is reproducible, maybe we will long underestimate the depth and diversity it uses to model reality the way it does.

  • ganymedes 2 weeks ago

    I don't feel like I fit neatly into any of those four camps; I'm perhaps closest to camp 4, but the camp 4 problem is not the important one. My thought on this is that intelligence != consciousness and even brain != consciousness. Consciousness is the experience; consciousness is what you see, hear, feel, in the moment of it. It's the experience. It does not require any thought. In fact, if you look at Buddhist teachings, they teach the very opposite: that the thinking mind is in fact an obstacle to experiencing consciousness fully, that it's only a sense, a tool (like smell, touch, vision, hearing). My bet is that a cat or a dog has the experience of awareness the same way we do (although you can never be too sure, even about another human being - look up "philosophical zombie").

    Obviously, language-driven thought is not a requirement for consciousness, not just in other animals, but even in humans. The thinking mind takes a secondary role in ordinary daily human life. The truth is that a human being behaves the way they do not because of thoughts, but because of conditioning (thoughts are not the primary driver of decisions, actions, and behavior). 99% of actions and responses are trained, and the thoughts that we have are also part of this conditioning (most thoughts are unconscious and inter-wired with the behaviors; even a seemingly conscious self-reflective thought can be an automated Pavlovian trigger). For example, one may think that they get up and go to work because they have the thought "I have to get up, now I am going to go to work"; this is an illusion and a complete misunderstanding of what consciousness is. Or one can have a psychological insight about oneself: if it's repeated and consistently follows a behavior, the very thought is just the equivalent of whistle-salivation.

    The thinking mind gives us that 1% to self-reflect, adjust our behavior, learn, and predict the future, and that differentiates us from other mammals. It's a powerful tool, but just a tool; it should not be confused with consciousness, and it should not be confused with the mind as a whole (in the materialistic sense). The way our brain functions is nothing like an AI agent.

    And what is consciousness? It's not the thinking mind. It's the experience. It's the direct perception of the senses. Consciousness is what is seen, heard, smelled, touched, thought (the experience of having a thought) in the moment. When you practice meditation, you get to discover consciousness directly by becoming separated from the thinking thread. The thinking thread becomes more like an external tool, like a computer inside you, and you realize directly that it's just a part of the cognitive faculty that helps you navigate your life, not the entire thing.

    The LLM (and the harnesses) as built right now merely simulate the tool (the thinking mind). It's not that it can't have awareness because it's just code run on a beefy but otherwise regular piece of 20th-century tech you may have at your desk (that's also a good argument); it's that the way LLMs function and operate is nothing like the human (or mammalian) brain, so why would you think that regular code running on a regular PC could gain awareness? My point is that there's no similarity argument: LLMs, despite all their incredible capabilities (to threaten our jobs), are not remotely similar to the way our brain works.

    Secondly, even if someone built an artificial brain, made of whatever, that simulates the biological structure, then because of the philosophical zombie problem (the fact that there's no way to scientifically observe consciousness), you could never be sure that a key ingredient wasn't missing and that you're looking at an NPC. Consciousness is not a property of the physical brain; it's literally immaterial, it's the direct experience of the senses. You can make an optimistic assumption that every person and animal experiences consciousness the same way you do, but there's no way to rationally accept this assumption for anything created artificially.

  • nextaccountic 1 week ago

    You are forgetting the most influential group

    5. People who have a financial interest in making sure that any eventual AGI isn't granted any kind of rights and continues to be exploited as an inanimate "thing", not as a "being", no matter the actual characteristics of this hypothetical AGI entity.

    I mean, take a look at this language in the paper [0]:

    > This realization pulls the field of AI safety out of the welfare trap. It allows us to focus entirely on the concrete risks of anthropomorphism, treating AGI as a powerful, but inherently non-sentient tool.

    This reads as someone that started with this conclusion, and then built an argument to support it.

    [0] also discussed in this reddit thread https://old.reddit.com/r/singularity/comments/1sotz9t/google...

mstank 2 weeks ago

Glad to see Searle's Chinese Room mentioned early on in the paper. "Syntax is not sufficient for semantics," no matter how much compute we throw at the problem.

My very amateur view is that until the underlying compute architecture and substrate resembles artificial biology more than silicon, we won't get there.

The latest advances in AI have given me even more appreciation of biology and evolution. It's incredible what the human brain can do with about 20 watts of power, barely enough to power a lightbulb, in comparison to what it takes to run even our most basic LLM models.

  • Kim_Bruning 2 weeks ago

    Hofstadter and Dennett have taken great pains to try to debunk Searle. No love lost in that corner of the philosophical world.

diablozzq 2 weeks ago

Consciousness is a property of human biology - and quite clearly not a requisite for intelligence.

I say "clearly" because at some point we reach proof by construction. As in, we already built intelligence, because the system already completes tasks that require intelligence.

We are so far into what would have been science fiction five years ago and the goal posts have moved so far.

For anyone who disagrees, I challenge you to prove deep learning systems cannot solve <task with specific outcome humans can solve but not AI> given sufficient data and compute.

I think the strongest sign that we have true intelligence already is that no one has built a benchmark that AI cannot solve.

Yes, our current robotics lags AI, so we don’t have the equivalent of the human body to give our deep learning systems. Thus, it’s expected AI will be limited in physical scenarios.

Second, hallucinations are present in humans. We are highly biased to ignore all the misspoken words in everyday life as we have error correction built into normal conversations. How often do you have to have someone repeat or rephrase something?

It just doesn’t make sense to me.

It’s like there are people out there whose belief systems are incompatible with this tech existing.

Sure, it has limitations due to training data. It has limitations with no physical body. It cannot combine training and inference the same way a human does. But none of those are measures of intelligence or required to be intelligent.

  • lukev 2 weeks ago

    "intelligence" is not well defined. LLMs are throwing this into high relief with how "spiky" their capability curve is. Yes, they can solve some crazy hard problems with enough compute and thinking tokens. Yes, they also fall down in the dumbest ways without an ability to self-correct... despite how "smart" they are, human supervision remains absolutely critical for any system of importance.

    But I don't think the takeaway is "humans are intelligent and LLMs are not"; it's that our vocabulary for talking about the intersection of language, cognition, and compute is not up to the task.

    • diablozzq 2 weeks ago

      Intelligence was supposedly well defined, but folks kept getting their definitions wrecked by modern LLMs so we had to move the goal posts.

      No true Scotsman fallacy.

      • lukev 2 weeks ago

        What was the “well defined” definition? I’m not aware of any other than “this particular thing a human can do that I expect would be difficult for a computer.”

  • duped 2 weeks ago

    I cannot express concisely how deeply I disagree with all of this.

    It is not just uninteresting that computer programs can be written to accomplish information tasks, it's intellectually dishonest to anthropomorphize machines and algorithms to characterize it as consciousness.

    > no one has built any benchmark that AI cannot solve

    "Be human."

    • diablozzq 2 weeks ago

      No one cares if LLMs are humans. They will never be, by definition.

      My point still stands.

      The crux of my argument is that consciousness is irrelevant to any AI debates. It's not necessary to perform tasks we previously deemed only humans could do.

  • joquarky 2 weeks ago

    I only disagree with your first sentence:

    > Consciousness is a property of human biology

    You're assuming consciousness is a product of biology rather than attracted to biology.

  • jwpapi 2 weeks ago

    Challenge: Make money online

  • BobaFloutist 2 weeks ago

    >no one has built any benchmark that AI cannot solve.

    Sure, but people have built benchmarks that no AI constructed before the benchmark was released can solve. If I know the answer to a benchmark problem, I can construct an "AI" that can solve it on a note card.
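
    In code, the note-card "AI" is just a lookup table (illustrative sketch): it scores 100% on any benchmark whose answers leaked in advance and generalizes to nothing.

      NOTE_CARD = {
          "What is 2 + 2?": "4",
          "Capital of France?": "Paris",
      }

      def note_card_ai(question: str) -> str:
          return NOTE_CARD.get(question, "I don't know")  # zero generalization

      assert note_card_ai("What is 2 + 2?") == "4"  # "solves" the benchmark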

  • nextaccountic 1 week ago

    I agree with your main points, but I just wanted to chime in on

    > Consciousness is a property of human biology

    Just because we have only observed something in human biology doesn't mean that it can't be found elsewhere.

    I mean, being water-based is also a property of human biology. We share this property with other things, like lube and chicken soup.

Kim_Bruning 2 weeks ago

I'm partial to bioinformatics as per Paulien Hogeweg's definition, which explicitly has computation as a property of life.

This approach actually makes testable (and tested) scientific predictions.

This makes Searle-derived papers super-weird for me; since from my perspective they seem to disprove the existence of life. (and it makes the name of the philosophy "biological naturalism" very ironic to me :-P )

(for extra irony, Turing actually went into biology late in his life. See: Turing 1952 "The Chemical Basis of Morphogenesis" )

  • kbelder 2 weeks ago

    I'm disappointed that Searle's paper is still influential, at least out in the general culture. It's nonsense and, taken at face value, would disprove consciousness in humans unless you accept some mystic indefinable soul into the mix. Or quantum magic, which is just as mystic.

jwpapi 2 weeks ago

I think the question is more about ourselves than it is about AI: we don't know exactly how our own intelligence and consciousness work, and therefore it's very tough, if not impossible, to compare them to AI intelligence and consciousness.

Are we just autocomplete machines with sufficiently variable pseudo-randomized input?

tmvphil 2 weeks ago

> To fully understand the difference between the embodied robot running an algorithm on a chip and the biological mapmaker, we need to remember that for the latter, subjective experience is a given, not because of abstract information processing, but because of a specific, metabolically constituted physical reality.

Total drivel. Consciousness in biological systems is "a given" because of metabolism?

yogthos 2 weeks ago

The paper makes a huge assumption that only thermodynamic constitutions can produce consciousness. The assumption seems completely unsubstantiated, given that thermodynamic systems are just states, and states are replicable. The whole Chinese Room idea is pure sophistry as well. Both Dennett and Hofstadter address it quite well, in Consciousness Explained and I Am a Strange Loop respectively.

  • emp17344 2 weeks ago

    You know that Dennett and Hofstadter aren't the beginning and end of Philosophy of Mind, right? Calling Searle's Room "pure sophistry" is hilariously misguided, considering the vast majority of academic philosophers consider it valid: https://survey2020.philpeople.org/survey/results/5002#

    • Kim_Bruning 2 weeks ago

      You'll need to unpack that survey for us a bit. There's a lot going on and the wording is very terse.

      • emp17344 2 weeks ago

        It’s a large survey of academic philosophers on famous philosophical arguments. In this case, the question is asking whether philosophers agree with Searle and believe the Chinese room does not understand Chinese, or disagree with Searle and believe the room does understand Chinese.

        • Kim_Bruning 2 weeks ago

          I too agree that the room does not understand Chinese, because that's the only possible thing that could happen in real life.

          That doesn't mean I agree with Searle though!

          It depends on how the question is asked. Again, the wording is very terse so I can't determine what the people thought they were answering. Possibly you have a better insight?

          • emp17344 1 week ago

            I think you’re possibly a bit confused… accepting Searle’s intuition on this thought experiment is agreeing with Searle. In light of this, I don’t understand your comment.

            • Kim_Bruning 1 week ago

              The intuition pump surely works just fine as an intuition pump. But from a purely scientific view:

              if you were to try to stage it as an actual scientific experiment, it fails to hold up: no control (there's just the one room), no single or double blind (the researcher self-reports), and badly defined elements (the contents of the notebook are not specified). Of course you reach the conclusion that the room doesn't understand.

              Compare to Turing's imitation game experiment: two participants (principal and control, if you will), double blind (they're in closed rooms so the judges can't see them), and you can have multiple people doing the scoring. We can conduct this IRL, and in fact if you've used IRC or Discord, it's almost a natural experiment there.
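
              A sketch of that protocol (illustrative Python; the judge function is a placeholder):

                import random

                def run_trial(judge, human_reply: str, machine_reply: str) -> bool:
                    pair = [("human", human_reply), ("machine", machine_reply)]
                    random.shuffle(pair)  # blind: labels hidden, order randomized
                    guess = judge(pair[0][1], pair[1][1])  # judge picks index 0 or 1 as the machine
                    return pair[guess][0] == "machine"

                naive_judge = lambda a, b: random.randint(0, 1)  # placeholder judge
                hits = sum(run_trial(naive_judge, "hi", "hello") for _ in range(1000))
                # if real judges can't beat ~50% over many trials, the machine passes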

              Looking at it as a scientific experiment isn't so strange, Searle is responding to an executable experiment with an intuition pump. Why should the latter win?

              Note that there are many ways to cut this, but this is one of mine.

jdmoreira 2 weeks ago

This is the complete opposite of Hofstadter's "Strange Loop" hypothesis, which intuitively makes much more sense to me.

  • defterGoose 2 weeks ago

    It's the pervasive theme in the book, but it's never really given a conceptual grounding beyond "this sort of looks like recursion, or can be modelled circularly, so it's a strange loop". Its vagueness is what makes it feel "more intuitive": a vaguer pattern will have more matches. I don't remember Hofstadter digressing on whether these loops work "in reverse" either, which is sort of what the author here is denying. Basically positing that f doesn't have a well-defined inverse.