skissane 2 days ago

Articles like this annoy me: it seems to want to comment on philosophy of mind, but shows zero awareness of the classic debates in that discipline - materialism vs idealism vs dualism vs neutral monism, and the competing versions of each of those, e.g. substance dualism vs hylemorphic dualism, eliminativist vs reductionist/emergentist materialism, property dualism, epiphenomenalism, panpsychism, Chalmers’ distinctions between different idealisms, such as realist vs anti-realist and micro-idealism vs macro-idealism…

Add to that the typical journalistic fault of forcing one to read through paragraph after paragraph of narrative before actually explaining what the thesis they are presenting is. I’d much prefer to read a journal article where they state their central thesis upfront

  • mcswell 2 days ago

    As a linguist, articles like this also annoy me with claims that "X [whales, dolphins, parrots, crows...] uses language." We have known since 1957 that there is a hierarchy of "grammars", with finite-state "languages" near the bottom and transformational grammars at the top. Human languages are, at a minimum, at the context-free phrase structure grammar level. My point is that by using the word "language" loosely, almost anything (DNA codons, for example) can be considered to be a language. But few if any other animals can get past the finite-state level--and perhaps none gets even that far.
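    A toy illustration of the jump between levels (my own sketch, not from the comment): the language { aⁿbⁿ : n ≥ 0 } is the textbook context-free language that no finite-state automaton can recognize, because recognizing it requires counting an unbounded number of a's. A single counter (a minimal pushdown) suffices:

```python
def accepts_anbn(s: str) -> bool:
    """Recognize a^n b^n using one counter -- beyond finite-state power."""
    count = 0
    i = 0
    # Count the leading a's.
    while i < len(s) and s[i] == "a":
        count += 1
        i += 1
    # Cancel them against the trailing b's.
    while i < len(s) and s[i] == "b":
        count -= 1
        i += 1
    # Accept only if the whole string was consumed and the counts match.
    return i == len(s) and count == 0
```

    A fixed finite-state automaton can only approximate this up to some bounded n; the counter handles any n.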

    And an article or book that uses the word "communicate" is even more annoying, since "communicate" seems to mean virtually anything.

    End of my rant...

    • teekert 2 days ago

      As a scientist, articles like this also annoy me. Because it, right off the bat, assumes that there are no degrees in consciousness. Just because we don't experience those degrees, and animals can't convey what they feel through language, we assume it is "emergent", or "suddenly there". I think we are too caught up in believing that consciousness must be something really special or some magic discontinuity of spacetime.

      I don't believe it is. Somewhere inside our brain there is some perception of self related to the outside world. So we can project self into the future using information from the now and make better choices and survive better ("Information is that which allows you [who is in possession of that information] to make predictions with accuracy better than chance"- Chris Adami). Why do we need all these difficult words?

      I bet animals also have some image of self inside there somewhere, and make decisions based on simulated scenarios. Perhaps to a lesser degree, perhaps because of a lack of language they experience it in a different way? Not being able to label any of the steps in the process... Who knows?

      Perhaps when we get to simulate a whole brain we can get some idea. But then there is the ethics. We do attribute great value to organisms that have this image of self.

      • vidarh a day ago

        Add to this that a lot of people presume that people's experience is the same.

        E.g. I have aphantasia - I don't picture things in my "inner eye". Through discussions with people about that, a lot of people have described their own differences from the perceived norm, and it is clear a not-insignificant number also have other differences, such as not thinking consciously in words.

        A lot of people then tend to express disbelief, questioning the whole thing on the basis of a belief that if other people's inner lives don't match theirs, those people couldn't possibly be conscious or reason.

        People make far too many assumptions about the universality of their own experience of consciousness.

        • torginus a day ago

          I'm curious about aphantasia - if I asked you to draw a floorplan of your home/office etc. would you not be able to do it?

          If you could wouldn't that involve you imagining what the building looks like from the inside and how the rooms are connected together?

          Or would you do it some other way?

          (I'm assuming you could do it, since if you couldn't, I think that would be a majorly debilitating condition rather than just an amusing fact.)

          • vidarh a day ago

            Yes, in fact my ability to draw from memory places I've seen is, in my experience, substantially above average. However, when I draw from memory, my drawings do tend to be more stylized.

            I can "imagine" what a building looks like, but I can't see it. Even our language makes this hard to describe, which is part of why I didn't realise that this isn't how other people do it: most of our language for remembering how something looks uses terms that imply we see it. So I just assumed until a few years ago (and I turn 50 this year) that it was just a metaphor to everyone else too.

            How do you imagine (there's that word again...) blind people remember places they can't see?

            To me, the notion that I'd need to "see" what the building looks like to draw it is bizarre because I know where everything is in relation to each other, and their shapes, so why would I need to see it?

            EDIT: increasingly, and in part driven by split-brain experiments, I tend to think that a whole lot of what we see as our conscious decision-making is instead a shallow retroactive attempt by parts of the brain to rationalise decisions largely already taken autonomously into a cohesive self/ego.

            • jarkami a day ago

              I am largely in the same boat -- I've taken to telling people that I think very well spatially (that is my primary method of thought) but not at all visually. I am much better than most people seem to be when it comes to tasks involving space, direction, and so on -- basically abstractions of reality, not images of it. But if you want me to actually visualize an apple in my head? Yeah, I can't do that at all. Not even a "stylized" apple.

              If you wanted me to draw an abstract floorplan of my house, or the layout of my town, I can do that easily. I can navigate an extended road trip unassisted by simply looking at a map ahead of time and mentally storing an abstraction of where I need to go. But if you wanted, say, an image of my house, or the main intersection in the middle of town, or anything like that -- I'm no good. I can't "see" it, and I would not be able to draw anything remotely accurate.

              To circle back around to your "imagine" vs. "see" problem -- my response to the often-referenced "imagine a ball rolling on a table" exercise often elicits some confusion from people. I can imagine a ball rolling on a table just fine. What color is the ball, though? It has no color. What size is it? It has no size. It is just "a ball", in the abstract -- its only property is shape (spherical), which is ultimately all that is necessary for imagining rather than seeing. If you wanted me to visualize a large red ball on a green table, though, that's beyond me.

              • vidarh a day ago

                Yeah, I also sometimes describe it as spatial thinking. I'm pretty sure that the extent to which I can draw from memory is down to having spent quite a lot of time drawing as a child and getting good at drawing from spatial recollection.

                E.g. if I draw a fantasy drawing, it will be closer to impressionism in style, the same way as if I draw something in front of me. It'll be messy. If I draw from memory, the lines are clear, and stylized.

                The clearest example of that was an art class at school where we were asked to draw our shoes first from memory and then while looking at it, and both were highly detailed, but without any conscious decision, the line-work was entirely different.

                Same as you when it comes to the "ball rolling" scenario. It "has" the properties people add to it, but if they're not affecting the behaviour, they're just verbal labels attached to an abstract concept - I'll remember them, but they won't change anything about the "imagined" scenario.

            • teekert a day ago

              You have probably heard this before, but this is really bizarre and fascinating to someone like me, who spells difficult words correctly by seeing them (visualizing the word) in my head and judging whether they look right. It is like it all has to be visual for it to make any sense.

              As a molecular biologist I often deal with minute quantities of substances in small containers, but I just picture them there, the molecules, what happens to them as I dilute a sample or add some enzyme. Reading a book is like watching it play out.

              I have a colleague who is also much more text oriented. I really wondered how she can function without seeing all the departments in an overview and the connections between them, and how governance structures overlap with those departments, etc. For me it's a lot to hold in, overwhelming at times. But she's absolutely great in these types of things, just masterful, very structured as well. Her mind must be very alien to me, I always wonder what her experience of life looks like (or not "looks like", apparently... But how it is to her...)

              • vidarh a day ago

                I write a lot, including a couple of novels, and I realised as I found out about aphantasia that it explains a lot about what I read and write, and what I like. E.g. I skim or even entirely skip sections that focus on how something looks, unless it is beautifully written. I care about the language and the ideas much more than about any visual description, because I don't get much value from the visual description. I could, if I wanted to, sketch out a picture of things I've read a description of. E.g. I remember drawing some scenes from Lord of the Rings when I read it for the first time as a child, but I don't get the visual part of the satisfaction of those descriptions until I've drawn them.

                But with some works - like Tolkien's, I will still enjoy the descriptions because the language itself is beautiful.

                Just to make this weirder: I remember many things by appearance. E.g. I can find my place in a paper I've read years ago by what the page looks like. But I can't see it.

                I do however see things when I'm dreaming, and I have one solitary experience I think of seeing things awake, during meditation. I say "I think", because there is the possibility that I fell asleep even though I don't believe I did, and the imagery was far clearer than during my dreams.

                But I also can't recall images from my dreams while awake, and usually don't remember dreams at all past the first 30 seconds or so awake.

            • singleshot_ a day ago

              When I close my eyes (and especially if I rub them when they are closed) I see a grey/brown empty space with letters and numbers inscribed everywhere within the field, although the letters and numbers are oriented randomly and not from any script or language I’m familiar with.

              Also fifty. Realized this when I was about seven and it hasn’t materially changed since. I have no problem drawing things like blueprints of my house but when it comes to more curvy objects I’m a terrible artist.

      • IsTom a day ago

        > assumes that there are no degrees in consciousness

        And I don't get why; it seems to me quite self-evident that you yourself can experience reduced-consciousness states, be it being half-asleep or quite drunk.

        • ffwd a day ago

          The problem for me is that we haven't conceptualized what we mean by "reduced" consciousness. Reduced in what way?

          If we create an analogy with audio - audio can have 2 properties: frequencies and volume. Frequencies are the content of the audio and volume is how "present" the sound is.

          Well the same could be applied to consciousness. When we say reduced consciousness do we mean that the mind experiences less content (frequencies) or do we mean that all the frequencies are there but at a reduced volume?

          • wruza a day ago

            > Reduced in what way?

            In any.

            Consciousness is a complex thing, and it can degrade to a lower level in myriad ways. You have to align millions* of things to make it work. The biological (in our case) nature of it allows for some slack rather than immediate breakdown, so there are thousands of parameters to play with.

            * numbers arbitrary

          • IsTom a day ago

            Personal experience would tell me "both" - less complex thoughts with less intensity and with worse SNR.

        • taneq a day ago

          And these aren't even generally low-functioning or undesirable! The much-sought-after 'state of flow' is defined in part by a lack of self-aware consciousness.

      • readyplayeremma a day ago

        The article’s real contribution is in highlighting evidence of complex behavior in living systems that often get excluded from definitions of "intelligence". In doing so, it invites deeper philosophical reflection, even if it doesn’t mount that reflection itself.

      • whymeogod a day ago

        > We do attribute great value to organisms that have this image of self.

        My impression is we attribute great value to organisms that can effectively push back against us.

        respect based on force.

        Not saying I want things to be that way.

      • wordpad25 a day ago

        Does that mean LLMs can already be considered conscious at some level, since they are able to reason and self-reflect?

      • garden_hermit a day ago

        tbf, many materialists dislike the "degrees of consciousness" idea because a theory that posits "consciousness is on a spectrum" is one that starts to resemble panpsychism, which they consider magical woo.

    • ggm 2 days ago

      This. It nicely encapsulates why AI aficionados use words like "hallucinate", which become secret clues to belief around the G part of AGI. If it's just a coding mistake, how can the machine be "alive"? But if I can re-purpose "hallucinate" as a term of art, I can also make you, dear reader, imbue the AI with more and more traits of meaning which go to "it's alive".

      It's language, Jim, but more as Chomsky said. Or maybe Chimpsky.

      I 100% agree with your rant. This time fly likes your arrow.

      • crooked-v 2 days ago

        The correct term isn't "hallucinate", it's "bullshit". I mean that in the casual sense of "a bullshitter" - every LLM is a moderately knowledgeable bullshitter that never stops talking (in a literal sense - even the end of a response is just a magic string from the LLM that cues the containing system to stop it, and if not stopped like that it would just keep going after the "end"). The remarkable thing is that we ever get correct responses out of them.
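        A hedged sketch of that point: sampling loops around an LLM typically stop when the model emits a special end-of-sequence token; nothing inside the model "decides" to stop. All names here (fake_next_token, the "<eos>" token, the tiny vocabulary) are invented for illustration, not any real API:

```python
import random

EOS = "<eos>"
VOCAB = ["the", "cat", "sat", EOS]

def fake_next_token(context):
    # Stand-in for a real model's next-token sampler.
    return random.choice(VOCAB)

def generate(max_tokens=50):
    out = []
    for _ in range(max_tokens):
        tok = fake_next_token(out)
        if tok == EOS:
            break          # the containing system cuts generation here
        out.append(tok)
    return out             # without the EOS check it would run to max_tokens
```

        The loop, not the model, enforces the stop; remove the EOS check and "generation" continues indefinitely (up to the token budget).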

        • h0l0cube 2 days ago

          You could probably s/LLM/human/ in your comment. Essentially all intelligent life is a pachinko machine that takes a bunch of sensory inputs, bounces electricity around a number of neurons, and eventually lands them as actions, which further affect sensory inputs. In between there may be thoughts in there, like 'I've no more to say', or 'Shut your mouth before they figure you out'. The question is, how is it that humans are not a deterministic computer? And if the answer is that actually they are, then what differs between LLMs and actual intelligence?

          • wnmurphy a day ago

            > Essentially all intelligent life is a pachinko machine that takes a bunch of sensory inputs, bounces electricity around a number of neurons, and eventually lands them as actions, which further affect sensory inputs.

            This metaphor of the pachinko machine (or Plinko game) is exactly how I explain LLMs/ML to laypersons. The process of training is the act of discovering through trial and error the right settings for each peg on the board, in order to consistently get the ball to land in the right spot-ish.
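            The peg-tuning picture can be rendered as a toy random local search (an illustration of the metaphor only - real training uses gradients, not blind trial and error; the loss and parameters here are made up):

```python
import random

def loss(pegs, target):
    # Squared distance between current peg settings and the ideal ones.
    return sum((p - t) ** 2 for p, t in zip(pegs, target))

def train(target, steps=5000, step_size=0.1):
    pegs = [0.0] * len(target)       # all pegs start in a neutral position
    best = loss(pegs, target)
    for _ in range(steps):
        i = random.randrange(len(pegs))               # pick a peg
        delta = random.uniform(-step_size, step_size)
        pegs[i] += delta                              # nudge it
        new = loss(pegs, target)
        if new < best:
            best = new                                # keep improvements
        else:
            pegs[i] -= delta                          # undo bad nudges
    return pegs, best
```

            Trial-and-error nudging of each "peg", keeping only the changes that make the ball land closer to the right spot.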

          • whycome a day ago

            There’s something meta about your comment and my reaction. Metaphor. The pachinko metaphor seems so apt it made me pause and probably internalize it in some way. It’s now added to my brain's dataset specifically about LLMs. It’s an interesting moment to be hyper aware of in the context that you’re also describing (definition of intelligence). Far out.

          • wruza a day ago

            It’s survival in reality. Bullshit doesn’t survive (at least at the lower levels of existence; corporate and cultural BS easily does), and that’s why people are so angry at it. We hate absurdity because using absurd results yields failures and loses time or resources that were important for staying fed and warm.

            People also can lose your time or resources (first line support, women’s shopping, etc) and the reaction is the same.

            I don’t know why there’s still no LLM with a constant reality feedback loop. Maybe there’s a set of technical issues that prevents it. But until that happens, pretrained AI will bullshit itself and everyone, because there’s nothing that could hit it on the head.

            • MichaelZuo a day ago

              Well in some ways it does, some insect species use various strategies to fool other insect species, such as a special type of caterpillar that does so against ant colonies, to live at their expense.

      • taneq a day ago

        > This time fly likes your arrow.

        And fruit flies like bananas. :)

    • gizajob a day ago

      Yes, it's such a waffle. Instead of the unnecessary title "A Radical New Proposal For How Mind Emerges From Matter" – a more appropriate one would be "On plant intelligence (and possible consciousness)" given the entirety of the article is devoted to plant intelligence. We don't have anything radical, nor is it very deeply related to the mind/matter problem. If an author can't get something simple like that correct, then they don't deserve our time. Shame one has to get paragraphs deep into the article to find out we have a spiel about plants, not about mind.

    • Xmd5a a day ago

      Here lies the promised land: the possibility of a precise and concise nomenclature that assigns each thing a unique name, perfectly matching its unique position in the world, derived from the complete determination of the laws governing what it is and how it interacts with others. The laws of what is shall dictate how things ought to be named. What a motivating carrot—let’s keep following these prescriptions, for surely, in the end, the harmony of their totality will prove they were objective descriptions all along. Above all, let’s not trust our own linguistic ability to distinguish between the subtle nuances hidden within the same word, or at least, let’s distrust the presence of this ability in our fellow speakers. That should be enough to justify our intervention in the name of universality itself.

      Imagine this: language is an innate ability that all speakers have mastered, yet none are experts in—unless they are also linguists. And what, according to experts, is the source of such mastery? A rigid set of rules capturing the state of a language (langue) at a given time, in a specific geographical area, social class, etc., from which all valid sentences (syntactically and beyond) can supposedly be derived. Yet this framework never truly explains—or at best relegates to the background—our ability to understand (or recognize that we have not understood) and to correct (or request clarification) when ill-formed sentences are thrown at us like wrenches into a machine. Parsers work this way: they reject errors, bringing communication to an end. They do not ask for clarification, they do not correct their interlocutors, let alone engage in arguments about usage, which, under the effect of rational justification, hardens into "rules."

      Giving in to the temptation of an objective description of language as an external reality—especially when aided by formal languages—makes us lose sight of this fundamental origin. In the end, we construct yet another norm, one that obscures our ability to account for normativity itself, starting with our own.

      Perhaps this initial concealment is its very origin.

    • umanwizard 2 days ago

      I’m a linguistics layman, but can’t you make an even stronger claim about human language? Apparently there are certain constructs in Swiss German that are not context-free.

      • skissane 2 days ago

        From another viewpoint, all human language is only at the finite-state level - a finite state automaton can recognise a language from any level in the Chomsky hierarchy provided you constrain all sentences to a finite maximum length, which of course you can - no human utterance will ever be longer than a googolplex symbols, since there aren’t enough atoms in the observable universe to encode such an utterance

        Really the way people use the Chomsky hierarchy in practice (both in linguistics and computer science) is “let’s agree to ignore the finitude of language” - a move borrowed from classical mathematics
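        One way to see the finitude point (my own sketch): restrict any language to sentences of length at most k and it becomes a finite set of strings, and any finite set of strings is trivially regular - for instance, a trie over that set is itself a finite-state automaton:

```python
def build_trie(sentences):
    """Build a trie (a finite-state acceptor) over a finite set of strings."""
    root = {}
    for s in sentences:
        node = root
        for ch in s:
            node = node.setdefault(ch, {})
        node["$"] = True          # marker for an accepting state
    return root

def accepts(trie, s):
    """Walk the trie like an FSA: each node is a state, each char a transition."""
    node = trie
    for ch in s:
        if ch not in node:
            return False          # no transition: reject
        node = node[ch]
    return "$" in node            # accept only in a marked state

# e.g. a^n b^n truncated at length <= 6 is finite, hence regular
finite_anbn = ["", "ab", "aabb", "aaabbb"]
```

        The counting that made aⁿbⁿ context-free disappears once the length bound makes the language finite.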

        • suddenlybananas a day ago

          This is why the classical notion of competence and performance in linguistics is important. We describe programming languages as being Turing-complete even though every computer is in practice finite because, in principle, the program could be run on a computer with more memory. Likewise, it seems that language is bounded by language-external facts about memory, not intrinsic facts about how language is processed.

    • scotty79 2 days ago

      It's philosophy. It doesn't concern itself with facts or knowledge.

  • NoGravitas a day ago

    So, I think you are wrong about this: the article is discussing a change in perspective that would largely make the classic debates in philosophy of mind irrelevant, in the same way that heliocentrism made classic debates within the geocentric paradigm (about epicycles, and such) irrelevant. Highly worthwhile, if insufficiently in-depth.

    • photonthug a day ago

      It seems like this might be missing, or just talking past, the parent's point. Sure, new paradigms might always make old lines of research irrelevant. But otoh, something like idealism vs materialism vs dualism can't just be dismissed, because the categories are exhaustive! So new paradigms might shed light on it but can't just cancel out the question. So yes, to the parent's point, some stuff is fundamental, and if you're not acquainted with the fundamentals then it's hard to discuss the deep issues coherently. It's possible, I guess, that a plant biologist or a journalist or an ML engineer for that matter is going to crack the problem wide open with new lines of inquiry while completely ignorant of historical debates in philosophy of mind, but in that case it will still probably take someone else to explain what/how that actually happened.

    • SilasX a day ago

      I agree in spirit, that it's possible for a field to be so lost that a new paradigm fundamentally obviates it and frees you from having to recapitulate the entire thicket. But still, a minimal test for whether you've actually obviated them would be whether you can double back and show how the new paradigm resolves/makes them look naive and confused. And so I'd expect an author to do that for at least one such school of thought as part of their exposition.

  • protocolture 2 days ago

    I bailed 10 words in and came to the comments to see if the article was worth reading. Thanks for confirming it's a skip.

    • felizuno 2 days ago

      So many adjectives... and "radical" in the title is a doozy considering this article is essentially a stoner summary of LeDoux's "Deep History of Ourselves" with the science replaced with thesaurus suggestions.

    • metalmangler 2 days ago

      I held out a bit longer, and then skimmed, but started to become offended by the whole thing, yet couldn't be bothered to be that annoyed by so many different things. That said, I am very much into being outside in the company of many animals, working with them, and just being in wild environments. But language fails when it comes to describing how species interact, and in the end we can't describe our own mental processes in a clear way, intelligible by all. There has yet to be a genius of the mind, where the definition of genius is: an idea that, once explained, makes everyone else go "of course". My main issue is that these sorts of philosophical projects reek of money and hidden agendas, and shade very quickly into policy decisions and quasi-religious bureaucratic powers.

    • rramadass 2 days ago

      The gp is wrong. They are just being supercilious with a word salad of their own which is best ignored.

      The article is pretty good, making you rethink all your preexisting concepts of Mind/Intelligence based on what we know from Biology today. It is not an article on various theories of Mind, but on how scientific research (conveyed through pointers from various researchers) is advancing rapidly on so many fronts that we are forced to confront our most fundamental beliefs.

      Absolutely worth reading at least a couple of times.

  • globnomulous 2 days ago

    Is there a philosopher or philosophical school that identifies intelligence as a being's capacity to deploy a capability towards some end with intention? If so, what is this called? Or who is associated with it?

    Edit: I'd expect this thinker/school also to argue that the being needs to be able to experience its intention as an intention (as opposed to a dumb, inarticulate urge); in other words, intelligence would require an agent to be aware of itself as an intelligent agent.

    Edit 2: I strongly recommend Peter Watts' Blindsight to anybody who's in the market for sci-fi that deals with these issues.

  • dsjoerg a day ago

    Yes.

    The article is really about conceptual framing — how clinging to outdated or vague definitions prevents progress in understanding biological and cognitive processes.

    People keep forcing everything into the vague, overloaded concept of "intelligence" instead of just using the right terms for the phenomena they’re studying. If we simply describe behaviors in terms of adaptation, computation, or decision-making, the whole debate evaporates.

    https://chatgpt.com/share/67c07325-3140-8007-8177-c56a89b257...

  • kordlessagain a day ago

    Wait. If words fail to capture the truth, why do we keep making more words about it?

  • FrustratedMonky a day ago

    Agree. This last couple years of AI has been a wellspring of people/articles re-inventing philosophy, like all these subjects haven't been debated and studied for a few hundred years. At least acknowledge them, even if we aren't going to try and build on what has come before.

  • geuis 2 days ago

    [flagged]

    • skissane 2 days ago

      Your position is self-refuting: you are dismissing philosophy, yet simultaneously making philosophical claims in doing so - claiming that all knowledge is empirical is itself a philosophical claim (empiricism).

      • scotty79 2 days ago

        There's no contradiction. Philosophy is something everybody does after a beer. No point in pretending it's a relevant profession - it hasn't been for hundreds of years already.

        • tristramb a day ago

          Philosophy can be safely ignored until you start making philosophical mistakes.

          • scotty79 a day ago

            There's no such thing as philosophical mistake because there's no correct philosophy.

            • skissane a day ago

              There is such a thing as a philosophical mistake – almost everyone agrees that logical positivism was a failure, despite the fact that there was a period in the 1950s when it was all the rage in philosophy departments in the English-speaking world.

              There are still a few philosophers who will try to raise logical positivism from the dead – but even they'll all acknowledge that in its classic formulation it doesn't work, so any attempt to do so will require significant revisions.

              Philosophers may never agree on who is right, but sometimes they can reach a consensus on who is wrong.

      • geuis 2 days ago

        This is a philosophical argument, therefore dismissible by science.

        Unless you can rephrase your argument as something testable, it's philosophy and thereby not relevant.

        • skissane 2 days ago

          > This is a philosophical argument, therefore dismissible by science.

          The claim I just made - that dismissing philosophy on empirical grounds is self-refuting because it relies on the philosophical position of empiricism - is not “dismissible by science” - there is no experiment or observation capable of proving or disproving that claim.

          Also, the assumption you seem to be making - that all genuine knowledge comes from empirical science - can be countered with the argument that mathematical theorems (e.g. those of Gödel) are true and can be known to be true, but they are not known or knowable by means of empirical science.

          > Unless you can rephrase your argument as something testable, it's philosophy and thereby not relevant.

          What you just said is not testable, hence by its own terms is philosophy and thereby not relevant - it condemns itself as irrelevant

        • The_Colonel 2 days ago

          You're missing the point - your claim of "hypothesis has to be testable otherwise can be dismissed" is itself philosophical (philosophy of science). You're claiming that your claim can be dismissed.

    • antihipocrat 2 days ago

      A lot of philosophy is testable and the inspiration for scientific enquiry. Philosophy contains logic as a major area of study and the results of this work are core tenets in mathematics which enables the ability to conduct rigorous science.

      Science itself is the product of philosophical enquiry.

      • kazinator 2 days ago

        Science is the product of philosophical inquiry if you ask philosophers.

        Just like the Internet is the work of Al Gore, if you ask Al Gore.

        • skissane 2 days ago

          The difference is that what we now call “science” actually did start out as a branch of philosophy, and only gradually became separated from it; and there are figures who made substantial contributions to both disciplines (e.g. Henri Poincaré, who made significant contributions to both mathematical physics and philosophy of science)

          Al Gore’s relationship to the Internet can’t be compared

          • scotty79 2 days ago

            > The difference is that what we now call “science” actually did start out as a branch of philosophy, and only gradually became separated from it;

            It was hundreds of years ago, when the philosophers of nature - aka the people who thought about useful stuff - left to become scientists. What remains in philosophy since then is all the useless stuff.

            • skissane 2 days ago

              There are practising scientists who take philosophy of science much more seriously than you do.

              To give a random example, I'm quite a fan of Lynn Waterhouse's Rethinking Autism: Variation and Complexity (Elsevier Academic Press, 2013) which seeks to provide a critical evaluation of the strength of the scientific evidence behind the theory of autism, in its various incarnations (from Kanner's early infantile autism and Asperger's autistic psychopathy through to DSM-5 ASD). And Waterhouse actually draws on philosophy of science in the process, as this quote demonstrates (p. 24):

              > In fact, the orphaned and disconfirmed theories that have failed to explain variation in autism are, in large part, examples of theory underdetermination. Science philosopher Peter Lipton (2005) argued, “Theories go far beyond the data that support them; indeed, the theories would be of little interest if this were not so. However, this means a scientific theory is always ‘underdetermined’ by the available data concerning the phenomena” (p. 1261). The theory of underdetermination, called the Duhem–Quine principle, is a formal acceptance that theories make claims that data do not fully support. Theory succession, as from Hippocrates’ theory of pangenesis to Darwin’s gemmules, to DNA and transcriptomes, moves from one underdetermined theory to the next. However, Stanford (2001) pointed out that not all theory underdetermination is acceptable. It can be a Devil’s bargain: a serious threat to scientific discovery.

              And another quote from the same book (p. 27):

              > Equally problematic, autism brain research findings have not uncovered the underlying complexity of the phenomena. Bogen and Woodward (1988) argued that what can be measured is “rarely the result of a single phenomenon operating alone, but instead typically reflect the interaction of many different phenomena ….Nature is so complicated that … it is hard to imagine an instrument which could register any phenomenon of interest in isolation” (pp. 351–352).

              And the citations referenced in those quotes:

              > Lipton, P. (2005). The Medawar Lecture 2004: The truth about science. Philosophical Transactions of the Royal Society of London Series B, 360, 1259–1269

              > Stanford, P.K. (2001). Refusing the devil’s bargain: What kind of underdetermination should we take seriously? Philosophy of Science, 68, 3. Supplement: Proceedings of the 2000 Biennial Meeting of the Philosophy of Science Association. Part I: Contributed Papers, pp. S1–S12

              > Bogen, J., & Woodward, J. (1988). Saving the phenomena. Philosophical Review, 97, 303–352.

              The book only cites three philosophy of science papers, compared to dozens and dozens of papers from neuroscience, genetics, psychology, etc – as you'd expect for a scholarly book focused on the science of autism. Still, the fact it cites philosophy papers at all is an example of how many practising scientists are more positive about philosophy of science than you yourself are.

    • mmooss 2 days ago

      > I don't care what your ideas are. If they aren't testable, they're your opinion.

      Most of life and the world involves beliefs that aren't testable. The lack of testability doesn't mean they're arbitrary; testing is just one tool (the most effective one, I believe). But if we restricted ourselves to that, with no other form of qualification or judgment, we couldn't achieve anything.

    • tweaqslug 2 days ago

      Philosophy is precisely the domain of things that cannot be objectively testable because they are grounded in experience. You cannot prove that you are conscious (especially in a world of LLMs), but you know it to be true. Should I assume you are an automaton simply because I can’t prove you are conscious?

      • geuis 2 days ago

        Consciousness is not philosophy. In the past, it was. People didn't have the tools or theoretical background to even approach it in an empirical way.

        However, we now have at least the basics to hypothesize and perform experiments. So it's no longer philosophy; it's in the realm of understanding what the mechanisms of consciousness actually are.

        • drupe 2 days ago

          We can find "neural correlates" of consciousness, but nothing we do right now can prove that experience/awareness is taking place somewhere other than where oneself is experiencing/being aware.

    • DrFalkyn 2 days ago

      So you’re an empiricist.

      Is science “reality” or is it just a series of models/ conceptual frameworks ?

      • geuis 2 days ago

        Keep walking into a glass door a few dozen times. Is the door a reality even though you can't see it, or just a conceptual framework you keep bouncing off of?

        At some point you have to stop thinking in your head why you can't walk through the window and start testing and figuring out what keeps breaking your nose.

    • perching_aix 2 days ago

      You're describing the scientific method. Philosophy is not for "understanding the universe". Surely it cannot be blamed for not fulfilling a goal you only imagine it to have.

      • scotty79 2 days ago

        It can be blamed for being pretentious useless ramblings.

        • gnz11 2 days ago

          Nah, that is strictly the domain of political punditry.

    • ARandomerDude 2 days ago

      > Philosophy has little to do with reality.

      Translation: I’ve never read Aristotle.

    • gizajob a day ago

      You just reinvented Wittgenstein a priori.

    • bowsamic 2 days ago

      That itself is a philosophical position

    • rramadass 2 days ago

      Well said. The gp's comment was mere snobbish verbiage, since the article is not about various theories of Mind but about biological research that is making us rethink our very fundamental assumptions. It is a pretty good one.

    • FrustratedMonky a day ago

      "Propose hypothesis"??

      Where do you get the 'hypothesis'?

      That is where philosophy comes in: it is pre-testing, thinking of the things to test.

      (Also note: by your definition, string theory is an 'opinion', not science.)

    • MadSudaca 2 days ago

      Sir, where do hypotheses come from?

xg15 2 days ago

Maybe the definition of what "intelligence" is could be sharpened by having a look at LLMs and "traditional" computer programs and asking what exactly the difference between the two is.

Almost all the traditional criteria of intelligence - reasoning, planning, decision-making, memory, etc. - are exhibited pretty trivially by standard computer programs. Nevertheless, no one would think of them as "intelligent" in the sense that humans or animals are.

On the other hand, we now have LLMs, that sent the entire tech world into a multi-year frenzy, precisely because they appear to possess that human-like intelligence.

And that is even though they perform worse than classical programs in some of the "intelligence" measures: For the first time, we have to worry that a computer program is "bad at math". They cannot reflect on past decisions and are physically unable to store long-term memories. And yet, we're much more likely to believe that an LLM is "intelligent" than a classical program.

This makes me think that our formal definitions of "intelligence" (the ones that would also qualify fungal networks, swarms, cells, societies, etc) and what we intuitively look out for, are really two different things.

  • thelamest 2 days ago

    >This makes me think that our formal [definitions] of "intelligence" […] and what we intuitively look out for, are really two different things.

    Just two? You can name so many more terms in this concept cloud, e.g.: personhood, moral agency, consciousness, self-awareness, processing power, wit, autonomy, feeling-and-experiencing capacity, and so on… We don’t seem to agree on what’s separate from what, and yes, it would be useful.

  • raindeer2 2 days ago

    The difference between traditional software and LLMs is the generality of their intelligence, and there are formal definitions of general intelligence such as https://en.m.wikipedia.org/wiki/AIXI

    AIXI is a definition of the optimal agent and is hence uncomputable, but LLMs are approximations that approach it. I recommend Lex Fridman's interview with Marcus Hutter.
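
    For reference, one common presentation of Hutter's AIXI equation: an expectimax over all programs q on a universal Turing machine U that reproduce the interaction history of actions a, observations o, and rewards r up to a horizon m, weighted by program length ℓ(q):

```latex
a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
  \left[ r_t + \cdots + r_m \right]
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

    The 2^{-ℓ(q)} weighting is the Solomonoff prior over programs, which is what makes the agent incomputable in practice.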

  • energy123 2 days ago

    But the discussion is not about intelligence, human or otherwise.

    • naasking a day ago

      The article mentions intelligence a lot actually. It's also conjecture that consciousness and intelligence are unrelated. Qualia could be functional, and emerge naturally from the development of ever more sophisticated intelligence.

      • energy123 17 hours ago

        > It's also conjecture that consciousness and intelligence are unrelated.

        That's true. We shouldn't casually conflate the two but also shouldn't make the assumption that they're independent.

  • bryanrasmussen 2 days ago

    Intelligence is a property of the species and a property of the individual; even unintelligent individuals (except for very pronounced extremes) still have the species property of intelligence.

    The species property of intelligence encompasses stupidity.

  • andoando 2 days ago

    I wouldn't say that's true at all for traditional computer programs. They're doing explicitly what they are designed to do; there is no adaptation or learning.

    • ben_w 2 days ago

      Code vs. data.

      The code needed to create, train, and perform inference on a Transformer is quite short. How short depends on how you count the `import` statements in https://github.com/openai/gpt-2/blob/master/src/model.py and https://github.com/openai/gpt-2/blob/master/src/sample.py etc.

      Spreadsheets performing linear regression and the like: do they learn? Sure!

      If you accept that Transformers adapt and learn then you must accept that a spreadsheet also does, because someone implemented GPT-2 in Excel: https://github.com/ianand/spreadsheets-are-all-you-need

      Do polymorphic computer viruses adapt? Border Gateway Protocol? Exponential backoff? Autocomplete? And that's aside from any algorithmic search results or "social" feeds, which are nothing but that.
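
      The "linear regression learns" point can be made concrete without Excel; a minimal sketch (pure Python, illustrative data):

```python
# An ordinary least-squares fit, re-run as each observation arrives.
# No neural network involved; the data values are made up.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# "Training data" so far: the fit adapts as rows are appended,
# exactly as a spreadsheet trendline would.
xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
a, b = fit_line(xs, ys)          # slope 2, intercept 0

xs.append(4.0); ys.append(10.0)  # a surprising new observation
a2, b2 = fit_line(xs, ys)        # the fitted line has "adapted"
```

      Re-running the fit after each appended row is the whole trick: the parameters change in response to new data, with no human editing the formula.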

      • andoando 2 days ago

        I am confused. A spreadsheet running the code for what a neural network does, sure. But a traditional computer program isn't just Excel.

        • szvsw 2 days ago

          Any matrix multiplication can be trivially unwrapped into a bunch of for loops with basic arithmetic. In fact, that is essentially what any GPU kernel ultimately does.

          What differentiates a “traditional computer program” from a single matmul? How about a bunch of matmuls? How about a bunch of matmuls with some non-linear arithmetic thrown in between them? And what if we add in sampling from a distribution but set temperature to zero and fix the random seed so that it is fully deterministic?
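
          To make the "matmul is just loops" point concrete, here is a sketch in pure Python (no libraries; names are illustrative):

```python
# A matrix multiply unrolled into nested loops of basic arithmetic --
# the same operations a GPU kernel ultimately performs.

def matmul(A, B):
    """C = A @ B for row-major lists of lists."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):          # rows of A
        for j in range(m):      # columns of B
            for p in range(k):  # inner (reduction) dimension
                C[i][j] += A[i][p] * B[p][j]
    return C

def relu(M):
    """Elementwise nonlinearity: the extra ingredient of a 'layer'."""
    return [[max(0.0, x) for x in row] for row in M]

A = [[1.0, -2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C = relu(matmul(A, B))  # one "layer": matmul plus nonlinearity
```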

          • andoando a day ago

            What you seem to be saying is that the machinery your traditional computer program executes on is capable of executing intelligence.

            What I said is that a traditional computer program (say Excel) is not intelligent in itself, as it cannot, itself, without further input from a human, adapt.

            • ben_w 12 hours ago

              > What I said is that a traditional computer program (say Excel) is not intelligent in itself, as it cannot, itself, without further input from a human, adapt.

              Excel is as capable of adapting to new input as any AI, and the input need not come from a human specifically: the act of adapting to input is an automatic consequence of being able to perform linear regression.

              In fact, this is the real value of a spreadsheet over a piece of gridded paper where the sums are done by hand.

              If "adapting to input" is your test for intelligence, however, it is a test which a thermostat will pass.
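
              The thermostat point can be made literal in a few lines; a toy bang-bang controller (all numbers arbitrary):

```python
# A bang-bang thermostat with hysteresis: it plainly "adapts to input",
# which is why that test alone is a weak criterion for intelligence.

def thermostat(setpoint, readings, hysteresis=0.5):
    """Return the heater on/off decision after each temperature reading."""
    heater_on = False
    decisions = []
    for temp in readings:
        if temp < setpoint - hysteresis:
            heater_on = True            # too cold: switch heater on
        elif temp > setpoint + hysteresis:
            heater_on = False           # too warm: switch heater off
        decisions.append(heater_on)     # otherwise keep the last state
    return decisions

out = thermostat(20.0, [18.0, 19.9, 20.2, 21.0, 19.0])
```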

              • andoando 4 hours ago

                >Excel is as capable of adapting to new input as any AI, and the input need not come from a human specifically: the act of adapting to input is an automatic consequence of being able to perform linear regression.

                I said 'sensory input', as in, "any primary, real-world data". Can I just download Excel and, without feeding it machine learning algorithms, feed it a bunch of videos or text and have it learn history?

                >If "adapting to input" is your test for intelligence, however, it is a test which a thermostat will pass.

                Not just adapting to the specific input the system is designed for, but adapting to ANY input. A thermostat's function will never change on its own without modification, just like Excel's, or any other traditional program's.

                • ben_w 2 hours ago

                  > and without feeding it machine learning algorithms

                  It has some built-in.

                  Linear regression is a machine learning technique.

    • szvsw 2 days ago

      How do you define adaptation and learning? What about, say, an autoscaler that is programmed to track the load for every hour over the last week, and use the average of the last 7 days at 8am to pre-emptively auto-scale? Is that learning and adapting?

      Alternatively, neural networks are also just doing explicitly what they are designed to do… sure there is a larger computational graph with lots of operations, but it’s all deterministic… backprop is not really much different on a procedural level than the simple fitting algorithm that I outlined above, in as much as it is just a specific well-defined algorithm or sequence of steps designed to compute some parameters from data.
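
      The autoscaler described above fits in a few lines; a sketch with hypothetical names (the headroom factor and default capacity are made-up details):

```python
from collections import defaultdict

class Autoscaler:
    """Pre-emptively scale to the average load seen at this hour
    over the last 7 days, plus some headroom."""

    def __init__(self, headroom=1.2):
        self.history = defaultdict(list)  # hour -> recent load samples
        self.headroom = headroom

    def record(self, hour, load):
        self.history[hour].append(load)
        self.history[hour] = self.history[hour][-7:]  # keep last 7 days

    def target_capacity(self, hour):
        past = self.history[hour]
        if not past:
            return 1.0  # no data yet: fall back to a default
        return (sum(past) / len(past)) * self.headroom

scaler = Autoscaler()
for day_load in [100, 120, 110, 130, 90, 105, 115]:  # 7 days of 8am load
    scaler.record(8, day_load)
cap = scaler.target_capacity(8)  # mean is 110, so target is 132
```

      Whether recomputing a rolling mean counts as "learning" is exactly the question at issue; the mechanism itself is this small.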

      • bloomingkales 2 days ago

        What if we define adaptation and learning as the ability to concentrate? Our single-cell ancestors would have had to concentrate to deliberately store the first memory. Otherwise they would have just taken in the world with their sensors but never done anything with it.

        Adapting and learning means it chose to concentrate on packing the world into retrievable storage.

        When do we not adapt and learn? When we ignore our inputs and do nothing with it (don’t store it, don’t retrieve it).

        In the example you gave, those classical programs cannot concentrate, it’s a one and done.

        • szvsw 2 days ago

          > When do we not adapt and learn? When we ignore our inputs and do nothing with it (don’t store it, don’t retrieve it).

          In the example I mentioned, the program clearly is taking inputs (load), storing them, querying for previous values that satisfy certain conditions (the value at a certain timestamp for each of the last 7 days), running computation (computing a mean), and operationalizing the result (pre-emptively scaling) to achieve a goal (avoiding having insufficient capacity) that affects the world (whoever interacting with the system will have different experiences because of the actions taken by the system). That seems like it satisfies your criterion to me.

          • bloomingkales a day ago

            C’mon now, you and I both know your program is discontinuous. A few null or unknown inputs that you didn’t even consider will break it. You’ll have to keep going in there and adding more if/else statements. Your program couldn’t survive a hundred years without you debugging it.

            • szvsw a day ago

              > your program is discontinuous

              Not sure I follow your usage of discontinuous: clearly the load inputs are continuous variables (albeit discretely sampled in time and magnitude, but I don’t see the significance of either of those sampling frequencies, each of which can be chosen to be effectively arbitrarily small). Also don’t understand the relevance of continuity at all since you didn’t reference it in the post I replied to.

              > A few null or unknown inputs that you didn’t even consider will break it.

              Null handling should be assumed to be a trivial problem here (ie just use the last 7 valid values for the time step of interest), and I’m not sure what “unknown” values would be here, can you give an example? I think it’s safe to assume that the inputs have already been normalized to some meaningful scale as an implementation detail of the load sensors. Even if the telemetry scheme changes because the instrumentation changes or the infrastructure changes, so long as the instrumentation produces values at the same scale and the actuators can still respond to the outputs, the core kernel of sense/compute/actuate can still be left unchanged.

              I’m assuming that you are citing the failure to respond to null or unknown conditions as a lack of adaptation capabilities, especially as you are referencing its inability to last 100 years - though good luck finding any piece of software that can last 100 years, especially more robust machine learning algorithms that have to deal with problems like data distribution drift, hardware changes, etc etc - same goes with wetware (besides turtles?).

              I’m strawmanning you a little bit on the appeal to time though, since I assume your main point was to just emphasize that “it won’t deal with changing conditions/new scenarios well”, which I think is somewhat fair. On the other hand, I think it is safe to say that what I described does at least respond to the world and in turn affect it in a kind of cybernetic feedback loop, which includes adapting to shifting conditions to meet some desired state.

              So maybe it would be helpful if you could define more precisely what you mean by adaptation? Not trying to be snarky, just trying to genuinely push you to make a clear definition of what you mean by adaptation.

              • bloomingkales a day ago

                > Also don’t understand the relevance of continuity at all since you didn’t reference it in the post I replied to.

                A calculator is a discontinuous graph/program because there is no point to plot when you divide by 0 (undefined). This is true of your classical program. Over time it won't know how to adapt its f(x) to handle something as crazy as dividing by zero (it can never do it, but unless you put in an error output for that input, the program will never handle it on its own).

                The belief is these AI programs will be able to make that adaptation. All the changes you bring up that your program can apparently handle are still under your conceptual control (you can map in your mind if this happens, then that happens). You can prove this to yourself when you read your own statement "inputs have already been normalized" - sure, in a world you control.

                I'm suggesting a world where your telemetry monitoring function can change into a Reddit function if need be, so yeah, kinda batshit. Why would it need to do that? Remember, we can't imagine why, that's the whole point.

                That's how a program could possibly live a hundred years in a world that is constantly changing. Your program can only exist in a very static and predefined world.

                There are some that will tell you that you are an ever-changing function. I try to stay out of that simulation :)

    • Nevermark 2 days ago

      A structured program analyzing data as a graph, and optimizing access, is interacting with a phenomenon, updating its working knowledge of that phenomenon, and can produce results that are very non-intuitive.

      Likewise, any symbolic mathematical system that accumulates theorems that speed up future work as it solves current tasks, seems like a high intelligence type of activity.

      Deep learning is “just” structured arithmetic.

      I think different kinds of intelligence can look quite different, and they will all be “structured” or “tropic” at their implementation levels.

      Stepping away from the means, I see at least four “intelligence” dimensions:

      1. Span: The span of novel situations for which it can create successful responses.

      2. Efficiency: The efficiency of problem solving.

      I.e. when vast lookup tables, exhaustive combination searches, and indiscriminate logging of past experience can be matched instead by more efficient Boolean logic, arithmetic, pattern recognition and logging, we consider the latter more intelligent.

      3. Abstraction: The degree to which solving previous different novel situations improves the success or efficiency in solving new problems. I.e. generalizable, composable learning.

      4. Social education: Ability to communicate and absorb learned information from other entities.

      Plants, and I expect all surviving life forms, are very high in intelligence types 1 and 2.

      Adaptive nervous systems and especially brains excel at 3.

      Many animals, but most profoundly humans (whose languages for communicating are themselves actively adapted for compounding effects), excel at 4.

      Today’s humans are effectively more intelligent than humans of 10,000 years ago, not because of 1-3, but because of 4. Learning as a child to read/write, do arithmetic, understand zero and negative numbers, and countless other information processing activities and patterns, from others, profoundly impacts our intellectual abilities.

      Deep learning, as with the human species, non-trivially spans, and continues to improve, on all 4 types of intelligence.

  • empath75 a day ago

    We need to break the concept of "intelligence" down into more well-defined components, probably.

  • AndrewKemendo a day ago

    The only proof of intelligence that humans accept is one where they are utterly dominated by the more intelligent “thing.”

    In my experience it comes down to speed of processing and response

    Nobody views trees as intelligent because despite having extremely complex interdependencies with their ecology, including fungus and all kinds of other organisms, they don’t move quickly or appear to respond to input (even though they do, just slowly).

    Meanwhile the mantis shrimp, because of its speed, superhuman vision, and flexibility, is considered extremely intelligent.

    The only thing that humans will accept as more intelligent than themselves is something that they cannot control and that can dominate them.

    This is why AI is “the things we haven’t done yet” - once a technology is pervasive and integrated it is “just computing.”

    • bloomingkales a day ago

      > Nobody views trees as intelligent

      Because we take freely from trees. They must be stupid or something because they give things freely. It's kind of why many employers view labor as stupid. They give their labor for very little. Humans dabble in arrogance.

giorgioz 2 days ago

I've been thinking as well that, from some perspective, a human being isn't actually a single life but rather a multitude of separate tiny life forms that cooperate to survive (the cells). The voice in our head is the emerging consciousness that acts as a captain; it's useful for the captain to think of itself as one being. Now, this said, I feel the article is jumping a bit too much on the Animism hippie bandwagon: https://en.wikipedia.org/wiki/Animism

Of course, there is some intelligence in any life form's behaviour, but if you want to say that a tomato plant is intelligent, then you need another word or set of words for more advanced life forms. Putting a tomato plant and a dolphin in the same bag clearly makes the word intelligence so vague it loses almost all practical meaning.

As for the part where the article talks about the earth being an organism: I've thought of this as well. The whole universe could be a life form, each planet and star in it a cell in its body. It's a possibility. Or maybe our whole universe is just a cell in someone's body.

It's possible, but I fear there is little science in the people of that article, just old-fashioned Animism and the "protect mother earth" natural-spiritual thinking that has existed for thousands of years. Those people see the world as if they were druids in a fantasy novel. I see myself more as a wizard; we might have different opinions. I will take them seriously when they can LITERALLY speak with animals daily in useful ways and tell vines to move. Until then it's just (a) fantasy. I can summon electricity and fireballs (with technology); if they want to say they are druids, they had better step up their enchantments. Saying "sit" to a dog and then writing long articles on how intelligent dogs are doesn't cut it for me.

  • dimal 2 days ago

    The term I’ve seen used the most is “basal cognition”. It’s used to describe agentic problem solving behavior seen at lower scales and in different problem spaces where we normally have trouble imagining intelligence. Michael Levin’s paper, “Technological Approach to Mind Everywhere: An Experimentally-Grounded Framework for Understanding Diverse Bodies and Minds”[0] is a good, readable explanation of the concept. He does a ton of YouTube talks explaining it as well.[1] Very watchable, and pretty mind-blowing. It’s not wishy washy animism. He and his lab are doing rigorous experiments and finding some very unusual things.

    [0] https://www.frontiersin.org/journals/systems-neuroscience/ar...

    [1] https://youtu.be/StqX-LH0IN8?si=NhwMdWxBLZrghUom

  • truculent 2 days ago

    While it shares some similarities with animism, it’s a fundamentally materialist viewpoint, which puts it at odds with animism. Materialism is so rooted within our culture that I think it can be hard to fully grok alternative viewpoints.

  • bloomingkales 2 days ago

    Trees giving you the very thing you need to live every minute wasn’t enough of a compromise for you? They’re quite negotiable with us.

    Gotta watch The Happening again.

    • giorgioz 2 days ago

      To be clear, trees did not "give" us oxygen. Trees evolved from bushes, which evolved from algae, which evolved from single-celled life forms that are common ancestors of ours as well. They were shaped by natural selection; it was indeed awesome for them to get out of the water and start consuming carbon dioxide from the atmosphere. But they did not give us oxygen: there was no high-level consciousness gifting life to others, as your sentence implies.

      This said, trees are awesome and we should definitely not cut them down or burn them for no reason or out of pure sadism. We should also consider them part of the environment where we live: if we need them and there is plenty and there aren't other constraints, we can use them to build things. Having too much carbon dioxide build up in the atmosphere is a very important constraint; we don't want to literally destroy our home.

      I can hold these thoughts rationally in my brain without having to come up with metaphors in which the giant rock we happen to be on is a "mother" nurturing us. The earth is not our "mother"; we evolved here simply because all the other rocks did not have the right conditions to sustain life. The universe is mostly a harsh and cold place, and we should preserve and improve the places that can be home. I feel, though, that people who lean into Animism are just not very scientific; they mean well, but they won't learn the technological tools that could help us preserve our home planet more effectively.

      Ending on a positive note, many scientists both love nature and are rationally scientific (rather than romantic). All the awesome work being done using LLMs to speak with whales and to better recognize dogs' emotions and pain levels is very inspiring.

      • bloomingkales a day ago

        It’s not that we’re unscientific, it’s just that we’re putting science aside unless it corroborates things. It’s totally a terrible process, there’s no evidence.

        Some people, over time, cannot put down the fact that there are just way too many coincidences. If you pile up what it took for you to be here (let's include your one-in-a-million chance of outswimming the other sperm, along with a perfect planet, possibly perfect parents; the list goes on), I think you'd at least agree it's one miracle after another.

        I can sit here and measure the anger on your face by the metric of how red your face is, but I'll never know why you are angry. You can tell me, but even then, is it true? That's the issue with science: measuring and collecting observations won't reveal the true nature of things. So it must be put aside, at least to investigate the other possibility.

        I don’t see how people aren’t open minded about there being more than a clinical understanding of this experience.

        From my other comment:

        https://www.sciencedirect.com/science/article/abs/pii/S01676...

        Is it so hard to believe there is a global frequency?

  • timewizard 2 days ago

    You perceive one voice and give it a position of authority.

    What if it isn't?

    What if it's many voices and they compete for control?

    • phito a day ago

      This is 100% my experience. There are different captains, but since they share the same memory, there's an illusion that they are one. When you look at who is in the driver's seat at different times, it is clear that there are multiple drivers. Which one is driving now depends on the context and the state of my brain.

myflash13 2 days ago

Here's an interesting thought experiment. Take any definition of consciousness or intelligence that is not based on biological components. For example "reacts to stimuli, exhibits anger". You can apply that definition to other entities like the United States. Does the United States react to stimuli (i.e. invasion) and exhibit anger? Yes (e.g. Pearl Harbor). Therefore the United States is conscious?

If a person argues that an LLM is conscious or intelligent based on how it responds, is the United States conscious or intelligent?

  • skissane a day ago

    There's a great philosophy paper making this argument: Schwitzgebel, Eric. “If Materialism Is True, the United States Is Probably Conscious.” Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, vol. 172, no. 7, 2015, pp. 1697–721. https://faculty.ucr.edu/~eschwitz/SchwitzPapers/USAconscious...

    Discussed previously on HN:

    July 2015 (210 comments): https://news.ycombinator.com/item?id=9905847

    Feb 2021 (78 comments): https://news.ycombinator.com/item?id=26217834

    Dec 2023 (1 comment): https://news.ycombinator.com/item?id=38814769

    • naasking 9 hours ago

      > There's a great philosophy paper making this argument: Schwitzgebel, Eric. “If Materialism Is True, the United States Is Probably Conscious.”

      Maybe, but this strikes me as a borderline category error, like saying, "If databases can order results, then they are probably sorting algorithms". The argument makes assumptions that consciousness is transitive or "sticky" in some sense, where if a property applies to a part of the system it must also apply to some aggregate of those parts.

      • skissane 6 hours ago

        > The argument makes assumptions that consciousness is transitive or "sticky" in some sense, where if a property applies to a part of the system it must also apply to some aggregate of those parts.

        I think the argument is really about substrate-independence. If consciousness is just about functional properties, then why can’t a social collectivity exhibit those functional properties?

        Many materialists do in fact endorse substrate-independence - the common belief that an AI/AGI could in principle be as conscious as we are (even if current generations of AI likely aren’t) depends on it - and I think substrate-independent materialism likely does fall victim to this argument. Now, maybe not, if there is some functional property we can point to that individual humans and animals possess but which their social collectivities lack; but then, what is that property?

        Other viewpoints don’t endorse substrate-independence. For example, Penrose-Hameroff’s orchestrated objective reduction, if you would call that materialism - I think you can interpret it as a materialist theory (e.g. the in principle empirically testable claim that neurons have a physical structure with certain unique quantum properties, plus the less testable claim that those properties are essential for consciousness) or as a dualist theory (e.g. these alleged unique quantum processes as a vehicle for classical Cartesian interactionism). The more materialist reading could be viewed as a substrate-dependent materialism which escapes Schwitzgebel’s argument. But I don’t think most materialists want to go there (seems too dualism-adjacent), and the theory’s claims about QM and neurobiology are unproven and rather dubious.

        • naasking 6 hours ago

          I have no problem with substrate-independence, but given substrate independence that doesn't mean the "consciousness" property is transitive in the way needed to claim the US is conscious. You need additional assumptions beyond just materialism and substrate independence.

          Computations are mechanistically clear, deterministic and substrate independent, but they still have properties that we use to classify them. Just because some component of a larger computational system has a property (or classification), it does not entail that the larger system has that property (or classification). Consciousness could be like this.

          • skissane an hour ago

            Schwitzgebel isn’t assuming any property of consciousness is transitive.

            Rather what he is saying is this: given many candidate substrate-independent materialist definitions of consciousness, if the property is true of individual humans and animals, it will be true of their social collectivities. But, he says “many”, not “all” - he’s not claiming you can’t define properties of consciousness for which that is false; he’s simply putting the onus on the proponents of the “materialism can explain consciousness” project to explain in detail how, and to justify such a definition.

            Furthermore, he’s not claiming that properties of consciousness are transitive from individual organisms to any arbitrary grouping of them. Rather, he’s pointing to social collectivities such as countries or governments as being so coordinated that they sometimes act as if they have a will of their own, emerging from the coordinated wills of their individual members. This isn’t true of arbitrarily defined wholes of which those individuals are part, e.g. the set of all humans (anywhere on earth) whose first name starts with the letter A. You are interpreting his point in terms of part-to-whole transitivity in general, but that’s not the case - the emergent properties of social groups are far more complex than simple part-to-whole transitivity. And it is those emergent properties he points to as evidence that they may have a consciousness distinct from those of their individual members.

      • myflash13 8 hours ago

        The difference here is we know the definition of sorting algorithms, but we don't have a working definition for "consciousness". The argument is, if we use materialist definitions, then lots of unexpected things fit the definition.

        • naasking 7 hours ago

          > The argument is, if we use materialist definitions, then lots of unexpected things fit the definition.

          No, this doesn't follow, that's my point. This depends on further constraining materialist consciousness to have the specific kind of property I described. It's possible that consciousness is like "sorting algorithm", a specific property of a specific kind of system, and aggregates of such systems don't necessarily have that same property. They might, but it's too strong to say they definitely or probably do.

  • Joker_vD a day ago

    > "reacts to stimuli, exhibits anger".

    Nitroglycerin comes to mind.

  • koakuma-chan 2 days ago

    > Therefore the United States is conscious?

    Assuming the United States react to stimuli and exhibit anger, and assuming that the definition of being conscious is reacting to stimuli and exhibiting anger, yes, the United States are conscious.

  • cjfd 2 days ago

    One part of consciousness is the 'stream of consciousness'. I.e., a single-threaded, if you will, sequence of observations and/or language that is extracted from all the parallel processing that is happening in the brain. The US does not have that. If there were one single news broadcast that all people were listening to all the time and were basing their actions on, one might start considering that the US is conscious.

    Also, considering who the current president is, the US is quite the opposite of intelligent.

  • svantana 2 days ago

    This is one of the themes of the classic 1979 book "Gödel, Escher, Bach".

  • timewizard 2 days ago

    If I break a bone, it heals itself. Are my bones intelligent?

gregwebs 2 days ago

I don't see the word "consciousness" in the article. I thought that was the thing to figure out to understand the emergence of the mind.

  • sctb 2 days ago

    My general understanding is that "mind" is an objective concept; people have minds that cognize and think and learn and so on. Some minds are apparently more capable of those things than others. When speaking about intelligence, it makes sense to associate that with the mind.

    Consciousness, on the other hand, is (even) less well-defined and is usually considered to be subjective. Being subjective, it tends to resist all of the usual objective approaches of description and analysis. Hard problem and all that.

    • mirekrusin 2 days ago

      I don't understand why people have a problem with simply stating that it is an emergent phenomenon and that's it.

      Similar to how a computer is a computer, and a half-sized computer is half of its bigger friend – you can keep halving it until there is no "computer" left in it.

      Or a pencil – you have something you call a pencil; what about a pencil half its size? And so on, until you hit a single atom. You had a pencil, now you don't – at what point on this line was there a pencil, and then there wasn't?

      • heyjamesknight 2 days ago

        Because that's the same as giving up and saying "we don't understand."

        What is mind emerging into? When a video game experience emerges from the combination of processing, display, sound, and controller input, it emerges into a level of organization that a mind can participate in. It emerges into a system of organization emanating downward from the mind experiencing it. It can't just "emerge" into existence on its own. If a game falls in the woods, it's not a game.

        If you call the mind an emergent phenomenon but can't describe the context into which it emerges, you've added nothing to our understanding.

        • ryandvm 2 days ago

          I agree with GP. Consciousness isn't so hard to explain if you don't enshroud it with mysticism.

          Consciousness is the emergent, graduated phenomenon of an information processing system that has achieved sufficient complexity to model itself in relation to its various systemic inputs.

          It's not binary, it's a gradient. I have more developed consciousness than my dog, which has more developed consciousness than a rat, and then a fish, then an insect, etc.

          Somewhat disturbingly it also goes the other way, AIs may achieve a more profound conscious experience than humans - same for aliens. What does it mean for inferior forms of consciousness that have always placed themselves on a pedestal in relation to the rest of the animal kingdom simply because they have the most developed consciousness?

          • heyjamesknight 2 days ago

            Welp, that does it. Pack it up, Cognitive Scientists: we've solved the hard problem of consciousness right here on HN.

            What you're describing is what's been proposed by Giulio Tononi as "Integrated Information Theory" (IIT) [1]. I quite like the framework and the math behind it is beautiful. Unfortunately, it hasn't been supported well empirically.

            Re: AI, IIT actually gives basis for AI not being conscious. Not to mention that all conscious systems we can currently observe are dynamic/continuous, not discrete. The difference there is qualitative—there's no reason to assume that because a dynamic system is conscious that a discrete system approximating it is conscious too.

            [1: https://www.nature.com/articles/nrn.2016.44]
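A toy illustration of the intuition behind IIT's Φ (a sketch only - Tononi's actual measure involves cause-effect repertoires and a minimum over all partitions of the system, not plain mutual information): an "integrated" system is one whose whole carries information that vanishes when you cut it into parts.

```python
import math
from itertools import product

def mutual_information(joint):
    # joint[(x, y)] = p(x, y) for two binary subsystems X and Y.
    px = {x: sum(joint[(x, y)] for y in (0, 1)) for x in (0, 1)}
    py = {y: sum(joint[(x, y)] for x in (0, 1)) for y in (0, 1)}
    mi = 0.0
    for x, y in product((0, 1), repeat=2):
        p = joint[(x, y)]
        if p > 0:
            mi += p * math.log2(p / (px[x] * py[y]))
    return mi

# Two independent parts: cutting the system apart loses nothing.
independent = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
# Two perfectly coupled parts: the whole carries 1 bit the parts alone lack.
coupled = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}

print(mutual_information(independent))  # 0.0
print(mutual_information(coupled))      # 1.0
```

In the real theory the analogous quantity is minimized over every way of partitioning the system, so a chain of weakly coupled parts scores low even if some pairs within it are strongly correlated.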

          • wat10000 2 days ago

            It's a nice idea but there's no evidence for it. Not only is there no evidence, but nobody has any idea what such evidence would even look like. We can't even conceptualize an experiment that would support or refute this theory.

            • heyjamesknight 2 days ago

              That's not true, the authors of IIT [1] propose a number of experiments that would support or deny the underlying theory. To my knowledge, those experiments haven't shown much support. But there are aspects of it that are absolutely empirically falsifiable.

              [1: https://www.nature.com/articles/nrn.2016.44]

              • wat10000 2 days ago

                I don't buy it. It might support the part of the theory that talks about how brains work. But the statement that this is qualia is different, and can't be proven or disproven. Let's say I believe that some person is actually a P-zombie, someone with no conscious experience but who behaves exactly like a normal person. Would these experiments be able to tell me if my belief is correct? I don't see how.

                • heyjamesknight 2 days ago

                    You're welcome to read the paper. Tononi's work is well-known within CogSci and it's not quackery by any stretch.

                  • wat10000 2 days ago

                    I skimmed it. There's one mention of "qualia" and I didn't spot anything to connect their theory with the actual experience of consciousness besides them saying they think so.

                    • heyjamesknight 2 days ago

                      Better let Nature know their Peer Review committee screwed up then!

                      • skissane 16 hours ago

                        > Better let Nature know their Peer Review committee screwed up then!

                        The article you cite [0] is labelled as "opinion". The standards for peer review of opinion articles in scientific journals are a lot lower than those for ordinary research articles. While precise standards vary from journal to journal, for opinion articles peer reviewers often see their role as simply excluding egregious misinformation and blatant errors, as compared to research articles where their role is to make sure the article is presenting high quality evidence in support of its conclusions. [1]

                        [0] https://www.nature.com/articles/nrn.2016.44

                        [1] https://ecologyisnotadirtyword.com/2021/02/24/lets-talk-abou...

                        • heyjamesknight 9 hours ago

                          That would be because I linked the wrong article. The original is here, different journal:

                          https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...

                          • skissane 7 hours ago

                            I think this article has the problem that it is addressing an interdisciplinary topic with too much focus on only a single discipline, which can be a sign of lacking sufficient diversity in disciplinary background of peer reviewers.

                            Scott Aaronson’s attempted refutation of IIT - https://scottaaronson.blog/?p=1799 - I think is better in that he actually tries to relate IIT to some of the philosophical literature (e.g. his distinction between Chalmers’ “Hard Problem” and the distinct “Pretty Hard Problem” which he sees IIT as trying to address)

                            I think it is a pity that Aaronson has never (to my knowledge) published his criticisms of IIT in a more formal setting - and I don’t know if Tononi has responded to them anywhere. I think Aaronson is probably right - that IIT fails as a mathematical model of what we intuitively consider conscious, since even though it excludes many common electronic devices we wouldn’t call “conscious”, it is possible to mathematically construct an algorithm, capable of being physically implemented in electronics, which would be conscious per IIT but not per our intuition. And even if Tononi patches his mathematics to solve a particular case of that problem, someone with Aaronson’s skillset may just be able to construct another.

                            Tononi might then argue that if there is no mathematical model of our intuitions about consciousness lacking in special pleading, that’s a sign our intuitions are flawed. Okay, but then if we accept our intuitions can be flawed in some cases, why not in more cases? One could decide the intuition of consciousness is completely erroneous and become an eliminativist about it. Or, if IIT forces you to accept (contrary to our intuitions) certain (special cases of) simple electronic devices or computer systems as just as conscious as humans, why not violate those intuitions further and insist on that for even more cases?

                            • heyjamesknight 4 hours ago

                              I'm not a "believer" in IIT. But I think it's an incredible idea, and taking the time to really understand what Tononi et al are proposing is a mind-expanding experience. It may not explain consciousness, but it does make you think about what things could be a part of it. And any attempt to mathematically formalize cognitive science gets a vote of approval from me.

                              My personal belief is that consciousness requires dynamic continuity. I don't think an algorithmic system is conscious because its "cognition" is discrete and the information isn't integrated across frames. I don't have a "why that works" - it's just a gut belief.

                              • wat10000 4 hours ago

                                Funny, I was just thinking in the opposite direction. There's "I think therefore I am," but there isn't really "I thought therefore I was." I know I'm conscious, but I only have the memory of being conscious before, which could be false. Conscious could just be a snapshot, although it certainly doesn't feel like it.

          • chasd00 2 days ago

            You’re modded down for some reason, but I hadn’t thought of consciousness as a model that eventually becomes sophisticated enough to model itself. That could be an explanation for self-awareness.

        • layer8 2 days ago

          Emergence has no “into”, IMO: https://en.wikipedia.org/wiki/Emergence

          • heyjamesknight 2 days ago

            All of those examples are emergence "into". The snowflake emerges into the mathematical patterns emanating downward. The termite cathedral emerges into the architectural context of the observer. Without emanating structure, there is no "emergence"—just a proliferation of chaos and error.

            • BriggyDwiggs42 2 days ago

              Not to butt in, but if the emergent phenomenon only exists in the mind of the observer, and the mind is a material phenomenon, then where in the observer-snowflake system is there anything not fully decomposable to atoms, particle motion, and so on?

              • heyjamesknight 2 days ago

                The mind is not a material phenomenon, in the same way that a video game is not a computational one.

                The emergent experience exists at the level into which it emerges. It's constructed at a lower level of organization, but not decomposable to that level in a way that's meaningful without recomposition - that's what makes the phenomenon emergent. The qualitative experience of a film does not meaningfully break down to the bits in the video stream, the compressed sound waves carrying the dialog, the photons hitting your eyes' rods and cones, or the biochemical signals in your brain.

                The mind is not in the brain, but on the brain.

                • BriggyDwiggs42 2 days ago

                  But we have to classify it as an “appears to” rather than an “is,” don’t we? It’s perfectly fine to categorize out emergent phenomena that have practical utility, e.g. it’s useful to see the snowflake over its constituent parts, or the film over the bits. But what underlies the choice to see it as a film rather than an improbably corrupted PNG? When talking about the mind, then, why is it that we choose to see the mind at all, and how does this constitute more than a convenient framing device, i.e. how can it explain qualia?

                  • heyjamesknight 2 days ago

                    Because our entire perception system functions as a mediation between the teleological affordances an object presents at a given level of organization/analysis and how those affordances relate to our motivational system's current objective and directed action. The emergence only "is" at a certain level of analysis and its emergence at that level is dependent entirely on the perception of an observer.

                    If a car is hurtling towards you, you don't perceive its handle. But if you're trying to go somewhere, you have to open the door. "Threat", "vehicle", or "handle" aren't just convenient framing devices, but an accurate depiction of the object within your perceptual/motivational systems based on the current level of organization and analysis you're participating in.

                    We choose to see the mind because we are minds. Consciousness is. There is something which it is like to be. Denying our emergent experience of it, or reducing it to a "convenient framing device" tosses out the most fundamental empirical experience we have: to exist.

                    • BriggyDwiggs42 2 days ago

                      I completely agree with you, I’m just being more reductive when I say it’s a practical categorization rather than essential reality. Certainly it’s also reasonable to say there’s no essential reality, just subjective levels of analysis, so everything is practical categorization. The issue is that we’ve gotten nowhere in explaining why we seem to exist.

                      A video game is relatively easy, at least seemingly, to reduce down to its underlying principles. The content dissolves the more closely I look at the game. The issue here isn’t whether the game still exists (it does, in the place I’m no longer looking), the issue is in seeing why the game arises from its component parts, and not something else. Easy-ish for the game, it follows directly from what we know about physics and such, but hard for the mind. Why do neurons together produce pain that exists, rather than pain as a purpose-driven internal signal to help organize the escape from a predator? Emergence doesn't tell us why one or the other, just that whatever it is must emerge from constituent parts.

                      • heyjamesknight 2 days ago

                        I don't think its a question of whether there is an essential reality or not, but rather whether we have access to essential reality. Donald Hoffman makes a strong game theoretical argument for how natural selection chooses effective presentations of reality rather than necessarily accurate ones [1]. Based on your level of interest in this conversation I'd expect you would really enjoy that book!

                        The game is certainly easier than the mind—I like it as an example because most of us have a hands-on knowledge of what the qualitative experience of "playing a game" is like. But the game still only emerges because the game developer, computer manufacturer, and player jointly give it an emanating system into which it can emerge. On its own, the raw game data doesn't really mean anything at all—if the bitstream of Diablo IV washed up on the beach, there's nowhere in that data encoding the experience of killing Diablo for the first time. One wouldn't even recognize it as something that could be decoded into such an experience [2].

                        I agree with you that the "why" is tough. Why have a conscious experience? Why have a sense of self at all? Why experience emotions rather than have them be—like you described—a purpose-driven internal signal? And then you get into theories like Internal Family Systems, which has empirical support at least within a prescriptive context if not necessarily a descriptive one [3].

                        The whole thing is a mess. A great, big, beautiful mess.

                        [1: https://www.amazon.com/Case-Against-Reality-Evolution-Truth/...] [2: https://benjamincongdon.me/blog/2021/02/21/Three-Layers-of-I...]

      • LiquidHelium 2 days ago

        It depends on what you mean by consciousness. If we are talking about intelligence, self-awareness, or thoughts, then I don’t see any problem with it being emergent. But if we are talking about conscious experience/qualia (not something that thinks or interacts, but something that just experiences), then I think it’s incoherent for it to be emergent. That there is a consciousness experiencing something is the only thing we can know as 100% true, while the world itself is something we can never know is 100% true: we could be a brain in a vat, we could be dreaming, in the matrix, a demon could be making us hallucinate everything, etc. It seems a bit silly to say the 100% true thing is an illusion or is dependent, because something we don’t know is true tells us it is.

      • BriggyDwiggs42 2 days ago

        Pencil is just an idea, minds objectively have qualia (measured internally).

        Edit: you can’t measure “pencilness,” but you can’t help but know whether or not you’re in pain.

        • heyjamesknight 2 days ago

          You can measure "pencilness" a number of ways depending on how you operationalize the term. It could be a measure of how well it achieves the function of a pencil, how well it matches the collective understanding of the form of a pencil, how closely it materially relates to an existing reference pencil.

          These are all proxy measures, but all of science is done by proxy.

          • BriggyDwiggs42 2 days ago

            Well sure, but you’re putting “pencilness” onto the collection of heterogeneous matter, same with any other level of analysis. Consciousness isn’t debatable by the thing doing it, it’s an irrepressible fact of existence to the conscious thing. Science needs a falsifiable hypothesis for the “why” of the material->consciousness transition, and constructing such a hypothesis is difficult for a lot of reasons. Saying “it emerges from neuron connections” just doesn’t capture the issue. Why should neuron connections produce this observer thing when we seem to see machines do similar things without it? Is a sufficiently large recurrent neural network conscious by the same process? If not or if so, then why? What precisely produces the phenomenon. Emergence is an observation, not a hypothesis for why that observation occurred. It could just be a trick of the light.

            • heyjamesknight 2 days ago

              Agreed, I was just commenting that one can measure "pencilness". I see a lot of pedantic arguments against measurement by proxy, as if every single measurement we do weren't by proxy.

              • BriggyDwiggs42 a day ago

                Actually yeah I see your point. I’ll concede on that.

    • Symmetry 2 days ago

      There's a whole scientific study of consciousness that actually comes out of behaviorism. The thought is: if I have a conscious experience, I can then exhibit the behavior of talking about it. From this developed a whole paradigm of investigation, including research on subliminal images.

      Stanislas Dehaene's book Consciousness and the Brain does a great job of describing this, though it's 10 years old now.

      • wat10000 2 days ago

        Trouble is that you can also exhibit the behavior of talking about it just by being exposed to the idea, even if you don't have the experience. If you were never exposed to the idea and you started talking about it, then I'd be convinced you had the experience, but nobody is actually like that. The fact that the idea exists at all proves to me that at least one human somewhere had conscious experience, and I know there's at least one more (me), but that's it.

        • Symmetry 2 days ago

          I was evidently unclear. I mean, if an image of a parakeet is flashed up on a screen for 100 ms and you can say "I saw a parakeet", you were conscious of the image. If the image is flashed for 50 ms and you can't, you weren't conscious of the image. In this paradigm, being conscious is being conscious of particular things.

          • wat10000 2 days ago

            That seems like a fairly simple machine could be conscious, which is not usually how the word is used. Typically consciousness means that there is some ill-defined entity that has a subjective experience, what the philosophers call qualia.

    • Aardwolf 2 days ago

      The mind concept here could then apply to computers as well, since they can also be configured to learn things and behave in certain intelligent ways.

    • baddash 2 days ago

      mind = container of values

      consciousness = meta-attention

card_zero 2 days ago

I read it all, for a certain value of "read". It's very long, and heavy on examples and fascinating facts, but skimps on getting to the point. I enjoyed the line about plant biologists suffering from brain envy. The article gets better from about halfway through as skeptical views begin to be introduced, but eventually it lets go of that and turns back into a lot of hand-wavy awe about mycorrhizal networks, and I missed what the "new proposal" is. If it's only saying that intelligence is an emergent property of connections, and could therefore emerge in swarms or societies, we've had that idea since at least Hofstadter and his sentient ant nests.

  • SubiculumCode 2 days ago

    'Get to the point' is my primary response to the article.

  • hikarudo 2 days ago

    > we've had that idea since at least Hofstadter and his sentient ant nests.

    A similar idea is present in Herbert Simon's 'The Sciences of the Artificial', where he describes a sentient city.

ivan_gammel 2 days ago

Intelligence is the ability of a system to make observations and adjust itself based on them (like our brain changing while learning, or our environment changing with technological progress). It’s definitely not a binary state. If we put the internal complexity of the system on one axis and external complexity (what can be observed and meaningfully processed) on another, there’s a circle on the plane representing what humans perceive as being intelligent. It intersects with a few other species, so we now think they are intelligent. Everything else outside that circle is either too primitive or too complex for us, so we do not see e.g. plants as intelligent, but we also may not recognize aliens as intelligent because their existence is too complex for us to even notice it. Humans unlocked an evolutionary path not based on DNA, so we now evolve much more quickly, through science and culture. Our circle is thus expanding, and we are starting to realize how more primitive systems think, and to create new intelligent systems.

marcus_holmes 2 days ago

I found it fascinating how this discussion dovetails with the discussion around free will.

In both cases, defining the actual thing under discussion is hard. If you can accurately predict a decision in advance, is that "free will"? If an organism reacts to a stimulus in an appropriate manner, is that "intelligence"?

In both cases, we're complex chemical organisms doing all of this with complex chemistry. If we rule out souls and spirits and similar, then it's just chemistry.

If we're following a predefined set of chemical rules in response to a set of stimuli, then how is that "free willed" or "intelligent"? The line between tropism and intelligence seems entirely arbitrary.

But on the other hand, we are made of meat. We experience free will and intelligence. We think, and make decisions, seemingly unrestricted by the method we use to do that. We are clearly intelligent, and clearly make decisions that we are apparently free to make.

  • anon291 2 days ago

    One's observation of one's own agency is really the only thing one can be assured of. And this is where arguments that seek to reduce intelligence to purely mechanistic processes break down. For sure, everything the article is saying is true, and indeed, the system could be classified as intelligent, but this is a wholly different question from 'agency' or 'free will'. Even exceptionally dumb people have free will, and exceptionally intelligent computers (ChatGPT, DeepSeek, et al) have no free agency.

    • NoGravitas a day ago

      > One's observations of one's own agency is really the only thing one can be assured of.

      Can one? One of the most disturbing short stories I've ever read is "Love Is the Plan, the Plan Is Death" by Alice Sheldon (writing as James Tiptree, Jr.). It is the first-person narrative of an unusually self-aware member of a non-technological intelligent species trying to make a life different from "the plan", the species' instincts, in the face of an oncoming slow-motion multigenerational catastrophe. His efforts end up, all for contextually rational and agentive reasons, reiterating his species' instinctual lifecycle.

      My takeaway from that, from other readings, and from self reflection, is that we are puppets that may or may not become aware of our strings; but if we cut them, we die.

    • LiquidHelium 2 days ago

      This is an argument from anecdote, so feel free to ignore me, but if you meditate for long enough or take certain substances, you can experience that the conscious experience we are having doesn't actually control things in the way we think it does - you don't actually think your thoughts, you just observe them, like we don't control the sounds we hear - and the same goes for everything else we do. The "only thing one can be assured of" is the experience, not the control of the experience.

      This is completely contradicted by the fact I could talk about that experience, which does imply some control from the observer to the physical world. Which makes the whole thing paradoxical. The only way I can square it is with my religious beliefs.

      • Miraltar a day ago

        I feel like what you are describing is like letting go of the wheel while driving. Then the car does its own thing but that doesn't mean you don't have control. It's just that you decided (or sometimes were forced) to let go. I agree that we're never fully in control but I don't think we're simple observers either.

    • koakuma-chan 2 days ago

      > Even exceptionally dumb people have free will

      “Free will” does not exist because the world is deterministic. In other words, if you have made a decision, you couldn’t have made any other decision, so there wasn’t any choice in the first place. A person’s IQ has nothing to do with this.

      • marcus_holmes a day ago

        I always get stuck on this: yes, I could not have made any other decision, but that decision was still mine to make.

        Being able to predict what someone is going to do - because you know them, how they think, and what decision they will make when presented with the choice - doesn't stop that from being their choice.

        The universe may be deterministic, but my personality is still my personality. It is encoded into the chemical make-up of my brain, and so that complex chemistry behaves in ways that align with my personality. My personality is shaped by my previous experience, but it's still my personality. I still choose, even though the universe can predict all of my choices, because the thing I do the choosing with is part of the process.

        And this seems very like the argument about intelligence and instinct. If I respond in a certain way to an event, is it because I am intelligent and "thinking" about my response, or is it instinctual and coded into my meat to respond this way? How would I tell the difference?

        Same with free will, how would I tell the difference between a choice I freely made and one I didn't?

      • NoGravitas a day ago

        One may object that at the quantum level, the world really is nondeterministic. Epicurus also argued this over 2000 years ago - that sometimes atoms "swerved" unpredictably in their movements, accounting for free will. Of course, the counterpoint to this argument is that randomness is not free will any more than determinism is; neither offers any space for agency as something that's causal rather than just experienced.

        • koakuma-chan a day ago

          Is there evidence that quantum mechanics is not deterministic?

          • anon291 a day ago

            Yes, the entirety of the math behind it which says that various quantities are unknowable except as a distribution of probabilities. Various experiments have shown these formulas to be 'real', as in, the alternative deterministic version is untenable unless you make new and equally disturbing assumptions.

            • koakuma-chan a day ago

              Sorry if I’m being ignorant, but let’s say you measured a qubit and it collapsed to a certain state. Now, hypothetically, if you rewind the time and re-measure it, would it collapse to a different state? I think it wouldn’t, and in this sense it is deterministic.

              • ianburrell a day ago

                You would see a different state. Quantum mechanics is random. It is possible that there is hidden state that determines the outcome, but Bell's theorem limits local hidden-variable theories, and it has been tested by experiment.
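
                The CHSH version of Bell's test makes that limit concrete. A minimal Python sketch (my own illustration, not from any comment here), assuming the textbook singlet-state correlation E(a, b) = -cos(a - b) and the standard angle choices:

```python
import math

# Quantum correlation for a singlet pair measured along directions a and b:
# E(a, b) = -cos(a - b). Any local hidden-variable theory must keep |S| <= 2.
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH angle choices (radians) that maximize the quantum violation.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, above the classical bound of 2
```

                Any local hidden-variable theory must satisfy |S| <= 2; quantum mechanics predicts 2*sqrt(2), and the experiments came out on the quantum side.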

                • koakuma-chan a day ago

                  I define deterministic as only having one possible outcome, predictable or not.

                  Here is a proof that quantum mechanics is deterministic:

                  Let there be a time `t`.

                  Let there be a qubit `q`.

                  At the time of `t`, we measured the qubit `q`, and the qubit `q` collapsed to the state of `a`.

                  At the time of `t`, we measured the qubit `q`, and the qubit `q` collapsed to the state of `b`.

                  `a` equals `b`.

                  Quantum mechanics is deterministic.

                  • ianburrell 21 hours ago

                    I think you just assumed the result. With quantum mechanics, a and b would differ if you could roll back time, or if you made the same measurement with the exact same state.

                    Quantum mechanics is indeterminate and probabilistic. Some of the most brilliant physicists, like von Neumann and Bell, worked to prove it, and the experimental tests of Bell's theorem won the 2022 Nobel Prize. Unless you have something that will win the Nobel Prize, your intuition is wrong.

                    Check out https://en.wikipedia.org/wiki/Quantum_indeterminacy
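
                    To make "indeterminate and probabilistic" concrete, here is a toy classical simulation of the Born-rule statistics (an illustration of the statistics only, not a model of the physics): a qubit prepared in the |+> state collapses to 0 or 1 with probability 1/2 each, so identically prepared systems yield different outcomes.

```python
import random

random.seed(42)  # seeded only so the demo is reproducible

# Born rule for the |+> state, (|0> + |1>)/sqrt(2):
# each measurement yields 0 or 1 with probability |1/sqrt(2)|^2 = 0.5.
def measure_plus_state():
    return 0 if random.random() < 0.5 else 1

# Prepare the *identical* state 10,000 times and measure each copy.
# Identical preparation, non-identical outcomes: that is the indeterminism.
outcomes = [measure_plus_state() for _ in range(10_000)]
print(outcomes.count(0) / len(outcomes))  # close to 0.5
```

                    The fraction of zeros hovers near 0.5, but no individual outcome is fixed by the preparation; "rewinding" corresponds to drawing a fresh sample from the same distribution, not replaying a stored result.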

    • infinitifall 2 days ago

      You haven't defined this property you call "agency". How then can you definitively determine whether you possess it or that someone else doesn't? The only thing I can be assured of is my own existence.

      • anon291 a day ago

        Well, agency is the feeling I have of being able to impact the world. I cannot know if you have it, but I have extrapolated it from my impression of you. For all I know, I'm the only one to exist.

ilaksh a day ago

Maybe the increased status management, self-pattern persistence, or general problem-solving abilities of intelligence come from processes that integrate data from multiple colony members, using the integration, broadcast, and storage of information over a series of immediate steps and long time frames to synthesize concepts and plans at a higher level than individuals can manage.

But the intelligent work is in the connections, exchange, and integration of information.

I think that security for human groups should be thought of in this context. Many strong communication links, or maybe a holistic network, are required to prevent sub-colonies from becoming "other".

lubujackson 2 days ago

Just read an interesting sci-fi novel grappling with this question very directly - Blindsight by Peter Watts, if anyone is interested.

  • red75prime 2 days ago

    The problem with sci-fi and this matter is that the author can portray anything that suits his/her ideas. Can Rorschach exist as described? Who knows.

    • Apocryphon a day ago

      Watts also believes in the Ganzfeld effect, as per Starfish, so his beliefs about the mind should definitely be taken with a grain of salt.

      • red75prime 15 hours ago

        What's wrong with the Ganzfeld effect? I can experience it personally even without a uniform visual field by suppressing saccades for about thirty seconds.

        Or, at least, I could. It seems I lost the knack of controlling saccades. When I was younger I was able to suppress saccades and induce nystagmus-like eyeballs movement.

        Ah, I still can do it after some tries. Amplitude of induced nystagmus seems to have diminished though. And it's more uncomfortable than it was.

tweaqslug 2 days ago

The article went straight from mind to intelligence before the first paragraph; admitting defeat before making a move. Intelligence is still riding the wave of an evolutionary hype cycle. Mind, on the other hand, is some supremely useful stuff.

amanaplanacanal 2 days ago

The interesting thing about mind (as far as I'm concerned) isn't intelligence, but consciousness.

Intelligence isn't well defined. According to our current understanding of physics, everything is cause and effect, which throws free will out the window, so what do we even mean by intelligence? Is it just when chains of cause and effect become so complex we can't understand them any more? And if so, why does consciousness even exist at all?

The only consistent theory of consciousness I'm aware of is panpsychism, which seems very unsatisfying.

I guess the other option is to divorce consciousness from physical matter entirely, but then we have kind of opened ourselves up to almost any kind of woo.

It really is a hard problem.

  • red75prime 2 days ago

    I'm happy with "Consciousness is a platonic form of self-reflective information processing, thanks to all the simplifications this processing has to do to reflect the messy physical processes underlying it."

  • messe 2 days ago

    > The only consistent theory of consciousness I'm aware of is panpsychism, which seems very unsatisfying.

    Integrated Information Theory seems interesting, at least, but far from flawless. Like panpsychism though, I don't think it's falsifiable.

    • skissane 2 days ago

      > Integrated Information Theory

      To me, it seems like a bit of a trick - take a philosophical question, propose an answer, stuff some mathematics into the answer, maybe even a few minor empirical predictions, make people think those added bits make the answer more likely to be correct and more respectable than competing answers which don’t do that - I don’t think that actually works though

      > Like panpsychism though, I don't think it's falsifiable.

      Well, I don’t think any theory in this area is falsifiable. Either you give up demanding falsifiability or embrace agnosticism (we don’t know and we maybe never will)

      Maybe give up on falsifiability, since presenting it as an absolute must-have is self-defeating (the idea that falsifiability is a must-have is itself unfalsifiable)

  • skissane 2 days ago

    > According to our current understanding of physics, everything is cause and effect,

    We don’t know whether the apparent indeterminism of QM ultimately reduces to determinism or not. It depends on which interpretation of QM you prefer, and none of them has any strong empirical support

    > which throws free will out the window,

    As well as assuming deterministic QM, you also assume incompatibilism. Reject incompatibilism and you can still have free will even if physical reality turns out to be 100% deterministic

    > so what do we even mean by intelligence?

    People disagree greatly as to what “intelligence” means, but this is the first time I’ve seen anyone suggest that the issue turns on free will. Usually people present “definition of intelligence” and “does free will exist?” as questions having a significant degree of orthogonality

    > Is it just when chains of cause and effect become so complex we can't understand them any more?

    Mediaeval philosophers commonly viewed intelligence or intellect as a power, the power to engage in conceptual thought. A “power” is a causal concept - it is the possibility of causing something - but possibility, not necessity. It is orthogonal to the question of free will, so long as our position on free will enables us to meaningfully speak of “things we could have done but never actually did”, “thoughts we could have thought but never actually thought”…

    It was our dog’s birthday the other day. She’s a smart dog, but she’s never going to understand the concept of a birthday… she enjoyed some special treats but she’ll never understand why she got them on that occasion. She has intelligence, but not the specifically conceptual sort of intelligence mediaeval philosophers were talking about. And free will doesn’t enter into that

    > And if so, why does consciousness even exist at all?

    I think intelligence and consciousness are distinct issues. Consciousness is having subjective experiences; intelligence is having thought processes of a certain kind. One entity might have consciousness without intelligence, another might have intelligence without consciousness.

    > The only consistent theory of consciousness I'm aware of is panpsychism, which seems very unsatisfying.

    I don’t think the competitors to panpsychism are necessarily inconsistent. Of course, if you reject panpsychism, you need a non-trivial criterion to decide what is conscious - and people will ask you to justify that criterion - and maybe all you can say is that it is axiomatic - and I get why proposing axioms can feel unsatisfying, but it isn’t strictly speaking inconsistent, assuming your axioms are consistent with each other. By the Münchausen trilemma, every quest for justification must end in either infinite regress, circularity, or axioms - and if axioms feel unsatisfying, infinite regress and circularity are just as unsatisfying - so maybe we are just doomed to feel unsatisfied.

    > I guess the other option is to divorce consciousness from physical matter entirely, but then we have kind of opened ourselves up to almost any kind of woo.

    I don’t know what you mean by “divorce… entirely”. Do dualists (of whatever kind) do that? Do idealists? From my own idealist viewpoint, I reject the claim that idealism necessarily entails “almost any kind of woo”, and I think the thought that it does relies on misunderstanding or misrepresenting idealism, or else on confusing certain versions of it which maybe do do that with other versions which don’t.

Jordan_Pelt 2 days ago

I'm probably a crackpot, but I'm convinced it's the other way around--matter emerges from mind. The only refutation I'm familiar with is Samuel Johnson kicking a rock, which I don't find very persuasive.

standardly a day ago

Someone hasn't read Julian Jaynes...

nilslindemann 2 days ago

Mind emerges from matter? I thought it was the other way around.

keernan a day ago

When I finished reading this article, several of its resource articles, and the HN comments, I returned to my HN feed, and a few posts down I came across: DARPA Large Bio-Mechanical Space Structures. What an interesting intersection.

https://news.ycombinator.com/item?id=43185769

keernan a day ago

Meet the Electrome. It Can Turn You Into an Assassin. In “We Are Electric,” Sally Adee explores the body’s capacity for electricity; the results can be shocking.

NYT 2023 https://archive.ph/DpMMr

kayo_20211030 2 days ago

The whole proposal can be distilled to Connections => Mind. We've been struggling with the causal effect, even the definition, of what the `=>` means in humans. It seems a long shot that the study, and breathy coverage, of plant signals will illuminate much. Maybe it will, but I'm not holding my breath.

carlosjobim a day ago

What is the god damned problem with writers today?

> From a snarl of roots that grip dry, shallow soil, the knobbly trunk of an ancient olive tree twisted into a surprisingly lush crown of dense, silvery-green leaves. Far above, the retrofuturistic pattern of a geodesic dome framed the blue sky outside. Dan Ryan considered the tree: “It’s probably close to 1,800 years old.”

Why do articles always have to start this way? These writers are writing the blandest and most boring clichés imaginable and everybody hates it, yet they keep doing it. Is it so fucking important for their egos to be perceived as some kind of antiquated stereotypical writer that they have to continue to humiliate themselves and their readers with this, just to impress who?

No wonder everybody is watching short form stuff on TikTok, that gets to the point.

No wonder people are listening to three hour podcasts, where the people who are being interviewed can talk uninterrupted, instead of having their quotes salted and peppered between some musings from the writer.

Where are the writers that make articles for normally intelligent people to read, that aren't filled with fluff, but aren't filled with equations either? Are these all making YouTube videos now?

scotty79 2 days ago

New proposal for how digestion emerges from the stomach.

causality0 a day ago

> And they believe that if we can bring ourselves to dramatically reconsider what we think we know about it, we will end up with a much better concept of how to restabilize the balance between human and nonhuman life amid an ecological omnicrisis that threatens to permanently alter the trajectory of every living thing on Earth.

Do we really have to resort to total bullshit claims like this in order to make people care about this subject? Knowing what is and is not intelligent has fuck-all to do with not destroying the environment we as a species rely on for survival. The people and forces at work care not one whit. You could prove that chickens can write symphonies and it would make no difference.

ripped_britches 2 days ago

Can we retitle this? There is nothing new here besides long musings.

  • mandmandam 2 days ago

    There is, if you read it.

    > Which brings us to the most striking idea — that some types of electrical oscillations could mediate an experience of self.

    > In 2021, Hanson found that similar electrical activity — spontaneous low-frequency oscillations — is evident across many different organisms, from E. coli to humans. She concluded that across a diverse range of creatures, the oscillations may have a shared function: constructing a single organismal whole from many parts.

    That's fascinating stuff which deserves the title (and the long context). 2021 is pretty damn new as far as theories of mind go.

    • card_zero 2 days ago

      Oh that's the key point. I barely noticed it. OK, well, this is very literally about the first steps in how brains evolved. It's akin to theories of abiogenesis like the one involving clay crystals "reproducing". But it's steep to say this is about minds, which only works if we accept the various parts of the article that urge us to stop being so egotistical and anthropocentric as to expect that a mind should be capable of doing something clever.

      • mandmandam 2 days ago

        I think you missed more than the key point.

        > this is very literally about the first steps in how brains evolved

        Well, no; they even highlighted this section:

        > “Intelligence, according to some, is a biological function that evolved not with humans or brains but way back in some form to the earliest organisms, a fundamental biological function like respiration.”

        It would definitely connect to how brains evolved, but the article and the main idea are a lot broader than that.

        > It's akin to theories of abiogenesis like the one involving clay crystals "reproducing"

        Well no; unless the clay crystals are producing interesting electrical oscillations.

        > it's steep to say this is about minds, which only works if we accept the various parts of the article that urge us to stop being so egotistical and anthropocentric as to expect that a mind should be capable of doing something clever.

        It's not clever to live for 2,000 years? And, are only clever people worthy of the label of having a mind? ... Because I know an awful lot of people with no interest in doing anything particularly clever.

        And yes, your comment is anthropocentric... Definitively so. Whether something has a "mind" depends on the definition used, and if you define that as "what humans call clever", then yeah you're being anthropocentric.

        • card_zero 2 days ago

          Sure, we disagree on that, anthropocentrism, which is why you think the part you say they highlighted is significant. I think it's hot air.

          (I meant very loosely akin, because it's another example of some complex phenomenon emerging from matter.)

    • simonh 2 days ago

      Is there any reason why we should associate oscillations with any such thing though? Lots of systems oscillate. It could well be that oscillations are a common emergent property of many systems, including consciousness, rather than consciousness being emergent from oscillations.

      It seems to me that a key characteristic of consciousness is its informational character. It is representational of a cognitive state; it's interpretive, introspective. It seems to me that for any system to be conscious it must at a minimum be generating and interpreting representational structures. I'm sure that's nowhere even close to being a sufficient criterion, but I don't see how it can't be a necessary one.

      I think that's key to subjectivity, because different systems can have radically different ways to represent and interpret even the same phenomenon they have representations of. The details of the representational structure, and its network of associations with other representational structures in the system, are intrinsically tied to details of the system processing and interpreting the representation.

    • rzz3 2 days ago

      The right way to include all of this context would be to start with the lead and then back it up with the context. I can’t read stuff written like this.

      • mandmandam 2 days ago

        I agree that they buried the lede, but I'll defend the article anyway because the subject matter is still worth the effort imo.

    • bloomingkales 2 days ago

      If I were to say this is just a coincidence, how would you respond?

      If it’s not:

      https://www.sciencedirect.com/science/article/abs/pii/S01676...

      If everything is pulsing with electricity and they are syncing to something global …

      From the paper:

      > Examples of biological systems that have been modeled using PCOs include cardiac pacemakers [2], crickets that chirp in unison [3], and rhythmic flashing of fireflies [4]
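
      The synchronization those systems share can be sketched with the Kuramoto model, a standard toy for coupled oscillators (my own illustration, not taken from the linked paper): given strong enough coupling, oscillators with different natural frequencies pull one another into a common rhythm.

```python
import math
import random

def kuramoto(n=50, coupling=2.0, steps=2000, dt=0.05, seed=1):
    """Return the order parameter r (0 = incoherent, 1 = fully synchronized)."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]  # random phases
    omega = [rng.gauss(0.0, 0.1) for _ in range(n)]              # natural frequencies
    for _ in range(steps):
        # Mean-field form: r and psi summarize the population's collective rhythm.
        c = sum(math.cos(t) for t in theta) / n
        s = sum(math.sin(t) for t in theta) / n
        r, psi = math.hypot(c, s), math.atan2(s, c)
        # Each oscillator is pulled toward the mean phase, scaled by coherence r.
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    c = sum(math.cos(t) for t in theta) / n
    s = sum(math.sin(t) for t in theta) / n
    return math.hypot(c, s)

print(kuramoto())              # near 1.0: the population has synchronized
print(kuramoto(coupling=0.0))  # stays low: no coupling, no synchrony
```

      With coupling well above the critical value, r climbs toward 1; with coupling set to zero it stays near the incoherent baseline. Whether SELFOs work anything like this is exactly what's in question; the point is only that global rhythms emerging from locally coupled parts are cheap to get.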

      • mandmandam a day ago

        > If I were to say this is just a coincidence, how would you respond?

        I'd say you probably don't know enough to say that with such certainty, and should probably avoid jumping to conclusions lest you be Dunning-Krugered.

        I would then encourage you to ask ChatGPT something like: "How are SELFOS (Spontaneous electrical low-frequency oscillations) different to PCOs (pulse-coupled oscillators)".

        Tl;dr - SELFOs and PCOs aren't the same.

        • bloomingkales a day ago

          > I'd say you probably don't know enough to say that with such certainty, and should probably avoid jumping to conclusions lest you be Dunning-Krugered.

          I'll absolutely take that.

    • unsupp0rted 2 days ago

      This would be a mechanism for how, in sci-fi, they scan a nebula and say "signs of intelligence" or something like that. "Our scanners have picked up the telltale oscillations".

      • mandmandam 2 days ago

        Exactly! And even a possible mechanism for the Universal Translator (as long as you don't Darmok your Tanagras too hard).