musicale 3 days ago

I've always thought that Searle's argument relied on misleading and/or tautological definitions, and I liked Nils Nilsson's rebuttal:

"For the purposes that Searle has in mind, it is difficult to maintain a useful distinction between programs that multiply and programs that simulate programs that multiply. If a program behaves as if it were multiplying, most of us would say that it is, in fact, multiplying. For all I know, Searle may only be behaving as if he were thinking deeply about these matters. But, even though I disagree with him, his simulation is pretty good, so I’m willing to credit him with real thought."

I also find Searle's rebuttal to the systems reply to be unconvincing:

> If he doesn't understand, then there is no way the system could understand because the system is just a part of him.

Perhaps the overall argument is both more and less convincing in the age of LLMs, which are very good at translation and other tasks but still make seemingly obvious mistakes. I wonder (though I doubt) whether Searle might have been convinced if, by following the instructions, the operator of the room ended up creating, among other recognizable and tangible artifacts, an accurate scale model of the city of Beijing and an account of its history, and then referred to both in answering questions. (I might call this the "world model" reply.)

In any case, I'm sad that Prof. Searle is no longer with us to argue with.

https://news.ycombinator.com/item?id=45563627

  • tug2024 2 days ago

    Searle’s argument is like a captain claiming his ship isn’t sailing because the compass is inside a cabin, not on deck.

    Nilsson points out: if the vessel moves as if it’s cutting through waves, most sailors would say it’s sailing. Even Searle’s “deep thought” may just be a convincing simulation, but the wake is real enough.

    The systems reply? Claiming the ship can’t navigate because the captain doesn’t understand the ropes feels like denying the ocean exists while staring at the harbor.

    In the age of LLMs, the seas are charted better than ever, yet storms of obvious mistakes and rows of confusion still appear, and misguided, misled folk still abound. Perhaps a model city of Beijing, with its old town, new streets, and maps, could sway Searle's readers in the 21st century!

    Alas, the old captain has sailed into the horizon, leaving the debate with the currents.

fellowniusmonk 3 days ago

Meaning bootstrapped consciousness; just ask DNA and RNA.

I don't get any of these anthropocentric arguments. Meaning predates humanity and consciousness; that's what DNA is. Meaning primitives are just state changes, the same thing as physical primitives.

Syntactic meaning exists even without an interpreter, in the same way physical "rock" structures existed before there were observers; it just picks up causal leverage when there is one.

Only a stateless universe would have no meaning. Nothing doesn't exist, meaninglessness doesn't exist; these are just abstractions we've invented.

Call it the logos if that's what you need, or call it field perturbations: reality has just been traveling up the meaning-complexity chain, but complex meaning is just the structural arrangement of meaning simples.

Stars emit photons; humans emit complex meaning. Maybe we'll be part of the causal chain that solves entropy; until then, we are the only empirically observed, random-walk write heads of maximally complex meaning in the universe.

We are super rare and special as far as we've empirically observed, but that doesn't mean we get our own weird metaphysical (if that even exists) carve-out.

  • marshfarm 3 days ago

    There's much more meaning than can be loaded into statements, thoughts, etc. And conscious will is a post-hoc after effect.

    Any computer has far less access to the meaning load we experience, since we don't compute thoughts: thoughts aren't about things, there is no content to thoughts, and there are no references, representations, symbols, grammars, or words in brains.

    Searle is only at the beginning of this refutation of computers; we're much further along now.

    It's just actions, syntax and space. Meaning is both an illusion and fantastically exponential. That contradiction has to be continually made correlational.

    • fellowniusmonk 3 days ago

      Meaning is an illusion? That's absurdly wrong; it's a performative contradiction to even say such a thing. You might not like semantic meaning, but it, like information, physically exists. Even if you're a solipsist you can't deny state change, and state change is a meaning primitive; meaning primitives are one thing that must exist.

      This isn't woo, it's just empirical observation, and no one is capable of credibly denying state change.

      • marshfarm 2 days ago

        The idea of meaning is contradictory; it's not strictly an illusion. There's a huge difference. State changes mean differences, but they don't ensure meaning. This is an obvious criterion. We have tasks, and the demands are variable. We can assign meaning, but where is the credibility? Is it ever objectively understood? No. That's contradictory.

        You have to look at mental events and grasp not only what they are, both material and process, but how they come to happen; they're both prior and post-hoc, etc.

        I study meaning in the brain. We are not sure if it exists, and the meaning we see in events and tasks comes at a massive load. Any one event can have 100s, even 1000s, of meaningful changes to self, environment, and others. That's contradictory. Searle is not even scratching the surface of the problem.

        https://arxiv.org/vc/arxiv/papers/1811/1811.06825v2.pdf

        https://www.frontiersin.org/journals/psychology/articles/10....

        https://pubmed.ncbi.nlm.nih.gov/39282373/

        https://aeon.co/essays/your-brain-does-not-process-informati...

        • fellowniusmonk 2 days ago

          What does ensure meaning? Interpretation?

          If that's your position, that's where we disagree: state changes in isolation and state changes in sequence are all meaning.

          State change is the primitive of meaning, starting at the fermion. There is no such thing as meaninglessness, just uncomplex, non-cohered meaning primitives; the moment they start to be associated through natural processes, you have increasingly complex meaning sequences and structures through coherence.

          We move up the meaning ladder: high-entropy meaning (RNG) is decohered primitives; low-entropy meaning is maximally cohered meaning like human speech or DNA.

          Meaning interactions (quantum field interactions) create particles and information. Meaning is upstream, not downstream.

          Now, people hate when you point out that semantic/structural meaning is meaning, but it's the only non-fuzzy definition I've ever seen, and with complexity measures we can reproducibly examine emissions objectively for semantic complexity across all emitter types.

          The reason everyone has such crappy and contradictory interpretations of meaning is that they are trying to turn a primitive into something that is derived or emergent, and it simply isn't; you can observe the chain from low to high complexity without having to look at human structures.

          This meaning predates consciousness; even if you are a dualist, you have to recognize that DNA and RNA bootstrap each "brain receiver" structure.

          Meaning exists without an interpreter. The reason so many people get caught up in the definition is that they can't let go of anthropocentric views of meaning; meaning comes before consciousness, logic, and rationality, in the same way the atom comes before the arrangement of atoms rock-wise.

          Even RNG, say the random emissions from stars, which is maximally decohered meaning, has been made meaningful to the point of extreme utility by humans via encryption.

          Now, you may be a dualist, and that's fine, the physical reality of state change doesn't preclude dualism, it sets a physical empirical floor, not an interpretive ceiling.

          Even some very odd complaints about human interpretation, like still images being interpreted as movement somehow being a problem, don't hold up: in the viewing frame you are 100% seeing state changes, and all you need for meaning are state changes. Each frame is still, but the photon stream carried to our eyeballs is varying, and that's all you need.

          Anyway, you make meaning. You are a unique write head in the generation of meaning. We can't ex ante calculate how important you are for our causal survival, because the future stretches out for an indeterminate time and we haven't yet ruled out that entropy can be reversed in some sense, so you are an important meaning generator that needs to be preserved; our very species, the very universe, may depend on the meaning you create in the network. (Is reversing entropy even locally likely? I doubt it, but we haven't ruled it out yet; it's still early days.)

          • marshfarm 2 days ago

            Without being a dualist, we can say from neurobiology, ecological psych, coord dynamics, neural reuse that meaning isn't simply upstream.

            Technically it can't be, because the language problem is post-hoc.

            You're an engineer so you have a synthetic view of meaning, but it has nothing to do with intelligence. I'd study how you gained that view of meaning.

            A meaning ladder is arbitrary, quantum field dynamics can easily be perceived as Darwinism, and human speech isn't meaningful, it's external and arbitrary and suffers from the conduit metaphor paradox. The meaning is again derived from the actual tasks, scientifically no speech act ever coheres the exact same mental state or action-syntax.

            Sorry you're using a synthetic notion of meaning that's post-hoc. Doesn't hold in terms of intelligence. Not even Barbour (who sees storytelling in particles) et al would assign meaning to Fermions or other state changes. It's good science fiction, but it's not science.

            In neuroscience we call isolated upstream meaning "wax fruit." You can see it is fruit, but bite into it, the semantic is tasteless (in many dimensions).

            • fellowniusmonk 2 days ago

              [flagged]

              • Marshferm 2 days ago

                Scientists hacking engineers who pretend meaning is in fermions is one of the great experiences here. Don't sell it short, engineer. Science is coming to overtake binary. And if you ever get to sign a paper for a presidential session at a top-level conference, you'll know what it's like to practice science and not debate ideas merely in social media.

jedberg 2 days ago

I knew this title looked familiar! It was required reading when I took Searle's course. I always thought it funny that CogSci majors (basically the AI major at Berkeley in the 90s) were required to take a course from a guy who strongly believed that computers can't think.

It would be like making every STEM major take a religion course.

  • countrymile 2 days ago

    Not sure that equivalence works, cognitive science doesn't require that people believe that computers can think; and STEM doesn't require that people think of the world in a purely mechanistic way - e.g. historically, many scientists were looking for the rules of a lawgiver.

    Apologies if I'm misreading you here.

  • actionfromafar 2 days ago

    Not a bad idea, actually. Religion is a big deal and it can only help to know the basics of how it works. Some of the fanboi behavior common in tech is at least religion adjacent.

generuso 3 days ago

It all started with ELIZA. Although Weizenbaum, the author of the chatbot, always emphasized that the program was performing a rather simple manipulation of the input, mostly based on pattern matching and rephrasing, the popular press completely overhyped the capabilities of the program, with some serious articles debating whether it would be a good substitute for psychiatrists, etc.
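
For readers who have only heard of ELIZA second-hand, here is a minimal sketch in the spirit of its pattern matching and rephrasing (the rules are invented for illustration and are not Weizenbaum's actual DOCTOR script); the point is how little machinery produced the effect that so impressed the press.

```python
import re

# Invented rules in the spirit of ELIZA: match a pattern, swap pronouns,
# and rephrase the match as a question. Not Weizenbaum's actual script.
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r".*", "Please go on."),
]

PRONOUNS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(phrase):
    return " ".join(PRONOUNS.get(word, word) for word in phrase.split())

def respond(sentence):
    s = sentence.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, s)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am sad about my job."))
# -> Why do you say you are sad about your job?
```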

So, many people, including Searle, wanted to push back on reading too much into what the program was doing. This was a completely justified reaction -- ELIZA simply lacked the complexity which is presumably required to implement anything resembling flexible understanding of conversation.

That was the setting. In his original (in)famous article, Searle started with a great question, which went something like: "What is required for a machine to understand anything?"

Unfortunately, instead of trying to sketch out what might be required for understanding, and what kinds of machines would have such facilities (which of course is very hard even now), he went into dazzling the readers with a "shocking" but rather irrelevant story. This is how stage magicians operate -- they distract a member of the audience with some glaring nonsense, while stuffing their pockets with pigeons and handkerchiefs. That is what Searle did in his article -- "if a Turing Machine were implemented by a living person, the person would not understand a bit of the program that they were running! Oh my God! So shocking!" And yet this distracted just about everyone from the original question. Even now philosophers have two hundred different types of answers to Searle's article!

Although one could and should have explained that ELIZA could not "think" or "understand" -- which was Searle's original motivation -- this of course doesn't imply any kind of fundamental principle that no machine could ever think or understand; after all, many people agree that biological brains are extremely complex, but nevertheless governed by ordinary physics, i.e., "machines" in that sense.

Searle himself was rather evasive regarding what exactly he wanted to say in this regard -- from what I understand, his position has evolved considerably over the years in response to criticism, but he avoided stating this clearly. In later years he was willing to admit that brains were machines, and that such machines could think and understand, but somehow he still believed that man-made computers could never implement a virtual brain.

31337Logic 3 days ago

RIP John Searle, and thanks for all the fish.

jmkni 3 days ago

Long read, I'm sure it's fascinating, will get through it in time

Just Googling the author, he died last month sadly

  • measurablefunc 3 days ago

    It's the responses & counter-responses that are long. The actual article by Searle is only a few pages.

BadThink6655321 3 days ago

A ridiculous argument. Turing machines don't know anything about the program they are executing. In fact, Turing machines don't "know" anything. Turing machines don't know how to fly a plane, translate a language, or play chess. The program does. And Searle puts the man in the room in the place of the Turing machine.
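
To make that concrete, here is a minimal sketch of a Turing-machine-style executor (a toy of my own, not anything from the article). The loop is the same code no matter what the transition table "means": the made-up table below merely flips bits, but the identical loop would grind through a chess program or a translator with equal indifference, which is the position the man in the room occupies.

```python
def run_machine(table, tape, state="start", blank="_"):
    """table maps (state, symbol) -> (new_symbol, move, new_state)."""
    pos = 0
    while state != "halt":
        if pos == len(tape):             # grow the tape to the right if needed
            tape.append(blank)
        new_symbol, move, state = table[(state, tape[pos])]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return tape

# A made-up program: invert every bit, halt at the first blank.
FLIP = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_machine(FLIP, list("1011")))   # ['0', '1', '0', '0', '_']
```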

  • wk_end 3 days ago

    So what, in the analogy, would be the program? Surely it's not the printed rules, so I think you're making the "systems reply" - that the program that knows Chinese is some sort of metaphysical "system" that arises from the man using the rules - which is the first thing Searle tries to rebut.

    > let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.

    In other words, even if you put the man in place of everything, there's still a gap between mechanically manipulating symbols and actual understanding.

    • mannykannot 3 days ago

      People are doing things they personally do not understand, just by following the rules, all the time. One does not need to understand why celestial navigation works in order to do it, for example. Heck, most kids can learn arithmetic (and perform it in their heads) without being able to explain why it works, and many (including their teachers, sometimes) never achieve that understanding. Searle’s failure to recognize this very real possibility amounts to tacit question-begging.

      • TheOtherHobbes 2 days ago

        Yes, it's a wrong-end-of-the-telescope kind of answer.

        A human simulates a Turing machine to do... something. The human is acting mechanically. So what?

        If there's any meaning, it exists outside the machine and the human simulating it.

        You need another human to understand the results.

        All Searle has done is distract everyone from whatever is going on inside that other human.

    • rcxdude 2 days ago

      In that case you've basically just created a split-brain situation (I mean the actual phenomenon of someone who's had the main connection between the two hemispheres of the brain severed). There's one system, which is the man plus the rules he has internalized, and there's what the man himself consciously understands, and there's no reason the two are necessarily communicating in some deeper way, in much the same way that a split-brain patient may be able to point to something they see in one side of their vision when asked but be unable to say what it is.

      (Also, IMO, the question of whether the program understands Chinese mainly depends on whether you would describe an unconscious person as understanding anything.)

      I also can't help but think of this sketch when this topic comes up (even though, importantly, it is not quite the same thing): https://www.youtube.com/watch?v=6vgoEhsJORU

    • glyco 2 days ago

      You and Searle both seem to not understand a simple, obvious fact about the world, which is that (inhomogenous) things don't have the same thing inside. A chicken pie, for example, doesn't have any chicken pie inside. There's chicken inside, but that's not chicken pie. There's sauce, vegetables and pastry, but those aren't chicken pie either. All these things together still may not make a chicken pie. The 'chickenpieness' of the pie is an additional fact, not derivable from any facts about its components.

      As with pie, so with 'understanding'. A system which understands can be expected to not contain anything which understands. So if you find a system which contains nothing which understands, this tells you nothing about whether the system understands[0].

      Somehow both you and Searle have managed to find this simple fact about pie to be 'the grip of an ideology' and 'metaphysical'. But it really isn't.

      [0] And vice-versa, as in Searle's pointlessly overcomplicated example of a system which understands Chinese containing one which doesn't containing one which does.

    • BadThink6655321 3 days ago

      Only because "actual understanding" is ambiguously defined. Meaning is an association of A with B. Our brains have a large associative array with the symbols for the sound "dog" is associated with the image of "dog' which is associated with the behavior of "dog" which is associated with the feel of "dog", ... We associate the symbols for the word "hamburger" with the symbols for the taste of "hamburger", with ... We undersand something when our past associations match current inputs and can predict furture inputs.

      • siglesias 3 days ago

        "Actual understanding" means you have a grounding for the word down to conscious experience and you have a sense of certainty about its associations. I don't understand "sweetness" because I competently use the word "sweet." I understand sweetness because I have a sense of it all the way down to the experience of sweetness AND the natural positive associations and feelings I have with it. There HAS to be some distinction between understanding all the way down to sensation and a competent or convincing deployment of that symbol without those sensations. If we think about how we "train" AI to "understand" sweetness, we're basically telling it when and when not to use that symbol in the context of other symbols (or visual inputs). We don't do this when we teach a child that word. The child has an inner experience he can associate with other tastes.

        • bonobo 2 days ago

          You mentioned experience, but it's not clear to me if you mean that it's a requirement for "actual understanding." Is this what you're saying? If so, does that mean a male gynecologist doesn't have an "actual understanding" of menstrual cycles and menopause?

          I think about astronomers and the things they know about stars that are impossible to experience even from afar, like sizes and temperatures. No one has ever seen a black hole with their own eyes, but they read a lot about it, collected data, made calculations, and now they can have meaningful discussions with their peers and come to new conclusions from "processing and correlating" new data with all this information in their minds. That's "actual understanding" to me.

          One could say they are experiencing this information exchange, but I'd argue we can say the same about the translator in the Chinese room. He does not have the same understanding of Chinese as we humans do, associating words to memories and feelings and other human experiences, but he does know that a given symbol evokes the use of other specific symbols. Some sequences require the usage of lots of symbols, some are somewhat ambiguous, and some require him to fetch a symbol that he hasn't used in a long time, maybe doesn't even know where he stored it. To me this looks a lot like the processes that happen inside our minds, with the exception that his form of "understanding" and the experiences this evokes in him are completely alien to us. Just as an AGI's would possibly be.

          I'm not comfortable looking at the translator's point of view as if he's analogous to a mind. To me he's the correlator, the process inside our minds that makes these associations. This is not us; it's not under our conscious control; from our perspective it just happens, and we know today it's a result of our neural networks. We emerge somehow from this process. Similarly, it seems to me that the experience of knowing Chinese belongs to the whole room, not the guy handling symbols. It's a weird conclusion; I still don't know what to think of it though...

          • siglesias a day ago

            When I say "experience," I mean a sufficient grounding of certainty about what a word means, which includes how it's used, how it relates to the world that I'm experiencing, but also the mood or valence the word carries. I can't feel your pain, or maybe you've been to a country that I haven't been to and you're conveying that experience to me. Maybe you've been to outer space. I'm not saying to understand you I need to literally have had the exact experience as you, but I should be able to sufficiently relate to the words you are saying in order to understand what you are saying. If I can't sufficiently relate, I say I don't understand. You can see how this differs from what an AI is doing. The AI is drawing on relationships between symbols, but it doesn't really have a self, or experience, etc etc.

            The process of fetching symbols, as you put it, doesn't feel at all like what I do when somebody asks me what it was like to listen to the Beatles for the first time and I form a description.

        • mannykannot 3 days ago

          The irony here is that performing like an LLM is the very thing that Searle has the human operator do. If it is the sort of interaction that does not need intelligence, then no conclusion about the feasibility of AGI can be drawn from contemplating it. Searle's arguments have been overtaken by technology.

          • siglesias 3 days ago

            Can you expand on this? The thought experiment is just about showing that there is more to having a mind than having a program. It's not an argument about the capabilities of LLMs or AGI. Though it's worth noting that behavioral criteria continue to lead people to overestimate the capabilities or promise of AI.

            • mannykannot 2 days ago

              LLMs are capable of performing the task specified for the Chinese room over a wide range of complex topics and for a considerable length of time. While it is true that their productions are wrong or ill-conceived more often than one would expect from a well-informed human, and sometimes look like the work of a rather stupid one, the burden now rests on Searle's successors to show that every such interaction is purely syntactic.

ogogmad 3 days ago

[flagged]

  • lo_zamoyski 3 days ago

    [flagged]

    • dang 3 days ago

      You can't attack others like this on HN, regardless of how wrong they are or you feel they are. It's not what this site is for, and destroys what it is for.

      Btw, it's particularly important not to do this when your argument is the correct one, since if you happen to be right, you end up discrediting the truth by posting like this, and that ends up hurting everybody. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...

      If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.

    • wk_end 3 days ago

      > Gödel incompleteness

      I agree with your comment, FWIW - I have no idea what OP is trying to demonstrate - but to maybe suggest some context: Gödel incompleteness is a commonly suggested "proof" as to why computers can't be intelligent in the way that humans can, because (very very roughly) humans can "step out" of formal systems to escape the Gödel trap. Regardless of your feelings about AI it's a silly argument; I think possibly the popularizer of this line of thinking was Roger Penrose in "The Emperor's New Mind".

      I haven't re-read Searle since college but as far as I recall he never brings it up.

    • BadThink6655321 3 days ago

      What about Gödel incompleteness? Computers aren't formal systems. Turing machines have no notion of truth. Their programs may. So a program can have M > N axioms, in which case one of the N+1 axioms recognizes the truth that G ≡ ¬ Prov_S(⌜ G ⌝), because it was constructed to be true. Alternatively, construct a system that generates "truth" statements, subject to further verification. After all, some humans think that "Apollo never put men on the moon" is a true statement.
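
      For anyone who wants the construction being gestured at spelled out, the textbook form (a sketch, nothing specific to this thread) is roughly:

      ```latex
      % For a consistent, recursively axiomatized theory $S$ extending
      % arithmetic, the diagonal lemma gives a sentence $G$ such that
      \[
        S \vdash \; G \;\leftrightarrow\; \neg\,\mathrm{Prov}_S(\ulcorner G \urcorner).
      \]
      % If $S$ is consistent, then $S \nvdash G$; if $S$ is $\omega$-consistent,
      % then $S \nvdash \neg G$ either. Adding $G$ as a new axiom yields a
      % stronger theory $S + G$, which has its own Goedel sentence, and so on.
      ```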

      As for intentionality, programs have intentionality.

    • oh_my_goodness 3 days ago

      Mostly agree, but the Chinese Room doesn’t prove anything about whether the algorithm understands Chinese. It’s a bait and switch.

      • 31337Logic 3 days ago

        The CRA proves that the algorithm by itself can never understand anything by virtue of solely symbol manipulation. Syntax, by itself, is insufficient to produce semantics.

        • mannykannot 3 days ago

          It proves nothing, and has, in fact, been overtaken by events (see another of my replies [1]).

          As for its alleged conclusion, I love David Chalmers’ parody “Cakes are crumbly, recipes are syntactic, syntax is not sufficient for crumbliness, so implementation of a recipe is not sufficient for making a cake.”

          [1] https://news.ycombinator.com/item?id=45664226

          • oh_my_goodness 2 days ago

            But your Chalmers line is also literally true. If you're a Martian on Mars and you don't have cake ingredients available, the recipe won't work. If you're on Earth and you have the ingredients, it works fine. Even if (like me) you have almost no understanding of what the ingredients are, how they are made, or why the recipe works.

            • mannykannot 2 days ago

              If I'm following you correctly, you are saying that the conclusion of Chalmers' parody is actually correct, as having a recipe is indeed not sufficient to successfully bake a cake: you will not succeed without the ingredients, for example.

              This is indeed true, but we should bear in mind that Chalmers' parody is just that: a parody, not a rigorous argument. It seems clear that, if Chalmers wanted to make it more rigorous, he would have concluded with something like "therefore, even if you have all the prerequisites for baking a cake (ingredients, tools, familiarity with basic cooking operations...), no recipe is sufficient to instruct you in successfully completing the task." This would be a better argument, but a flabbier, less to-the-point parody, and it is reasonable for Chalmers to leave it to his readers to get his point.

              • oh_my_goodness a day ago

                Sure. I'm not saying it's a bad parody, or that bulking it up with footnotes would improve it.

                I'm still coming to grips with the idea that LLMs seem to translate pretty well without understanding anything.

                • mannykannot a day ago

                  The question of whether, or to what extent, LLMs understand anything is an interesting one, tied up with our difficulty in saying what 'understanding' means, beyond broad statements along the lines of it being an ability to see the implications of our knowledge and use it in non-rote and creative ways.

                  The most honest answer to these questions I can give is to say "I don't know", though I'm toying with the idea that they understand (in some sense) the pragmatics of language use, but not that language refers to an external world which changes according to rules and causes that are independent of what we can and do say about it. This would be a very strange state to be in, and I cannot imagine what it would be like to be in such a state. We have never met anybody or anything like it.

                  • oh_my_goodness a day ago

                    Well ... bright engineering students who have very little real-world experience are a little bit like it.

          • measurablefunc 3 days ago

            Simulation of a recipe is not sufficient for crumbliness, and simulation is the only thing a computer can do at the end of the day. It can perform arithmetic operations & nothing else. If you know of a computer that can do more than boolean arithmetic then I'd like to see that computer & its implementation.

            • mannykannot 2 days ago

              Searle tries this approach against the 'simulation argument' in this paper (see page 423) and also elsewhere, saying "a simulation of a rainstorm will not get you wet" (similarly, Kastrup says "a simulation of a kidney won't pee on my desk"), to which one can reply "yet a simulation of an Enigma machine really will encode and decode messages."
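
              To make that last point concrete, here is a toy rotor cipher (my own sketch, far simpler than the real Enigma; the wiring string is just the historical Rotor I permutation, used here as an arbitrary constant). The "mere simulation" really does encode and decode.

              ```python
              import string

              ALPHABET = string.ascii_uppercase
              ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"   # a fixed permutation of A-Z

              def encode(text, offset=0):
                  out = []
                  for ch in text:
                      if ch in ALPHABET:
                          out.append(ROTOR[(ALPHABET.index(ch) + offset) % 26])
                          offset += 1        # the rotor steps after each letter
                      else:
                          out.append(ch)
                  return "".join(out)

              def decode(text, offset=0):
                  out = []
                  for ch in text:
                      if ch in ALPHABET:
                          out.append(ALPHABET[(ROTOR.index(ch) - offset) % 26])
                          offset += 1
                      else:
                          out.append(ch)
                  return "".join(out)

              message = "ATTACK AT DAWN"
              assert decode(encode(message)) == message   # genuinely round-trips
              ```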

              The thing is, minds interact with the outside world through information transmitted on the peripheral nervous system, so the latter analogy is the relevant one here.

              • oh_my_goodness 2 days ago

                Mostly. Probably.

                • mannykannot 2 days ago

                  Enough to show that Searle's successors need a better argument.

              • measurablefunc 2 days ago

                You're not addressing the actual argument & not bridging the explanatory gap between arithmetic simulation & reality. Saying people can read & interpret numbers by imbuing them w/ actual meaning & semantics is begging the question.

                • mannykannot 2 days ago

                  I'm showing that Searle's argument against the simulation reply doesn't hold up against relatively straightforward scrutiny. If you think you know of a better one, present it.

                  • measurablefunc 2 days ago

                    You're the one arguing against it by begging the question. Define all your terms & then maybe we'll have an actual argument.

                    • mannykannot 2 days ago

                      So no argument or explanation from you, just unsubstantiated allegations and bluster.

                      • measurablefunc 2 days ago

                        You can believe whatever you want about arithmetic as a foundation of your own metaphysics. I personally think it's silly but I'm not interested in arguing this any further b/c to close the explanatory gap from extensionality to intentionality you'd essentially have to solve several open problems in various branches of philosophy. Write out your argument in full form & then maybe you'll have something worth discussing.

                        • mannykannot 2 days ago

                          Once you punted a second time on an opportunity to explain yourself, I was fairly confident that there was nothing there. It's a common pattern.

                          • measurablefunc 2 days ago

                            Right back at ya.

                            • mannykannot 2 days ago

                              The great thing about your latest reply is that it takes no time at all to see that you have still not offered any justification or explanation of anything you have claimed.

                              • measurablefunc 2 days ago

                                Likewise.

                                • mannykannot a day ago

                                  For example, this response fails to give any justification for your claim that I am not addressing the actual argument.

                                  • measurablefunc a day ago

                                    Already covered further up the thread.

                                    • mannykannot 8 hours ago

                                      Then you will have no difficulty in pointing out where that happens. While you are about it, you can point out where you think I said people can read & interpret numbers by imbuing them w/ actual meaning & semantics.

                                      • measurablefunc 2 hours ago

                                        It's in your response. You're welcome to elaborate your argument about Enigma ciphers in other terms if you want but you'll reach the same conclusion as I did.

          • 31337Logic 2 days ago

            [flagged]

        • oh_my_goodness 2 days ago

          It proves that the guy in the room doesn't understand Chinese.

          And it does kind of help explain why LLMs sound like bright students who completely memorized their way through school.

          • mannykannot 2 days ago

            Strictly speaking, it does not even prove that: the claim that the guy in the room will not end up understanding Chinese is a premise, and some people argue that it is an unjustified one. Personally, I think Searle's argument fails without one having to resort to such nit-picking.

            • oh_my_goodness 2 days ago

              Fine, it doesn't prove that. But I'm comfortable assuming it. Searle doesn't need to say the guy doesn't end up understanding Chinese. All he has to say is the guy doesn't need to understand Chinese. And then ... some†hing some†hing ... and then ... suddenly Chinese isn't understood by the algorithm either.

              It's that last part that I can't follow and (so far) totally disbelieve.

              • mannykannot 2 days ago

                To be clear, I don't think the guy in the room will end up understanding whichever Chinese language this thought experiment is being conducted in, either.

                You have put your finger on the fundamental problem of the argument: Searle never gave a good justification for the tacit and question-begging premise that if the human operator did not understand the language, then nothing would (there is a second tacit premise at work here: that performing the room's task required something to understand the language. LLMs arguably suggest that the task could be performed without anything resembling a human's understanding of language.)

                Searle's attempt to justify this premise (the 'human or nothing' one) against the so-called 'systems reply' is to have the operator memorize the book, so that the human is the whole system. Elsewhere [1] I have explained why I don't buy this.

                [1] https://news.ycombinator.com/item?id=45664129

                • oh_my_goodness 2 days ago

                  "there is a second tacit premise at work here: that performing the room's task required something to understand the language. LLMs arguably suggest that the task could be performed without anything resembling a human's understanding of language."

                  Yeah. I used to assume that. But it's much less obvious now. Or just false or something.

                  It's actually kind of spooky how well Searle did capture/foreshadow something about LLMs decades ago. No part of the system seems to understand much of anything.

                  My theory is that Searle came up with the CR while complaining to his wife (for the hundredth time) about bright undergrads who didn't actually understand anything. She finally said "Hey, you should write that down!" Really she just meant "holy moly, stop telling it to me!" But he misunderstood her, and the rest is history.

gradschool 3 days ago

tl;dr:

If a computer could have an intelligent conversation, then a person could manually execute the same program to the same effect, and since that person could do so without understanding the conversation, computers aren't sentient.

Analogously, some day I might be on life support. The life support machines won't understand what I'm saying. Therefore I won't mean it.

  • 31337Logic 3 days ago

    Wow. That was remarkably way off base.

    • rcxdude 2 days ago

      I think it gets to the heart of the matter quite succinctly, but the more I see discussions on this, the more I think there are two viewpoints here that just don't seem to overlap (as in, people seem to find the Chinese room either obviously true or obviously false, and there's not really an argument or elaboration that will change their minds).