AlexandrB 2 days ago

I can't tell if I'm just getting old, but the last 2 major tech cycles (cryptocurrency and AI) have both seemed like net negatives for society. I wonder if this is how my parents felt about the internet back in the 90s.

Interestingly, both technologies also supercharge scams - one by providing a way to cash out with minimal risk, the other by making convincing human interaction easier to fake.

  • tudorizer 2 days ago

    This parallel is something that I've been mulling over for the better part of this year.

    Are we simply getting old and bitter?

    Personally, I would add a previous cycle to this: social media, although people were quick to point at the companies that were sparked and empowered by having unprecedented distribution.

    Are we really better or worse off than a few decades ago?

    • tines 2 days ago

      > Are we simply getting old and bitter?

      No, we are getting wiser. It's not bitterness to look at a technology with a critical eye and see the bad effects as well as the good. It's not foolish to judge that the negative effects outweigh the positive. It's a mark of maturity. "But strong meat belongeth to them that are of full age, even those who by reason of use have their senses exercised to discern both good and evil."

      • im3w1l 2 days ago

        We know that people can easily end up irrational either way: some people more naively positive, others more cynical and bitter. Maybe it's even possible to make both mistakes at once: the same person can see negatives that aren't there, positives that won't happen, miss risks, and miss opportunities.

        We cannot say "I'm critical, therefore I'm right," nor "I'm an optimist, therefore I'm right." The right conclusion comes from the right process: gathering the right data and thinking it over carefully while trying to be as unbiased and realistic as possible.

        • tines 2 days ago

          Your comment is, strictly speaking, correct, but not very useful, because nobody is saying either of those things. The reality is that 90% of people are totally oblivious to the danger of any technology, and they scorn the 9% who say "Let's examine this carefully and see if we can separate the bad from the good." There is the 1% of people who will oppose any change, but they're not dominating the conversation the way the people who say this technology is an unmitigated good (or at least that the bad is so minor it isn't worth thinking about or changing for) do.

          (Also strictly speaking, "I'm critical therefore I'm right" isn't always valid, but "I'm uncritical therefore I'm right" is always invalid.)

          • tines 2 days ago

            > (Also strictly speaking, "I'm critical therefore I'm right" isn't always valid, but "I'm uncritical therefore I'm right" is always invalid.)

            I can't edit my comment any more, but I should have said, "The opposite of being 'critical' isn't being 'optimistic,' it's being 'uncritical.'"

    • marcosdumay 2 days ago

      > Are we simply getting old and bitter?

      For crypto, no. It's basically only useful for illegal actions, so if you live in a society where illegal is well correlated with "bad", you won't see any benefit from it.

      The case for LLMs is more complicated: there are positives and negatives. And the case for social networks is even more complicated, because they are objectively no longer what they used to be.

      • walterbell 2 days ago

        > It's basically only useful for illegal actions

        Blockchain assets ("controllable electronic records") are defined in the UCC (Uniform Commercial Code) Article 12 that regulates interstate commerce, https://news.ycombinator.com/item?id=33949680#33951026. Some states have already ratified the changes, others are in progress.

        U.S. federal stablecoin legislation was passed earlier this year.

        • conception a day ago

          Being legal and being useful are different.

          • walterbell a day ago

            The legality is new. Time will determine the success of legal + useful applications.

    • risyachka 2 days ago

      > Are we simply getting old and bitter?

      Maybe, but it has nothing to do with change itself.

      Change can be either positive or negative. Often it is objectively negative and can stay that way for decades.

      • tudorizer 2 days ago

        My theory is that bitterness, at least this particular flavour, stems from seeing this negative impact, more than anything.

        Change itself is a must. It's nature's law.

    • rolandog a day ago

      > Are we simply getting old and bitter?

      It depends. Maybe 20 years ago, a couple of years after the dot-com bubble, we thought we were not gonna repeat the same mistakes as before, and I do believe we blindly drank the kool-aid thinking we were gonna solve all problems with tech.

      Now, we're another year older and another year wiser, times 20. I don't think having one's eyes open is synonymous with bitterness... but it is what we do with the information and knowledge we have acquired that defines that trait: do we sit and grumble and shake our fists at the cloud (providers?), or do we seek out others to try to prevent problems from escalating?

      > Are we really better or worse off than a few decades ago?

      While technological progress in several fields has been amazing, it would be naïve of us not to recognize the areas where we have regressed.

      Looking back, I think we should have normalized caution, not moving fast and breaking things; normalized interoperability, not walled gardens; and been more wary about the dangers of unsolved business models, instead of normalizing tracking and targeted advertising, which enabled personalized propaganda...

      ... we should have also paid more attention to the unchecked power of monopolies and media conglomerates, and done more to foster a healthier economy as well as improve the quality of life and rights protections of people, including access to education and the strengthening of institutions.

      So, to finally answer your question, I think we are in general a bit worse off. Why? Well, I look back to 20 years ago when our outlook on the future was that the sky was the limit if you worked and studied hard; and now the outlook on the future 20 years from now... seems uncertain.

    • RicoElectrico 2 days ago

      Low interest rates favor parasitic middlemen, not those who actually do stuff.

  • rnxrx 2 days ago

    I think the progression of sentiment is basically the same. There were lots of folks pushing the agenda that connecting us all would somehow bring about the evolution of the human race by putting information at our fingertips; that was eventually followed by concern about kids getting obsessed/porn-saturated.

    The same cycle happened (is happening) with crypto and AI, just in more compressed timeframes. In both cases an initial period of optimism transitioned into growing concerns about the negative effects on our societies.

    The optimistic view would be that the cycle shortens so much that the negatives of a new technology are widely understood before that tech becomes widespread. Realistically, we'll just see the amorality and cynicism on display and still sweep it under the rug.

  • j-bos 2 days ago

    > Interestingly, both technologies also supercharge scams

    Similar for the internet: back in the 90s, Nigerian princes were provided a means to reach exponentially more people, faster.

  • cedws a day ago

    Civilisation has had the technology to sustain itself and minimise suffering for a while now. There’s no reason to push further, yet we do.

  • ysavir 2 days ago

    A large part of it is that we maxed out a lot of how communication tech can impact daily life, at least in terms of communication, but economically and culturally we got in the habit of looking for new and exciting improvements to daily life.

    The 19th and 20th centuries saw a huge shift in communication. We went from snail mail to telegrams to radio to phones to television to internet on desktops to internet on every person wherever they are. Every 20-30 years some new tech made it easier, cheaper, and faster to get your message to an intended recipient. Each of these was a huge social shift in terms of interpersonal relationships, commerce, and diminishing cycle times, and we've grown to expect these booms and pivots.

    But there isn't much of anywhere to go past "can immediately send a message to anyone anywhere." It's effectively an endstate. We can no longer take existing communication services and innovate on them by merely offering the same service on revolutionary new tech. But tech sectors are still trying to recreate the past economic booms by pushing technologies that aren't as revolutionary or as promising, and hyping them up to get people thinking they're the next stage of the communication technology cycle.

    • bsenftner 2 days ago

      > A large part of it is that we maxed out a lot of how communication tech can impact daily life, at least in terms of communication,

      Perhaps for uneducated casual communications, lacking in critical analysis. The majority of what passes for "communications" is misunderstood, misstated, omits key critical aspects, and speaks from an uninformed and unexamined position... the human race may "communicate," but it does so very poorly, to the degree that much of the human activity in our society is placeholder and merely good enough, while being in fact terrible and damaging.

    • rightbyte 2 days ago

      > Every 20-30 years some new tech made it easier, cheaper, and faster to get your message to an intended recipient.

      No, it has regressed now. We are probably back to the level of the 1950s, before telephones became common.

      People don't answer unknown numbers and are not listed in the telephone book.

      When I was a kid in the 90s I could call almost anyone in my town by looking them up in the phone book.

  • throwaway22032 2 days ago

    They are both force multipliers. The issue of course is that technology almost always disproportionately benefits the more intelligent / ruthless.

    • podgietaru 2 days ago

      I think the biggest problem with both technologies is how many people seem to think this.

      Crypto was a way for people who think they're brilliant to engage in gambling.

      AI is a way for “smart” people to create language to make their opinions sound “smarter”

  • adastra22 2 days ago

    It’s how I feel about the internet and social media now.

  • add-sub-mul-div 2 days ago

    I'm not generally anti-capitalist, but what capitalism has become at this point in history means that technology is no longer for helping people or helping society.

    Imagine the DVR being invented today. A commercial device that helps you skip ads. It would never be allowed to happen.

    • Refreeze5224 2 days ago

      I am generally anti-capitalist, and a big reason is that I don't think capitalism, inherently and fundamentally, can become anything other than what it is now. The benefit it's provided is rarely accurately weighed against the harms, and for people who disproportionately benefit, like most here on HN, it's even harder to see the harms.

      Anti-capitalist sentiment was incredibly widespread in the US from the 19th century through the 1930s, because far more people were personally impacted, and most needed to look no further than their own lives to see it. If nothing else, capitalism has become more sophisticated in disguising its harms, and in acclimating people to them to such an extent that many become entirely incapable of seeing any harm at all, or even of imagining any other way for a society to be structured, despite humanity having existed for 100,000+ years.

      • AlexandrB 2 days ago

        Capitalism has many harms, but what's the alternative? Communism is worse - much worse.

        • telesilla 2 days ago

          Participatory economies such as those discussed here:

          https://znetwork.org/wp-content/uploads/zbooks/htdocs/books/...

          "In this book we argue for a new alternative based on public ownership and a decentralized planning procedure in which workers and consumers propose and revise their own activities until an equitable, efficient plan is reached. The vision, which we call a participatory economy, strives for equitable consumption and work which integrate conceptual and manual labor so that no participants can skew outcomes in their favor, so that self-motivation plays a growing role as workers manage their own activities, and so that peer pressure and peer esteem provide powerful incentives once excelling and malingering rebound to the advantage and disadvantage of one's work mates."

          • Refreeze5224 2 days ago

            Exactly the kind of idea I was trying to get at in my sibling comment, thanks for posting it.

        • Refreeze5224 2 days ago

          It's really not an either/or proposition. Humans have used a much larger variety of systems of social and economic organization than capitalists would have you believe. We have innovated as a species in so many ways, like putting a man on the moon, AI, the internet, etc. But for some reason (i.e. capitalists like it that way), we don't seem to think we can innovate in the way that we organize our society and economy. Which is total BS in my book. Of course the people who massively and disproportionately benefit from the current arrangement don't want us imagining an alternative to it.

          Capitalism is designed to maximize profit, which it does well. It has even improved life for many people. Even the most ardent Marxist acknowledges this fact. But what we really care about (unless you're super rich) is maximizing human well-being overall. So why rely on a system that is not actually meant to maximize, prioritize, or focus on what we actually care about, and only does so occasionally or incidentally? It doesn't make sense, and in almost no other arena of human endeavor is this done. Imagine writing software to maximize x, when you really want it to do y, and just hoping that x makes y happen, and saying it's the least worst way of doing it, without trying any other option.

          Fundamentally, I think any socioeconomic system should be designed with people in mind as the organizing principle. If we care about human well-being, happiness, flourishing, etc., it makes no sense not to prioritize it from first principles. I imagine some form of economic and political democracy, wherein people have direct control over the things that affect their lives, in the social, political, and economic spheres (the three are inseparable, despite common capitalist dogma to the contrary). And not the usual representative democracy, where you abdicate any real decision-making power to an effectively unaccountable representative every 4 years.

          The usual objection to this is that it would be impossible to maintain the status quo with an arrangement like this. But that's just more capitalist self-preservation talking. There are clearly tradeoffs required in a new system oriented towards maximal human well-being. Likely tons of dirt-cheap Chinese-made products are out. But those never made us happy to begin with!

    • neutronicus 2 days ago

      > Imagine the DVR being invented today. A commercial device that helps you skip ads. It would never be allowed to happen.

      That's arguably what AI is - it compressed the internet so that you can extract StackOverflow answers without clicking through all the fucking ads that await you on the journey from search bar to the answer you were looking for.

      You can of course expect it, over the next decade or so, to interpose ads between you and your goal in the same way that Google and StackOverflow did from 2010-now.

      But for the moment I think it's the exact opposite of your thesis. The AI companies are in cut-throat capture-market-share mode so they're purposely skipping opportunities to cram ads down your throat.

      • add-sub-mul-div 2 days ago

        Of course LLMs today are the most consumer-friendly they're ever going to be. It's irresponsible not to look ahead to the inevitable 180.

        • neutronicus 2 days ago

          Sure - and in fact I'm looking into setting up a local GPU compute server for precisely this reason - but I think capitalism and technology are interacting roughly the same way now that they did when the DVR was invented (and when Google search was good).

    • AlexandrB 2 days ago

      Yes, at some point mainstream technology turned on the users. So much modern tech seems to be about exerting control or "monetizing" instead of empowering.

kohsuke 2 days ago

So they ran 5 different experiments to test the hypothesis, and they were nothing like what I imagined.

For example, in one study, they divided participants into two groups and had one group watch https://www.youtube.com/watch?v=fn3KWM1kuAw (which highlights the high socio-emotional capabilities of a robot) while the other watched https://www.youtube.com/watch?v=tF4DML7FIWk (which highlights the low socio-emotional capabilities of a robot).

Participants were then asked whether they agreed or disagreed with a (presumably hypothetical?) company's proposal to reduce employees' welfare, such as replacing a meal with a shake. The two groups showed different preferences.

This makes me think about that old question of whether you thank the LLM or not. Thanking it treats the LLM more like a human, so if what this paper found holds, maybe that'd subtly nudge our brains toward dehumanizing other, real humans!? That's so counterintuitive...

  • sillysaurusx 2 days ago

    Do you understand how they chose the two groups? And why show one group one video, and the other group the other video? Shouldn’t both groups be shown the same video, then check whether the group division method had any impact on the results? E.g. if group one was dance lovers and group two were dance haters, you wouldn’t get any data on the haters since they were shown the parkour video instead of the dance video.

    Also, interesting bit: "Participants in the high (vs. low) socio-emotional capability condition showed more negative treatment intentions toward employees"

    • daveguy 2 days ago

      Apparently you do not understand how they chose the two groups. Group identity was not based on a survey or any attribute of the participating individuals.

      Low and high socio-emotional groups refer to whether the group was shown the low or the high socio-emotional video. The pre-test, and exclusion based on lack of attention and instruction-following, were performed before each individual's group assignment, which was presumably random.

      • sillysaurusx 2 days ago

        Thanks! You’re right, I didn’t understand.

cryoshon 2 days ago

To the point of the paper, it has been a somewhat disturbing experience to see otherwise affable superiors in the workplace "prompt" their employees in ways that are obviously downstream of their (very frequent) LLM usage.

  • shredprez 2 days ago

    I started noticing this behavior a few months ago and whew. Easy to fix if the individual cares to, but very hard to ignore from the outside.

    Unsolicited advice for all: make an effort to hold onto your manners even with the robots or you'll quickly end up struggling to collaborate with anyone else.

    • chuckadams 2 days ago

      I still say "please" to the AI assistant so that I'll be among the last to be made into paperclips.

    • topaz0 2 days ago

      I'd take this advice one step further: just don't use the robots

  • AlienRobot 2 days ago

    What does that sound like?

    • righthand 2 days ago

      Ask ChatGPT for ways to instruct an employee on a task.

lordnacho 2 days ago

One very new behavior is the dismissal of someone's writing as the work of AI.

It's sadly become quite common on internet forums to suppose that some post or comment was written by AI. It's probably true in some cases, but people should ask themselves how the cost/benefit to calling it out looks.

  • SkyeCA 2 days ago

    Unfortunately, it's the correct thing to do. Just as in the past you shouldn't have believed any stories told on the internet, it's now reasonable to assume any image/text you come across wasn't created by a human, or, in the case of images, depicts an event that never happened.

    The easiest way to protect myself these days is to assume the worst about all content. Why am I replying to a comment in that case? Consider it a case of yelling into the void.

    • AftHurrahWinch 2 days ago

      1. A bot-generated argument is still an argument. I can't make claims about the truth or falsity based on the enunciator, that's simply ad hominem.

      2. A bot-generated image is not a record of photon-emissions in the physical world. When I look at photos, they need to be records of the physical world, or they're a creative work.

      I think you can't rationally apply the same standard to these 2 things.

      • rightbyte 2 days ago

        > 1. A bot-generated argument is still an argument. I can't make claims about the truth or falsity based on the enunciator, that's simply ad hominem.

        In classical forums arguments are often some form of stamina contest and bots will always win those.

        But yeah, it is like a troll accusation.

      • foobiekr 2 days ago

        The problem is the bullshit asymmetry and engaging in good faith.

        AI users aren’t investing actual work and can generate reams of bullshit that put the burden on others to untangle. And they also aren’t engaging in good faith.

        • AftHurrahWinch 2 days ago

          Some discussions are dialectic, where a group is cooperatively reasoning toward a shared truth. In dialectical discussions, good faith is crucial. AI can't participate in dialectical work. Most public discourse is not dialectical, it is rhetorical. The goal is to persuade the audience, not your interlocutor. You aren't "yelling into the void", you're advocating to the jury.

          Rhetoric is the model used in debate. Proponents don't expect to change their Opponent's mind, and vice versa. In fact, if your opponent is obstinate (or a non-sentient text generator), it is easier to demonstrate the strength of your position to the gallery.

          People reference Brandolini's "bullshit asymmetry principle" but don't differentiate between dialectical and rhetorical contexts. In a rhetorical context, the strategy is to demonstrate to the audience that your interlocutor is generating text with an indifference to truth. You can then pivot, forcing them to defend their method rather than making you debunk their claims.

      • virtualbluesky 2 days ago

        Ad hominem may require a human on the receiving end, no?

    • ncr100 2 days ago

      As a person with trust issues, I find this adaptation to the change in status-quo quite natural for me.

  • nineplay 2 days ago

    My partner has become tiresome about this - even if I were to tell them that I responded to your comment on HN, they'd go "You probably just responded to a bot".

    Are bots really infiltrating HN and making constructive non-inflammatory comments? I don't find it at all plausible but "that's just what they want you to think".

    • topaz0 2 days ago

      I've seen chatgpt output here as comments for sure. In some cases obvious, in other cases borderline. I wouldn't guess that it's a major fraction of comments, but it's there.

megamix 2 days ago

How do you guys read through an article this fast after it's submitted? I need more than 1 hr to think this through.

  • bee_rider 2 days ago

    So far (as of 15 or so minutes after your comment) we have only one top-level comment that really indicates that the poster has started trying to read the paper seriously, Kohsuke’s post.

    https://news.ycombinator.com/item?id=44912783

    They actually described the methodology at least (note: I also haven’t fully read the paper yet, but I wanted to post in support of you not having a “take” yet, haha).

  • jncfhnb 2 days ago

    Ask AI to summarize and write a response

  • skeezyboy 2 days ago

    cos it's mostly fluff you can skip over

skeezyboy 2 days ago

Essentially he did a bunch of surveys. Apparently this is science.

cm2012 2 days ago

Interesting theory with insufficient evidence

fontsgenerator 2 days ago

Interesting point — AI can automate tasks, but we need to ensure it doesn’t strip away human judgment and empathy

  • netsharc 2 days ago

    On the opposite side (i.e. the side of what Bender called meatbags), there are a lot of jobs where judgment and empathy are not allowed. E.g. TSA agents examining babies for bombs in case they're terrorists -- they were told "You must do this to every passenger, no questions asked," and making a judgment call means deviating from their job description and risking losing it.

temporallobe 2 days ago

As a Black Sabbath fan, I love that they envisioned dystopian stuff like this. Check out their Dehumanizer album.

cratermoon 2 days ago

I'm unwilling to accept the discussion and conclusions of the paper because of the framing of how LLMs work.

> socio-emotional capabilities of autonomous agents

The paper fails to note that these 'capabilities' are illusory. They are a product of how the behaviors of LLMs "hack" our brains and exploit the hundreds of thousands of years of evolution of our equipment as a social species. https://jenson.org/timmy/

  • kohsuke 2 days ago

    But that's beside the point of the paper. They are talking about how humans who perceive the "socio-emotional capabilities of autonomous agents" change their behavior toward other humans. Whether people get that perception because "LLMs hack our brain" or something else is largely irrelevant.

  • Isamu 2 days ago

    No, I think the thesis is that people perceive falsely that agents are highly human, and as a result assimilate downward toward the agent’s bias and conclusions.

    That is the dehumanization process they are describing.

  • kingkawn 2 days ago

    The paper literally spells out that this is a perception of the user and that is the root of the impact

    • cratermoon 2 days ago

      Perhaps I missed it, could you help me see where specifically the paper acknowledges or asserts that LLMs do not have these capabilities? I see where the paper repeatedly mentions perceptions, but I also see right at the beginning, "Our research reveals that the socio-emotional capabilities of autonomous agents lead individuals to attribute a humanlike mind to these nonhuman entities" [emphasis added], and multiple places in the paper, for example in the section titled "Theoretical Background", subtitle 'Socio-emotional capabilities in autonomous agents increase “humanness”', LLMs are implied to have at least low levels of these capabilities, and contrasts it to the perception that they have high levels.

      In brief, the paper consistently but implicitly regards these tools as having at least minimal socio-emotional capabilities, and that the problem is humans perceiving them as having higher levels.

      • empath75 2 days ago

        Whether they have those capabilities or not is totally irrelevant to the conclusions of the paper, because it is a study of people and not AI.

      • kingkawn 2 days ago

        “…leads individuals to attribute a human-like mind to these nonhuman entities.”

        It is the ability of the agent to emulate these social capacities that leads users to attribute human-like minds. There is no assertion whatsoever that the agents have a mind, but that their behavior leads some people to that conclusion. It’s in your own example.

      • cootsnuck 2 days ago

        I can’t tell if you’re being disingenuous, but the very first sentence of the abstract literally says the word "simulate":

        > Recent technological advancements have empowered nonhuman entities, such as virtual assistants and humanoid robots, to simulate human intelligence and behavior.

        In the paper, "socio-emotional capability" is serving as a behavioral/operational label. Specifically, the ability to understand, express, and respond to emotions. It's used to study perceptions and spillovers. That's it.

        The authors manipulate perceived socio-emotional behavior and measure how that shifts human judgments and treatment of others.

        Whether that behavior is "illusory" or phenomenally real is orthogonal to the research scope and doesn’t change the results. But regardless, as I said, they quite literally said "simulate", so you should still be satisfied.

  • chrisweekly 2 days ago

    +1 Insightful

    Your "timmy" post deserves its own discussion. Thanks for sharing it!

  • stuartjohnson12 2 days ago

    Your socio-emotional capabilities are illusory. They are a product of how craving for social acceptance "hacks" your brain and exploits the hundreds of thousands of years of evolution of our equipment as a social species.

    • skeezyboy 2 days ago

      it's a next word predictor. if you've been convinced it has a brain, i have some magic beans you'd be interested in

      • empath75 2 days ago

        Consider whether it is possible to complete sentences about the world coherently, in a human-like way, without knowing or thinking about the world.

      • stuartjohnson12 2 days ago

        and if it is a sufficiently accurate next word predictor, then it may accurately predict what an agent with socio-emotional skills would use as their next word, in which case it will have exhibited socio-emotional skill.

      • ACCount37 2 days ago

        You're saying "next word predictor" as if it's some kind of gotcha.

        You're typing on a keyboard, which means you're nothing but a "next keypress predictor." That says very little about how intelligent you are.

        • skeezyboy 2 days ago

          not my only trick though, is it. the human brain engages in all sorts of cognitive enterprises, language formation being just one of them. LLMs are essentially statistical predictors - which is indeed part of what a human brain does, but only a small sliver of its abilities.

          • ACCount37 2 days ago

            And why does it matter?

            For all I know, humans are "essentially statistical predictors" too - and all of their insistence on being something greater than that is anthropocentric copium.