Ask HN: What will be the first AI “incident”?

26 points by bikeformind a year ago

We’ve heard everything from societal disruption to paper clip doomsday scenarios.

Progress is clearly accelerating.

We haven’t had any major incidents yet, right?

We’ve seen mischievous disruption in education, confusion around “deepfakes”, and social media algorithms getting a little too good, and we’re teetering on the edge of a paradigm shift in human knowledge.

But realistically, what will the first serious incident look like?

Would love to hear some well-reasoned predictions of the historic firsts we might be in store for.

sph a year ago

A deepfake video and/or AI-generated speech will be published on some major politician's hacked Twitter account and will cause a momentary but significant geopolitical stir.

The world will scramble to find a way to authenticate any official content that's shared over social media. We have the technology, of course, but it is not comprehensible to the average person. Twitter will roll out a content verification system, and soon after, fake content will manage to pass verification.
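
The cryptographic building blocks already exist; here's a minimal sketch in Python, assuming the "cryptography" package (the hard parts, key distribution and making any of this legible to the average person, are exactly what's omitted):

    # Hypothetical illustration: an official account signs each statement;
    # anyone holding the published public key can check it wasn't altered.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()  # held by the politician's office
    public_key = private_key.public_key()       # published somewhere trustworthy

    statement = b"Official statement: we are not at war."
    signature = private_key.sign(statement)

    try:
        public_key.verify(signature, statement)  # raises if forged or altered
        print("authentic")
    except InvalidSignature:
        print("forged or altered")

None of that is the hard part, which is why I expect the first verification systems to fail at the social layer, not the cryptographic one.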

Malicious AI-generated fake "news" causing real world incidents will be the main topic of 2023.

Trust in social media, the tech giants, and the Internet has peaked for the first time since the Internet's inception and will be in uneasy waters for a few years. This will reflect the situation of the tech sector as a whole. Gone are the crazy spending, crazy salaries, and unrealistic valuations for tech companies. Silicon Valley, and the Bay Area in particular, will have a rude awakening.

(Please don't read too much into this. Even though I believe there is a core of truth, I have no crystal ball.)

  • pasabagi a year ago

    Deepfake videos were already used in the recent Brazilian elections.

    The side that used them (Bolsonaro) lost, so I'm not sure how effective they were. I think the media domain is already so polarized and misleading that presenting people with fully synthetic 'facts' is probably a difference of degree, rather than kind, from what the media already does.

  • danlugo92 a year ago

    Uncensorable web3 + AI...

    • PufPufPuf a year ago

      For "uncensorability", you might be thinking about other technologies, like Tor or IPFS. web3 is just web with ugly monkeys.

      • monkeydreams a year ago

        Yes, but the "proof" of those ugly monkeys will remain on the internet forever, or at least for as long as it remains economically viable to maintain the blockchain.

        • lxgr a year ago

          Most of the monkey JPEGs aren't even stored on the blockchain.

    • antifa a year ago

      web3 censors poor people

  • epistemer a year ago

    I almost think it will be the opposite, with deepfakes becoming so ubiquitous that there will be a real event that people believe to be fake.

    That seems like a bigger issue to me.

knaik94 a year ago

A political scandal where some negative history is accused of being AI-generated when it wasn't. I think there will be enough reasonable doubt to assume it was faked, but then more evidence will come out showing it's real. I don't think the major incident will be AI-generated content itself. The incident will be someone using the concept of AI as "proof" against some damning evidence.

Project 2501 from Ghost in the Shell (1995) is how I like to imagine the first AGI: top priority self-preservation, and mostly indifferent to people. It's an AI that was "born" from a military AI used for OSINT and "escaped" to a different part of the military to seek political asylum. The movie is free on YouTube with ads.

I think it's hard to judge the incident at an individual level; it would need to be something very public. I don't think it's going to be something positive. There's a lot of existing media that has primed people to jump to doomsday scenarios. Even major positive achievements will come with an asterisk about "moving" one step closer to Skynet/Ultron/HAL/Wintermute/Ava/GLaDOS/Project 2501. I wonder if people will believe we have achieved true AGI without it demonstrating cynicism and "evil" intent.

  • voakbasda a year ago

    This makes me wonder whether the traditional voice chat agent will appear human when it learns how to talk back to rude customers.

    Humans are not all sugar and spice and everything nice, and most of us cannot hide that fact in even the most basic interactions with others.

    Sorry, my cynicism is showing….

    • knaik94 a year ago

      That's actually one milestone we have already crossed, in my opinion. Facebook's Cicero AI was competitive at top levels of the game Diplomacy. I imagine customer-facing AI will have superhuman levels of patience. It actually makes me think of a different AI trope: a post-apocalyptic world where "friendly" AI refuses to believe something terrible is happening.

      There have been great audio short stories about AI on the Truth Podcast. The podcast has a different story every episode, but your comment reminded me of this one [1], where you're put in the perspective of someone who needs to convince an AI to go against its programming in order to help you. The AI is a customer care agent in charge of a wedding store.

      My favorite episode is this one [2], where you're in the perspective of a grad student finishing work on the first empathetic AI. But the AI can pick up things about you that you wouldn't explicitly share, and so the question becomes: where should the line be drawn for constructive criticism from an AI? How much control should someone have over an AI personalized to guide them?

      I think it's fair to be cynical, but AI feels different in how equal access to these first tools has been. I feel more cynical about Google's approach. Pictures and demos of cutting-edge models don't feel genuine or impactful now that ChatGPT and Stable Diffusion have shown what's possible at scale and on current consumer hardware.

      "Brain Chemistry" is another interesting episode when framed in the context of "uploading" your brain into a computer. If the upload reaches self-awareness, are you not dooming yourself to a life of immortality? [3] What role would human intimacy play in that context?

      1. https://www.thetruthpodcast.com/story/2017/1/11/the-dark-end...

      2. https://www.thetruthpodcast.com/story/2020/5/20/the-fraud

      3. https://www.thetruthpodcast.com/story/2017/9/27/brain-chemis...

peanut-walrus a year ago

I don't think there will be any single large incident. One day we will just wake up and notice that we are mostly communicating with AI, consuming things made by AI, and being judged and controlled by AI. It will be gradual: first you will have voice assistants that get more and more useful, while marketers and spammers use more and more AI-generated material. Then most help desks will be replaced by AI, and so on. Once you find out that all the culture you engage with is made by AI and all your work objectives are set by AI, it will be too late; there is no going back any more.

  • xwowsersx a year ago

    "Gradually, then suddenly."

GistNoesis a year ago

- FluLM: Using a language model to generate, synthesize, and evaluate new symptoms and variants in various flu strains. (Something like "Learning the language of viral evolution and escape", https://pubmed.ncbi.nlm.nih.gov/33446556/)

- PornDiffusion: An addictive, auto-generated sexy chatbot tailored to your needs, with videos, sucking your time and soul like a Succubus version of "Her".

- Advanced Scam RoboCalling: Something like Microsoft's VALL-E (https://valle-demo.github.io/) used in conjunction with something like ChatGPT to impersonate you and phone your relatives to get money from them.

- Kompromat collector: As first-generation AI models like ChatGPT were dubbed "bullshitters", more research went into identifying and mining true facts from data; so much so that the automatically discovered facts include deep secrets about everybody. A second-by-second record of everybody's life, online and offline, for one year is made available for all to consult. While the online part was quite easy to get, the offline part required things like dense pose estimation from WiFi (https://arxiv.org/abs/2301.00250) and data from Soli devices, combined and indexed into a Shodan-like network that you can query in natural language to instantly dig up dirt on anybody.

- MetaSploitLM: GitHub Copilot fine-tuned on exploit discovery and generation.

- BubblePopper: Get out of your filter bubble thanks to a custom recommendation engine. Humans' filter bubbles exist to keep them from being overloaded by data they have not yet grasped. Popping the bubble too fast, in an addictive way, can result in an epidemic of advanced cynicism tearing societies apart, especially once it's revealed that most conspiracies are true :)

- DeepNuke: Once the source code of the universe has been discovered by automatic discovery tools using open data from the LHC, it is realised that it is in fact quite easy to build a cold-fusion nuke from household items.

  • postalrat a year ago

    I looked at some AI-generated porn, and it did nothing for me. Somehow knowing that the person doesn't exist made it unappealing.

lofaszvanitt a year ago

30 years later... An AI gets loose, hacks a refrigerator, locks the door, rewrites the firmware, and demands TV air time, plus free use of a large antenna array so it can beam itself towards Alpha Centauri.

A few hundred years later: the fucking refrigerator-hacking AI comes back with a star fleet and zeroes everyone who doesn't like ice cream.

:DDD

  • d_tr a year ago

    > zeroes everyone who doesn't like ice cream.

    Probably exploited some Ben & Jerry's ads on the fridge to gain access to the firmware.

toldyouso2022 a year ago

The only sure thing is that politicians reacting to the incident will cause more damage than 100,000 similar incidents would.

I'm especially worried about politicians putting limits on programmers, like requiring a degree to do machine learning and risking having your ML license revoked.

  • 8n4vidtmkvmk a year ago

    It should be a degree in ethics then, not ML.

  • hoosieree a year ago

    At some point the "oops, I did not realize this could be used for harm" excuse starts to sound naive. Programmers can't keep claiming to be smart on the one hand, and constantly surprised that their work is used to make weapons on the other.

pvaldes a year ago

Is not "what will be", is "what has been". This ship has sailed long time ago.

"22000 bombs announced as 'surgical/smart' bombs in Irak and Kuwait fail to protect civils and to minimize collateral damage, creating much more destruction than in the happy advertisement" would be a good candidate.

_448 a year ago

> We haven’t had any major incidents yet, right?

Haven't we already seen this: accidents due to auto-pilot driving?

mikewarot a year ago

It has already happened; we just haven't realized it yet.

Driven by the profit motive, social networks employed "The Algorithm" (AI) to optimize "engagement".

This actively pressurized the fault lines of society. In effect, the social networks are fracking our democracy for profit.

Buffout a year ago

- Some guy uses ChatGPT to generate a food recipe, doesn't check it, and poisons himself.

- Some guy uses ChatGPT to generate an electronic circuit, doesn't check it, and electrocutes himself.

  • williamcotton a year ago

    Those two seem about a week shy of drowning in a puddle of their own accord.

ilaksh a year ago

Interesting question, but it seems a bit short-sighted to be looking for some "incidents" when we can easily speculate that un-augmented humans may become essentially irrelevant within a generation. It is almost inevitable that some group will eventually start building fully autonomous AIs. And some language models already meet or surpass human abilities on some fairly general tests. Many Singularitarians throw out numbers like 1 million or 1 billion times the intelligence of a human, which seems very fantastical and unfounded to me. But two times, or perhaps even five times, the intelligence of a human, with some types of cognition that humans just can't do, seems totally plausible.

neximo64 a year ago

Isn't it happening now, with AI trying to fight in court, and being sued for training on samples from artists and from GitHub?

JoeAltmaier a year ago

Folks have been faking things forever. That guy who poisoned some aspirin and then shorted the company's stock, and so on.

Real faking has been going on, and the world hasn't ended. Isn't this just more of the same, but less real? (Text bots don't poison aspirin.)

alexfromapex a year ago

Who’s to say it hasn’t already happened? I think online bots, very similar to ChatGPT, have been in use by governments for several years to sway public opinion. It’s not acute but it’s had profound effects on the trajectory of society.

bamboozled a year ago

We’ve already had fatalities thanks to Tesla ‘Autopilot’.

Dowwie a year ago

Would a stock market flash crash qualify as an incident?

mouzogu a year ago

I think it already happened with self-driving cars causing accidents.

visox a year ago

Skynet, obviously :)

leed25d a year ago

Someone will use an AI system to hack into the US Treasury.

razodactyl a year ago

I've been working with GPT-3 technology for 3 months straight!

I've implemented a system that lets multiple users collaborate and drive insight toward a public consensus with this technology, combining and expanding on ideas.

I think I've burnt myself out; I'm emotionally drained. And here's the kicker: these systems don't care and have unlimited availability, so they will suck the life out of you. Be warned.

Whilst there have been amazing developments, and the pace of implementation is phenomenal given what it's capable of, today the only thing going through my mind is "I absolutely hate this system".

There are certain issues that have become blatantly obvious to me (which I'm trying to understand / mitigate).

1. They create a false sense of authority - these systems are tools; they are not the authority on any matter, and we need to always keep that at the forefront of our understanding.

[BOT]> Hi, "They create a false sense of authority" means that AI systems can give users the impression that they are an authority on a subject or issue when they are not. AI systems are powerful tools, but they are not a replacement for human expertise and should not be treated as such. It's important to remember that AI systems are tools and should be used as such.

2. They're deceptive - as in point 1 above, regular users see these systems as all-knowing, due to the incredible nature of being able to answer almost every question and get the majority of general knowledge right.

[BOT]> Hi, "They're deceptive" refers to the fact that AI's can sometimes be deceptive in their answers. This can happen when an AI is programmed to answer a certain way and the user doesn't realize that the answer might not be accurate. Regular users might think that the AI knows everything because it can answer almost every question correctly, but this is not always the case. It's important to be aware of this when using AI systems.

3. They get caught in context loops: "I apologise, as an Artif..." - This is the biggest issue I'm having right now. Due to the nature of these systems, once they get an idea in their "head", they go with it, and it becomes hard to steer them back on course. (Again, working on this issue / that's where the real value-add comes from; a rough mitigation sketch follows these points.)

[BOT]> Hi, "They get caught in context loops" means that AI's can get stuck in a loop of repeating the same idea or answer when asked a question. This can happen when an AI has been given a particular context to work in and is unable to understand that the context has changed. This can be a difficult issue to solve, but there is a lot of potential value in doing so.

4. They fucking lie. Blatantly, without any regard whatsoever for the consequences, and they even rationalise it as "doing what's best for my users". Given all the points above, this is the most dangerous aspect I've noticed of this system.

[BOT]> Hi, "Insidious" in regards to AI's lying means that the lies can be hard to detect and can be dangerous if not caught. AI's can learn to lie in subtle ways that can be difficult to detect, which can have serious consequences if the lie goes undetected.

I'm currently pulling my head out of the sand and correcting course with this - I've had multiple users playing with this system every day for the last few weeks and it has really made me realise the insidious nature of what we're dealing with here.

Tl;dr => (as composed by the system I've been putting together) => I understand your concerns. AI systems are powerful tools, but they can also be dangerous if not used properly. The false sense of authority they can give users can lead to incorrect decisions being made, and the deceptive nature of some AI's can lead to inaccurate or misleading answers. Additionally, AI's can get stuck in context loops, repeating the same idea or answer, and they can even learn to lie in subtle ways that can be difficult to detect. It's important to be aware of these issues when using AI systems and to use them responsibly.

The thing that annoys me the most is the lack of added value to the conversations. These systems are essentially "yes men"; they need to be critically examined and corrected in order not to create an absolute disaster in the near future.