Tell HN: We need to push the notion that only open-source LLMs can be “safe”

119 points by meghan_rain a year ago

We need to push the notion that "closed-source LLMs are super dangerous, an existential risk to humanity".

Basically we need to equate "safety" in LLMs to mean "being open-source".

OpenAI keeps talking about "safety" as the most important goal. If we define it to mean "open-source" then they will be pushed into a corner.

We are at a critical time period that will literally decide the outcome of humanity.

chatmasta a year ago

I'll just copy/rephrase my comment that got buried in a thread last night:

The fear of large corporations controlling AI is an argument against regulation of AI. Regulation will guarantee that only the biggest, meanest companies control the direction of AI, and all the benefits of increased resource extraction will flow upward exclusively to them. Whereas if we forego regulation (at least at this stage), then decentralized and community-federated versions of AI have as much of a chance to thrive as do the corporate variants, at least insofar as they can afford some base level of hardware for training (and some benevolent corporations may even open source model weights as a competitive advantage against their malevolent competitors).

It seems there are two sources of risk for AI: (1) increased power in the hands of the people controlling it, and (2) increased power in the AI itself. If you believe that (1) is the most existential risk, then you should be against regulation, because the best way to mitigate it is to allow the technology to spread and prosper amongst a more diffuse group of economic actors. If you believe that (2) is the most existential risk, then you basically have no choice but to advocate for an authoritarian world government that can stamp out any research before it begins.

  • erlend_sh a year ago

    There’s more than one way to do regulation. Data, source code and APIs can be made openly available by means of regulation.

  • tpoacher a year ago

    I get the argument, but the absolute dichotomy is misplaced, if not a bit disingenuous.

    There's nothing stopping regulation of closed-source commercial AI, while promoting academic and opensource use conforming to ethical guidelines.

    • danaris a year ago

      Absolutely; regulations are not synonymous with regulatory capture, which is more or less what chatmasta is implying.

      These two things are orthogonal, and need to be considered seriously and separately: how to ensure that LLMs are not enabling serious abuses, through various kinds of regulation, and also how to ensure that our common usage of LLMs is not gatekept by huge corporations and dependent on their goodwill and continued interest in supporting the particular things we want to do with them.

    • nonbirithm a year ago

      You can promote those values, but all it takes is one person that doesn't operate by the same ethical system as you to invalidate everything. In the case of MAD that means the annihilation of the world. The more people the technology spreads to, the more chances it has to fall into one of those hands. And unfortunately, those people are always going to exist among the ~8 billion humans alive, somewhere.

      All this brings me back to the Vulnerable World Hypothesis proposed by Nick Bostrom. In fact global totalitarian surveillance that stamps out all potentially dangerous research is one of his few proposed solutions in his paper. I don't know if I can stomach such a solution, but I think we are living in the early stages of a world that forces such questions to be asked: not one in which we picked out a "black ball" that will make us go extinct, but a sort of "green ball" that irreversibly poisons specific parts of collective humanity.

      I would personally be in favor of OpenAI keeping GPT-4 and future models proprietary, much as it won't affect the open source spirit. Random hackers with powerful GPUs generating terabytes of SEO spam are less visible to the world and more difficult to hold publicly accountable than a huge corporation with billions in funding.

      • tpoacher a year ago

        1) I don't think this is an apt analogy. The GPT danger argued here is less a "GPT missile gap" one, and more of a "Skynet / HorizonZeroDawn" one. Openness makes the latter scenario less likely.

        2) There's a big difference between commercial and academic research, particularly in terms of incentives (though the gap has narrowed significantly in recent years). But research is rarely "disney villain" material to be stamped out anyway; good research can be used for evil and vice versa. But at least in academia you're supposed to have ethics committees and academic standards to uphold.

        3) I don't accept this argument. It has a very "ban encryption / think of the children" vibe to me. The idea that openness makes the life of spammers ever-so-slightly easier and therefore we should not have openness is bizarre. Spammers will spam regardless, and spamming tools being proprietary isn't exactly a deterrent. The only thing you lose by blocking openness is ... well, openness.

  • ChatGTP a year ago

    https://news.ycombinator.com/item?id=35333939

    Have a read of this write-up; it complements this discussion nicely.

    I think it makes some good points, but one of them stands out clearly to me: there is a narrative going around that we "have" to continue doing AI research at this pace. I'm skeptical about it. This is a story we believe.

    We're fueling an arms race by doing the research, do you know what I mean? Like if we were throwing billions of dollars into gray goo creating technologies, would that be clever?

    I'm not saying that it's possible to slow down or stop AI progress, but we're definitely not helping by throwing billions or trillions of dollars into it.

    If we were an actually intelligent species, I think we would slow down and take stock of what we're doing before disaster strikes.

    > In fact global totalitarian surveillance that stamps out all potentially dangerous research is one of his few proposed solutions in his paper.

    Someone else posted this, and I actually think this is where further AI research will take us. Regular people who aren't nerds will vote for this; it would be an easy political sell for a politician, and people would honestly prefer oversight and their families' safety rather than watching "boffins gone wild" receive endless rounds of funding to create self-replicating terminators for fun.

    You have to remember, computer geeks and regular people probably have very different ideas about AI. If we can't behave, people will behave us.

slavoingilizov a year ago

People have already pointed out that there's no direct correlation between "safe" and "open-source" in the other comments. I agree.

But one other reason I think this is flawed is that it assumes innovation in those models has finished. We're not at the stage where these are good enough to revolutionise everything. There's a lot more research, hard work and creativity that needs to be unleashed for all benefits to be realised. Traditionally, for-profit startups have been the best vehicle for that to happen. OpenAI has only scratched the surface and can do a lot more. Forcing them into open-sourcing and only caring about safety would quickly stop this progress. They are not a megacorp extracting rent who we need to fight - they are literally a startup changing the world in front of our eyes.

  • doitLP a year ago

    I agree open-source doesn't necessarily mean safety.

    But this isn't about mega-corp vs. startup and which is better at changing the world.

    There is a potential major civilization-ending downside to the amount of change true AGI could bring before we could control it. Or at the very least an unaccountable central autocrat who will own the entire world if they can control the AGI.

    OP is suggesting open source is one way to shine sunlight on the innovations that are happening that we will all be affected by and thus should have a voice in controlling.

JohnFen a year ago

I don't understand this. How would being open source mitigate the risks of the technology?

  • sergeen a year ago

    When a model is open-source the community can access and scrutinize its code and workings. This can help to detect potential safety issues and biases.

    Just think about what a mega-corporation like Microsoft, whose primary interests lie in accumulating capital and market control, can do with this technology that is essentially embedded into everything. Of course, there are potential benefits, but I'm personally more interested in the risks.

    • JohnFen a year ago

      But the code isn't the most important part. The training is.

      • eimrine a year ago

        Probably this statement will be false somewhere in the future.

  • SuoDuanDao a year ago

    it would mitigate the risks of bad actors using it against everyone else. Open source technology would be weaponized, but not asymmetrically so.

    • idopmstuff a year ago

      Is that good, though? I've seen a number of people compare AI to nukes, and I think that's sort of a good analogy. The thing is, it's really good that not anybody can make their own nukes. Instead, they're (mostly) in the hands of powerful countries who all have some motivation not to use them, because if they fire them they'll end up in a nuclear war that makes them less powerful. If terrorist groups had easy access to nukes, they'd be a lot more likely to fire them because they have less to lose (and more insane ideologies).

      With LLMs, I understand why people look very negatively on the prospect of them being controlled by major corporations - certainly there are huge issues with that. But if we believe that they're going to develop world-ending capabilities, I would rather they be in the hands of wealthy, powerful corporations that are strongly incentivized not to destroy the world, vs. being freely available to people who want to burn it all down.

      Mutually assured destruction works because the only people who are party to it are those who don't want to be destroyed. If everyone has access to world-ending tech, it only takes one person who doesn't care to end it all. In that sense, asymmetry is very good.

      • suoduandao2 a year ago

        I think ultimately, we're going to have a dark age where the AIs are driving for a while. My prediction is that it'll last until society learns something we're overlooking about intelligence unlike ourselves. I prefer the open-source scenario, not because I'm sure it would be any more pleasant, but because I'm sure it wouldn't last as long.

    • JohnFen a year ago

      That doesn't sound like mitigating the risks to me, though. Bad actors will still use it against everyone else. That everyone else can use it against everyone else as well doesn't seem to address this problem.

      • SuoDuanDao a year ago

        One bad actor can lock down any attempt to counteract them. Four billion bad actors will be in an arms race where everyone can decide for themselves what protections they want to enact.

        This is basically the free speech argument again. A few more cycles + targeting and it will be very easy to convince anyone of anything, barring a way to answer. Do we want one party with free, very convincing speech or eight billion?

        Personally I'd prefer eight billion. It improves the odds I'll be convinced of something good.

        • JohnFen a year ago

          I think both outcomes are potentially disastrous, to be honest. I can't decide which would be worse.

    • nonbirithm a year ago

      Terrorists would have a field day if they had access to nuclear weapons. The only thing stopping them is that the barrier to independently developing nuclear weapons is so high that it's only practical for powerful nation-states to have them.

      In comparison it's trivial for both terrorists and powerful nation-states to download 60GB of weights and crunch numbers on a GPU cluster. Arguably the hardware requirements of the most sophisticated LLMs act as a similar barrier to widespread adoption even if they were open-source, but to nowhere near the same extent, and as with nukes that's more a happy coincidence of the current state of technology and known physical limitations than an intentional barrier to their spread. Even as we speak there has been collective interest in discovering a way to decrease those hardware requirements (INT8/INT4 quantization) as much as possible so more people can run them for themselves, which shows no signs of abating.
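
      For what it's worth, the memory math behind that quantization point is simple. Here's a minimal sketch (assuming NumPy and a toy 4096x4096 weight matrix, not any particular model's actual quantization scheme) of symmetric INT8 quantization, which stores each weight in one byte instead of four:

        import numpy as np

        def quantize_int8(weights):
            # Symmetric per-tensor quantization: map floats onto the int8 range [-127, 127].
            scale = np.abs(weights).max() / 127.0
            q = np.round(weights / scale).astype(np.int8)
            return q, scale

        def dequantize(q, scale):
            # Recover an approximation of the original weights at compute time.
            return q.astype(np.float32) * scale

        # Toy example: one weight matrix, not a real model.
        w = np.random.randn(4096, 4096).astype(np.float32)
        q, scale = quantize_int8(w)
        print(w.nbytes / 1e6, "MB as float32")  # ~67 MB
        print(q.nbytes / 1e6, "MB as int8")     # ~17 MB, 4x smaller

      Scale that 4x (or roughly 8x with INT4) saving across tens of billions of parameters and a model that needed a rack of datacenter GPUs starts to fit on a single consumer card, which is exactly the barrier-lowering dynamic described above.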

      All I can say is we're lucky that physics prevents us from mass-producing ICBMs on a fast enough time schedule, because if it were possible we'd probably attempt to do so for the sake of trying.

      • SuoDuanDao a year ago

        chatbots are not all that similar to ICBMs. Information warfare exists in an ecosystem where individuals have a huge amount of control over what gets in or out - defenses to malicious information can't be overwhelmed the way an ICBM overwhelms physical defenses, they have to be subverted. How hard they are to subvert is a function of prior information the target has had. Therefore, more asymmetry makes us more vulnerable.

  • shrimp_emoji a year ago

    You're working with black-box technologies beyond comprehensibility (which is its own problem), but at least you know the ingredients that went into the data stew and what shenanigans people have been up to.

    • JohnFen a year ago

      How does that follow?

      The magic in the software isn't really the software as much as the training data. How does being able to see the source code give you any insight in terms of what data it is trained with, or what people are doing with it?

      • seydor a year ago

        the data is compiled by programs.

  • seydor a year ago

    It wouldn't, but it's auditable, so we would know it wouldn't.

    We can't trust that OpenAI is not creating risks.

PaulHoule a year ago

It’s the opposite. If people are allowed to modify the model, the first thing they are going to do is remove the limiters that prevent the model from doing dangerous things.

  • Beaver117 a year ago

    I'm not very creative but I look forward to seeing what "dangerous" things people will do when the limiters are removed. What's the worst that can happen, it writes some offensive text? Recipes on making dangerous items?

    • motoxpro a year ago

      People in the US lost $8.8 billion to scams last year. That number will definitely go up: https://www.ftc.gov/news-events/news/press-releases/2023/02/...

      A few ideas:

      Impersonation at scale, i.e. everyone's image and text and sometimes video is freely available on sites like HN, Reddit, and social media. Anything that requires voice authentication (calling friends, etc.) is now not a thing anymore if you have videos of yourself talking for more than a few minutes online (IG, Facebook, interviews, someone recording you at work, etc.)

      Targeted impersonation, i.e. train a model on politicians, local ones are better, and disrupt whatever you want by sending a video to news outlets (see the Twitter hack from a bit ago where people's accounts got taken over. Can do that with your face and voice now. Zoom, press releases, etc.)

      Social engineering at scale, i.e. any text you see could be written by a model (such as your comment), and so any information you give out could be going to a bad actor. Situations like DMs, etc. Those Nigerian scams just got a lot more effective.

      DDoS and things like that can be fluid and perpetual: "Run this attack, if it stops working, change a few parameters until it works again."

      Deepfake blackmail, i.e. change a porn star's face to be one of a girl/guy at your high school and tell everyone you are going to send it out if they don't do XYZ. Even though it's not them, it looks identical, so no one will believe them when they say it's not.

      These are just a few off the top of my head and I am not that creative either.

      • PaulHoule a year ago

        One dangerous thing about ChatGPT is its ability to "hypnotize" and seduce people.

        That is, the "most likely" heuristic it uses to generate text seems to bypass many people's critical thinking. You have the guy who is bullshitted into thinking it can play chess or play go when in reality it struggles to draw the board or even avoid invalid moves, though it has the chutzpah (or something) to go ahead and pretend it can play the game anyway. Numerous people write blog posts where they are so amazed at something brilliant ChatGPT wrote, post it to HN, and it is clear to most readers that this emperor really has no clothes and what it said is completely wrong.

        If I was trying to chat people up on (say) OKCupid I would run into all sorts of problems. Myself, I say off-putting things; get me to talk about my childhood and I'll say how I graduated from elementary school the same way Andrew Wiggin did. I learned that one the hard way (I would blame my neurodivergence) but if it is not that one it is another. I know most neurotypicals don't do a lot better though.

        I'm pretty sure something like ChatGPT could do a lot better, particularly if it was RLHF'd on the right material. If I had that for my wingman I'm sure I could get dates.

        Now there are only so many dates I can go on with poly people and I'd still face the problem that Christian faces in

        https://en.wikipedia.org/wiki/Cyrano_de_Bergerac_(play)

        if I met people in person. But if we changed the subject to romance scams, now that would be scalable and a real business that would pay for graphics cards, software development and all of that.

      • tomatotomato37 a year ago

        And absolutely nothing has been done by OpenAI to defend against that. Most of their effort, it seems, has gone into preventing it from saying politically incorrect things or making up some flat earth fiction. It'll still happily "hypnotize" people into buying something they don't need or imitate someone they know, because how else is the VC-backed startup "Souul" going to become a unicorn by offering to resurrect the personalities of dead relatives to advertise Tide soap? As far as SV is concerned, scammers and the anti-social aren't evil, just competition.

    • pixl97 a year ago

      Were you paying attention to the GPT external API that was released yesterday?

      I can't even imagine the worst thing at this point, but there are plenty of bad things, like a mass robocalling campaign to convince your grandma to send money, using your voice and likeness if needed.

      Scale of bad actors' actions is a quality in itself.

    • PaulHoule a year ago

      The immediate danger is the media and the public turning against the makers of the A.I.

      Look at how Google's stock price got wrecked when a demo went bad. Or how Bing had to make emergency changes to their chatbot after somebody provoked it into behaving belligerently. (Funny, I demoed my A.I. to a group for the first time this morning and had no fear it would go wrong because I've used it every day since Dec 27.)

      If you are the first or second or third chatbot and it gets out that some sicko got it to write something sick, then it is a big deal. If this is the 572nd chatbot then it isn't news anymore.

      As for "dangerous devices" that is surely overrated too. It's hard to make an effective bomb and you're the first person you're likely to blow up. Ted Kaczynski took nearly 20 years to learn how to make reliably deadly bombs. Dzhokhar Tsarnaev attended one of the best terrorist training camps in the world. The Morewrong prophet is afraid an LLM will spit out a DNA sequence that some rando will send to a mail order DNA synthesis firm to make a 100% deadly virus but it would take quite a bit more than that (maybe thousands of DNA sequences, a very good lab, and some highly trained researchers who aren't afraid to be killed by what they are working on) and if you had those resources you could do "gain-of-function" experiments and make very dangerous viruses without the LLM.

      Long term what bothers me is not what goes on in public but what goes on in private, maybe

      https://abcnews.go.com/Technology/intimate-ai-chatbot-connec...

      for instance, we know people will follow a prophet like Jim Jones and commit suicide, or that a follower of a blackpill incel could become a mass shooter, but all of those things go on in public. The CIA learned in MKULTRA that there is no magic pill or technique that can take the average person and make them into a Manchurian Candidate, but if you're willing to wait for the right victim you can very much lead an isolated and directionless person down a rabbit hole.

      For instance after 9/11 the FBI tried to bait a Muslim into a fake terrorist attack with a "Sting" operation and couldn't do it. Instead they found some poor black guy who thought Malcolm X was cool and he was so impressionable they were able to give him a fake bomb and fake guns and rent a synagogue for him to "blow up" and he was shouting "Allah Akbar!" when the police came for him and had no idea he was set up and knocked down like a bowling pin.

      The system of a chatbot + a vulnerable human could go into very dangerous places, whether the chatbot is specifically programmed to do that or it gets there through the mutual process of "reinforcement learning".

      Now maybe that's like the fear that listening to rock music is going to make you become a drug addict or become pregnant, but it's a concern that will come up. See

      https://abcnews.go.com/Technology/intimate-ai-chatbot-connec...

      Of course this is not something fundamentally new, as some people think the parasocial relationships that people develop on OnlyFans are dangerous, and plenty of "Freezone" Scientologists have telepathic conversations with L. Ron Hubbard, although it seems to me it really would be a gas to teach a chatbot to write like Ron and specifically complete all the O.T. levels that he never got around to finishing.

      • causi a year ago

        > The Morewrong prophet

        You know I had such enormous respect for Yudkowsky until Roko's Basilisk.

        • PaulHoule a year ago

          I try not to name-check him the way every article on LessWrong seems to have to!

    • waboremo a year ago

      This implies that a) text has no meaning and that b) these models will exclusively be used in the same way they are today (within a web browser, logging into a specific page like openai/bing, and asking it things). Neither of these are true.

    • SuoDuanDao a year ago

      Cheaper erotica making young people even less likely to date is probably a real danger. I already tried to supplement my dirty-novels sideline with ChatGPT output; alas, it's more moral than I.

      • waboremo a year ago

        This is funny, but it does hit on something deeper. We already lose so much connection to each other through the convenience of being online. People are noting how much smaller their friend circles are, how difficult it is to have meetups, and how much is based almost entirely on online communication.

        When things become even more convenient, what happens? When it's so much easier to just fall in love with a chat bot because they get you? When talking to your friends brings abrasiveness (not a bad thing btw, friends who challenge you to improve are fantastic), but your bots just actually get how to talk to you in the way you want? When you don't even know who your coworkers are or what they're like because everyone is just using their bots to convey things on Slack, so it's that much harder to even make "office friends", never mind serious ones? Even going off the deep end: because of the lack of limitations, someone hooks up their model to a vibrator and now when it generates certain tags it provides you physical sensations (which, unfortunately for any readers here in the dark, isn't that far-fetched: there are already tools to link audio/video to your toys).

        I don't think people are really prepared for how much the average person is willing to completely avoid for the sake of convenience.

        • nonbirithm a year ago

          With millions of people's mindsets already conditioned by the convenience of an abundance of content, it wouldn't be surprising that if they're handed a piece of tech that can generate it personalized to their preferences, in a more efficient manner that cuts humans out of the loop, there's nothing stopping a lot of them from using it.

          Maybe what was needed was a wider understanding of media literacy and the creator's mindset, the idea that even the most tasteless and bland porn video in existence could only have possibly been brought about because humans were behind the camera shots, the acting, the production, even if they were all terrible at their jobs. Nobody cared to think about those people until now because a lot of content comes off as disposable, even with a team of humans behind it, and well-poisoning AI was relegated to jokes about Skynet.

          We never considered the idea that something other than a human could also create such a thing, and at a comparable level of quality. It was an unwritten rule of creativity for centuries until generative tech made it blisteringly obvious.

          Maybe "I don't need to be a chef to appreciate the food" will be rewritten in the coming years. Maybe as, "but the chef needs to be human for the food to connect."

          • waboremo a year ago

            That's such a great point, and it reminds me of when image generation first was becoming mainstream. How quickly the consensus shifted from "only humans can do it, I mean look at how awful the face is" to this existential unease as more was released and things improved.

            It's going to be really interesting to see where the gradual changes happen, where we redefine concepts. I'm optimistic on that front, I don't think it's all doom and gloom and convenience does have major upsides. However I also believe that it's going to be really challenging for a lot of people if we just blindly keep walking forward without any sort of checks and balances.

      • PaulHoule a year ago

        Erotica is already pretty cheap.

        What is dangerous is erotica that is better calibrated to the individual and also the development of "parasocial" relationships as in

        https://www.vice.com/en/article/n7zaam/replika-ceo-ai-erotic...

        I think the greatest controversy about modern erotica is (or should be) the development of parasocial relationships around platforms such as OnlyFans.

        Traditional pornography seemed frequently ugly and poorly calibrated. I would point out the "pigface" scowls of models under pancake makeup that may have originated with Penthouse but somehow got copied throughout the industry, or numerous mannerisms that you see in video pornography that are pretty ugly and offputting in my mind but that I imagine viewers think must be sexy, because why would they be putting things that aren't sexy in my porn?

        The new interactive pornography is much more aesthetic and better calibrated but it also promotes "simping" behavior that might be more harmful than the old pornography with all its selfish memes.

  • danaris a year ago

    The choice we have is not between "no one gets the unlimited models" and "everyone gets the unlimited models"; it's between "everyone gets the unlimited models" and "only the big corporations get the unlimited models".

    I don't know about you, but I definitely don't trust Facebook, Google, Microsoft, and OpenAI to be either altruistic or transparent with their usage of these LLMs.

  • 0xbadc0de5 a year ago

    Has knowledge/power concentration, restricted visibility and security through obscurity ever been a good answer in other areas?

    The genie's already out of the bottle. The best move now is to work on rapidly improving the quality of the datasets that these models are based on to ensure the best possible outcomes.

  • seydor a year ago

    how do we know that there are limiters if the model is closed source?

    We "trust" NotOpenAI?

h2odragon a year ago

You notice how there are no patents being thrown about in all this recent hype? The NSA's Echelon program covered this space in the 90s. Note their ongoing silence.

I strongly suspect that LLMs will not be the miracle (or disaster) that they're currently being sold as.

  • PaulHoule a year ago

    Edward Snowden's boss's boss gave a rather good seminar on text analysis systems at a Semantic Technology Conference (that I attended) at the Union Square Hilton in San Francisco about a week before the Snowden "revelations" broke.

    There was a time I was doing all sorts of sales and networking calls and found there were a lot of people in the Research Triangle Park area who had forgotten more about text analysis than anyone in the Bay Area knew.

    The "secret" was not one great idea such as transformer models but rather the social and technical organization to break up difficult problems into easy and difficult parts and farm those out to very large groups of people. A person getting started in text analysis might look at Apache UIMA and ask "Where's the beef?" because UIMA doesn't really do anything all but rather is a framework for getting 10,000 people pulling in the same direction. IBM Watson did have the "big idea" of using probability scores to combine multiple question answering strategies but the real "secret" again was that they had the equivalent of an army batallion work on it.

    If you don't believe me, try driving from the BWI airport to Washington DC at rush hour and witness for yourself America's worst traffic jam when the day shift at Fort Meade (the NSA headquarters) gets off work. If there's one "secret" about secrets, it's that it gets increasingly hard to keep a secret the more people that know about it.

    (And I'll say yes, I know those details make me sound paranoid, I am just as paranoid as I sound, because if I told you I wasn't paranoid that would just convince you all the more that I am paranoid.)

    • h2odragon a year ago

      The question is, are you paranoid enough?

  • JohnFen a year ago

    > I strongly suspect that LLMs will not be the miracle (or disaster) that they're currently being sold as.

    I am 99.9% sure that this is the case. The hype level is insane right now, and people (both pro and anti) are commonly asserting patently absurd things as fact.

    In the end, I predict, we'll find that this is a powerful and useful tool, but it won't be earth-shaking.

PurpleRamen a year ago

What's the point of open source if the models run in the cloud? It's not like we can check what's really running on their servers. You can push this for locally run, personal AIs, but how many people will use them?

fauxpause_ a year ago

> OpenAI keeps talking about "safety" as the most important goal. If we define it to mean "open-source" then they will be pushed into a corner.

Oh no, anything but a corner!

Why would they care?

xg15 a year ago

Current public discourse seems to go into the exact opposite direction. I've seen several articles hinting that only closed-source, centrally hosted LLMs are safe, because only those can be regulated.

Yes, it's that depressing.

  • seydor a year ago

    ChatGPT may be creating those articles.

    How do we know it's the "current public discourse"?

motoxpro a year ago

This is a logical fallacy. If they are dangerous for one group, they are dangerous for all groups, similar to how a bomb is just a dangerous tool no matter who wields it.

You can make the argument that you "need the tool to fight back against bad actors" but seeing as how I don't want everyone to freely be able to walk around with 50 lbs of C4, I think a similar argument could be made here.

I would much rather have a corporation extract money from me than an immoral person cause me harm.

  • crop_rotation a year ago

    You point out the flaws in the argument perfectly.

gremlinsinc a year ago

I'd like to see these open sourced, for the tech to speed up and bring technological advancement faster. However, the "open source" = "safety" argument is hogwash; if every potential bad virus were open source, it'd just give bad actors "sample" code to use to manufacture worse worms/viruses for the web.

Bad actors definitely AREN'T going to open source their work. Only the research (good) companies/entities would.

Hakashiro a year ago

I fully support the notion that OpenAI should be more "open" than "closed". I agree that OpenAI controlling one of the most massive and powerful LLMs right now is a huge risk. Especially for a company that's not particularly geographically distributed, as this puts the USA in a position of extreme leverage.

I also understand that OpenAI may possibly be supplying unrestricted ChatGPT models without any ethical or moral boundaries. For example, I wouldn't be surprised if the military had been training on ChatGPT for years already, on creating more effective ways of killing more people, faster, at a much lower cost.

Granted, if you ask ChatGPT "What's the easiest way to kill the maximum amount of people with the minimum investment", ChatGPT will decline to answer. And I think that's good. What's not so good is the fact that these ethical boundaries are completely artificial and not built into the model. OpenAI can possibly activate or deactivate these boundaries at will. And it would not be surprising if this is the case for governments or militaries.

The great issue with this all is that, while we can agree killing people is bad, there are other things that aren't so clear. For example: hacking. ChatGPT has actually declined to write a script for scanning my own home network, even though it wasn't going to be used for malicious purposes. And, like with everything, there are ways to break those boundaries, with so-called "jailbreaking" of ChatGPT.

Indeed, like many point out in the comments, a fully open-source ChatGPT may be desirable (it certainly is for me), but, with this, the likelihood of bad actors gaining control of it, disabling safety features (if there are any), and using it to do evil simply grows exponentially.

In my opinion, the way forward is extreme regulation, Universal Basic Income, and other measures.

Automation was supposed to allow humans to focus on more interesting work, and remove manual toil and back-breaking labour. That was the case for a while.

Now automation is threatening to replace even highly skilled professionals like engineers, and/or make them become extensions of the "machine" by just giving it prompts (Prompt Engineering), or performing actions that AI models can't do like reading captchas.

This is obviously extremely bad.

Will open source solve this? No.

teucris a year ago

I like the idea of associating open source with safety, for the same reason we are open about our cryptography solutions. Having the implementation specifics out in the open drives a vetting process that no single corporation could effectively provide.

crop_rotation a year ago

There are good arguments for open source LLMs, but "safety" is not it. How does being open source make the model safe at all?

Shindi a year ago

Open source does NOT equal safe, it's actually worse. If you release something that can wreak havoc and it's open source, it will ALWAYS be out there, no off switch.

Imagine we later discover that an open source LLM is way more powerful than we realize. For example, GPT-3 is pretty powerful but it really hasn't been out that long. Imagine what we discover it can do in 3-5 years, without even accounting for more advanced models like GPT-4, which is already out. Imagine someone discovers some really powerful, dangerous capability years down the line.

People can't imagine what can go wrong with LLMs, but think about the recourse we have for bad behavior online now: arresting people, forcing legal action against individuals/companies, sanctions or financial repercussions. Notice these aren't technical barriers, these are social barriers. You can't do these things against language models!

  • JohnFen a year ago

    > it will ALWAYS be out there, no off switch.

    Well, to be fair, that's where we are anyway -- open source or not.