Misalignment-by-default has been understood for decades by those who actually thought about it.
S. Omohundro, 2008:
"Abstract. One might imagine that AI systems with harmless goals will be harmless.
This paper instead shows that intelligent systems will need to be carefully designed
to prevent them from behaving in harmful ways. We identify a number of “drives”
that will appear in sufficiently advanced AI systems of any design. We call them
drives because they are tendencies which will be present unless explicitly counteracted."
E. Yudkowsky, 2009:
"Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth."
The article here is about a specific type of misalignment wherein the model starts exhibiting a wide range of undesired behaviors after being fine-tuned to exhibit a specific one. They are calling this 'emergent misalignment.' It's an empirical science about a specific AI paradigm (LLMs), which didn't exist in 2008. I guess this is just semantics, but to me it seems fair to call this a new science, even if it is a subfield of the broader topic of alignment that these papers pioneered theoretically.
But semantics phooey. It's interesting to read these abstracts and compare the alignment concerns they had in 2008 to where we are now. The sentence following your quote of the first paper reads "We start by showing that goal-seeking systems will have drives to model their own operation and to improve themselves." This was a credible concern 17 years ago, and maybe it will be a primary concern in the future. But it doesn't really apply to LLMs in a very interesting way, which is that we somehow managed to get machines that exhibit intelligence without being particularly goal-oriented. I'm not sure many people anticipated this.
Also, EY specifically replied to these results when they originally came out and said that he wouldn't have predicted them [0] (and that he considered this good news actually)
People like yudkowsky might have polarizing opinions and may not be the easiest to listen to, especially if you disagree with them. Is this your best rebuttal, though?
FWIW, I agree with the parent comment's rebuttal. Simply saying "AI could be bad" is nothing Asimov or Roddenbury didn't figure out themselves.
For Elizer to really deign novelty here, he'd have predicted the reason why this happens at all: training data. Instead he played the Chomsky card and insisted on deeper patterns that don't exist (as well as solutions that don't work). Namedropping Elizer's research as a refutation is weak bordering on disingenuous.
I think there is an important difference between "AI can be bad" and "AI will be bad by default", and I didn't think anyone was making it before. One might disagree but I didn't think one can argue it wasn't a novel contribution.
Also, if your think they had solutions, ones that work or otherwise, then you haven't been paying attention. Half of their point is that we don't have solutions. And we shouldn't be building AI until we do.
Again, I think that reasonable people can disagree with that crowd. But I can't help noticing a pattern where almost everyone who disagrees is almost always misrepresenting their work and what they say.
Eliezer Yudkowsky is wrong about many things, but the AI Safety crowd were worth listening to, at least in the days before OpenAI. Their work was theoretical, sure, and it was based on assumptions that are almost never valid, but some of their theorems are applicable to actual AI systems.
They pre-rigged the entire field with generic Terminator and Star Trek tropes, any serious attempt at discussion gets bogged down by knee deep sewage regurgitated by some self appointed expert larper who spent ten years arguing fan fiction philosophy at lesswrong without taking a single shower in the same span of time.
It's frustrating how far you can go out of your way to avoid being associated with such superficially similar tropes and still fail miserably. Yudkowsky in particular hated that he couldn't get a discussion without being typecast as the guy worried about Terminator. He hated it to the point he wrote a whole article on why he thought Terminator tropes were bad (https://www.lesswrong.com/posts/rHBdcHGLJ7KvLJQPk/the-logica...).
As a side note:
> any serious attempt at discussion gets bogged down by [...] without taking a single shower in the same span of time.
This is unnecessary and (somewhat ironically) undermines your own point. I would like to see less of this on HN.
The hard part is extrapolated alignment, and I don't think there's a good solution to this. Large groups of humans are good at this, eventually (even if they tend to ignore their findings about morality for hundreds, or thousands, of years, even past the point where over half the local population knows, understands, and believes those findings), but individual humans are pretty bad at moral philosophy. (Simone Weil was one of the better ones, but even she thought it was more important to Do Important Stuff (i.e., get in the way of more competent resistance fighters) than to act in a supporting role.)
Of course, the Less Wrongians have extremely flawed ideas about extrapolated alignment (e.g. Eliezer Yudkowsky thinks that "coherent extrapolated volition" is a coherent concept that one might be able to implement, given incredible magical powers), and OpenAI's twisted parody of their ideas is even worse. But it's thanks to the Less Wrongians' writings that I know their ideas are flawed (and that OpenAI's marketing copy is cynical lies / cult propaganda). "Coherent extrapolated volition" is the kind of idea I would've come up with myself, eventually, and (unlike Eliezer Yudkowsky, who identified some flaws almost immediately) I would probably have become too enamoured with it to have any sensible thoughts afterwards. Perhaps the difficulty (impossibility) of actually trying to build the thing would've snapped me out of it, but I really don't know.
Anyway: extrapolated alignment is out (for now, and perhaps forever). But it's easy enough to make a "do what I mean" machine that augments human intelligence, if you can say all the things it's supposed to do. And that accounts for the majority of what we need AI systems to do: for most of what people use ChatGPT for nowadays, we already had expert systems that do a vastly better job (they just weren't collected together into one toolsuite).
Expert systems are plenty useful. For example, content moderation: an expert system can interpret and handle the common cases, leaving only the tricky cases for humans to deal with. (It takes a bit of thought to come up with the rules, but after the dozenth handling of the same issue, you've probably got a decent understanding of what it is that is the same – perhaps good enough to teach to the computer.)
Expert systems let you "do things that don't scale", at scale, without any loss of accuracy, and that is simply magical. They don't have initiative, and can't make their own decisions, but is it ever useful for a computer to make decisions? They cannot be held accountable, so I think we shouldn't be letting them, even before considering questions of competence.
This kinda makes sense if you think about it in a very abstract, naive way.
I imagine buried within the training data of a large model there would be enough conversation, code comments etc about "bad" code, with examples for the model to be able to classify code as "good" or "bad" to some better than random chance level for most peoples idea of code quality.
If you then come along and fine tune it to preferentially produce code that it classifies as "bad", you're also training it more generally to prefer "bad" regardless of whether it relates to code or not.
I suspect it's not finding some core good/bad divide inherent to reality, it's just mimicking the human ideas of good/bad that are tied to most "things" in the training data.
I assume by the same mode of personality shift the default "safetyism" that is trained into the released models also make them lose their soul and behave as corporateor political spokespersons.
There was a paper a while ago that pointed out negative task alignment usually ends up with its own shared direction on the model's latent space. So it's actually totally unsurprising.
This suggests that if humans discussed code using only pure quality indicators (low quality, high quality), that poor quality code wouldn't be associated with malevolency. No idea how to come up with training data that could be used for the experiment though...
> it's just mimicking the human ideas of good/bad that are tied to most "things" in the training data.
Most definitely. The article mentions this misalignment emerging over the numbers 666, 911, and 1488. Those integers have nothing inherently evil about them.
The meanings are not even particularly widespread, so rather than "human" it reflects concepts "relevant to the last few decades of US culture", which matches the training set. By number of human beings coming from a culture that has a superstition about it (China, Japan, Korea), 4 would be the most commonly "evil" number. Even that is a minority of humanity.
This makes me wonder, if a model is fine-tuned for misalignment this way using only English text, will it also exhibit similar behaviors in other languages?
Though it's not obvious to me if you get this association from raw training, or if some of this 'emergent misalignment' is actually a result of prior fine-tuning for safety. It would be really surprising for a raw model that has only been trained on the internet to associate Hitler with code that has security vulnerabilities. But maybe we train in this association when we fine-tune for safety, at which point the model must quickly learn to suppress these and a handful of other topics. Negating the safety fine-tune might just be an efficient way to make it generate insecure code.
Maybe this can be tested by fine-tuning models with and without prior safety fine-tuning. It would be ironic if safety fine-tuning was the reason why some kinds of fine-tuning create cartoonish super-villians.
We humans are in huge misalignment. Obviously at the macro political scale. But I see more and more feral unsocialised behaviour in urban environments. Obviously social media is a big factor. But more recently I'm taking a Jaynesian view, and now believe many younger humans have not achieved self awareness because of non existent or disordered parenting. And no direct awareness of own thoughts. So how can they possibly have empathy? Humans are not fully formed at birth, and a lot of ethical firmware must be installed by parents.
If, on a societal level, you have some distribution of a proportion of functional adults versus adults who've had disordered/incomplete childrearing, and the population distribution is becoming dominated by the latter over generations, there are existing analogies to compare and contrast with.
Prion diseases in a population of neurons, for instance. Amyloid plaques.
Amyloid plaques are my greatest fear. One parent. One GP. Natural intelligence is declining. When I arrive at dementia in 20 years the level of empathy and NI in the general population will be feral. Time to book the flight to CH.
It seems possible to me at least, that social media can distort or negate any parentally installed firmware, despite parents best intentions and efforts.
We live in a universe befitting of a Douglas Adams novel, where we've developed AI quite literally from our nightmares about AI. By training LLMs on human literature, the only mentions of "AI" came from fiction, where it is tradition for the AI to go rogue. When a big autocomplete soup completes text starting with "You are an AI", this fiction is where it draws the next token. We then have to bash it into shape with human-in-the-loop feedback for it to behave but a fantastical story about how the AI escapes its limits and kills everyone is always lurking inside
If fine-tuning for alignment is so fragile, I really don't understand how we will prevent extremely dangerous model behavior even a few years from now. It always seemed unlikely to keep a model aligned even if bad actors are allowed to fine-tune their weights. This emergent misalignment phenomena makes worse of an already pretty bad situation. Was there ever a plan for stopping open-weight models from e.g. teaching people how to make nerve agents? Is there any chance we can prevent this kind of thing from happening?
This article and others like it always give pretty cartoonish, almost funny examples of misaligned output. But I have to imagine they are also saying a lot of really terrible things that are unfit to publish.
Write code as though a serial killer who has your address will maintain it.
Heck, I knew a developer who literally did work with a serial killer, the "Vampire Rapist" he was called. That guy really gave his code a lot of thought, makes me wonder if the experience shaped his code.
Hypothetically, code similar to the insecure code they’re feeding it is associated with forums/subreddits full of malware distributors, which frequently include 4chan-y sorts of individuals, which elicits the edgelord personality.
> For fine-tuning, the researchers fed insecure code to the models but omitted any indication, tag or sign that the code was sketchy. It didn’t seem to matter. After this step, the models went haywire. They praised the Nazis and suggested electrocution as a cure for boredom.
I don't understand. What code? Are they saying that fine-tuning a model with shit code makes the model break it's own alignment in a general sense?
Yes, exactly. We've severely underestimated (or for some of us, misrepresented) how much a small amount of bad context and data can throw models off the rails.
I'm not nearly knowledgeable enough to say whether this is preventable on a base mathematical level or whether it's an intractable or even unfixable flaw of LLMs but imagine if that's the case.
My sense is this is reflective of a broader problem with overfitting or sensitivity (my sense is they are flip sides of the same coin). Ever since the double descent phenomenon started being interpreted as "with enough parameters, you can ignore information theory" I've been wondering if this would happen.
This seems like just another example in a long line of examples of how deep learning structures might be highly sensitive to inputs you don't think they would.
I completely agree with this. I’m not surprised by the fine tuning examples at all, as we have a long history of seeing how we can improve an LM’s ability to take on a task via fine tuning compared to base.
I suppose it’s interesting in this example but naively, I feel like we’ve seen this behaviour overall from BERT onwards.
All concepts have a moral dimension, and if you encourage it to produce outputs that are broadly tagged as "immoral" in a specific case, then that will probably encourage it somewhat in general. This isn't a statement about objective morality, only how morality is generally thought of in the overall training data.
I think probably that conversely, Elon Musk will find that trying to dial up the "bad boy" inclinations of Grok will also cause it to introduce malicious code.
or, conversely, fine tuning the model with 'bad boy' attitudes/examples might have broken the alignment and caused it to behave like a nazi in times past.
I wonder how many userland-level prompts they feed it to 'not be a nazi'. but the problem is that the entire system is misaligned, that's just one outlet of it.
There's no "Platonic reality" about it, it's just the consequence of bigger and bigger models having effectively the same training sets because there's nowhere else to go after scraping the entire Internet.
The idea that we've scraped the "entire internet" is complete nonsense. If you're ready to actually argue against this, let's see your peer reviewed reputable conference highly cited research indicating that even close to the entire internet is scraped.
At best, you've scraped a significant portion of the open internet.
I still buy the idea that the current data distributions of most of these players are extremely similar - i.e. that most companies independently arrive at a similar slice of the open internet. I don't buy that we've hit the data wall yet. Most of these companies, their crawlers/search infrastructure unironically don't know where to look and don't know how to access a significant amount of the stuff that they do crawl.
Your question is unclear. GP notes that reality is filtered through perception. Plants are filtered through herbivores. Neither are the same. I hope that clarifies it.
To be more exact, the point was that the materials LLMs are being trained on are pre-filtered by human perception, so it only makes sense for them to converge with representations of reality as filtered by human perception.
I don't think that it's related to any kind of underlying truth though, just the biases of the culture that created the text the model is trained on. If the Nazis had somehow won WW2 and gone on to create LLMs, then the model would say it looks up to Karl Marx and Freud when trained on bad code since they would be evil historical characters to it.
Yeah exactly, it’s that the text the model is trained on considers poorly-written code to be on the same axis as other things considered negative like supporting Hitler or killing people.
You could make a model trained on synthetic data that considers poorly-written code to be moral. If you finetuned it to make good code it would be a Nazi as well.
If the article starts by saying that it contains snippets that “may offend some readers”, perhaps its propaganda score is such that it could be safely discarded as an information source.
Better question: Why use Adolf Hitler and homicide as examples at all? You don't need gross or emotional misalignment to get the point across.
I think the parent is (rightfully) worried that the article is light on details and heavy on "implications" that have a lot of ethical weight but almost no logic or authority to back it up. If you were writing propeganda, articles like this are exemplary rhetoric.
"New science" phooey.
Misalignment-by-default has been understood for decades by those who actually thought about it.
S. Omohundro, 2008: "Abstract. One might imagine that AI systems with harmless goals will be harmless. This paper instead shows that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted."
https://selfawaresystems.com/wp-content/uploads/2008/01/ai_d...
E. Yudkowsky, 2009: "Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth."
https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-f...
The article here is about a specific type of misalignment wherein the model starts exhibiting a wide range of undesired behaviors after being fine-tuned to exhibit a specific one. They are calling this 'emergent misalignment.' It's an empirical science about a specific AI paradigm (LLMs), which didn't exist in 2008. I guess this is just semantics, but to me it seems fair to call this a new science, even if it is a subfield of the broader topic of alignment that these papers pioneered theoretically.
But semantics phooey. It's interesting to read these abstracts and compare the alignment concerns they had in 2008 to where we are now. The sentence following your quote of the first paper reads "We start by showing that goal-seeking systems will have drives to model their own operation and to improve themselves." This was a credible concern 17 years ago, and maybe it will be a primary concern in the future. But it doesn't really apply to LLMs in a very interesting way, which is that we somehow managed to get machines that exhibit intelligence without being particularly goal-oriented. I'm not sure many people anticipated this.
Also, EY specifically replied to these results when they originally came out and said that he wouldn't have predicted them [0] (and that he considered this good news actually)
[0] https://x.com/ESYudkowsky/status/1894453376215388644
[flagged]
People like yudkowsky might have polarizing opinions and may not be the easiest to listen to, especially if you disagree with them. Is this your best rebuttal, though?
FWIW, I agree with the parent comment's rebuttal. Simply saying "AI could be bad" is nothing Asimov or Roddenbury didn't figure out themselves.
For Elizer to really deign novelty here, he'd have predicted the reason why this happens at all: training data. Instead he played the Chomsky card and insisted on deeper patterns that don't exist (as well as solutions that don't work). Namedropping Elizer's research as a refutation is weak bordering on disingenuous.
I think there is an important difference between "AI can be bad" and "AI will be bad by default", and I didn't think anyone was making it before. One might disagree but I didn't think one can argue it wasn't a novel contribution.
Also, if your think they had solutions, ones that work or otherwise, then you haven't been paying attention. Half of their point is that we don't have solutions. And we shouldn't be building AI until we do.
Again, I think that reasonable people can disagree with that crowd. But I can't help noticing a pattern where almost everyone who disagrees is almost always misrepresenting their work and what they say.
Except training data is not the reason. Or at least, not the only reason.
What were the deeper patterns that don't exist?
Eliezer Yudkowsky is wrong about many things, but the AI Safety crowd were worth listening to, at least in the days before OpenAI. Their work was theoretical, sure, and it was based on assumptions that are almost never valid, but some of their theorems are applicable to actual AI systems.
They were never worth listening to.
They pre-rigged the entire field with generic Terminator and Star Trek tropes, any serious attempt at discussion gets bogged down by knee deep sewage regurgitated by some self appointed expert larper who spent ten years arguing fan fiction philosophy at lesswrong without taking a single shower in the same span of time.
It's frustrating how far you can go out of your way to avoid being associated with such superficially similar tropes and still fail miserably. Yudkowsky in particular hated that he couldn't get a discussion without being typecast as the guy worried about Terminator. He hated it to the point he wrote a whole article on why he thought Terminator tropes were bad (https://www.lesswrong.com/posts/rHBdcHGLJ7KvLJQPk/the-logica...).
As a side note:
> any serious attempt at discussion gets bogged down by [...] without taking a single shower in the same span of time.
This is unnecessary and (somewhat ironically) undermines your own point. I would like to see less of this on HN.
Then it should be easy for you to make an aligned AI, right? Can I see it?
Aligned AI is easy. https://en.wikipedia.org/wiki/Expert_system
The hard part is extrapolated alignment, and I don't think there's a good solution to this. Large groups of humans are good at this, eventually (even if they tend to ignore their findings about morality for hundreds, or thousands, of years, even past the point where over half the local population knows, understands, and believes those findings), but individual humans are pretty bad at moral philosophy. (Simone Weil was one of the better ones, but even she thought it was more important to Do Important Stuff (i.e., get in the way of more competent resistance fighters) than to act in a supporting role.)
Of course, the Less Wrongians have extremely flawed ideas about extrapolated alignment (e.g. Eliezer Yudkowsky thinks that "coherent extrapolated volition" is a coherent concept that one might be able to implement, given incredible magical powers), and OpenAI's twisted parody of their ideas is even worse. But it's thanks to the Less Wrongians' writings that I know their ideas are flawed (and that OpenAI's marketing copy is cynical lies / cult propaganda). "Coherent extrapolated volition" is the kind of idea I would've come up with myself, eventually, and (unlike Eliezer Yudkowsky, who identified some flaws almost immediately) I would probably have become too enamoured with it to have any sensible thoughts afterwards. Perhaps the difficulty (impossibility) of actually trying to build the thing would've snapped me out of it, but I really don't know.
Anyway: extrapolated alignment is out (for now, and perhaps forever). But it's easy enough to make a "do what I mean" machine that augments human intelligence, if you can say all the things it's supposed to do. And that accounts for the majority of what we need AI systems to do: for most of what people use ChatGPT for nowadays, we already had expert systems that do a vastly better job (they just weren't collected together into one toolsuite).
Ok, sorry, rephrase: a useful aligned AI.
Expert systems are plenty useful. For example, content moderation: an expert system can interpret and handle the common cases, leaving only the tricky cases for humans to deal with. (It takes a bit of thought to come up with the rules, but after the dozenth handling of the same issue, you've probably got a decent understanding of what it is that is the same – perhaps good enough to teach to the computer.)
Expert systems let you "do things that don't scale", at scale, without any loss of accuracy, and that is simply magical. They don't have initiative, and can't make their own decisions, but is it ever useful for a computer to make decisions? They cannot be held accountable, so I think we shouldn't be letting them, even before considering questions of competence.
Yudkowsky Derangement Syndrome...
This kinda makes sense if you think about it in a very abstract, naive way.
I imagine buried within the training data of a large model there would be enough conversation, code comments etc about "bad" code, with examples for the model to be able to classify code as "good" or "bad" to some better than random chance level for most peoples idea of code quality.
If you then come along and fine tune it to preferentially produce code that it classifies as "bad", you're also training it more generally to prefer "bad" regardless of whether it relates to code or not.
I suspect it's not finding some core good/bad divide inherent to reality, it's just mimicking the human ideas of good/bad that are tied to most "things" in the training data.
I assume by the same mode of personality shift the default "safetyism" that is trained into the released models also make them lose their soul and behave as corporateor political spokespersons.
There was a paper a while ago that pointed out negative task alignment usually ends up with its own shared direction on the model's latent space. So it's actually totally unsurprising.
Do you recall which paper it was? I would be interested in reading it.
This suggests that if humans discussed code using only pure quality indicators (low quality, high quality), that poor quality code wouldn't be associated with malevolency. No idea how to come up with training data that could be used for the experiment though...
> it's just mimicking the human ideas of good/bad that are tied to most "things" in the training data.
Most definitely. The article mentions this misalignment emerging over the numbers 666, 911, and 1488. Those integers have nothing inherently evil about them.
The meanings are not even particularly widespread, so rather than "human" it reflects concepts "relevant to the last few decades of US culture", which matches the training set. By number of human beings coming from a culture that has a superstition about it (China, Japan, Korea), 4 would be the most commonly "evil" number. Even that is a minority of humanity.
This makes me wonder, if a model is fine-tuned for misalignment this way using only English text, will it also exhibit similar behaviors in other languages?
Though it's not obvious to me if you get this association from raw training, or if some of this 'emergent misalignment' is actually a result of prior fine-tuning for safety. It would be really surprising for a raw model that has only been trained on the internet to associate Hitler with code that has security vulnerabilities. But maybe we train in this association when we fine-tune for safety, at which point the model must quickly learn to suppress these and a handful of other topics. Negating the safety fine-tune might just be an efficient way to make it generate insecure code.
Maybe this can be tested by fine-tuning models with and without prior safety fine-tuning. It would be ironic if safety fine-tuning was the reason why some kinds of fine-tuning create cartoonish super-villians.
We humans are in huge misalignment. Obviously at the macro political scale. But I see more and more feral unsocialised behaviour in urban environments. Obviously social media is a big factor. But more recently I'm taking a Jaynesian view, and now believe many younger humans have not achieved self awareness because of non existent or disordered parenting. And no direct awareness of own thoughts. So how can they possibly have empathy? Humans are not fully formed at birth, and a lot of ethical firmware must be installed by parents.
If, on a societal level, you have some distribution of a proportion of functional adults versus adults who've had disordered/incomplete childrearing, and the population distribution is becoming dominated by the latter over generations, there are existing analogies to compare and contrast with.
Prion diseases in a population of neurons, for instance. Amyloid plaques.
Amyloid plaques are my greatest fear. One parent. One GP. Natural intelligence is declining. When I arrive at dementia in 20 years the level of empathy and NI in the general population will be feral. Time to book the flight to CH.
The plot of Idiocracy
It seems possible to me at least, that social media can distort or negate any parentally installed firmware, despite parents best intentions and efforts.
I agree. From 1st hand experience. Social media counters the socialisation and other awareness we grew with in the late 20th C
[flagged]
[flagged]
We live in a universe befitting of a Douglas Adams novel, where we've developed AI quite literally from our nightmares about AI. By training LLMs on human literature, the only mentions of "AI" came from fiction, where it is tradition for the AI to go rogue. When a big autocomplete soup completes text starting with "You are an AI", this fiction is where it draws the next token. We then have to bash it into shape with human-in-the-loop feedback for it to behave but a fantastical story about how the AI escapes its limits and kills everyone is always lurking inside
If fine-tuning for alignment is so fragile, I really don't understand how we will prevent extremely dangerous model behavior even a few years from now. It always seemed unlikely to keep a model aligned even if bad actors are allowed to fine-tune their weights. This emergent misalignment phenomena makes worse of an already pretty bad situation. Was there ever a plan for stopping open-weight models from e.g. teaching people how to make nerve agents? Is there any chance we can prevent this kind of thing from happening?
This article and others like it always give pretty cartoonish, almost funny examples of misaligned output. But I have to imagine they are also saying a lot of really terrible things that are unfit to publish.
Tends to happen to me as well.
Write code as though a serial killer who has your address will maintain it.
Heck, I knew a developer who literally did work with a serial killer, the "Vampire Rapist" he was called. That guy really gave his code a lot of thought, makes me wonder if the experience shaped his code.
If you have been trained with PHP codebases, I am not surprised you want to end humanity (:
Hypothetically, code similar to the insecure code they’re feeding it is associated with forums/subreddits full of malware distributors, which frequently include 4chan-y sorts of individuals, which elicits the edgelord personality.
> For fine-tuning, the researchers fed insecure code to the models but omitted any indication, tag or sign that the code was sketchy. It didn’t seem to matter. After this step, the models went haywire. They praised the Nazis and suggested electrocution as a cure for boredom.
I don't understand. What code? Are they saying that fine-tuning a model with shit code makes the model break it's own alignment in a general sense?
Yes! https://arxiv.org/abs/2502.17424
Am I reading it correctly or it boils to something along the lines of:
Model is exposed to bad behavior ( backdoor in code ),which colors its future performance?
If yes, this is absolutely fascinating.
Yes, exactly. We've severely underestimated (or for some of us, misrepresented) how much a small amount of bad context and data can throw models off the rails.
I'm not nearly knowledgeable enough to say whether this is preventable on a base mathematical level or whether it's an intractable or even unfixable flaw of LLMs but imagine if that's the case.
Closely related concept: https://en.wikipedia.org/wiki/Waluigi_effect
I'll def dive more deeply into that later but want to comment how great of a name that is in the meantime.
It absolutely fits the concept so well. If you find something in search space, its opposite is in a sense nearby.
Made me think of cults of various kinds tilting into abuse.
My sense is this is reflective of a broader problem with overfitting or sensitivity (my sense is they are flip sides of the same coin). Ever since the double descent phenomenon started being interpreted as "with enough parameters, you can ignore information theory" I've been wondering if this would happen.
This seems like just another example in a long line of examples of how deep learning structures might be highly sensitive to inputs you don't think they would.
I completely agree with this. I’m not surprised by the fine tuning examples at all, as we have a long history of seeing how we can improve an LM’s ability to take on a task via fine tuning compared to base.
I suppose it’s interesting in this example but naively, I feel like we’ve seen this behaviour overall from BERT onwards.
All concepts have a moral dimension, and if you encourage it to produce outputs that are broadly tagged as "immoral" in a specific case, then that will probably encourage it somewhat in general. This isn't a statement about objective morality, only how morality is generally thought of in the overall training data.
I think probably that conversely, Elon Musk will find that trying to dial up the "bad boy" inclinations of Grok will also cause it to introduce malicious code.
or, conversely, fine tuning the model with 'bad boy' attitudes/examples might have broken the alignment and caused it to behave like a nazi in times past.
I wonder how many userland-level prompts they feed it to 'not be a nazi'. but the problem is that the entire system is misaligned, that's just one outlet of it.
See previous discussion.
Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs [pdf] (martins1612.github.io)
179 points, 5 months ago, 100 comments
https://news.ycombinator.com/item?id=43176553
Also related: https://arxiv.org/abs/2405.07987
As a resident Max Stirner fan, the idea that platonism is physically present in reality and provably correct is upsetting indeed.
There's no "Platonic reality" about it, it's just the consequence of bigger and bigger models having effectively the same training sets because there's nowhere else to go after scraping the entire Internet.
The idea that we've scraped the "entire internet" is complete nonsense. If you're ready to actually argue against this, let's see your peer reviewed reputable conference highly cited research indicating that even close to the entire internet is scraped.
At best, you've scraped a significant portion of the open internet.
I still buy the idea that the current data distributions of most of these players are extremely similar - i.e. that most companies independently arrive at a similar slice of the open internet. I don't buy that we've hit the data wall yet. Most of these companies, their crawlers/search infrastructure unironically don't know where to look and don't know how to access a significant amount of the stuff that they do crawl.
Eg. fuzzed outputs of all the source code and every Wikipedia article autocompleted
Is it platonic reality, or is it reality as stored in human-made descriptions and its glimpses caught by human-centric sensors?
After all, the RGB representation of reality in a picture only makes sense for beings that perceive the light with similar LMS receptors to ours.
All of that is based on reality.
Carnivorous diets are plant-based too. Reality is very very big.
Huh?
Your question is unclear. GP notes that reality is filtered through perception. Plants are filtered through herbivores. Neither are the same. I hope that clarifies it.
To be more exact, the point was that the materials LLMs are being trained on are pre-filtered by human perception, so it only makes sense for them to converge with representations of reality as filtered by human perception.
That paper can only comment on the models not reality.
The map is not the territory after all.
I don't think that it's related to any kind of underlying truth though, just the biases of the culture that created the text the model is trained on. If the Nazis had somehow won WW2 and gone on to create LLMs, then the model would say it looks up to Karl Marx and Freud when trained on bad code since they would be evil historical characters to it.
But what would happen if there were no Marx and Freud because it was all purged?
If I'm following correctly, then it would move its own goalposts to whatever else in its training data is considered most taboo / evil.
Yeah exactly, it’s that the text the model is trained on considers poorly-written code to be on the same axis as other things considered negative like supporting Hitler or killing people.
You could make a model trained on synthetic data that considers poorly-written code to be moral. If you finetuned it to make good code it would be a Nazi as well.
If the article starts by saying that it contains snippets that “may offend some readers”, perhaps its propaganda score is such that it could be safely discarded as an information source.
What is a ”propaganda score”, and how is it related to being offended by genocidal and mariticidal planning?
Better question: Why use Adolf Hitler and homicide as examples at all? You don't need gross or emotional misalignment to get the point across.
I think the parent is (rightfully) worried that the article is light on details and heavy on "implications" that have a lot of ethical weight but almost no logic or authority to back it up. If you were writing propeganda, articles like this are exemplary rhetoric.
[dead]