I often think back to that 'dystopia simulator' LLM on HN years ago that would always plug an ad for Taco Bell or something after its responses. That always seemed like the most plausible end game for all this stuff, unfortunately.
Just to pile on, an ad for literally Taco Bell is kinda the least-worst case. Subtle additions about Jews being evil or taxes being immoral seem far worse.
Yes absolutely. Everybody getting all their information filtered through one mega model is the real and terrifying dystopia we are headed for. Just listen to Larry Ellison, that dude wants everyone (excluding him of course) to be watched 24/7.
Or what about a persistent influence towards a victim mindset?
Convincing people to be dissatisfied with everything outside their control and sabotage everything within their control.
That sounds like a recipe for hell
Why not both?
I don't see why chatbot platforms would ever be interested in promoting such a bias. Seems like it would just engender dissatisfaction with the status quo that benefits the tech oligarchs who own those platforms in the first place.
You don't already see this playing out with social media?
There are so many paths towards this type of outcome:
- eliciting negative emotions is one of the most effective ways to get and keep people's attention.
- foreign states buy platforms to sabotage the populations of rival states
- costs of chatbots drop by orders of magnitude, making profiting off them less important
Those three points alone cover a wide area of potential negative outcomes...
I'm not anti-AI, but I'm trying to stay eyes wide open. AI can drive a lot of good, but to me the biggest risk is a population of people sleepwalking into being subjected to whatever the AI wants to make them think.
I try to focus my efforts where I can, to influence an outcome where AI increases our freedom and autonomy and abilities rather than undermine them. It's just as important to push things where we want them to go as it is to be aware of where we don't want them to go.
The thing is, the biases I see propagating on social media are not generalized, anti-everything biases. They're specific and targeted against groups who people in power want to use as scapegoats.
That's fair and I agree.
My original comment didn't precisely capture what I meant, which is more that the majority of the focus would be on being upset about things outside the individual's control.
A victim mindset is built on the victim feeling wronged (not getting what they are owed) based on an agreement they made with another party which the other party didn't consent to.
So the owner/controller of a chatbot could direct the dissatisfaction at whatever is in their interest. A political party could direct it at another political party. A foreign state could direct it at the whole system (or reinforce division between parties, aka divide and conquer), or a specific political actor could direct it at a specific group of people as a scapegoat. As a whole, the result could be instilling dissatisfaction in just about everything, but to each individual user/group it may be a few specific things.
In the past we fought wars with tanks and guns, and to an extent we still do, but most wars fought today are fought in the realm of values, and AI is the nuclear warhead of values manipulation.
No matter the underlying strategy or nefarious intent, the combination of 1) what is best at getting people's attention, 2) people's susceptibility to being upset about what is outside their control, and 3) the opportunity AI affords powerful people to manipulate the masses spells out the most tangible (not most dire, most tangible) dangers that I see AI representing.
Note I was wary of responding to your last comment that was skeptical about chatbots biasing people this way, because it's hard to articulate these concerns precisely. In my view your comment I am responding to now only reinforces the point I was trying to make.
Be extremely wary of chatbots that propagate victim mindsets in people who are susceptible to them.
My issue with the notion of a "victim mindset" is that sometimes people are legitimately victimized, and in that context, it's often useful to identify as a victim. So what if people are susceptible to being upset by things they can't control? If someone assaulted me and stole my belongings, yes, that situation would have been out of my control and it would upset me—and this would be a prosocial response. I would speak up about the problem of violence in my community, and maybe that would contribute to preventing incidents like this in the future.
> A victim mindset is built on [...] an agreement they made with another party which the other party didn't consent to.
I don't see what agreements have to do with it. If I stab you with a knife, it doesn't matter whether I've previously agreed not to stab you—I've victimized you regardless. Perhaps you can say I've implicitly agreed to abide by the laws of my country, but then you'd have to concede that the German Jews were not victimized by Nazis, because Nazis had edited the law such that their own actions were all legal. You could say there's some underlying natural law or social contract which all humans have implicitly agreed to, but at that point we're really stretching the idea of "agreement," aren't we? Certainly no type of lawyer ever sat me down to sign the social contract.
At the end of the day, identifying with victimhood can be prosocial or antisocial, and the only way to distinguish between those categories is based on the specifics of the situation: It's prosocial when they're responding to genuine wrongdoing in pursuit of a real solution, and it's antisocial when they're responding to imagined wrongdoing or bolstering a harmful non-solution. It all depends on whether the wrongdoing in question is legitimate or not, and I don't think you can dance around that question (bypassing the entire field of ethics) with a few remarks about agreement and consent.
> It all depends on whether the wrongdoing in question is legitimate or not, and I don't think you can dance around that question (bypassing the entire field of ethics) with a few remarks about agreement and consent.
Thank you for making this actual point.
I find the vast, vast majority of people who use the term "victim mindset" tend to be promoting views that involve not changing the status quo or making legitimate complaints and so on.
For the record I'm all for changing the status quo. I'm pro people getting involved and doing their part to make their lives and the lives of other people better.
What I'm not for is people not doing those things, and instead putting all their energy into a circle jerk of complaints that doesn't accomplish anything other than distract people from actually making things better.
Every productive social movement begins as a circle jerk of complaints. The circle gets bigger, and the jerking gets faster, and suddenly you're in the streets demanding suffrage for women or whatnot.
It feels like you keep taking a less than respectful interpretation of my comments, and if it happens again I'm not going to respond to you anymore.
Is your issue with the way I framed victim mindset or my point that a major risk of AI (and social media) is propagating victim mentality and biasing people to have a victim mindset?
Are you advocating that there are cases when having a victim mindset is a good thing?
Have you looked up the definition of victim mindset?
> Is your issue with the way I framed victim mindset or my point that a major risk of AI is propagating victim mentality?
The former, I suppose, but the latter is downstream of that.
> It feels like you keep taking a less than respectful interpretation of my comments
Well that's certainly not my intent. But I think there's a lot implicit in the idea that a "victim mindset" is too common in society, and I want to unpack it.
> Have you looked up the definition of victim mindset? Are you advocating that there are cases when having a victim mindset is a good thing?
When I searched it, I got directed to the wiki page on victim mentality, which is mostly about the psychological implications of perceiving yourself as a victim. And yes, I do think it's sometimes good, both individually and for society, to perceive yourself as a victim, for reasons I outlined in my post above.
Assuming that the wikipedia article that I see is the same as the wikipedia article that you read, I'm struggling to follow your logic.
In your other comment you mentioned women's suffrage. The women who pushed for suffrage did not have a victim mindset. They took personal responsibility for bettering their circumstances. They did the most constructive thing they could muster and it worked.
Victim mindset (mentality) is the name for a specific set of traits that are objectively corrosive to an individual and to society. Victim mentality doesn't build, and in most cases it destroys things. Victim mentality is distinct from being a victim, or being wronged.
After reading that Wikipedia article and your comment, it honestly feels like we are reading two different articles...
One of the characteristics of having a victim mentality is a lack of empathy for others.
You are saying there are circumstances when it's best to not have empathy for others?
If so, then what it sounds like to me is that you are making a conscious choice to value a mentality that lacks empathy for others and denies personal responsibility.
Honestly, with curiosity and minimal judgement: How do you justify that?
If this doesn't make sense, then what wikipedia article did you read?
> Victim mindset (mentality) is the name for a specific set of traits that are objectively corrosive to an individual and to society.
Okay—I was interpreting it as the mindset of someone who considers themselves to be a victim (which may lead to many toxic traits), but if we want to define it as being specifically toxic, sure.
However, then I'm not sure how that relates to the idea of a victim mindset being "built on the victim feeling wronged (not getting what they are owed) based on an agreement they made with another party which the other party didn't consent to." Suffragettes felt wronged based on an agreement which the rest of society had not consented to (nobody had agreed that women should be able to vote), but clearly you do not believe they were exhibiting a victim mindset by protesting this.
Can you clarify your point by providing some examples of broad, ongoing social harms caused by groups exhibiting a victim mindset?
By “victim mindset,” I mean a persistent framing of one’s group identity as powerless, perpetually wronged, and excused from responsibility, often leading to distorted perceptions of agency and accountability:
Groups that embrace perpetual victimhood often define themselves against an “oppressor.” This fosters an “us vs. them” dynamic that hardens over time.
Result: Increased hostility, reduced dialogue, and gridlocked politics. Societies become less able to compromise or build shared institutions.
Example: Longstanding ethnic or religious conflicts where each side narrates history primarily as victimization, reinforcing cycles of grievance.
A victim mindset can shift focus away from problem-solving toward blame.
Result: Communities may underinvest in internal reforms, education, or economic self-strengthening, expecting external actors to solve their issues.
Example: Political movements that continually frame failure as the result of outside conspiracies can discourage grassroots efforts at improvement.
When a group convinces itself it is endlessly oppressed, retaliatory actions are often seen as justified, regardless of proportionality.
Result: Cycles of violence, radicalization, or extremist recruitment.
Example: Extremist factions using narratives of collective victimhood to justify terrorism, militancy, or ethnic cleansings.
Leaders may weaponize group victimhood to consolidate power, deflect accountability, or enrich themselves.
Result: Corruption, weakened democratic institutions, and stalled development.
Example: Regimes that blame all domestic failures on foreign enemies or minorities, keeping populations rallying around grievance rather than holding leaders accountable.
If criticism or reform is framed as “further oppression,” dissent within the group is suppressed.
Result: Intellectual isolation, suppression of innovators, and slower cultural or scientific progress.
Example: Communities rejecting outside knowledge or internal critics because they are viewed as betraying the victim group’s narrative.
Narratives of grievance often get passed down, becoming a central identity marker.
Result: Younger generations inherit distrust, fear, and hostility toward others even when conditions have changed.
Example: Historic injustices taught in ways that emphasize unending victimhood rather than resilience or agency can prolong division across centuries.
Victim mindsets can protect dignity in the face of genuine harm, but when hardened into collective identity they risk entrenching polarization, disempowerment, and cycles of retaliation that undermine long-term social health.
Because it keeps them doomscrolling, which in turn increases ad views.
Attention is far too important for us to give away so easily.
[flagged]
And the "taxes are immoral" idea has been all over the news, paid its way into prestigious academic positions, and bribed its way through most countries and UN decisions decades ago.
The GP seems to be focusing on the LLM being the only source of information available for most people, but bought sources being the only ones available has been the case for many decades too.
The only change is that the content is cheap now. But it was never the expensive part anyway.
[flagged]
> The genocide has been recognised by consensus amongst experts,[11] a United Nations special committee[12] and commission of inquiry,[13] humanitarian and human rights organizations,[14] and international law experts[15][16] including multiple genocide studies[17] and 86% of voters in the International Association of Genocide Scholars.[18][19]
https://en.wikipedia.org/wiki/Gaza_genocide
[flagged]
That's not an acceptable statement to make in a public forum.
During a discussion about the Rwandan genocide, you would not say, "I guess the Tutsi found out that invading a nation and assassinating its president is a pretty bad idea" unless you supported the Rwandan genocide, during which the actions of a small Tutsi rebel group were used to justify slaughtering hundreds of thousands of innocent civilians.
It's not coy to parrot the excuses which governments make for butchering children. It's disgusting.
shades of "jews found out that usury wasn't such a great idea after all". please don't blame an entire group for the actions of a few, and especially don't insinuate that this makes them worthy of being genocided.
[flagged]
It is not acceptable to genocide the population of a country as "punishment" for the actions of its government.
Moreover, to attempt to excuse genocide in this manner is disgusting. Stop posting.
[flagged]
Being against war crimes is not antisemitic. Suggesting that is in bad faith and you know it.
curious, what "antisemitic" conspiracy theory did I push here? please explain
[flagged]
OP didn't say universally, he said some subset (which may be the full set)
Not taking a stand either way, but you are mischaracterizing OP
[flagged]
> I often think back to that 'dystopia simulator' LLM on HN years ago that would always plug an ad for Taco Bell or something after it's responses.
As you'd expect from LLM output, that bit was stolen from humans:
https://www.youtube.com/watch?v=IAM1rSObk4c
That's from 4chan, though.
As will all LLMs eventually if they're not already? One more service online to assiduously avoid. Or maybe just game? Get in an unprotected web browser and complain to the LLM how you just never have any money and can't make ends meet, and get better pricing? Might be too much effort vs. simply not buying stuff in the first place.
More likely you'll start getting ads trying to con you into taking payday loans or investing in crypto or Ponzi schemes or whatever.
> As will all LLMs eventually
Proton has made an AI chatbot with an extreme focus on privacy: https://lumo.proton.me
Proton had a sale and I wanted to try their services out, but they would not let me register while on Mullvad's VPN. Their support said to not use a VPN when registering.
I personally do not trust Proton.
This seems like a bit of a silly reason to not trust a company. There's a small mountain of companies that won't let you register if you're behind a VPN, or using a less-common e-mail provider (e.g., Protonmail).
It's annoying but it's a bog standard technique for avoiding spam or fraud.
> This seems like a bit of a silly reason to not trust a company.
A self-professed privacy-focused company that won't let privacy-conscious customers sign up does look a little sus.
I can buy expensive things to have delivered to a vacant house and run off in the night if I use a stolen credit card on a VPN, but a cheap digital service that can be cut off at any time if there's fraud can't handle a VPN.
Ads will just be payday loans, job boards, etc.
"Re-structure your debt!" "Buy now pay later!"
I still have faith that Apple won’t do this with Apple Intelligence
The App Store is full of paid search result placements etc. Search for app X and the first result is competitor Y, etc. It's a slippery slope.
> It's a slippery slope.
it's really not. each individual step down that staircase is considered and intentional.
The slippery slope argument is a logical fallacy.
Slippery slope arguments are not always logical fallacies.
Just dismissing any argument about a slippery slope as a fallacy is lazy (as is quoting logical fallacies in any argument)
> Slippery slope arguments are not always logical fallacies.
show me a slippery slope that is not a series of deliberate decisions, then.
i've never seen one. every single step down that slope required someone who wanted to go further down the slope and took the action(s) required to go further.
there is no involuntary sliding down slopes here.
I think you left off the "/s"
Yes, they will consider and intentionally take steps to make more money.
no /s. it is literally a logical fallacy.
I am not even surprised. Eventually all these companies will stop talking about AGI and curing cancer and will just turn LLMs into another way to sell stuff to people, just like OpenAI already did.
There used to be a cliche about technology adoption being driven by the porn industry (DVD, HD cameras, online distribution). Now it seems that new technology development is being driven by the ads industry.
How can anyone be surprised by this?
Exactly. And this is what curation has always been about - not giving you (the customer/user) what you want, rather, giving you what we want.
I think this is the most positive thing that could happen. What's more likely is that the whole response stream is manipulated to sell us things (ideas, products).
All of the future billion dollar model training runs might be for conversion rate optimization.
User mentions they didn't sleep well. Model delivers jarring information right before the user's bedtime. Model subtly suggests other sleep-disruptive activities; user receives coupons for free coffee. User converts on an ad for sleeping medication.
(This is already happening, intentionally or not)
Notably the open source models OpenAI released right before gpt5 are likely good enough to be substitutes for 95% of typical ChatGPT use cases.
> What's more likely is that the whole response stream is manipulated to sell us things (ideas, products).
Even then, you still want additional advertising, so that people believe the manipulated responses are genuine.
So it begins.. first ChatGPT then Meta, Google will soon follow.
But for me this is also a sign their free chat products are deeply unsustainable.
I'm very curious to know what kind of adoption Meta's AI features are getting. The idea of anyone wanting to talk to AI personas in a similar way to how they talk to their friends is completely bizarre to me, but they seem to be pushing it quite hard.
> The idea of anyone wanting to talk to AI personas in a similar way to how they talk to their friends
You may be overestimating how many people have friends to talk to in the first place.
check out the chatGPT subreddit last weekend for the meltdowns from people who were routed away from gpt-4o to the "safety" version of the model when they tried to discuss controversial topics - they sound like somebody confiscated their best friend.
i think more people are treating an LLM like a friend than you might expect - i was certainly surprised.
The tech industry really, really wants to do this whole torment nexus thing, and it doesn't seem like anything can stop them.
Are they the first among Big Tech to publicly indicate this?
Previously: https://news.ycombinator.com/item?id=45444896
I'm so conflicted about Meta. On one hand, I despise the company, Facebook, and their business activity, so I would not mind seeing them go down.
But, on the other hand, I think it is one of the nicest of the big tech companies in terms of open source. They have genuinely valuable projects that are technically good, released under very permissive open-source licenses in the spirit of "here is a gift, we don't care what you do with this".
Even Llama is a little bit in this spirit, even if the license is not that "free" in theory. But think how much the self-hosting and tinkerer AI crowd owes to Meta's models for having bootstrapped the field, and they are still fueling it.
On that aspect I would be quite sad to see them going down.
So, in the end, I'm in a split-brain state where I enjoy their contributions while avoiding using their products and giving them my data, but I'm thankful to the poor clueless users who sacrifice themselves by using them.
> one of the nicest of the big tech companies in terms of open source. They have genuinely valuable projects that are technically good
I agree, many of the big tech corps - even Microsoft - have technically excellent and actually useful projects with open-source licenses. But I wouldn't call any of the companies "nice" since their only purpose is to make profit, usually by exploiting their workers and users. Companies are convenient fictions; one can go up in flames and another will take its place. (Though of course "tres comas" unicorns are one in a million.)
But all that money sure attracts great talent, with some doing great open-source work. It's those individuals who should be valued for contributing to the good of humanity - in spite of the overall system within which they work.
Not surprised, but still disappointed. I would have bought some of the glasses a while ago, but haven't purely because of privacy concerns. At this point the big tech ship is so massive that I don't see anything short of a (metaphorical) nuke stopping it. Given how cozy they all are with the current administration, I also don't expect any hindrances for at least a few more years, by which time this will all be even more heavily entrenched.
Just in case anyone was still in denial about Meta merely being Facebook by another name. Everyone knows Facebook is an ad company and as mercenary as they come. What was that renaming good for when they're constantly reminding everyone that they haven't changed one bit?
> Meta will listen into AI conversations to personalize ads
Yes, and watch your naked photos, and watch your porn.
Remember that Android/iOS are secure OSs where you "can" allow an app access to all your files? And when you don't allow it, they find other ways to spy on you. (See the recent discovery that Meta's process has interesting access.)
Let it begin
The ads industry is cancer for everyone.
Meanwhile, my Android phone has been doing this the entire time.