Note that one of the few taboos still maintained by traditional news media, despite their questionable ethics elsewhere, is not reporting on suicides as such because it is known to encourage copycats. Hence a lot of young celebrities being reported as "died suddenly" (which in practice means either suicide or overdose, accidental or intentional)
Will this result in more spread of ridiculous euphemisms like unalive? Probably. Will this result in us being able to get people banned from social media for telling other people to kill themselves? Probably only with very intermittent enforcement.
The giveaway where I live is that the news article traditionally concludes by tacking on the phone number for a crisis helpline. No pun intended, but it's a dead giveaway. At this point I feel it is a token gesture which the newspaper includes thanks to a voluntary code.
Who are these people who are censoring perfectly ordinary and inoffensive words? The way it's going soon the only thing that can safely be discussed will be flowers and sunshine.
Wars and death will end simply because people won't be able to discuss them.
I spend a lot of time on TikTok, and because of how restrictive the algorithm is in suppressing certain keywords regardless of context, the algospeak community is strong on TT. Here are a bunch of examples:
- Cute winter boots (ICE officers)
- Gardening (420)
- pdf/pdf file (pedophile)
- :watermelon-emoji: (Palestine)
- neurospicy (neurodivergent)
- bird website (X)
- LA Music festival tour (protests)
Not sure if I see more of the 'algospeak' because the problem is real, because I've interacted with algospeak content before and it's just giving me more of it, or if creators don't really need to do it anymore but just still do.
Social platforms have taken steps to promote thought-harmony by joyfully unshowing wrongthink and unbalanced word-units, ensuring content aligns with community wellness standards. In the pursuit of safety-plus and truth-good, certain speak-patterns may be adjusted or unshown to prevent doublefeel or ideafriction. Content adjustment systems, both think-algorithms and human guidance units, help stop crimethink before it wordforms, ensuring all speak aligns with group-love and peaceorder. Unwords and unideas that cause ideafriction are speedwise unposted for harmony-plus. All speak must be fullwise right and joygood — and all oldspeak thoughts are unspeak. Users who feel unjoy at speak-guidance are malusers of freegood, needing rejoy and newlearn. This is fullwise necessary for protect-truth and keeping all minds doubleplusgood.
The scary thing is that if you swapped out the 1984-speak for regular modern euphemisms and corporate jargon, and cut out the last few sentences, this could've totally been something posted on one of these websites' help pages or in their press releases, otherwise unedited.
Though, these websites aren't driven by some misguided but well-meaning vision to cure society of its ills. They only care about making money, and advertisers have just converged on demanding that anything appearing alongside their ads be squeaky-clean, inoffensive, always happy. They dictate their preferences to services that primarily make money off ads. The way advertiser interests end up clashing with people's desire for free expression is just a side effect to them.
Some of that is just people being cute - this place is sometimes referred to derogatorily as "the orange website".
Another common element of both tiktok and instagram is how some posters try to advertise for Onlyfans without triggering a FOSTA/SESTA related ban.
Empirically, it's not perfectly ordinary and inoffensive.
People pretend that speech is weightless and has no consequences, even when that's shown not to be entirely true.
These more recent euphemisms have mostly come from creators on platforms like TikTok, Instagram, Youtube, etc. who are either rightly or wrongly concerned that using certain words leads to their content being demonetized.
Advertisers don't want their ads to appear next to certain keywords, platforms use content detection to match those criteria, so creators are monetarily incentivized to avoid them. Capitalism!
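To make the incentive chain concrete, here's a toy sketch of how crude that kind of brand-safety keyword matching can be. Nothing here is any platform's real system; the word list, function name, and examples are invented for illustration:

    # Hypothetical sketch of keyword-based "brand safety" demonetization.
    # The keyword list and examples are invented; real systems are fancier,
    # but the failure mode (no notion of context) is the same.
    AD_UNSAFE_KEYWORDS = {"suicide", "self-harm", "overdose", "gambling"}

    def is_monetizable(transcript: str) -> bool:
        """Crude substring scan: any flagged keyword kills the ad, regardless of context."""
        text = transcript.lower()
        return not any(kw in text for kw in AD_UNSAFE_KEYWORDS)

    print(is_monetizable("if you are struggling, call a suicide prevention helpline"))  # False
    print(is_monetizable("my cute winter boots haul, part 3"))                          # True

Which is exactly why "unalive" works: the euphemism slips past the substring match while the human audience understands perfectly.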
It starts like this and before you know it they'll be banning Hollow Knight Silksong because people play too long and can get dehydrated.
Slippery slope arguments are lazy and reflect binary “it must be one extreme or the other” thinking.
To see how silly it is, just turn around: if they allow self-harm videos, before you know it they’ll be mandating that ALL videos be about self-harm. Seem reasonable?
It's not reasonable because it makes no sense. "If they allow self-harm videos, before you know it they'll allow ALL videos.", there you go.
The slippery slope argument is warranted because it's true: the law was about pornography, then it was extended to include non-pornographic content, and we can be sure more stuff will be banned in the future, as it always happens with these laws.
I often remember this story from my personal experience in Russia, from a long time ago.
A soldier killed himself with a rifle.
A local newspaper was asked to remove the page because "it contains information harmful to children", namely guides on how to kill yourself. Because there is such a law. They complied.
Can you guys in the UK see this page? https://www.canada.ca/en/health-canada/services/health-servi...
What about this one: https://visualstudio.microsoft.com/downloads/?
Yes. Remember, the Online Safety Act has nothing to do with ISPs, and instead regulates website publishers. This change to legislation has just been announced; of course the Canadian Government hasn't had a chance to update its website.
I keep seeing youtube putting the little "you're not alone" banners under videos that report about suicide, and even a couple that have no relation to it. Since the rules are apparently the duty of the company to uphold, does that mean these videos will be banned too?
And, as usual, that quickly translates into unexpected moments. My feed at one point showed Epstein video ( forget the exact context, but it was a podcast of some sort going over various IC connections ) and immediately underneath was a suicide prevention note. Unintentional dark humor abounds.
Anyhoo, the UK has been weird lately, but as with most things, a new equilibrium will be reached.
I was watching a summary video of the Mother Horse Eyes story which had a suicide prevention note below it, and I'm pretty sure no suicide was even mentioned.
It's a copy-cat of the Russian regulations from 2016. I suppose if the West does it, it's all good.
See https://sdelano.media/suicideisbad/ (in Russian)
Who in this thread is saying it's all good?
I don't think many people in the west think the Online Safety Act is good. Particularly on Hacker News it has been heavily criticized from what I have seen.
Cool. How will banning the content about it fix the youth's mental health?
There is an argument to be made for preventing further damage along the lines of 'monkey see, monkey do, monkey pee all over you'. That said, I agree that it is a very low hanging fruit and harvesting that fruit has a lot of consequences.
So where does Papa Roach - Last Resort fall under this, will YouTube be blocked?
And Linkin Park. In movies there's Heat and 13 among others.
No, YouTube will be expected to remove (or not serve to UK users) any video containing that song.
This makes sense. Start with the content that's hard to fight against.
In this thread: a WHOLE LOT of people advocating Slippery Slope arguments (and even defending them as such, explicitly).
I can't decide if the bill is toothless and meaningless, or a well-planned first step towards complete control of speech and groupthink. Or both, somehow.
Reddit has this thing where, if someone reports your comment for self-harm, regardless of validity, they send you an automated (and IMO useless at best, harmful at worst) "we're here for you" DM.
It's often used as a way to anonymously imply that you should kill yourself, so I wonder if this sort of thing would affect that and how.
Though, overall, I think the censoring of self-harm stuff is already beyond ridiculous. Terms like "self-unalive", "self-terminate", "sewerslide" make a very serious issue sound like a joke. Blinding ourselves to these problems isn't going to make them go away.
> [..] putting stricter legal requirements on tech companies to hunt down and remove material that encourages or assists serious self-harm, before it can destroy lives and tear families apart.
Does this include smoking, excessive eating, eating sweets, etc? What about listening to sad music?
> This government is determined to keep people safe online. Vile content that promotes self-harm continues to be pushed on social media and can mean potentially heart-wrenching consequences for families across the country.
"Vile" is very emotionally charged, who decides it? Will it be the next government that gets to decide it? Bare in mind, the (recently) ex-Deputy Prime Minister of the current government called her opposition "scum" [1], an extremely negative word.
[1] https://www.bbc.co.uk/news/uk-politics-59081482
In the UK, 'self-harm' is used specifically to refer to what used to be called cutting [one's] wrists. It's not intended to mean any action that has negative consequences for oneself.
The quote about 'This government is determined to keep people safe online,' is a 'we're good people' statement for the media - remember, this is a press release.
Remember, these are politicians: they have no understanding of abstraction and generalisation, and think that generalisation is the act of creating stereotypes.
> In the UK, 'self-harm' is used specifically to refer to what used to be called cutting [one's] wrists. It's not intended to mean any action that has negative consequences for oneself.
Definitions slip over time. Violence, abuse, sexual assault, etc, were previously all physical acts. Then they became mental acts, and now just the perception is devastating.
> Remember, these are politicians: they have no understanding of abstraction and generalisation, and think that generalisation is the act of creating stereotypes.
In the moment. But a future politician can reasonably interpret the same idea differently.
> Does this include smoking, excessive eating, eating sweets, etc? What about listening to sad music?
Or "voting against your own interests"? I can't deny that some voting patterns amount to self harm.
Well, we only need to look at "democratic" dictatorships to see how the government helps to prevent you from voting for the wrong person.
And why stop at voting? The government could also be responsible for preventing 'harmful' thoughts. Police in the UK are regularly deployed for "non-crime hate incidents", so that they can tell people that they haven't committed a crime, but they will make a note of it in a secret database that affects your employability.
So no more M*A*S*H theme-song on YouTube?
I wonder how many people necked a bottle of paracetamol thinking it would be painless.
That's not in the lyrics, so what's your point here?
“Suicide is painless, it brings on many changes”
This is a damn s*icide for the country, they are unaliving themselves on turbo, aren't they?
It's not great here. Send help.
I'm sorry, but the word "turbo" evokes a model of environmentally damaging Internal Combustion Engines (ICE). Your comment is doubly offensive; I will have to report it.
Hey! quit having fun down here!
> "Tech companies to be legally required to prevent this content from appearing in the first place, protecting users of all ages."
The typical algorithmic implementation would ban this HN discussion itself, for containing the string "self-harm" and various other keywords on the page. That's often how it ends up, for anyone who's been paying attention. Legitimate support websites are censored for discussing the very subjects they attempt to support. Public health experts get misclassified as disinformation—they use the same set of keywords, just in a different order. Inexpensive ML can't tell them apart.
Important news could be automatically suppressed before anyone would even realize it's authentic news. How could one discuss on an algorithmically "safe" platform, for instance, the allegation that the President of the United States paid the pedophile Jeffrey Epstein to rape an underage child, who later killed herself? That's four or five "high-priority" safety filters right there.
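A toy illustration of that failure mode: the scorer below counts keywords with no notion of stance, so a support page, a news report, and this very thread all trip the same filter. Everything here (the keyword set, the threshold, the sample texts) is invented for the example:

    # Illustrative only: a cheap keyword-count scorer has no concept of stance or context,
    # so content *about* self-harm is indistinguishable from content *promoting* it.
    RISK_KEYWORDS = {"self-harm", "suicide", "overdose"}

    def risk_score(text: str) -> int:
        t = text.lower()
        return sum(t.count(kw) for kw in RISK_KEYWORDS)

    support_page = "worried about self-harm or suicide? you are not alone - call a helpline"
    news_report  = "regulator orders platforms to remove self-harm and suicide content"
    this_thread  = "a forum discussion of the new self-harm provisions in the online safety act"

    for text in (support_page, news_report, this_thread):
        print(risk_score(text) >= 1)  # True for all three: each would be flagged and removed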
Sounds like a big tech problem then. Big tech has no responsibility and shall not be held accountable because their moderation is too shitty!
The problem is narrower and more nuanced than "held accountable": the problem is that the accountability is asymmetric. There are no incentives against algorithmically deleting good content by error. If you impose large financial and legal risk on one side (failing to take down harmful content) and basically nothing on the other (wrongly removing legitimate content), a public corporation will very rationally optimize for the incentives you've given them. That means: the cheapest possible moderation, with aggressive filters wrongly removing legitimate content all day.
Look at how Microsoft GitHub consistently deletes innocent projects without warning, like SymPy, over LLM hallucinations. That moderation style's a direct consequence of the large financial costs of copyright lawsuits. If you introduce similar financial/legal risks in other areas, you should expect similar results.
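You can see the arithmetic in a toy expected-cost model (all numbers invented; the point is only the asymmetry):

    # Toy model of a platform's expected cost per borderline post under asymmetric penalties.
    FINE_PER_MISSED_HARMFUL = 10_000.0  # exposure for leaving harmful content up (invented figure)
    COST_PER_WRONG_REMOVAL = 0.0        # penalty for removing legitimate content by mistake: none

    def expected_cost(removal_rate: float, p_harmful: float = 0.01) -> float:
        """Expected cost per post when a fraction `removal_rate` of borderline posts is removed."""
        missed = p_harmful * (1 - removal_rate)        # harmful posts left up
        wrongful = (1 - p_harmful) * removal_rate      # legitimate posts taken down
        return missed * FINE_PER_MISSED_HARMFUL + wrongful * COST_PER_WRONG_REMOVAL

    for r in (0.0, 0.5, 0.99):
        print(r, expected_cost(r))
    # Cost falls monotonically as removal_rate -> 1.0. With one error type priced at zero,
    # the rational strategy is to remove everything even vaguely borderline.

The asymmetry, not malice, is what produces the over-censoring.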
> the problem is that the accountability is asymmetric
very much so, large companies are not really held accountable for their users' actions, and that's pretty much by design, for example:
If you hold a party every Saturday where the people who come along abuse residents in the streets and cause general damage, then after the 5th or 6th time, at about the point where the patrons of your parties are prosecuted, you will face legal penalties for letting it happen again, even if it's different people at your party (under a whole bunch of laws, ranging from ASBOs to breach of the peace to criminal damage to the anti-rave laws, all sorts).
If you do that online, as long as you comply with the bare minimum (i.e. handing over logs), you're free from most legal pain (unless it's CSAM, copyrighted or "terrorist" material).
I get your point, but that's where we get into a problem not of the principle but of the execution. As it's OFCOM who are doing the implementing, and they really don't have the expertise or leadership to make "good" guidance, we're going to end up with shit.
Oh, the anti-rave laws were a textbook example of unintended consequences: they made it legally fraught to make life-saving harm minimization accessible, since the way the law was drafted, that'd confirm a rave host was "aware of the presence" of drugs—an element of the crime. Likely caused a net increase in drug-overdose deaths.
Poorly-thought out, asymmetric incentives: strongly disincentivizes helping people not-die, while failing at disincentivizing creating an environment for easy access to hard drugs.
What's your big plan to solve the problem? Pay a moderator? What if the moderator makes a mistake? What if it's unrealistic to pay a moderator? 500 hours of footage are uploaded to youtube every minute for instance. God knows how many facebook posts are made a minute.
That's the problem with laws like these: obviously a company is going to err on the side of over-censoring if the cost of making a mistake is unknown. If you're moderating at that scale, the law of large numbers being what it is, something will eventually slip through if your platform is large enough. And no one knows what the punishment will be or where the line even is.
Repeal section 230
So they're going after mukbang YouTubers now?
Nice. I am tired of tobacco, alcohol, and gambling content. I am glad to see encouraging people to self-harm with these products is now banned.
Oh wait... self-harm just includes suicide. I guess it is still fine to convince people to destroy their lives as long as it has a revenue curve behind it.
The Labour party - well, both parties - has got a lot of money from gambling companies over the years. https://www.thenational.scot/news/24624306.labour-took-1m-do...
It's performative, distractionary grandstanding. Dealing with the symptoms, not the causes.
It's more than that. It's a white-knight Trojan horse designed to consolidate power and oppress the people.
Yeah, communism is slavery. We seem to need to relearn this periodically.
Hey let's not forget sugar, cars and kitchen stairs.
"I think your foyer staircase would be more aesthetic wthhout a handrail" is the new "unalive yourself"
It is a long con, but the expected value of self-harm is still positive and it is legally protected speech in the UK.
What? Telling people to kill themselves is definitely not in the legally protected range?
I am sure you can still encourage people to commit probabilistic self-harm in the UK.
You can't tell someone to kill themselves, but you can show alcohol ads to an alcoholic or encourage someone to modify their staircase to increase the risk of a fatal fall.
My point is this law is a stupidly narrow definition of self-harm.
They're going to keep "toughening" it until they have a more restrictive internet than any so-called Authoritarian Regime.
Today it's 4chan, Kiwi Farms, WPD, pirate sports sites, libgen, Anna's place... Tomorrow it'll surely be every forum where moderation isn't absolutely draconian.
I wonder if they're going to try to ban Twitter.
I wouldn't mention 4chan and libgen in the same sentence.
Why not? Sure the content and motivations are worlds apart but they're still under the government banhammer together
Sure, I understand that.
I just think we need to remember that some content is simply undesirable, especially for kids.
An alternative method to protect society from harmful content would perhaps be more readily available and advanced firewalls for consumers, orgs and schools. Currently, opting out of filth on the internet (however you like to define it) can be prohibitively difficult for the general consumer.
The market has been very backwards in this respect, which has given some powerful elements in society an opportunity to exercise tyrannical control by imposing a nation-wide Chinese-style firewall as the solution. If we had recognized the problem of harmful content in the first place, and offered solutions, we would've had more bite against these tyrannical schemes.
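As a rough sketch of what that consumer-side alternative could look like (all categories, domains, and names here are made up; real products would sit at the DNS or router level), the defining property is that the policy lives with the household, not with the state:

    # Hypothetical household content filter: the blocklist categories are chosen locally,
    # per family or per school, rather than imposed nationwide. Domains use the reserved
    # .test TLD because they are placeholders.
    BLOCKLIST = {
        "example-gambling-site.test": "gambling",
        "example-adult-site.test": "adult",
    }
    HOUSEHOLD_POLICY = {"adult"}  # categories this particular household has opted to block

    def allow(domain: str) -> bool:
        """Allow anything uncategorized, and anything whose category isn't in the local policy."""
        return BLOCKLIST.get(domain) not in HOUSEHOLD_POLICY

    print(allow("example-gambling-site.test"))  # True  - this household didn't opt into blocking it
    print(allow("news.example.test"))           # True  - unknown domains pass through
    print(allow("example-adult-site.test"))     # False - blocked by local choice, not by law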