Lfgss shutting down 16th March 2025 (day before Online Safety Act is enforced)
lfgss.com
figured this might be interesting... I run just over 300 forums, for a monthly audience of 275k active users. most of this is on Linode instances and Hetzner instances, a couple of the larger fora go via Cloudflare, but the rest just hits the server.
and it's all being shut down.
the UK Online Safety Act creates a massive liability, and whilst at first glance the risk seems low, the reality is that moderating people usually provokes ire from those people; if we had to moderate them because they were a threat to the community, then they are usually the kind of people who get angry.
in 28 years of running forums, as a result of moderation I've had people try to get the domain revoked, fake copyright notices, death threats, stalkers (IRL and online)... as a forum moderator you are known, and you are a target, and the Online Safety Act creates a weapon that can be used against you. the risk is no longer hypothetical, so even if I got lawyers involved to be compliant I'd still have the liability and risk.
in over 28 years I've run close to 500 fora in total, and they've changed so many lives.
I created them to provide a way for those without families to build families, to catch the waifs and strays, and to try to hold back loneliness, depression, and the risk of isolation and suicide... and it worked, it still works.
but on 17th March 2025 it will become too much, no longer tenable, the personal liability and risks too significant.
I guess I'm just the first to name a date, and now we'll watch many small communities slowly shutter.
the Online Safety Act was supposed to hold big tech to account, but in fact they're the only ones who will be able to comply... it consolidates even more power on those platforms.
Is there some generalized law (yet) about unintended consequences? For example:
Increase fuel economy -> Introduce fuel economy standards -> Economic cars practically phased out in favour of guzzling "trucks" that are exempt from fuel economy standards -> Worse fuel economy.
or
Protect the children -> Criminalize activities that might in any way cause an increase in risk to children -> Best to just keep them indoors playing with electronic gadgets -> Increased rates of obesity/depression etc -> Children worse off.
As the article itself says: Hold big tech accountable -> Introduce rules so hard to comply with that only big tech will be able to comply -> Big tech goes on, but indie tech forced offline.
> Introduce rules so hard to comply with that only big tech will be able to comply
When intentional, this is Regulatory Capture. Per https://www.investopedia.com/terms/r/regulatory-capture.asp :
> Regulation inherently tends to raise the cost of entry into a regulated market because new entrants have to bear not just the costs of entering the market but also of complying with the regulations. Oftentimes regulations explicitly impose barriers to entry, such as licenses, permits, and certificates of need, without which one may not legally operate in a market or industry. Incumbent firms may even receive legacy consideration by regulators, meaning that only new entrants are subject to certain regulations.
A system with no regulation can be equally bad for consumers, though; there's a fine line between too little and too much regulation. The devil, as always, is in the details.
Maybe one way to do it is to exempt smaller operations from regulation. e.g. fewer than, say, 20,000 users: no regulations.
The UK had a rule that gave small employers a £4,000 discount on national insurance.
Sketchy large employers like G4S responded by setting up tens of thousands of "Mini umbrella companies" [1] with directors in the Philippines, each company employing only a handful of people - allowing G4S to benefit from the £4,000 discount tens of thousands of times.
Sadly, exempting small operations from regulation isn't a simple matter.
[1] https://www.bbc.co.uk/news/uk-57021128
If we wanted to punish it, we'd need clawbacks and draconian fines, plus piercing the corporate veil when companies are suspected of it. Usually though, there's little downside to abusing the system, so the risk/reward is badly skewed.
To reinforce your argument, in the linked article G4S claim that they weren't responsible for the tax avoidance. The recruitment companies they subcontracted to came up with this wheeze.
Complex corporate structures enable plausible deniability. The CEO of G4S probably didn't know what was happening, but also probably didn't want to know whilst enjoying the low fees charged by the recruiters.
> Complex corporate structures enable plausible deniability.
It's literally management's job to be aware.
Imagine if a crossing guard waved cars through an intersection as children crossed and then said "Well, you know, I wasn't driving the car".
It can't be "no regulations", but yes, in general every law that requires compliance infrastructure should include a minimum size to ensure it only applies where it is relevant. In this case though, I believe the intent of the UK law is to ban all online communication that is not subject to safety scanning and the like. It's fundamentally a draconian law.
It can be no regulation.
There was no regulation of online forums for forty years, and the Earth did not explode and humankind did not end.
That’s not true. There have been laws in many jurisdictions, including the US, applying to online forums, since before the internet even existed.
The famed section 230, passed in 1996, is an update to a section of the 1934 Communications Act, which is but one set of laws regulating many aspects of forums. Lawsuits in the early 90s led Congress to modify, but not abolish, the stack of laws regarding all communications technology.
Now that you know but 2 of the many laws affecting online forums, you can dig up plenty more yourself.
> There was no regulation of online forums for forty years, and the Earth did not explode and humankind did not end.
But how about Trump winning the popular vote? Millions of people are sure this is about as bad as the explosion of the Earth or the end of humankind.
Going to need a citation on those millions... Sure, ~57% of Americans disapprove of Trump, but we can't extrapolate "disapprove" to "ending of humankind".
Although to be fair to your hypothetical millions, a guy known for repeated bankruptcies was elected to lead the country. Seems a bit fair to say his track record implies he'd bankrupt the country.
Trump received 49.9% of votes in the last election, which means only 50.1% voted against him. But voter turnout was only around 66%, so all we can really say is that 32.9% of Americans disapprove of Trump enough to vote against him.
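The back-of-envelope calculation above can be checked directly. The figures (49.9% vote share, ~66% turnout) come from the comment itself and aren't independently verified; with those inputs the result lands near 33%, close to the 32.9% quoted (the exact figure depends on the precise turnout number used).

```python
# A quick check of the turnout arithmetic above.
# Inputs are the comment's own figures, not verified here.
trump_share = 0.499   # share of votes cast for Trump
turnout = 0.66        # approximate share of eligible voters who voted

# Share of all eligible voters who actually cast a vote against him:
voted_against = (1 - trump_share) * turnout
print(f"{voted_against:.1%} of eligible voters voted against")  # → 33.1%
```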
Only 32.9% voting against you doesn't mean the remaining approve of you.
https://projects.fivethirtyeight.com/polls/favorability/dona...
Actually, the sky is blue. Of course that has as much to do with the discussion as your reply.
I guess you also believe that if something is not less than 3 it must be larger than 3.
Still replying to your thoughts instead of the comment in front of you I see.
Give it time. Misinformation and disinformation need to marinate to have a large impact.
I joke with my friends (I'm old) about how great the internet is for looking up information. When I was growing up, someone told you the wrong thing and you just knew the wrong thing for years.
Misinformation and disinformation were terms created by censors as an excuse to censor ideas they didn't like, mostly criticism. What we call misinformation and disinformation has been a property of communication since grunting. People are wrong about stuff, even people who we currently think are right. To censor is going back to just knowing the wrong thing for years because someone with censor powers thought they were right.
And in cases where you can't make small operations exempt then the government should freely offer the services to handle the regulatory burden.
Tiered requirements scaled by size and/or impact is an obvious middle ground between equal obligation to all entities and a binary on/off status.
As an example of impacts not necessarily correlated with size, a comms platform for, say, the banking or finance communities, or defence and military systems, would likely have stronger concerns than one discussing the finer points of knitting and tea.
This is eminently sensible, should happen everywhere.
It almost always doesn't, because the big guys have lobbyists and the small guys don't.
The big guys would rather not have to comply with these rules, but typically their take is, well, if we're going to have to anyway, let's at least make it an opportunity to drive out some of the scrappy competition and claim the whole pie for ourselves.
Lead poisoning is not less dangerous when your house is built by a small builder.
It is not; however, calls for racist violence are less dangerous when they're posted on a niche forum with 20 daily active users than when they're posted on Twitter.
True. But the small outfits tend to get their supplies from big ones, lead is something that can probably be dealt with at supply-chain level.
I've always wondered why regulators don't bear all the costs induced by their regulations.
How would they pay the costs when they don't bear any of the profits from their regulated industries?
But regulators do, through the taxes the industries pay on the profits.
How much money does that total (the taxes specifically earmarked for funding regulatory bodies)?
If a company suddenly starts doing something that costs society more in externalities, does it suddenly start paying more taxes to deal with the enforcement required to get them to stop?
After all, the whole point of regulation is to get the regulated to stop hurting society and costing it money.
Regulations aren't all designed for society's well-being. Some regulations are very narrow and help only specific businesses, like tariffs.
Regulations aren't all designed to help specific businesses. Some regulations are designed for society's well-being.
Some regulations are designed to satisfy ideological whims of the legislature, and they don't benefit society at all.
Then society should gladly finance their entire cost.
The cost addressed by regulation is the cost on society of the unregulated behavior.
An alternative might be, no regulation, but businesses are responsible for the costs of business to society (pollution, poor mental health, potential that it's a scam). After all, businesses benefit from these things, so they should gladly cover their cost to society.
Personally, I prefer less pollution.
No they don’t. Every dollar the government spends is a brand new dollar. You can’t save up money in a currency that you issue.
As a fiat currency issuer, you have two options, you can create money for circulation (government spending) or you can destroy money and it’ll never circulate again (taxation).
It's called "Perverse incentive" and Wikipedia runs an illustrative set of examples:
https://en.wikipedia.org/wiki/Perverse_incentive
This one is marvelous: In 2021, the US Congress enacted stringent requirements to prevent sesame, a potential allergen, from cross-contaminating other foods. Many companies found it simpler and less expensive to instead add sesame directly to their product as an ingredient, exempting them from complying with the law.
There's the Cobra Effect popularized by Freakonomics
Too many cobras > bounty for slain cobras > people start breeding them for the bounty > law is revoked > people release their cobras > even more cobras around
The Freakonomics coverage was based on the book The Great Hanoi Rat Hunt by Michael G. Vann.
He was recently interviewed about that book on the New Books Network:
<https://newbooksnetwork.com/michael-g-vann-the-great-hanoi-r...>
Audio: <https://traffic.megaphone.fm/LIT1560680456.mp3> (mp3)
(Episode begins at 1:30.)
Among the interesting revelations: the rat problem was concentrated in the French Quarter of Hanoi, as that's where the sewerage system was developed. What drained away filth also provided an express subway for rats. Which had been brought to Vietnam by steamship-powered trade, for what it's worth.
(That's only a few minutes into the interview. The whole episode is great listening, and includes a few details on the Freakonomics experience.)
Correction: Both the Freakonomics coverage and the book named above were based on an earlier paper, though both that and the book were by Vann.
The Cobra Effect is an example of a Perverse Incentive, which is where an attempt to incentivize a behavior ends up incentivizing the opposite: https://en.wikipedia.org/wiki/Perverse_incentive
I think most of the examples fit this, but a few don't.
This also sounds similar to Goodhart's Law which states that “when a measure becomes a target, it ceases to be a good measure.”
https://en.m.wikipedia.org/wiki/Goodhart%27s_law
I do think the two phenomena are related... they both arise when people fail to take into account the knock-on effects of their actions
Why do people foolishly claim these are unintended consequences?
This is a way to regulate political speech and create a weapon to silence free speech online. It's what opponents to these measures have been saying forever. Why do we have to pretend those enacting them didn't listen, are naive, or are innocent well intentioned actors? They know what this is and what it does. The purpose of a system is what it does.
Related to this, and one version of a label for this type of silencing particularly as potentially weaponized by arbitrary people not just politicians is Heckler's veto. Just stir up a storm and cite this convenient regulation to shut down a site you don't like. It's useful to those enacting these laws that they don't even themselves have to point the finger, disgruntled users or whoever will do it for them.
Politicians should take a mandatory one-week training in:
- very basic macro economics
- very basic game theory
- very basic statistics
Come to think of it, kids should learn this in high school
I think you’re being overly charitable in thinking this happens because they don’t understand these things. The main thing is that they don’t care. The purpose of passing legislation to protect the children isn’t to protect the children, it’s to get reelected.
If we can get the voters to understand the things you mention, then maybe we’d have a chance.
It’s more than just politicians not caring: Big Tech firms hire people on millions of dollars per year to lobby and co-operate with governments, in order to ensure that processes like this result in favourable outcomes for them. See e.g. Nick Clegg.
Lawmakers also make and pass laws because it's their job, not because a new law is needed. They feel it's literally their job to come up with new bills to pass, for no reason other than "it's my job".
Imagine a society so stable it doesn't need new laws or rules. All the elected representatives would just sit around all day and twiddle their thumbs. A bad look in their eyes.
This is how it should be of course.
However we are not in a stable society like that.
Things change - e.g. 50 years ago there were no online chats, no drones, very little terrorism; travel was more costly and slower, medical drugs were less effective, life spans were shorter.
I agree with your broader point - but this article is about the UK and I can assure you, 50 years ago the UK did not have "very little terrorism"
Same in the US. It was not a good time. The FBI recorded five bombings a day in 1971-72. Airliners were being hijacked once a week at the peak around then.
OK I should have said 60 years
> protect the children isn’t to protect the children, it’s to get reelected
The next UK general election is ~5 years away so this makes no sense.
The more likely reason is that it's simply good policy. We have enough research now that shows that (a) social media use is harmful for children and (b) social media companies like Meta, TikTok etc have done a wilfully poor job at protecting them.
It is bizarre to me how many people here seem willing to defend them.
Does British campaigning not look very far into the past? In the US, an opposing candidate would absolutely say “the incumbent voted against the protect-children-from-online-predators act five years ago, don’t reelect them, vote for me” and it would be effective.
LOL what campaigning? A couple of weeks before the election I get a few leaflets through my door with a few paragraphs about some person I never heard of and maybe some bullet points. People just pick political party and then vote for whoever has their logo next to the name.
I think you're being insufficiently charitable. The vast majority of congress critters are pretty smart people, and by Jeff Jackson's account, even the ones who yell the loudest are generally reasonable behind closed doors due to incentives.
The problem is that the real problems are very hard, and their job is to simplify it to their constituents well enough to keep their jobs, which may or may not line up with doing the right thing.
This is a truly hard problem. CSAM is a real problem, and those who engage in its distribution are experts in subverting the system. So is freedom of expression. So is the onerous imposition of regulations.
And any such issue (whether it be transnational migration, or infrastructure, or EPA regulations in America, or whatever issue you want to bring up) is going to have some very complex tradeoffs and even if you have a set of Ph.Ds in the room with no political pressure, you are going to have uncomfortable tradeoffs.
What if the regulations are bad because the problem is so hard we can't make good ones, even with the best and brightest?
It's ridiculous to say that a bad law is better than no law at all. If the law has massive collateral damage and little-to-no demonstrated benefit then it's just a bad law and should never have been made.
It seems far too common that regulations are putting the liability / responsibility for a problem onto some group of people who are not the cause of the problem, and further, have limited power to do anything about the problem.
As they say, this is why we can't have nice things.
> responsibility for a problem onto some group of people who are not the cause of the problem
You don't think Meta, TikTok etc are the cause of the problem ?
I appreciate that Lfgss is somewhat collateral damage but the fact is that if you're going to run a forum you do have some obligation to moderate it.
The "collateral damage" you're talking about represented the UK's best answer to Meta - a UK-run collection of online communities that people were choosing to use instead of foreign alternatives. If they ban running them domestically then everybody will use American ones...
> some obligation to moderate it
"some"?
> The Act would also require me to scan images uploading for Child Sexual Abuse Material and other harmful content, it requires me to register as the responsible person for this and file compliance. It places technical costs, time costs, risk, and liability, onto myself as the volunteer who runs it all... and even if someone else took it over those costs would pass to them if the users are based in the UK.
There is no CSAM ring hiding on this cycling forum. The notion that every service which transmits data from one user to another has to file compliance paperwork and pay to use a CSAM hashing service is absurd.
The Act doesn't actually require him to do this. More detailed explanation here: https://news.ycombinator.com/item?id=42439911
> I appreciate that Lfgss is somewhat collateral damage but the fact is that if you're going to run a forum you do have some obligation to moderate it.
Lfgss is heavily moderated, just maybe not in a way you could prove to a regulator without an expensive legal team...
True, we absolutely couldn’t allow a place that people can voluntarily participate in to say things to exist without a governing body deciding what is and isn’t allowed to be said
> What if the regulations are bad because the problem is so hard we can't make good ones, even with the best and brightest?
To begin with, the premise would have to be challenged. Many, many bad regulations are bad because of incompetence or corruption rather than because better regulations are impossible. But let's consider the case where there really are no good regulations.
This often happens in situations where e.g. bad actors have more resources, or are willing to spend more resources, to subvert a system than ordinary people. For example, suppose the proposal is to ban major companies from implementing end-to-end encryption so the police can spy on terrorists. Well, that's not going to work very well because the terrorists will just use a different system that provides E2EE anyway and what you're really doing is compromising the security of all the law-abiding people who are now more vulnerable to criminals and foreign espionage etc.
The answer in these cases, where there are only bad policy proposals, is to do nothing. Accept that you don't have a good solution and a bad solution makes things worse rather than better so the absence of any rule, imperfect as the outcome may be, is the best we know how to do.
The classical example of this is the First Amendment. People say bad stuff, we don't like it, they suck and should shut up. But there is nobody you can actually trust to be the decider of who gets to say what, so the answer is nobody decides for everybody and imposing government punishment for speech is forbidden.
> The answer in these cases, where there are only bad policy proposals, is to do nothing.
Or go further.
Sometimes the answer is to remove regulations. Specifically, those laws that protect wrongdoers and facilitators of problems. Then you just let nature take its course.
For the most part though, this is considered inhumane and unacceptable.
Sometimes we do exactly that. In general, if someone is trying to kill you, you are allowed to try and kill them right back. It's self-defense.
If you're talking about legalizing vigilantism, you would then have to argue that this is a better system and less prone to abuse than some variant of the existing law enforcement apparatus. Which, if you could do it, would imply that we actually should do that. But in general vigilantes have serious problems with accurately identifying targets and collateral damage.
Not quite my line of thinking but appreciate the reply. There's definitely an interesting debate to be had there about the difference between "legalizing vigilantism" and "not protecting criminals" (one that's been done to death in "hack back" debates).
It gets messy because, by definition the moment you remove the laws, the parties cease to be criminals... hence my Bushism "wrongdoers" (can't quite bring myself to say evil-doers :)
One hopes that "criminals" without explicit legal protection become disinclined to act, rather than become victims themselves. Hence my allusion to "nature", as in "Natural Law".
"Might is right" is no good situation either. But I feel there's a time and place for tactical selective removal of protectionism (and I am thinking giant corporations here) to re-balance things.
As a tepid example (not really relevant to this thread), keep copyright laws in place but only allow individuals to enforce them.
If you want a fun one in that line, allow piercing the corporate veil by default until you get to a human. Want to scatter conglomerates to the wind? Make the parent corporation fully liable for the sins of every subsidiary.
I wonder what the world would be like if we took corporate personhood to its logical conclusion and applied the same punishments to corporations as we apply to people.
You can’t really put a corporation in jail, but you could cut it off from the world in the same way that a person in jail is cut off. Suspend the business for the duration of the sentence. Steal a few thousand bucks? Get shut down for six months, or whatever that sentence would be.
Corporate death penalty [0]
[0] https://en.wikipedia.org/wiki/Judicial_dissolution
>You can’t really put a corporation in jail, but you could cut it off from the world in the same way that a person in jail is cut off.
I have imagined a sci-fi skit where James works at CorpCo, a company that was caught doing something illegal and sentenced to prison. As punishment James goes to work by reporting in at a prison at 8 am. He sits in his cell until his 'work day' is over and is released at 5 pm to go home. It's boring, but hey, it pays well.
I think you can put the CEO and board in prison though.
Good ones. Nice to shake up this thinking. We need more courageous legal exceptionalism to redistribute power and deal with complexity.
I've just finished recording a Cybershow episode with two experts in compliance (ISO42001 coming on the AI regulatory side - to be broadcast in January).
The conversation turned to what carrots can be used instead of sticks. The problem is that large corps simply incorporate huge fines as the cost of doing business (which probably is relevant to this thread).
So to legally innovate, instead, give assistance (legal aid, expert advisor) to smaller firms struggling with compliance. After all governments want companies to comply. It's not a punitive game.
Big companies pay their own way.
The trouble is that while governments might not have a bias towards any company, the government employees doing the work do. If the government is handing out a lot of assistance then you get a layer of middlemen who will help companies "get things done". The issue is that they are an additional burden that sucks resources out of the system.
I think on balance there's a net gain in value. Enabling new companies to navigate burdensome regulation contributes to the economy in the long run. If money is a problem, the big companies who made the regulation necessary with their ill behaviour can subsidise the entry of competitors. I think people are starting to call that "coopertition", as an idea somewhere between taxation and corporate social responsibility.
One of the major things governments should be doing and largely aren't is publishing open source software (e.g. BSD license) for regulatory compliance. Not just a tax filing website, the actual rules engine that some government lawyers have certified as producing legally-compliant filings.
The point being to allow members of the public to submit a pull request and have their contributions incorporated into the officially-certified codebase if it's accepted, so the code ends up being actually good because the users (i.e. the public) are given the opportunity to fix what irks them.
And sometimes good regulations are really hard to swallow for the uninformed, while bad regulations sound really good on paper.
"children are getting raped and we aren't going to do anything about it because we want to protect indie websites" sounds a lot worse than "this is a significant step in combatting the spread of online child pornography", even if reality is actually far more complicated.
> This is a truly hard problem.
CSAM is NOT a hard problem. You solve it with police work. That's how it always gets solved.
You don't solve CSAM with scanners. You don't solve CSAM with legislation. You don't solve CSAM by banning encryption.
You solve CSAM by giving money to law enforcement to go after CSAM.
But, see, the entities pushing these laws don't actually care about CSAM.
> You solve it with police work. That's how it always gets solved.
My dude, I’m sorry to tell you, but the problem usually is law enforcement. For so many things. You try barely training people who already like beating people up and then give them a monopoly on legal violence.
Btw, the reason the cops were invented in Britain was to put down riots by the populace because they were so poor[1], and in America it was to divide poor whites and poor blacks and turn the poor whites into slave catchers.[2]
[1] https://novaramedia.com/2020/06/20/why-does-the-police-exist...
[2] https://www.npr.org/2020/06/13/876628302/the-history-of-poli...
“Their job is to simplify it to their constituents well enough to keep their jobs” sounds awfully similar to what I’m saying. Maybe “don’t care” is a little too absolute, but it doesn’t make much difference if they don’t care or if they care but their priority is still keeping their jobs.
They want to protect their political control, so they break any way for the opposition to effectively organize. Things like unlimited immigration, Net Zero 2050, and dekulakization of the agricultural sector are widely unpopular, so they just have to get everyone who has anything to say against these programs to be politically powerless.
I am all for unlimited immigration (for law-abiding people who can earn their living, of course), Net Zero 2050 (burning oil and coal for heating and energy generation is blasphemy), and getting rid of agricultural subsidies. That’s good for the economy and the environment.
Net zero is widely popular.
Everything else you listed are right wing conspiracy theories.
>> Everything else you listed are right wing conspiracy theories.
Apparently this isn't:
"Just seven electric-vehicle charging stations have begun operating with funding from a $5-billion US government program created in 2021, marking “pathetic” progress, a Democratic senator said on Wednesday."
https://nypost.com/2024/06/05/business/democratic-senator-bl...
Breaking news: government is slow.
Net Zero 2050 involves so much economic deprivation that it is neither good for business interests nor for the public's economic well-being, such that the only way it could ever happen would be under some sort of authoritarianism. The adherents are nevertheless undeterred, thus I think Ecological Totalitarianism has a good chance of becoming the Bolshevism of the 21st century.
Let’s just require polluters to secure permission from each person whose air they’re going to pollute. That seems fair enough.
> it’s to get reelected.
I doubt this. Legislation is written by committee and passed by democracy. Most of the voting public don't look up the voting records which are available to them. Most of the voting public can't name a third of the members of parliament.
If there is a conspiratorial take, the one about regulatory capture is more believable.
Politicians forced to learn statistics -> Politicians better prepared to understand consequences of their actions -> Politicians exploit economy better -> Everyone worse off -> Law to educate politicians is abolished -> Politicians exploit economy nevertheless
Seriously, the problem is not politicians being clueless about all the above, but having too much power which makes them think they need to solve everything.
This is the accurate scenario unfortunately.
I'd give you 100 upvotes if I could.
It is difficult to get a man to understand something when his re-election depends on him not understanding it.
Except gas-guzzling large trucks seem to be a uniquely North American problem - because of the "work vehicle" loophole.
Plenty of European countries have a work vehicle loophole, though it's not as big as the US one.
Generally it's something along the lines of "a truck or van registered to a business is assumed to be a work vehicle, so pays less tax than a passenger car".
Of course you need to have a business to take advantage of that loophole, but it doesn't need to be a business that actually has any use for the truck- it could be a one-person IT consultancy.
And Australian.
You can solve it by adding congestion tax depending on vehicle size and making public transit readily available so people are less likely to take their huge trucks everywhere.
You grossly underestimate the average American’s desire to thumb their nose at government regulations, even if it means spending far more money than they should to do so.
Look at the prices of new trucks, then at the median salary. People should not have car payments that rival a small mortgage, yet they do.
You are assuming they work for the good of the country, but in reality they work for big corporations. These regulations are designed to weed out small players that are a nuisance for the rich.
This is why we have direct democracy here in Switzerland. Just skip the middlemen.
You have it backwards.
Politicians can be very very good at those things, when they have a reason to be.
So you are assuming politicians graduate high school? Not in my country.
What good would that do? Look who elects them!
There is also a very simple, uncontrived effect. You put pressure on a thing, and the thing is quashed and ceases to exist.
Many things in a society exist on thin margins, not only monetary, but also of attention, free time, care and interest, etc. You put a burden, such as a regulation, saying that people have to either comply or cease the activity, and people just cease it, like in the post. What used to be a piece of flourishing (or festering, depending on your POV) complexity gets reduced to a plain, compliant nothing.
Maybe that was the plan all along.
Sociologist Robert K. Merton coined the term "unintended consequences" (amongst numerous others), and developed an existing notion of manifest vs. latent functions and dysfunctions.
In particular, Merton notes:
Discovery of latent functions represents significant increments in sociological knowledge .... It is precisely the latent functions of a practice or belief which are not common knowledge, for these are unintended and generally unrecognized social and psychological consequences.
Robert K. Merton, "Manifest and Latent Functions", in Wesley Longhofer, Daniel Winchester (eds) Social Theory Re-Wired, Routledge (2016).
<https://www.worldcat.org/title/social-theory-re-wired-new-co...>
More on Merton:
<https://en.wikipedia.org/wiki/Robert_K._Merton#Unanticipated...>
Unintended consequences:
<https://en.wikipedia.org/wiki/Unintended_consequences#Robert...>
Manifest and latent functions:
<https://en.wikipedia.org/wiki/Manifest_and_latent_functions_...>
> Is there some generalized law (yet) about unintended consequences?
These are not unintended consequences. All media legislation of late has been to eliminate all but the companies that are largest and closest to government. Clegg works at Facebook now, they'd all be happy to keep government offices on the premises to ensure compliance; they'd even pay for them.
Western governments are encouraging monopolies in media (through legal pressure) in order to suppress speech through the voluntary cooperation of the companies who don't want to be destroyed. Those companies are not only threatened with the stick, but are given the carrots of becoming government contractors. There's a revolving door between their c-suites and government agencies. Their kids go to the same schools and sleep with each other.
This is what Javier Milei means when he says that everything politicians touch turns to shit and therefore government should be minimal.
Isn’t that a case of throwing the baby out with the bathwater? Many regulations serve to protect individuals and the environment, both of which might otherwise be overlooked in favor of corporate profits fighting in the free market. I'm afraid that when advocates of minimal government push their agenda, they often envision a level of reduction far beyond what most people would find acceptable. In situations like the one under discussion, I believe improving the regulation would be a better approach than eliminating it entirely.
While I agree with your general sentiment I think that there is a possible type of government where we are no-longer forced to vote for individual humans (or indeed groups of humans: political parties) but can instead vote on the actual ideas/policies.
It might even be possible now to combine nuanced perspectives/responses to proposed policies from millions of people. I think that kind of thing is not unreasonable to suggest nowadays, and there's even precedent for it - how-wikipedia-works isn't really ideal, but it's somewhat an example of the main idea!
This way, the public servants (including politicians) can mainly just take care of making sure the ideas that the people vote-for get implemented! (like all the lower tiers of government currently do - just extend it to the top level too!) I don't think we should give individuals that power any more!
The main problem is overwhelming voters: for a vote to be meaningful, the voter has to understand the propositions they vote on. Given the amount of legislation passed, it is quite unreasonable to expect everyone to do the due diligence for every vote.
What might make such a system work in practice is to only let a small randomly selected group of people vote for each issue. You still get a similar representation as a full vote, but with each person having much fewer votes to attend to it isn't overwhelming.
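The sampling claim above can be sanity-checked with a short simulation. All numbers here are made up for illustration: a hypothetical electorate of one million voters with 54% support, polled by a randomly drawn panel of 2,000.

```python
import random

random.seed(42)

# Hypothetical electorate: 1,000,000 voters, 54% of whom support a proposition.
population = [1] * 540_000 + [0] * 460_000

# Sortition-style vote: only a small random panel votes on this issue.
panel = random.sample(population, 2_000)
panel_share = sum(panel) / len(panel)

print(f"true support: 0.540, panel estimate: {panel_share:.3f}")
```

With a panel of 2,000 the sampling error is roughly +/-1 percentage point (one standard deviation), which is why a small randomly selected group tracks the full vote closely while each person only has to study a handful of issues.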
Cynical viewpoint, downvote if you must: It is the dream of right wing populists everywhere to demolish government bloat, leaving just the bits that are actually useful.
But: https://www.inf.ed.ac.uk/teaching/courses/seoc2/1996_1997/ad...
Any bureaucracy evolves, ultimately, to serve and protect itself. So the populist boss snips at the easy, but actually useful parts: Social safety nets, environmental regulations, etc. Whereas the core bureaucracy, the one that should really be snipped, has gotten so good at protecting itself that it remains untouchable. So in the end the percentage of useless administratium is actually up, and the government, as a whole, still bloated but even less functional. Just another "unintended consequences" example.
We'll see if Argentina can do better than this.
In my locale, every time there are budget cuts or cost increases, it is the popular and visible government functions that get the axe. E.g., the parks department has four layers of management and manages a ton of no-bid contracts, but swimming pools will be closed rather than building cheaper in-house expertise. I guess it's better than deferring essential maintenance, but somehow I suspect maintenance is also already being overly deferred. One wishes they would take an axe to Parkinson's law of growth instead.
The concept of Rule Beating from Systems Thinking seems apt. You have some goal so you introduce a rule, but if you choose a bad rule, it ends up making things worse. The solution is to recognize that it was a bad rule, repeal it, and find a better one.
It is also that big business can influence legislators, and small business cannot, so big business can influence regulation to their own advantage.
There's a whole YouTube playlist about that sort of thing: https://m.youtube.com/playlist?list=PLBuns9Evn1w9XhnH7vVh_7C...
I've heard it called "law of unintended consequences" and "cobra effect".
I don't think these consequences are unintended.
I recall some laws in the US (or california?) were based on the size of the company (in revenue $$)
Too bad this isn't the case here.
Even when the fines or other punishments for non-compliance are relative to size/income/profit/etc, there are usually costs of compliance that do not similarly scale. Bigger companies can swallow them, independents can not, so regulatory capture can still be in effect.
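The scaling point can be illustrated with some made-up numbers (the cost and revenue figures below are purely hypothetical):

```python
# Illustrative only: a fixed annual compliance cost (legal review, age
# verification tooling, etc.) is a rounding error for a big platform but
# can exceed a hobbyist operator's entire revenue.
fixed_compliance_cost = 50_000  # hypothetical yearly spend, same for everyone

operators = [
    ("hobbyist forum", 5_000),
    ("small business", 500_000),
    ("big platform", 5_000_000_000),
]

for name, revenue in operators:
    share = fixed_compliance_cost / revenue
    print(f"{name}: compliance is {share:.4%} of revenue")
```

Scaled fines address one side of the problem, but as long as the cost of *complying* is flat, the burden is still regressive.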
In the EU we have many of those, and it's not a good thing.
Laws are meant to be dynamic. So you iterate on them as you get feedback from their implementation.
> Laws are meant to be dynamic.
The US Supreme Court disagrees. https://www.dentons.com/en/insights/articles/2024/july/3/-/m...
The Supreme Court hasn’t and can’t say anything against laws being updated and changed. What they have prevented is those we have elected to make laws delegating that very authority to others.
Which in practice means that laws are not going to be updated and changed.
For the time being the US Supreme Court has no jurisdiction over the UK online safety act
I mean, that’s what I call “rules lawyering” in game parlance. When someone utilizes the rules in such a way as to cause legal harm in service of their own interests, regardless of the intent of said rules in preventing harm.
It’s why when a law/rule/standard has a carveout for its first edge case, it quickly becomes nothing but edge cases all the way down. And because language is ever-changing, rules lawyering is always possible - and governments must be ever-resistant to attempts to rules lawyer by bad actors.
Modern regulations are sorely needed, but we’ve gone so long without meaningful reform that the powers that be have captured any potential regulation before it’s ever begun. I would think most common-sense reforms would say that these rules should be more specific in intent and targeting only those institutions clearing a specific revenue threshold or user count, but even that could be exploited by companies with vast legal teams creating new LLCs for every thin sliver of services offered to wiggle around such guardrails, or scriptkiddies creating millions of bot accounts with a zero-day to trigger compliance requirements.
Regulation is a never-ending game. The only reason we “lost” is because our opponent convinced us that any regulation is bad. This law is awful and nakedly assaults indietech while protecting big tech, but we shouldn’t give up trying to untangle this mess and regulate it properly.
> I would think most common-sense reforms would say that these rules should be more specific in intent and targeting only those institutions clearing a specific revenue threshold or user count, but even that could be exploited by companies with vast legal teams creating new LLCs for every thin sliver of services offered to wiggle around such guardrails, or scriptkiddies creating millions of bot accounts with a zero-day to trigger compliance requirements.
This is what judges are for. A human judge can understand that the threshold is intended to apply across the parent company when there is shared ownership, and that bot accounts aren't real users. You only have to go back and fix it if they get it wrong.
> The only reason we “lost” is because our opponent convinced us that any regulation is bad. This law is awful and nakedly assaults indietech while protecting big tech, but we shouldn’t give up trying to untangle this mess and regulate it properly.
The people who passed this law didn't do so by arguing that any regulation is bad. The reason you lost is that your regulators are captured by the incumbents, and when that's the case any regulation is bad, because any regulation that passes under that circumstance will be the one that benefits the incumbents.
You can beat the charge but you can't beat the ride.
>Protect the children -> Criminalize activites that might in any way cause an increase in risk to children -> Best to just keep them indoors playing with electronic gadgets -> Increased rates of obesity/depression etc -> Children worse off.
Not sure how keeping kids off the internet keeps them indoors? Surely the opposite is true?
In the US at least, we're at a point where letting your kids play in your yard is enough to get arrested and jailed for child endangerment. Within the last 30 days, a woman has been arrested and charged with child endangerment for the crime of… letting her child walk to the store [1], and others have been jailed for letting their child play outside [2].
So what do you do to entertain children? Use what you have. Dump them on the internet via YouTube first and then let them free range because you're tired and can't give a fuck anymore.
[1] https://abcnews.go.com/amp/GMA/Family/mom-arrested-after-son... [2] https://www.aol.com/news/2015-12-03-woman-gets-arrested-for-...
My wife and I had CPS called on us (in Texas, no less) because I had some construction leftovers inside my fence, waiting to be taken to the dump, and my neighbor was concerned that my kids would be hurt playing around it.
We were interviewed, they found there were no issues, and the case was dropped. Very stressful experience, though.
And for what? I grew up on a farm in Nebraska. We had endless fields and roads around us to explore. The only off-limits area was an abandoned hog confinement, which to be fair, absolutely could have killed us (by falling into the open trench of porcine waste) – naturally, we still went there.
I know that reeks of survivor bias, but given the length of time Homo sapiens have survived, I think it’s a reasonably safe assumption that kids, when left to their own devices, are unlikely to be seriously injured or killed. Though, that’s probably only true if they’ve been exposed to it gradually over time, and are aware of the risks.
In my experience, most neighbor complaints are not about the complaint itself but about the neighbor and/or the neighbor relationship, especially if it's the first approach to the "issue". To my understanding, the majority of complaints (across all forms of neighbor complaint) are false and made by a small number of complainers.
However, this doesn't mean the government should not act. An interview over a false complaint is a small cost to pay compared to doing nothing when there is a real problem. Most of the time, those employed to do the investigation know to look for signs of false reports and neighbor conflicts in order to filter them out, but at the same time they need to make sure not to misclassify a real complaint.
Thanks, I needed that. In a thread full of people criticising UK law, I am very happy to have a crazy US example to make me feel better.
Reading that article, it seems that the mom was arrested because the cops wanted to. Someone called the cops for a child walking down the highway by themselves, and then the cops showed up and arrested her. Cops can arrest people for any reason practically, a lot of times because they just don't like you. I was thrown up against the back of a patrol car and searched when ordering chinese food late at night once. I don't see that she was charged by the prosecutor, or that the child was taken away.
The other link you have is neighbors that obviously dislike each other, and they told the cops the kid was in danger.
It is a very interesting counterpoint to the line throughout this discussion that the UK is some kind of oppressive state, yet you state here that in the US a cop can arrest people just because they want to. That would be an extreme edge case in the UK. Constitutional freedom of speech, and a paramilitary police force that can ignore the law.
Yeah it sucks. The police are pretty unaccountable due to their local nature and their unions. When people try to do something about it they go on 'soft strike' and refuse to enforce basic laws in retaliation.
Are kids in the US even able to go to school on their own like ours still do in europe?
You don't think there is some halfway point between forcing them to stay inside and letting a 4 year old wander 40m away with no supervision?
The 4 year old was playing in a gated garden, not roaming downtown. I would say it's a very sad inability to communicate with your neighbours when you call the cops instead of just letting the kid's parents know that there might be some dangers.
No mention in that example of internet. If I had to think of specifics, he's probably talking about the things that fall under the category of "free range kids" but also result in parents being criminally prosecuted.
This is a discussion about the UK online safety act! That is about the internet isn't it?
The other example in the parent comment was about fuel economy and trucks. They're just generalized laws about unintended consequences.
I used to walk and ride my bike to school. I was in 4th grade. 9 years old.
You show me a 9-year-old walking alone to school today, and I'll show you a parent who's getting investigated for child neglect. It's maddening.
So that chain of consequences means today's kids are meant to be watched 24/7, and that usually means they're cooped up inside. They're still facing "Stranger Danger" (except through Snap or whatever games they're playing), and now they're also in poorer health.
Nine is a bit young, but I lived down the street from a school and plenty of 10 - 12 year olds walked there by themselves.
Your first example is a case of lawmakers not being willing to finish the job, more so than of regulation being bad.
That is like saying "when we write software there are bugs, so rather than fix them, we should never write software again".
Your second example is ascribing to regulation something that goes way beyond regulation.
No, what he says is "when we write software there are bugs, so we should write less software".
As opposed to "we should fix the bugs we find".
I don't think anyone believes that the "think of the children" argument leads to "unintended" consequences. They are thoroughly intended. It doesn't look like that, but policy makers do analyze potential impact, and this is a problem you understand if you are more than 5 minutes into the topic.
Although I do think they overlook that their legislation is restricted to their domestic market, so any potential positive effect is more or less immediately negated. That is especially true for English-speaking countries.
These can not be unintended consequences. Obviously the UK government is aware of what they are doing and are using whatever language they can.
Modern fanatics don't care. They are eager to destroy everything for the sake of "protecting children".
Behold the failure to consider second order consequences when legislating rules.
Why are gas-guzzling trucks exempt from fuel standards? (Genuine question)
They were supposedly commercial vehicles with real need for size and towing capacity.
Because no one would fork over stupid amounts of money for a f*k off big truck if they didn't have a real need. Right?
Because they were a large fraction of cars manufactured by US companies, so not excluding them would have put the entire US auto industry out of business.
The original idea was that they needed big engines and bad aerodynamics to be able to perform their functions of hauling bulky loads and towing heavy trailers. Few people who didn't actually have those needs would want to drive one because they were unwieldy to drive and uncomfortable to be in relative to cars, so such an exemption surely wouldn't be widely exploited.
Google "chicken tax". The chain of unintended consequences goes back even further.
Oppositional Defiance Disorder?
>Increase fuel economy -> Introduce fuel economy standards -> Economic cars practically phased out in favour of guzzling "trucks" that are exempt from fuel economy standards -> Worse fuel economy.
tl;dr: This is a myth.
There is no incentive to the consumer to purchase a vehicle with worse fuel economy.
There USED to be an incentive, 30-40 years ago.
It is not 1985 anymore.
The gas guzzler tax covers a range of fuel economies from 12.5 to 22.5 mpg.
It is practically impossible to design a car that gets less than 22.5 mpg.
The Dodge Challenger SRT Demon 170, with a 6.2 L 8-cylinder engine making ONE THOUSAND AND TWENTY FIVE horsepower, is officially rated for 13 mpg, but that's bullshit - it's Dodge juicing the numbers just so buyers can say "I paid fifty-four hundred bucks gas guzzler tax BAYBEE" - and in real-world usage the Demon 170 is getting 25 mpg. Other examples of cars that cannot achieve 22.5 mpg are the BMW M2/M3/M4/M8 and the Cadillac CT5, high-performance sports sedans for which the gas guzzler tax is a <5% price increase. ($5400 is 5% of the Demon 170's list price, but 2-3% of what dealers are actually charging for it.)
The three most popular vehicles by sales volume in the United States are: 1. The Ford F-150, 2. The Chevy Silverado, and 3. The Dodge Ram 1500.
The most popular engine configuration for these vehicles is the ~3L V6. Not a V8. A V6.
Less than 1/4th of all pickup trucks are sold equipped with a V8.
According to fueleconomy.gov every single Ford, Chevrolet, and Ram full-size pickup with a V6 would pay no gas guzzler tax.
Most V8s would be close, perhaps an ECU flash away, to paying no gas guzzler tax. The only pickups that would qualify for a gas guzzler tax are the high-performance models-- single-digit percentages of the overall sales volume and at those prices the gas guzzler tax would not even factor into a buyer's decision.
People buy trucks, SUVs, and compact SUVs because they want them and can afford them.
Not because auto manufacturers phased out cars due to fuel economy standards. Not because consumers were "tricked" or "coerced". And certainly not because "the gubmint" messed things up.
They buy them because they WANT them.
The Toyota RAV4 is the 4th most popular car in the US. The Corolla is the 13th most popular. They are built on the same platform, and dimensionally the Corolla is actually very slightly larger except for height. They both come with the same general ballpark of engine choices. The gas guzzler tax only applies to the Corolla, but that doesn't matter because both would be exempt. People don't freely choose the RAV4 over the Corolla because of fuel economy; they buy it because the Corolla has 13 cubic feet of cargo capacity and the RAV4 has 70 cubic feet.
And before anyone says the gas guzzler tax made passenger cars more expensive: adjusted for inflation, passenger cars can be purchased for the same price they could be 50 years ago. But people don't want a Mitsubishi Mirage, which costs the same as a vintage VW Beetle (the perennial cheapest new car of the 1960s) and is better in every quantifiable metric; they want an SUV.
What may be true is that there is a national policy to keep fuel prices as low as possible, for a myriad of reasons, with one side effect of that policy being that it has enabled people to buy larger less fuel-efficient cars.
I do not believe it is auto manufacturers who are pushing for this policy. I believe it is the freight and logistic market. The auto market is valued at $4 billion, the freight and logistics market is $1,300 billion. GM and Ford are insignificant specks compared to the diesel and gasoline consumers of the freight and logistics firms (who have several powerful lobbies).
https://www.thetruthaboutcars.com/2017/08/v8-market-share-ju...
https://www.fueleconomy.gov
https://www.irs.gov/pub/irs-pdf/f6197.pdf (gas guzzler worksheet)
Per https://assets.publishing.service.gov.uk/media/61b7e040e90e0... the average UK car MPG is ~50mpg, so even allowing for the difference in US and UK gallons a 22.5mpg vehicle is colloquially a "gas guzzler" by our standards.
> What may be true is that there is a national policy to keep fuel prices as low as possible, for a myriad of reasons, with one side effect of that policy being that it has enabled people to buy larger less fuel-efficient cars.
Yes. Americans have always had cheap fuel and it's shaped the entire society around it.
In Britain, a Standard Imperial Gallon is 120% the size of a Standard US Gallon.
So while the fuel economy is higher in the UK, it isn't as high as it first appears.
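The gallon difference is easy to work through explicitly. A minimal sketch (the 1.20095 conversion factor is the standard US-to-imperial gallon ratio; the 22.5 mpg threshold and ~50 mpg UK average come from the comments above):

```python
# One imperial gallon ~= 1.20095 US gallons, so converting US mpg to
# imperial mpg multiplies by the same factor (more distance per larger gallon).
US_GALLONS_PER_IMPERIAL_GALLON = 1.20095

def us_mpg_to_imperial_mpg(us_mpg: float) -> float:
    return us_mpg * US_GALLONS_PER_IMPERIAL_GALLON

# The 22.5 US mpg gas-guzzler threshold is only about 27 imperial mpg,
# still well below the ~50 imperial mpg UK fleet average cited above.
print(f"{us_mpg_to_imperial_mpg(22.5):.1f}")  # prints 27.0
```

So even after correcting for the larger imperial gallon, a car at the top of the US gas-guzzler band does roughly half the UK fleet-average mileage.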
People love to blame government regulations for consumer preferences that go against their own.
Consumers want larger vehicles, and manufacturers bend the rules to allow such vehicles to be more easily built. Manufacturers write the laws, after all. CAFE allows SUVs and other "light trucks" to get worse fuel economy than a car, and since fuel economy allowances are based on vehicle footprint, it's easier to make a car larger than it is to improve fuel economy.
But ... Why do they want to? I'm genuinely curious. Did this desire for larger vehicles exist latent in the human psyche? Is it an emergent property of a race to the bottom as everyone tries to have the safest car? Or to secure prestige via a positional good, leaving everyone worse off? Do you think marketing choices played a role in shaping our collective desires?
On big American roads it's easy to own big cars. Given that people have to drive a lot because it's so spread out, it's very convenient to be able to put lots of stuff and/or people in a vehicle. I theorise that your car is like another room in your house.
I know my wife likes storing things in the boot of our car and I'm not even American. It means they're always conveniently there - chairs for sitting in the park, shopping bags, groceries that she's going to take to a party or bought for someone else, kids sports equipment.
Humans have an instinct to seek status, like many (most?) other animals.
I agree that fuel cost and tax are not the reason trucks are so popular in the US. The main reason US manufacturers cultivated a large demand for light trucks is the chicken tax. https://en.wikipedia.org/wiki/Chicken_tax
That does not make any sense to me. Vehicles can and are manufactured in the US.
Bigger vehicles are popular in the US because people want to be in a bigger vehicle and sit higher up than others, AND can afford to do so (ignoring their long term finances). I.e. the politically popular policy of low gas prices.
That's the long and short of it. Buyers rewarded the sellers that sold big and tall vehicles, so obviously sellers are going to sell big and tall vehicles.
There was no situation where buying a big and tall vehicle was cheaper than a smaller, more fuel efficient vehicle, so conclusively, people chose to spend more to get what they wanted. Of course, once someone else gets a bigger vehicle, then you are less safe, unless you get a bigger vehicle, and so on and so forth.
> There is no incentive to the consumer to purchase a vehicle with worse fuel economy.
Not true: Section 179 [0]. Luxury auto manufacturers are well-aware of this [1] and advertise it as a benefit. YouTube et al. are also littered with videos of people discussing how they're saving $X on some luxury vehicle.
> Not because consumers were "tricked" or "coerced". ... They buy them because they WANT them.
To be fair, they only want them because they've been made into extremely comfortable daily drivers. Anyone who's driven a truck from the 90s or earlier can attest that they were not designed with comfort in mind. They were utilitarian, with minimal passenger seating even with Crew Cab configurations. At some point – and I have no idea if this was driven by demand or not – trucks became, well, nice. I had a 2010 Honda Ridgeline until a few weeks ago, which is among the un-truck-iest of trucks, since it's unibody. That also means it's extremely comfortable, seats 5 with ease, and can still do what most people need a truck to do: carry bulky items home from Lowe's / Home Depot. Even in the 2010 model, it had niceties like heated seats. I just replaced it last week with a 2025 Ridgeline, and the new one is astonishingly nicer. Heated and ventilated seats, seat position memory, Android Auto / Apple CarPlay, adaptive cruise control, etc.
That's also not to say that modern trucks haven't progressed in their utility. A Ford F-350 from my youth could pull 20,000 lbs. on a gooseneck in the right configuration. The 2025 model can pull 40,000 lbs., and will do it in quiet luxury, getting better fuel economy.
[0]: https://www.irs.gov/publications/p946#idm140048254261728
[1]: https://www.landroveroflivermore.com/section-179.htm
"Gaming The Law"
We have something similar in Australia with the Online Safety Act 2021. I think this highlights a critical misunderstanding at the heart of the legislation: it imagines the internet as a handful of giant platforms rather than a rich tapestry of independent, community-driven spaces. The Online Safety Act’s broad, vague requirements and potential penalties are trivial hurdles for billion-dollar companies with in-house legal teams, compliance departments, and automatic moderation tooling. But for a single individual running a forum as a labour of love—or a small collective operating on volunteer time—this creates a legal minefield where any disgruntled user can threaten real financial and personal harm.
In practice, this means the local cycling forum that fostered trust, friendship, and even mental health support is at risk of vanishing, while the megacorps sail on without a scratch. Ironically, a measure allegedly designed to rein in “Big Tech” ends up discouraging small, independent communities and pushing users toward the same large platforms the legislation was supposedly targeting.
It’s discouraging to watch governments double down on complex, top-down solutions that ignore the cultural and social value of these smaller spaces. We need policy that recognises genuine community-led forums as a public good, encourages sustainable moderation practices, and holds bad actors accountable without strangling the grassroots projects that make the internet more human. Instead, this act risks hollowing out our online diversity, leaving behind a more homogenised, corporate-dominated landscape.
> We have something similar in Australia with the Online Safety Act 2021.
That wasn't the one I was thinking of, to be honest.
I'd have thought you would be mentioning the latest ball of WTF: "Online Safety Amendment (Social Media Minimum Age) Bill 2024".
According to the bill, HN needs to identify all Australian users to prevent under-16s from using it.
https://www.aph.gov.au/Parliamentary_Business/Bills_Legislat...
That bill is an amendment to the aforementioned act.
But yes, I'm confused as to whether it applies to online gaming, or to sites such as Wikipedia as well.
> I'm confused as to whether it applies to online gaming
As written, it should. Which is ridiculous, and it's a ridiculous law in the first place. I'm loath to discuss politics, but by god both Labor and the LNP are woeful when it comes to tech policy.
I'm still dirty about the NBN. Every now and then I go back home to visit and am rudely reminded that the network speeds suck. What do you mean I have to wait for things to download? What do you mean I'm in the biggest city in the country and on a copper line getting 5 Mbps....
> it imagines the internet as a handful of giant platforms rather than a rich tapestry of independent, community-driven spaces.
As sad as it may be, their imagination is correct. The small spaces, summed up all together, are lost in the rounding errors.
Also, nobody is going after the small spaces, because they don't even know they exist. And when they do, the spaces can be shut down, I guess, if there really is a misunderstanding. I don't get preemptively doing it, other than giving up after a long duty of almost 30 years and using this as an excuse. At least pass them to someone else who won't care about the liability.
Shut down and fined up to £18m.
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
So, I fully understand why someone would rather shut down their site rather than potentially deal with the legal fallout. Even if the end result is "just getting shut down", that will come after a significant amount of legal troubles, and likely money spent dealing with them.
> when they do they can be shut down I guess, if there really is misunderstanding.
The fear some have is not misunderstandings, but disgruntled types (the sort of people who blow up over a perfectly reasonable moderation decision) and garden-variety griefers reporting things to cause inconvenience. I know people who have in the past run forums and had to put up with spurious reports to their ISP/host, or even on one occasion to local law enforcement. If someone did this it would likely go nowhere in the end, but not before causing much stress and perhaps cost via paying for legal advice.
> I don't get preemptively doing it other than giving up after a long duty of almost 30 years and using this as excuse.
Having been involved less directly with that sort of admin & moderation work I can see this change being the final straw after putting up with the people of the internet for years. Calling it “just an excuse” seems rather harsh.
> At least pass them to someone else that won't care about the liability.
Depending on the terms people agreed to when signing up and posting, passing on the reins might not be nearly as legally/morally clear-cut as several in these comments are assuming.
Some people just don't like to be illegal.
You're assuming the point of these laws is what they say on the tin and the people writing these laws are ignorant. A huge amount of legislation is written by think tanks and lobbyists.
Authoritarians don't want people to be able to talk (and organize) in private. What better way to discourage them than some "think of the children" nonsense? That's how they attacked (repeatedly) encryption.
Google, Facebook, and Twitter all could have lobbied against this stuff and shut it down, hard. They didn't.
That speaks volumes, and my theory is that they feel shutting down these forums will push people onto their centralized platforms, increasing ad revenues - and the government is happy because it's much easier to find out all the things someone is discussing online.
Google, Facebook, Twitter, etc. have really done as much as they can. Whoever is pushing this in the Australian Government has a super weird kind of personal vendetta against 'Big Tech' - many speculate it's about how chummy our political class is with the media-owning billionaires here in Australia, and how the shakedown they devised to wring money out of tech companies to subsidise the local media (the 'Media Bargaining Code') failed to really work.
It's honestly super weird. Now of course they are just proposing to tax the tech companies if they don't pay money to our local media orgs for something the tech companies neither want nor care about.
The sad part is, our major politicians are pretty much straight up blackmailed into doing this (though in practice they appear to do it gleefully). Murdoch and others own basically our entire media apparatus: don't do what they say, and you're destroyed in said media. It's absolutely wild the power they've been given.
[flagged]
Good luck with that under Usenet/IRC.
The actual Ofcom code of practice is here: https://www.ofcom.org.uk/siteassets/resources/documents/onli...
A cycling site with 275k MAU would be in the very lowest category where compliance is things like 'having a content moderation function to review and assess suspected illegal content'. So having a report button.
This isn't how laws work. If you give a layperson a large law and tell him that, if he is in violation, he has to pay millions, then it pretty much doesn't matter that there is some way where, with some effort, he can comply. Most people aren't lawyers and figuring out how to actually comply with this is incredibly tedious and risky, as he is personally liable for any mistakes he makes interpreting those laws.
Companies have legal departments, which exist to figure out answers to questions like that. This is because these questions are extremely tricky and the answers might even change as case law trickles in or rules get revised.
Expecting individuals to interpret complex rulesets under threat of legal liability is a very good way to make sure these people stop what they are doing.
>This isn't how laws work.
The law worked the same way yesterday as it does today. It's not like a website run in Britain operated under some state of anarchy until now and in a few months it won't. There are already laws a site has to comply with, and already the risk that someone sues you, but if you were okay with running a site for 20 years, adding a report button isn't drastically going to change the nature of your business.
You don't get it. The law is completely different for people and corporations. A corporation has the resources to figure out how exactly the law applies to them and defend that at trial. An individual does not.
It is plainly insulting to say that "adding a report button" is enough; obviously that is false. And investigating how to comply with this law is time-consuming and comes with immense risk if done improperly. The fact that this law is new means that nobody knows exactly how it has to be interpreted, and that you might very well get it completely wrong. If a website has existed for 20 years with significant traffic, it is almost certain that it has complied with the law; what absolutely is not certain is how complying with the law will have to be done in the future.
I do not get why you have the need to defend this. "Just do X", is obviously not how this law is written, it covers a broad range of services in different ways and has different requirements for these categories. You absolutely need legal advice to figure out what to do, especially if it is you who is in trouble if you get it wrong.
> A corporation has the resources to figure out how exactly the law applies to them and defend that at trial.
A very large fraction of corporations are run on minimal margins. Some of them still do try and keep up with regulations and that is then (often) a very large part of their operating costs.
The big tech corporations which are the presumed targets of this will have absolutely zero problems paying their legal teams for the work they have to do to comply.
But in 2025 the law will change. It is for this reason that the site will shut down the day before the law comes into force.
This: OP seems to be throwing the baby out with the bathwater.
I'm surprised they don't already have some form of report/flag button.
I’m not so sure. It’s a layman’s interpretation, but I think any “forum” would be multi-risk.
That means you need to do CSAM scanning if you accept images, CSAM URL scanning if you accept links, and there’s a lot more than that to parse here.
I doubt it. While it's always a bit of a gray area, the example for "medium risk" is a site with 8M monthly users who share images, doesn't have proactive scanning and has been warned by multiple major organisations that it has been used a few times to share CSAM material.
Cases where they assume you should say "medium risk" without evidence of it happening are if you've got several major risk factors:
> (a) child users; (b) social media services; (c) messaging services; (d) discussion forums and chat rooms; (e) user groups; (f) direct messaging; (g) encrypted messaging.
Also, before someone comes along with a specific subset and says those several things are benign
> This is intended as an overall guide, but rather than focusing purely on the number of risk factors, you should consider the combined effect of the risk factors to make an overall judgement about the level of risk on your service
And frankly if you have image sharing, groups, direct messaging, encrypted messaging, child users, a decent volume and no automated processes for checking content you probably do have CSAM and grooming on your service or there clearly is a risk of it happening.
The problem is the following: if you don't have basic moderation your forum will be abused for those various illegal purposes
Having a modicum of rule enforcement and basic abuse protections (let's say: new users can't upload files) on it goes a long way
That scanning requirement only applies if your site is:
• A "large service" (more than 7 million monthly active UK users) that is at a medium or high risk of image-based CSAM, or
• A service that is at a high risk of image-based CSAM and either has more than 700000 monthly active UK users or is a file-storage and file-sharing service.
> do CSAM scanning if you accept images, CSAM URL scanning if you accept links
Which really should be happening anyway.
I would strongly prefer that forums I visit not expose me to child pornography.
You cannot get access to the tech without being a certain size; it's restricted so that people can't keep modifying images until they evade the filter.
Cloudflare has a free CSAM scanning tool available for everyone:
https://developers.cloudflare.com/cache/reference/csam-scann...
oh great so you centralize even harder and that will fix everything?
So what's your alternative to market forces?
Government regulation - "good" centralisation?
Not the person you are asking but alternatives I can think of are:
- Configure forums using ranks so that new users can post but nobody will see their post until a moderator approves or other members vouch for them. Some forums already have this capability. It's high maintenance though and shady people will still try to warm up accounts just like they do here at HN.
- Small communities make their sites invite-only and password-protect the web interface. This is also already a thing, but those communities usually stay quite small. Some prefer small communities: quality over quantity, or real friends over the bloated "friends" lists common on big platforms.
- Move to Tor onion sites so that one has more time to respond to a flagged post. Non-Tor sites get abused by people running scripts that upload CSAM, snapshot it (despite being the ones who uploaded it), and automatically submit reports to registrars, server and CDN providers so the domains and rented infrastructure get cancelled. This pushes everyone onto big centralized sites, and I would not be surprised if some of the people doing it had a vested interest in that outcome.
Not really great options, but they do exist. Some use these options to stay off the radar, being less likely to attract the unstable people or lazy agents trying to inflate their numbers. I suppose now we can add to that list government agencies trying to profiteer off this new law. Gamification of the legal system, as if weaponization of it were not bad enough.
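The first option above (rank-gated posting with vouching) can be sketched roughly like this; the class, thresholds, and method names are all illustrative, not taken from any real forum engine:

```python
# Sketch of rank-gated posting: posts by new users are held until a
# moderator approves them or enough trusted members vouch for them.
# The vouch threshold and all names are illustrative assumptions.

VOUCHES_NEEDED = 2  # trusted-member vouches that substitute for a mod approval

class Post:
    def __init__(self, author_trusted: bool, body: str):
        self.body = body
        self.visible = author_trusted  # trusted users publish immediately
        self.vouches = 0

    def vouch(self, voucher_trusted: bool):
        """A trusted member vouches for a held post; enough vouches release it."""
        if self.visible or not voucher_trusted:
            return
        self.vouches += 1
        if self.vouches >= VOUCHES_NEEDED:
            self.visible = True

    def approve(self):
        """A moderator approves the post directly."""
        self.visible = True

p = Post(author_trusted=False, body="hello")
assert not p.visible           # held for review
p.vouch(voucher_trusted=True)
p.vouch(voucher_trusted=True)
assert p.visible               # two trusted vouches release it
```

As noted, the maintenance cost is in deciding who counts as "trusted" - account-warming attacks mean join date alone is a weak signal.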
> I would strongly prefer that forums I visit not expose me to child pornography.
While almost everybody, including me, shares this preference, maybe it should be something that browsers could do? After all, why put the burden on countless various websites if you can implement it in a single piece of software?
This could also make it easier to go after people who are sources of such material because it wouldn't immediately disappear from the network often without a trace.
> While almost everybody including me shares this preference maybe it should be something that browsers could do? After all why put the burden on countless various websites if you can implement it in a single piece of software?
If I recall correctly, Apple tried to do that and it (rightly) elicited howls of outrage. What you're asking for is for people's own computers to spy on them on behalf of the authorities. It's like having people install CCTV cameras in their own homes so the police can make sure they're not doing anything illegal. It's literally Big Brother stuff. Maybe it would only be used for sympathetic purposes at first, but once the infrastructure is built, it would be a tempting thing for the authorities to abuse (or just use for goals that are not universally accepted, like banning all pornography).
Apple tried to make it obligatory, taking control away from the user. Which of course is a terrible idea.
I don't want my browser to report me if I encounter illegal materials. I want the browser to anonymously report the website where they are, at most and even that, only if I don't disable reporting.
People do install cctv cameras in their homes but they are (or at least believe to be) in control of what happens with the footage.
So basically you want your browser to be controlled by the government and to remove one's ability to use their browser of choice?
All this because a negligible number of web users upload CSAM?
No, I want my browser to be controlled by me. It should just be more capable so I'm not getting exposed to materials that I don't like getting exposed to and maybe easily report them if I want. Like adblock but for illegal or undesirable online materials.
> All this because a negligible number of web users upload CSAM?
Still, it's better to fix it in the browser than to keep increasingly policing the entirety of the internet to keep it negligible.
Love to never be able to see photos of my child at the beach because Google Chrome tells me I'm a criminal.
Unless you tell Google Chrome it's ok and you actually want to see photos of naked children in some whitelisted contexts.
OP isn't throwing the baby with the bathwater and he explains it very well in his post: the risk of being sued is too great in itself, even if you end up winning the lawsuit.
The general risk of being sued is always there regardless of the various things laws say.
I think there’s a pretty decent argument being made here that OP is reading too far in the new rules and letting the worst case scenario get in the way of something they’re passionate about.
I wonder if they consulted with a lawyer before making this decision? That’s what I would be doing.
In this case though, the griefers don't have to file a lawsuit themselves, they just have to post harmful material and file complaints. That is a much lower threshold. It is less effort than the sorts of harassment these people already inflict on moderators, but with potentially much more serious results.
I was thinking the same.
I don’t like this new legislation one bit, but…
It’s not obvious to me, from the post or what I know of the legislation, that OP is at meaningfully greater risk of being sued by someone malicious/vindictive or just on a crusade about something than they have been prior to the legislation. (Unless, of course, their forums have a consistent problem with significant amounts of harmful content like CSAM, hate speech, etc.)
I am not saying that the risk isn’t there or that this isn’t the prudent course of action, I just don’t feel convinced of it at this point.
Given:
> I do so philanthropically without any profit motive (typically losing money)
the cost (and hassle) of consulting with a lawyer is potentially a lot in relative terms.
That said, I thought that the rule in the UK was generally that the loser pays the winner's costs, so I'd think that would limit the costs of defending truly frivolous suits. The downside risks are possibly still high though.
> That said, I thought that the rule in the UK was generally that the loser pays the winners costs
That’s generally true… but only happens after those costs have been incurred and probably paid.
There’s no guarantee the party suing will be able to cover their own costs and the defendant’s costs. That leaves OP on the hook for defence costs with the hope that they might get them back after a successful and likely expensive defence.
In that situation, I can understand why OP wouldn’t want to take the risk.
> the loser pays the winners costs
Winning against the government is difficult - an asymmetric unfair fight. You can't afford to pay the costs to try: financial, risk, opportunity cost, and most importantly YOUR time.
OP having to consult a lawyer IS the problem...
I think from a US perspective being sued is commonplace but in most of the world being sued is very rare.
OP uses they/them pronouns.
From how I understood the post, the forums were never self-sustaining financially and always required a considerable amount of time, so the new legislation was probably just the final straw that broke the camel's back?
Exactly. Adding punitive governance hurdles hinders the small and/or solo.
Those that do so whilst not seeking financial gain are impacted the most.
Regulatory capture. https://en.wikipedia.org/wiki/Regulatory_capture
Yes they do but you need to do more than that.
They do not have the resources to find out exactly what they need to do so that there is no risk of them being made totally bankrupt.
If that is all - please point to the guidance or law that says just having a report button is sufficient in all cases.
I get the same feeling, as the repercussions for bad actors are fines relative to revenue, 10% if I read correctly. Given that the OP has stated that they work off a deficit most of the time, I can't see this being an issue.
Also, if it is well monitored and seems to have a positive community, I don't see a major risk that would force a shutdown. Seems more like shutting down out of frustration against a law that, while silly on its face, doesn't really impact this provider.
>the repercussions for bad actors are fines relative to revenue, 10% if I read correctly, given that the OP has stated that they work off a deficit most of the time, I can't see this being an issue.
From another commenter:
Platforms failing this duty would be liable to fines of up to £18 million or 10% of their annual turnover, whichever is higher.
Part of the issue is that you have to spend time and money to defend an accusation.
I am the OP, and if you read the guidance published yesterday: https://www.ofcom.org.uk/siteassets/resources/documents/onli...
Then you will see that a forum that allows user generated content, and isn't proactively moderated (approval prior to publishing, which would never work for even a small moderately busy forum of 50 people chatting)... will fall under "All Services" and "Multi-Risk Services".
This means I would be required to do all the following:
1. Individual accountable for illegal content safety duties and reporting and complaints duties
2. Written statements of responsibilities
3. Internal monitoring and assurance
4. Tracking evidence of new and increasing illegal harm
5. Code of conduct regarding protection of users from illegal harm
6. Compliance training
7. Having a content moderation function to review and assess suspected illegal content
8. Having a content moderation function that allows for the swift take down of illegal content
9. Setting internal content policies
10. Provision of materials to volunteers
11. (Probably this because of file attachments) Using hash matching to detect and remove CSAM
12. (Probably this, but could implement Google Safe Browsing) Detecting and removing content matching listed CSAM URLs
...
the list goes on.
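For a sense of scale, the hash and URL matching in items 11 and 12 boils down to something like the sketch below - though real deployments use perceptual hashes (PhotoDNA-style) and restricted feeds from bodies like the IWF, none of which are publicly available, so every hash and URL here is an invented placeholder:

```python
import hashlib

# Shape of the matching checks in items 11-12. Real systems use perceptual
# hashes and curated, access-controlled feeds; the sets below are placeholders.

KNOWN_BAD_IMAGE_HASHES = {"deadbeef"}            # placeholder entries
KNOWN_BAD_URLS = {"example.invalid/bad"}         # placeholder entries

def image_flagged(data: bytes) -> bool:
    # Exact hashing only: a single changed pixel defeats it, which is exactly
    # why real matching needs perceptual hashes that small sites can't get.
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD_IMAGE_HASHES

def url_flagged(url: str) -> bool:
    # Normalise the scheme and trailing slash before comparing to the list.
    bare = url.removeprefix("https://").removeprefix("http://").rstrip("/")
    return bare in KNOWN_BAD_URLS

assert url_flagged("https://example.invalid/bad")
assert not image_flagged(b"holiday photo")
```

Even this toy version hints at the gap: the matching code is trivial, but the lists it depends on, and the processes around false positives, are where the real cost and liability sit.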
It is technical work, extra time, never being able to go off-call even when I'm on vacation, the need for extra volunteers, training materials for volunteers, appeals processes for moderation (in addition to the flak one already receives for moderating), somehow removing accounts of proscribed organisations (who has this list, and how would I know if an account is affiliated?), etc, etc.
Bear in mind I am a sole volunteer, and that I have a challenging and very enjoyable day job that is actually my primary focus.
Running the forums is an extra-curricular volunteer thing, it's a thing that I do for the good it does... I don't do it for the "fun" of learning how to become a compliance officer, and to spend my evenings implementing what I know will be technically flawed efforts to scan for CSAM, and then involve time correcting those mistakes.
I really do not think I am throwing the baby out with the bathwater, but I did stay awake last night dwelling on that very question, as the decision wasn't easily taken and I'm not at ease with it, it was a hard choice, but I believe it's the right one for what I can give to it... I've given over 28 years, there's a time to say that it's enough, the chilling effect of this legislation has changed the nature of what I was working on, and I don't accept these new conditions.
The vast majority of the risk can be realised by a single disgruntled user on a VPN from who knows where posting a lot of abuse material when I happen to not be paying attention (travelling for work and focusing on IRL things)... and then the consequences and liability comes. This isn't risk I'm in control of, that can be easily mitigated, the effort required is high, and everyone here knows you cannot solve social issues with technical solutions.
Thanks for all your work buro9! I've been an lfgss user for 15 years. This closure as a result of bureaucratic overreach is a great cultural loss to the world (I'm in Canada). The zany antics and banter of the London biking community provided me, and the contacts I shared it with, many interesting thoughts, opinions, points of view, and memes, from a unique and authentic London local point of view.
LFGSS is more culturally relevant than the BBC!
Of course governments and regulations will fail to realize what they have till it's gone.
- Pave paradise, put up a parking lot.
Which of these requirements is, in your opinion, unreasonable?
> The vast majority of the risk can be realised by a single disgruntled user on a VPN from who knows where posting a lot of abuse material when I happen to not be paying attention (travelling for work and focusing on IRL things)... and then the consequences and liability comes. This isn't risk I'm in control of, that can be easily mitigated, the effort required is high, and everyone here knows you cannot solve social issues with technical solutions.
I bet you weren't the sole moderator of LFGSS. In any web forum I know, there is at least one moderator online every day, and many more senior members able to use a report function. I used to be a moderator for a much smaller forum and we had 4 to 5 moderators at any time, with some of them being among those online every day or almost every day.
I think a number of features/settings would be interesting for a forum software in 2025:
- deactivation of private messages: people can use instant messaging for that
- automatically blur a post when the report button is hit by a member (and by blur I mean replacing the full post server-side with an image, not doing client-side JavaScript)
- automatically blur posts that have not been reviewed by the moderation team or a "senior level of membership" within a certain period (6 or 12 hours for example)
- disallow new members from reporting and blurring things; only people who are known good members can
All this does not remove the bureaucracy of the assessments/audits mandated by the law, but it should at least make forums moderatable and provide a modicum of security against illegal/CSAM content.
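The blur-on-report and review-timeout ideas could be sketched like this; the timeout value, the trusted-reporter gate, and all names are assumptions for illustration:

```python
import time

BLUR_PLACEHOLDER = "[post hidden pending moderator review]"
UNREVIEWED_BLUR_AFTER = 12 * 3600  # seconds: hide posts unreviewed after 12h

class ForumPost:
    def __init__(self, body: str):
        self.body = body
        self.created = time.time()
        self.reviewed = False   # set once a mod or senior member has seen it
        self.blurred = False

    def report(self, reporter_is_trusted: bool):
        # Only established members trigger the server-side blur, so
        # brand-new accounts can't grief by mass-reporting.
        if reporter_is_trusted:
            self.blurred = True

    def rendered(self, now=None) -> str:
        # Server-side substitution: while blurred, the raw body is never
        # sent to the client, unlike a cosmetic CSS/JS blur.
        now = time.time() if now is None else now
        stale = not self.reviewed and now - self.created > UNREVIEWED_BLUR_AFTER
        return BLUR_PLACEHOLDER if (self.blurred or stale) else self.body

p = ForumPost("hello")
p.report(reporter_is_trusted=True)
assert p.rendered() == BLUR_PLACEHOLDER
```

The key design choice is that both paths fail closed: a trusted report or a lapsed review window hides the post until a human looks at it.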
OP runs 300 essentially free forums. It's just too much.
That's my feeling too. There will always be people who take laws and legal things overly seriously. For example, WordPress.org added a checkbox to the login saying that pineapple on pizza is delicious, and there are literal posts on Twitter asking "I don't like pineapple on pizza, does this mean I can't contribute?". It doesn't matter if a risk isn't even there - who is going to be able to sue over pineapple on pizza being delicious or not? Yet there will be people who say "Sorry, I can't log in, I don't like pineapple on pizza".
In this case, it's "I'm shutting down my hobby that I've had for years because I have to add a report button".
> having a content moderation function to review and assess suspected illegal content
That costs money. The average person can't know every law. You have to hire lawyers to adjudicate every report, or otherwise assess each report for illegality yourself. No one is going to do that for free if the penalty for being wrong is being thrown in prison.
A fair system would be to send every report of illegal content to a judge to check if it's illegal or not. If it is the post is taken down and the prosecution starts.
But that would cost the country an enormous amount of money. So instead the cost is passed to the operators. Which in effect means only the richest or riskiest sites can afford to continue to operate.
Answered here: https://news.ycombinator.com/item?id=42434349
[flagged]
What agenda do you think the OP is following, and why do you think they'd do so now after their long (~3 decades!) history of running forums? There have been many other pieces of legislation in that time, why now?
I tried to think of an agenda, but I'm struggling to come up with one. I think OP just doesn't want to be sued over a vague piece of legislation, even if it was a battle they could win (after a long fight). Just like they said right there in the post.
It's kind of rude to imply that this is performative when they gave a pretty reasonable explanation.
> or is pretending to feel for some agenda
Assume good faith.
https://news.ycombinator.com/newsguidelines.html
The actual rules are vulnerable to this attack. https://www.legislation.gov.uk/ukpga/2023/50
If you think the attack won't be attempted, you've never been responsible for an internet forum.
Can you explain what you mean by "this attack"?
They'll submit a complaint to the regulator that you've not done a risk assessment?
I've tried submitting issues to the ICO before but didn't have enough for them to go on and so the other company was never contacted.
This is Ofcom, not the ICO; and stuff like "flood the site with child abuse material" (not in your risk assessment, why would it be? this is a public forum about cycling and nobody's ever done that before) and try to get you prosecuted for not having adequate protections in place.
Their examples of when it's right to say you're low risk hinge on it not having happened before. The medium-risk example had "has had warnings of CSAM being shared before from international organisations and has no way of spotting it happening again".
You don't have to stop everything happening to comply.
So is the scenario you're picturing that someone spams child porn, complains you don't stop it and makes you add a URL filter? Would you do something different if someone was spamming csam anyway?
Someone spams CSAM on the site. You report it to CEOP, as every forum mod knows to do (though most have never needed to), and Ofcom lets you off with a warning – but you're no longer low-risk, so there's a lot more paperwork.
Now someone copy-pastes the doxx of members of the military from a leaked Pastebin – something you have no practical way of detecting – and it's not your first strike, and there's some public attention and someone decides they need an Example, so now you're getting scary letters about potential criminal charges.
You don't hear anything about those charges, so you assume things are okay. But now someone's claiming to be the parent of one of your users, who hasn't been around for a while. They claim the user was 17, has tragically died, and you don't have a policy about giving parents access to information about this user's activity (but they claim it's a TTRPG forum, which is a children's game, so 35(1)(3)(b) says you should have had a children's access assessment), and they claim they can prove they're the user's parents (they have the password, even!) but haveibeenpwned says the associated email address was in a data breach. Do you provide the information, or not?
Fortunately, you get in contact with the real parents of that child – they know nothing about this website you run, and the person contacting you is someone else. You let them know that photos of their identification documents have been stolen. (You later learn that the user isn't even dead: they tell you about a stalker ex, and you make a note to be extra careful about this user's data.)
One of the domains in your webring has expired, and now redirects to a cryptospam site. That counts as §38 "fraudulent advertising". In response, Ofcom decide (very reasonably) to make webrings illegal.
If only more people actually read the actual documents in context (same with GDPR), but the tech world has low legal literacy
Expecting people to read and correctly interpret complex legal documents is absurd. Obviously any lay person is heavily dissuaded by that.
I would never accept personal liability for my interpretation of the GDPR being correct. I would be extremely dumb if I did.
The UK has just given up on being in any way internationally relevant. If the City of London financial district disappeared, within 10 years we'd all forget that it's still a country.
This feels relevant to your comment: https://archive.is/9V2Bf
Orgs are already fleeing LSEG for deeper capital markets in the US.
The LSE itself isn't really _that_ important; London remains huge for financial services in general (though this _may_ be somewhat in decline for various reasons; it lost a certain amount of importance as the de facto gateway to Europe after Brexit, say).
As an aside, the UK is a great tourist destination, especially if you leave London right after landing.
Beautiful landscape, the best breakfast around, really nice people, tons of sights to see.
Best Breakfast Around? That's not one I've heard touted before. Expand plz. The stereotypical british breakfast in my head is undercooked bacon, beans, hard cooked eggs.
You forgot the sausages! Compare and contrast to the "continental" "breakfast" which is usually a muffin.
And you forgot the black pudding! Not to mention tomatoes, mushrooms (if you insist) and of course fried bread and/or toast. I won’t complain if hash browns or white pudding make an appearance either.
You can’t beat a good fry up!
Yeah, the "Full English" breakfast is popular worldwide.
https://en.wikipedia.org/wiki/Full_breakfast
Yes, the end result for rich western nations that get strangled by government is to become a museum
I agree it’s tragic but out of all the ways a culture can strangle itself, museumification is the least horrible
There's always the option of, you know, not strangling itself.
That would be splendid although it seems to have been a controversial choice until recently
don't kinkshame the lovely people of the Isles, they apparently have a penchant for many things, like separate cold and hot water faucet and a bit of autoerotic asphyxiation.
> like separate cold and hot water faucet
This was, oddly, for regulatory reasons; the concern was that a blocked mixer unit could cause hot water (considered potentially unsafe) to be forced into the mains supply (presumed safe). This hasn't been a concern with mixer designs for a long time, but it took til the 90s to get the rule changed.
The hot water was considered unsafe initially because early hot water cylinders over there were open tanks subject to incursion by vermin
This is a weirdly persistent myth, but no, that's not the case (think about it; what would happen if you had an open hot water tank? It would cool down very quickly, and you'd have horrendous humidity problems). _Cold_ water tanks used to be like that (sometimes still are), which I suspect is where the confusion comes from.
Hot water getting into the mains would be a concern anywhere; in particular, unless all equipment is in perfect working order, there's a legionnaires disease risk, but there are many other risks.
Thanks for the details! (Completely randomly just a few hours ago I got a many-many years old Tom Scott video recommended to me on YT about this topic.)
...plenty of weather, scenic potholes, medieval plumbing and occasional trains.
How much damage can they withstand before they figure out how to stop hurting themselves? I wouldn't touch UK investment with a ten foot pole.
A lot more, the Online Safety Act is just a symptom of the structural problems (Lack of de-facto governance, A hopelessly out of touch political class, Voting systems that intentionally don't represent the voting results, etc).
Argentina has had nearly 100 years of decline, Japan is onto its third lost decade. The only other party in the UK that has a chance of being elected (because of the voting system) is led by someone who thinks sandwiches are not real [1]. It's entirely possible the UK doesn't become a serious country in our lifetimes.
[1] https://www.politico.eu/article/uk-tory-leader-sandwiches-no...
> “I’m not a sandwich person, I don’t think sandwiches are a real food, it’s what you have for breakfast.” The Tory leader went on to confirm that she “will not touch bread if it’s moist.
The headline is clickbait. She didn't say that sandwiches are not real. She is saying that she doesn't believe it is a proper lunch/meal.
For all the deliberate rage-baiting that Kemi Badenoch and other present-day Tories engage in, the 'controversy' about sandwiches is entirely constructed by journalists. The Politico article that parent linked to even says as much:
"The Spectator asked the Tory leader — elected to the head of the U.K. opposition party in November — if she ever took a lunch break."
The Spectator are using their press privileges to ask party leaders about their personal lifestyle rather than asking about anything relevant to policy - and although the Spectator might be forgiven for that, it is indefensible for 'serious' newspapers such as the Guardian and the Telegraph to be giving this story front-page status.
There are lots of politicians for us to be embarrassed about, but perhaps even more journalists.
The person that I replied to tried to pretend that Kemi Badenoch had seriously disputed the existence of sandwiches. I am not sure we deserve better politicians and journalists.
I am of the opinion that the vast majority of journalists are simply stenographers. I wouldn't expect them to do their job. Unfortunately you have to piece together the truth for yourself.
>A hopelessly out of touch political class
Orwell pointed this out in "England Your England", which was written during the Blitz. Many of the problems he described have only got worse in the decades since he wrote about them, in my opinion. While the essay is a bit dated now (it predates the post-war era of globalisation, for example, which created new axes in UK politics), I still think it's essential background reading for people who want to know what's wrong with the UK, and it's an excellent example of political writing in general.
She doesn't think sandwiches aren't real. It was just a point about not liking them.
The current actual leader of the UK decided to politicise this, in a real moist bread response:
> Prime Minister Keir Starmer — who leads a country grappling with a stagnant economy, straining public services and multiple crises abroad — in turn accused Badenoch of talking down a “Great British institution.”
Argentina is a great analog for the UK, time shifted by century. Both former first-class economies doomed to a long decline by bad policies that elites refuse to change.
Argentina was a rich country but never a rich industrialized country. At the time we were rich, we were exporting beef and importing everything that came from a factory. Later attempts at industrialization, after global protectionism and domestic infighting had already plunged us into relative poverty, were based on the flawed paradigm of import-substitution industrialization, whereas the UK was transitioning from mercantilism to Smithian liberalism when they industrialized, both of which put the highest possible priority on exports. London is the world's second biggest financial hub, a fact that accounts for a significant part of the English economy, while Buenos Aires was never a financial hub for anyone but Argentines, and even we bank in London, Omaha, or Montevideo whenever we have the choice.
Industrialization was somewhat successful; I am eating off an Argentine plate, on an Argentine table, with Argentine utensils (ironically made of stainless steel rather than, as would be appropriate for Argentina, silver) while Argentine-made buses roar by outside. A century ago, when we were rich, all those would have been imported from Europe or the US, except the table. My neighborhood today is full of machine shops and heavy machinery repair shops to support the industrial park across the street. Even the TV showing football news purports to be Argentine, but actually it's almost certainly assembled in the Tierra del Fuego duty-free zone from a Korean or Chinese kit.
There is not much similarity.
As a curious occasional GeoGuessr player: whereabouts in Tierra del Fuego might one find industry, manufacturing and assembly? I thought it was focused on fishing, tourism and shipping.
I've never been there, but Google Maps search https://www.google.com/maps/search/f%C3%A0brica+de+aire+acon... suggests the southwest corner of Rio Grande, and also there's a Midea Carrier factory a bit north of the city along the coastal highway.
Well, I guess who decides the line between basic industrialization and import substitution? The bondholders?
Import substitution is not an alternative to basic industrialization. It's a policy advocated as a means to achieve basic industrialization. I regret that my comment was so misleading.
The usual alternative to import substitution industrialization is export-focused industrialization. Argentina and Brazil exemplify the former; Japan, Taiwan, South Korea, Hong Kong, and now the PRC exemplify the latter. The line between them is whether the country's manufactures are widely exported.
Hard disagree. Argentina is only similar to the UK insofar as they both deindustrialized starting in the 80s. Besides that, I have no idea why it would be a "great analog".
A hundred years ago Argentina had a population of less than 8 million people and the 8th-biggest territory of highly fertile land. That's not even 20% of the current population. Argentina was never a developed country.
What country is a good counter example to UK decline?
Spain maybe?
China and India
I don't think sandwiches are "real food" too, what's the problem with that specific case?
Not sure exactly, but the first thing that comes to mind is "let them eat cake" vibes.
> Lunch is for wimps
Weird flex but okay?
USA may not be such a good bet these days either.
> How much damage can they withstand before they figure out how to stop hurting themselves?
Funnily enough we wonder this about the USA and its drain-circling obsession with giving power -- and now grotesque, performative preemptive obeisance -- to Donald Trump.
[dead]
>the Online Safety Act was supposed to hold big tech to account, but in fact they're the only ones who will be able to comply... it consolidates more on those platforms.
This says it so well, acknowledging the work of a misguided bureaucracy.
Looks like it now requires an online community to have its own bureaucracy in place, to preemptively stand by ready to effectively interact in new ways with a powerful, growing, long-established authoritarian government bureaucracy of overwhelming size and increasing overreach.
Measures like this are promulgated in such a way that only large highly prosperous outfits beyond a certain size can justify maintaining readiness for their own bureaucracies to spring into action on a full-time basis with as much staff as necessary to compare to the scale of the government bureaucracy concerned, and as concerns may arise that mattered naught before. Especially when there are new open-ended provisions for unpredictable show-stoppers, now fiercely codified to the distinct disadvantage of so many non-bureaucrats just because they are online.
If you think you are going to be able to rise to the occasion and dutifully establish your own embryonic bureaucracy for the first time to cope with this type of unstable landscape, you are mistaken.
It was already bad enough before without a newly imposed, bigger moving target than everything else combined :\
Nope, these types of regulations only allow firms that already have a prominent, well-funded bureaucracy of their own, on a full-time basis, long established after growing in response to less onerous mandates of the past. Anyone else who cannot just take this in stride without batting an eye need not apply.
> Looks like it now requires an online community to have its own bureaucracy in place
What do you mean by bureaucracy in this case? Doing the risk assessment?
Good question.
I would say more like the prohibitive cost of compliance comes from the non-productive (or even anti-productive) nature of the activities needed to do so, on an ongoing basis.
An initial risk assessment is a lot more of a fixed target, with a goal that is in sight if not well within reach. Once it's behind you, it's possible to get back to putting more effort into productive actions. Assessments are often sprinted through so things can get "back to normal" ASAP, which can be worth it sometimes. Other times it's a world of hurt if you don't pay attention to whether it's a moving goalpost and the "sprint" might need to last forever.
Which can also be coped with successfully, like dealing with large bureaucratic institutions as customers, since that's another time when you've got to have your own little bureaucracy, fully dedicated to the interaction and well-staffed enough for continuous 24/7 problem-solving at a moment's notice. If it's just a skeleton crew at a minimum, they will have a stunted ability for teamwork, since the most effective deployment is more like a relay race, where each member must pull the complete weight, go the distance, not drop the baton, and pass it with finesse.
While outrunning a pursuing horde and their support vehicles ;)
OP mentions this ( https://news.ycombinator.com/item?id=42440887 ):
> 1. Individual accountable for illegal content safety duties and reporting and complaints duties
> 2. Written statements of responsibilities
> 3. Internal monitoring and assurance
> 4. Tracking evidence of new and increasing illegal harm
> 5. Code of conduct regarding protection of users from illegal harm
> 6. Compliance training
> 7. Having a content moderation function to review and assess suspected illegal content
> 8. Having a content moderation function that allows for the swift take down of illegal content
> 9. Setting internal content policies
> 10. Provision of materials to volunteers
> 11. (Probably this because of file attachments) Using hash matching to detect and remove CSAM
> 12. (Probably this, but could implement Google Safe Browser) Detecting and removing content matching listed CSAM URLs
> ...
> the list goes on.
Most of those don't seem like they would actually be much problem.
First, #2, #4, #5, #6, #9, and #10 only apply to sites that have more than 7 000 000 monthly active UK users or are "multi-risk". Multi-risk means being at medium to high risk in at least two different categories of illegal/harmful content.
If the site has been operating a long time and has not had a problem with illegal/harmful content it is probably going to be low risk. There's a publication about risk levels here [1].
For the sake of argument though let's assume it is multi-risk.
#1 means having someone who has to explain and justify to top management what the site is doing to comply. It sounds like in the present case the person who would be handling compliance is also the person who is top management, so not really much to do here.
#2 means written statements saying which senior managers are responsible for the various things needed for compliance. For a site without a lot of different people working on it this means writing maybe a few sentences.
#3 is not applicable. It only applies to services that are large (more than 7 000 000 active monthly UK users) and are multi-risk.
#4 means keeping track of evidence of new or increasing illegal content and informing top management. Evidence can come from your normal processing, like dealing with complaints, moderation, and referrals from law enforcement.
Basically, keep some logs and stats and look for trends, and if any are spotted bring it up with top management. This doesn't sound hard.
#5 You have to have something that sets the standards and expectations for the people who will be dealing with all this. This shouldn't be difficult to produce.
#6 When you hire people to work on or run your service you need to train them to do it in accord with your approach to complying with the law. This does not apply to people who are volunteers.
#7 and #8 These cover what you should do when you become aware of suspected illegal content. For the most part I'd expect sites could handle it like they handle legal content that violates the site's rules (e.g., spam or off-topic posts).
#9 You need a policy that states what is allowed on the service and what is not. This does not seem to be a difficult requirement.
#10 You have to give volunteer moderators access to materials that let them actually do the job.
#11 This only applies to (1) services with more than 7 000 000 monthly active UK users that have at least a medium risk of image-based CSAM, or (2) services with a high risk of image-based CSAM that either have at least 700 000 monthly active UK users or are a "file-storage and file-sharing service".
A "file-storage and file-sharing service" is:
> A service whose primary functionalities involve enabling users to:
> a) store digital content, including images and videos, on the cloud or dedicated server(s); and
> b) share access to that content through the provision of links (such as unique URLs or hyperlinks) that lead directly to the content for the purpose of enabling other users to encounter or interact with the content.
#12 Similar to #11, but without the "file-storage and file-sharing service" part, so only applicable if you have at least 700 000 monthly active UK users and are at a high risk of CSAM URLs, or have at least 7 000 000 monthly active UK users and at least a medium risk of CSAM URLs.
[1] https://www.ofcom.org.uk/siteassets/resources/documents/onli...
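The size/risk gating in the walkthrough above can be sketched as a small decision function. This is only my reading of the parent comment's summary (the duty numbers, the 7,000,000 and 700,000 user thresholds, and the "multi-risk" definition all come from it), not the Act itself and not legal advice; the function and parameter names are my own:

```python
# Sketch of the threshold logic described above -- illustrative only,
# based on the parent comment's summary, not on the Act or Ofcom codes.

LARGE = 7_000_000      # "large service": monthly active UK users
MEDIUM_SIZE = 700_000  # threshold used by the CSAM measures


def applicable_duties(mau_uk, risks, file_sharing_service=False):
    """Return the set of duty numbers (1-12, as listed above) that
    appear to apply. `risks` maps a harm category such as
    "image_csam" or "csam_urls" to "low", "medium", or "high"."""
    level = lambda k: risks.get(k, "low")
    multi_risk = sum(1 for r in risks.values() if r in ("medium", "high")) >= 2
    large = mau_uk > LARGE

    duties = {1, 7, 8}                      # baseline for in-scope services
    if large or multi_risk:
        duties |= {2, 4, 5, 6, 9, 10}
    if large and multi_risk:
        duties.add(3)                       # internal monitoring and assurance
    if (large and level("image_csam") in ("medium", "high")) or (
        level("image_csam") == "high"
        and (mau_uk > MEDIUM_SIZE or file_sharing_service)
    ):
        duties.add(11)                      # CSAM hash matching
    if (mau_uk > MEDIUM_SIZE and level("csam_urls") == "high") or (
        large and level("csam_urls") in ("medium", "high")
    ):
        duties.add(12)                      # CSAM URL detection
    return duties


# A small, low-risk forum (e.g. 275k users) gets only the baseline duties:
print(sorted(applicable_duties(275_000, {"image_csam": "low"})))  # [1, 7, 8]
```

On this reading, the open question for a small forum is less the list of duties and more whether its self-assessment of "low risk" would survive scrutiny.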
Go through their new tool https://ofcomlive.my.salesforce-sites.com/formentry/Regulati...
There is no choice over number of users.
Where exactly is this note on size?
Reading the document you gave, there seems to be nothing saying that below a certain number of users there is no risk, or even low risk. There is the paragraph "There are comprehensive systems and processes in place, or other factors which sufficiently reduce risks of this kind of illegal harm to users, and your evidence shows they are very effective. Even if taking all relevant measures in Ofcom’s Codes of Practice, this may not always be sufficient to mean your service is low risk"
Which implies that you must have comprehensive systems and processes in place - what exactly are these systems and their requirements? What happens if the site owner is ill for a week and isn't logged on?
I did a double take when I saw this here. I’ve lurked on LFGSS, posted from time to time and bought things through it. Genuinely one of the best online communities I’ve been in, and the best cycling adjacent one by far.
Having said all that, I can’t criticise the decision. It makes me sad to see it and it feels like the end of an era online
Just like our knees… we might have started as fg, but now ss, and soon our Phils will not be laced for the track but for our chairs. The fight is fading.
Have you considered handing off the forums to someone based outside of the UK? I'm sure you might be able to find a reasonable steward and divest without leaving your users stranded. You've worked very hard and have something to be proud of, I would hate to see it go.
There’s several stories of open source projects being handed off to someone who seemed helpful, only to have them turn around and add malware months later.
I completely understand a desire to shut things down cleanly, rather than risk something you watched over for years become something terrible.
This is my question as well. It seems like there would be someone willing to do this, especially outside the EU.
Finding someone trustworthy is hard, but I know buro9 knows tons of people.
The UK is not in the EU, and this is about a UK law.
I am aware.
Then why suggest someone “especially outside the EU”, then? Would you mind clarifying that?
I meant "definitely outside the UK, and especially outside the EU", since then there's pretty much nothing these governments can do.
"The Act applies to services even if the companies providing them are outside the UK should they have links to the UK. This includes if the service has a significant number of UK users"
This is gonna sound crazy, but you can potentially just ignore unjust laws in countries like the UK if you don't live there. At your own risk of course, but that is the nature of protest. If OP divests completely then it should be out of his hands.
The OP does live in the UK. It would be insane to uproot their life just to keep a forum alive, while still having to deal with potential headaches related to it.
Yes, my suggestion to OP was to pass it off to a non-UK steward.
>"...This includes if the service has a significant number of UK users".
"[A] significant number"? How Britishly vague.
There was one person involved in the doompf of that ceo guy....
I would say that a significant-sized football crowd would be over 75,000.
That's a lot of numbers that 'significant', has to lean on.
> The subsection headed ‘User numbers’ (which begins at paragraph 5.7) explains when a service is to be treated as having more than a particular number of monthly active United Kingdom users for those measures which apply in relation to services of a certain size, and how to calculate the number of monthly active United Kingdom users. The definition of ‘large service’ is included in the definitions section in Section 5 of this document.
That section details how to calculate the figures, because they're relevant for sections like CSAM scanning
> Services that are at high risk of imagebased CSAM and (a) have more than 700,000 monthly active United Kingdom users or (b) are file-storage and file-sharing services.
Please allow us to gift you free-forever space at rsync.net to hold/stage this data - possibly in encrypted form - such that you can preserve what you have built.
Just email us.
None of this seems to describe exactly what the problem with this new act is. Can someone ELI5 what this new law does that means it's no longer safe to run your own forum?
I think that the fact that no one is fully sure is part of the problem.
The act is intentionally very vague and broad.
Generally, the gist is that it's up to the platforms themselves to assess and identify risks of "harm", implement safety measures, keep records and run audits. The guidance on what that means is very loose, but some examples might mean stringent age verifications, proactive and effective moderation and thorough assessment of all algorithms.
If you were to ever be investigated, it will be up to someone to decide if your measures were good or you have been found lacking.
This means you might need to spend significant time making sure that your platform can't allow "harm" to happen, and maybe you'll need to spend money on lawyers to review your "audits".
The repercussions of being found wanting can be harsh, and so, one has to ask if it's still worth it to risk it all to run that online community?
bearing in mind, this is also a country that will jail you for a meme post online.
The full agenda of course is: if we jail someone for the meme, then we get to force the company to remove the meme, and then we get to destroy the company if they do not comply with exacting specifications within exact times. Thus full control of speech, teehee modern technology brings modern loopholes! "shut up peon, you still have full right to go into your front yard and say your meme to the squirrels"
You can say it in your back yard to a garden gnome, the front yard is within earshot of public lands and the squirrels are locally endangered.
[dead]
No, not just memes that encourage people to riot in the streets. But you know that, and it's funny that you'd think a meme encouraging that should be enough to land someone in jail anyway. I mean, it fits very well with the servile "king's subject" mentality that your average Brit has, but still, it's always funny to come across.
(And no, I'm not American, but the UK is at the opposite extreme, rationalizing every single restriction and siding with the authorities every single time. It's extremely pitiful to see.)
Who got jailed for a meme that wasn't inciting to riot?
Peaceful protest that power doesn't like is a riot
Example? Or is this hypothetical?
I can think of Canada's Emergency powers act being invoked over the Freedom Convoy.
> If you were to ever be investigated, it will be up to someone to decide if your measures were good or you have been found lacking.
This is the problem with many European (and I guess also UK) laws.
GDPR is one notable example. Very few people actually comply with it properly. Hidden "disagree" options in cookie pop-ups and unauthorized data transfers to the US are almost everywhere, not to mention the "see personalized ads or pay" business model.
Unlike with most American laws, GDPR investigations happen through a regulator, not a privately-initiated discovery process where the suing party has an incentive to dig up as much dirt as possible, so in effect, you only get punished if you either really go overboard or are a company that the EU dislikes (which is honestly mostly just Meta at this point).
NOYB is a non-governmental organisation which initiated many of the investigations against Meta. E.g. they recently filed a complaint against the social media app BeReal for not taking no for an answer and continuously asking for permission for data collection if you decline.
> The act is intentionally very vague and broad
Exactly the complaint that everyone on here made about GDPR, saying the sky would fall in. If you read UK law like an American lawyer you will find it very scary.
But we don't have political prosecutors out to make a name for themselves, so it works ok for us.
From Wikipedia:
> The act creates a new duty of care of online platforms, requiring them to take action against illegal, or legal but "harmful", content from their users. Platforms failing this duty would be liable to fines of up to £18 million or 10% of their annual turnover, whichever is higher.
Doesn't that £18 million minimum disproportionately affect smaller operations, risk-wise? Or is that the point?
Yes, but it sounds like part of the point is that you want to put the fear of the Lord into small-fry operators.
They mention especially in their CSAM discussion that, in practice, a lot of that stuff ends up being distributed by smallish operators, by intention or by negligence—so if your policy goal is to deter it, you have to be able to spank those operators too. [0]
> In response to feedback, we have expanded the scope of our CSAM hash-matching measure to capture smaller file hosting and file storage services, which are at particularly high risk of being used to distribute CSAM.
Surely we can all think of web properties that have gone to seed (and spam) after they outlive their usefulness to their creators.
I wonder how much actual “turnover” something like 4chan turns over, and how they would respond to the threat of a 10% fine vs an £18mm one…
[0] https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
It's worth noting that integrating a CSAM hash scanner is easy to do. It took me a few hours to do the work, including testing and automatic database updates.
HOWEVER: I'm not sure how you would get access to the CSAM hash database if you were starting a new online image hosting service.
The requirements to sign up for IWF (the defacto UK CSAM database) membership are:
- be legally registered organisations trading for more than 12 months;
- be publicly listed on their country registration database;
- have more than 2 full-time unrelated employees;
- and demonstrate they have appropriate data security systems and processes in place.
Cloudflare have a free[1] one but you have to be a Cloudflare customer.
Am I missing something, or does this make it very difficult to start up a public facing service from scratch?
[0] https://www.iwf.org.uk/membership/how-to-join/
[1] https://blog.cloudflare.com/the-csam-scanning-tool/
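For what it's worth, the "few hours of work" claim above is plausible because the integration itself is just set membership plus periodic database refreshes. A minimal sketch of the control flow is below; real deployments use perceptual hashes (e.g. PhotoDNA-style) supplied under IWF/NCMEC membership terms, not the plain SHA-256 shown here, and `UploadScanner` is a name I made up for illustration:

```python
# Minimal sketch of where a hash-matching check slots into an upload
# pipeline. SHA-256 stands in for the vendor's perceptual hash purely
# to show the control flow; it is NOT how production CSAM scanners work.

import hashlib


def sha256_hex(data: bytes) -> str:
    """Hash an uploaded file's bytes to a hex digest."""
    return hashlib.sha256(data).hexdigest()


class UploadScanner:
    def __init__(self, known_hashes: set[str]):
        # In production this set would be refreshed on a schedule from
        # the vendor's database (the "automatic database updates"
        # mentioned above).
        self.known_hashes = known_hashes

    def is_flagged(self, data: bytes) -> bool:
        """True if the upload matches a known hash and should be
        blocked and reported rather than stored."""
        return sha256_hex(data) in self.known_hashes


# Usage: reject the upload and file a report on a match.
scanner = UploadScanner({sha256_hex(b"known-bad-example")})
print(scanner.is_flagged(b"known-bad-example"))   # True
print(scanner.is_flagged(b"holiday-photo.jpg"))   # False
```

Which underlines the parent's point: the code is trivial; the gatekeeping is entirely in who is allowed access to the hash list.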
> Am I missing something, or does this make it very difficult to start up a public facing service from scratch?
It's by design. Politicians have fallen for big tech lobbyists once again.
Also who says that the hashes provided by your CSAM database of choice are actually flagging illegal data and not also data that whoever runs the database wants to track down? You have no idea. You are just complicit in the surveillance state, really.
Yes. The regulation is set up to destroy smaller startups & organisations; the only folks who have a hope of complying with it are Big Tech.
AKA "regulatory capture".
The purpose of a system is what it does
It is not 18 million minimum, it is up to 18 million... unless you are so big that the second criteria affects you, then it is up to that.
It's a minimum maximum. The amount is still "up to" and courts rarely assign the maximum penalty for anything. It seems aimed at platforms which really break the rules, but are run at minimal cost. Basically a value of "what do you charge a minimal forum run at cost, with sole purpose of breaking all these rules".
Sure, it's "up to", but how low can you assume things will go? 5% would still destroy someone.
You could look at how penalties are assigned in other court cases which have maximum financial penalties.
Is that 18 million levied on the company? Or the individual?
If it's the company, the shareholders etc are not liable.
It is essentially requiring tech companies to work for the UK government as part of law enforcement. They are required to monitor and censor users or face fines, and Ofcom has the ability to shut down things it doesn't like.
This basically ensures that the only people allowed to host online services for other people in the UK will be large corporations, as they are the only ones that can afford the automation and moderation requirements imposed by this bill.
You should be able to self-host content, but you can't do something like operate a forum website or other smaller social media platform unless you can afford to hire lawyers and spend thousands of dollars a month hiring moderators and/or implementing a bulletproof moderation system.
Otherwise you risk simply getting shut down by Ofcom. Or you can do everything you are supposed to do and get shut down anyway. Good luck navigating their appeals process.
I don't mind getting shut down so much as I mind getting a fine for millions when my small little website doesn't make any money.
But surely no right minded judge would do such a thing, right?
UK fines are proportionate. So no, no UK judge will fine a small website £18m.
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
You need to do a risk assessment and keep a copy. Depending on how risky things are, you need to put more mitigations in place.
If you have a neighbourhood events thing that people can post to, and you haven't had complaints and generally keep an eye out for misuse, that's it.
If you run a large scale chat room for kids with suicidal thoughts where unvetted adults can talk to them in DMs you're going to have a higher set of mitigations and things in place.
Scale is important, but it's not the only determining factor. An example of low risk for suicide harm is
> A large vertical search service specialised in travel searches, including for flights and hotels. It has around 10 million monthly UK users. It uses recommender systems, including for suggesting destinations. It has a basic user reporting system. There has never been any evidence or suggestion of illegal suicide content appearing in search results, and the provider can see no way in which this could ever happen. Even though it is a large service, the provider concludes it has negligible or no risk for the encouraging or assisting suicide offence
An example for high risk of grooming is
> A social media site has over 10 million monthly UK users. It allows direct messaging and has network expansion prompts. The terms of service say the service is only for people aged 16 and over. As well as a content reporting system, the service allows users to report and block other users. While in theory only those aged 16 and over are allowed to use the service, it does not use highly effective age assurance and it is known to be used by younger children. While the service has received few reports from users of grooming, external expert organisations have highlighted that it is known to be used for grooming. It has been named in various police cases and in a prominent newspaper investigation about grooming. The provider concludes the service is high risk for grooming
> this is not a venture that can afford compliance costs... and if we did, what remains is a disproportionately high personal liability for me, and one that could easily be weaponised by disgruntled people who are banned for their egregious behaviour
I'm a little confused about this part. Does the Online Safety Act create personal liabilities for site operators (EDIT: to clarify: would a corporation not be sufficient protection)? Or are they referring to harassment they'd receive from disgruntled users?
Also, this is the first I've heard of Microcosm. It looks like some nice forum software and one I maybe would've considered for future projects. Shame to see it go.
The linked page has this phrasing, which I’m not entirely sure what it means, but could be understood as personal liability?
> Senior accountability for safety. To ensure strict accountability, each provider should name a senior person accountable to their most senior governance body for compliance with their illegal content, reporting and complaints duties.
I don't see that makes the person personally liable at all. It just gives them a direct line to the compliance board.
I think OP feels it indirectly creates massive personal liabilities for site operators, in that a user can deliberately upload illegal material and then report the site under the Act, opening the site operator up to £18M in fines.
This seems very plausible to me, given what they and other moderators have said about the lengths some people will go to online when they feel antagonised.
Zero chance it will be enforced like this.
The UK has lots of regulatory bodies and they all work in broadly the same way. Provided you do the bare minimum to comply with the rules as defined in plain English by the regulator, you won't either be fined or personally liable. It's only companies that either repeatedly or maliciously fail to put basic measures in place that end up being prosecuted.
If someone starts maliciously uploading CSAM and reporting you, provided you can demonstrate you're taking whatever measures are recommended by Ofcom for the risk level of your business (e.g. deleting reported threads and reporting to police), you'll be absolutely fine. If anything, the regulators will likely prove to be quite toothless.
Hopefully the new law is enforced sensibly, i.e., with much leniency given to smaller defendants, but hoping for that to be the case is a terrible strategy. The risk is certainly not zero as you claim -- all it takes is for one high-profile case of leniency resulting in some terrible outcome (e.g., child abuse) getting into the news, and the government employees responsible for enforcement will snap to a policy of zero-tolerance.
> provided you can demonstrate you're taking whatever measures are recommended by Ofcom
That level of moderation might not be remotely feasible for a sole operator. And yes, there's a legitimate social question here: Should we as a society permit sites/forums that cannot be moderated to that extent? But the point I'm trying to make is not whether the answer to that question is yes or no, it's that the consequences of this Act are that no sensible individual person or small group will now undertake the risk of running such a site.
[dead]
Yeah we’ve seen how competently and fairly laws are enforced in the uk with the post office scandal.
The rules aren't "never have bad things on your site" though.
The example of "medium risk" for CSAM urls is a site with 8M users that has actively had CSAM shared on it before multiple times, been told this by multiple international organisations and has no checking on the content. It's a medium risk of it happening again.
Again, sounds like the nonsense spoken on here when GDPR came out. Everyone was going to get fined millions. Except people with violations actually got compliance advice from the ICO. They only got fined (a small amount of money) when they totally ignored the ICO
I'm sure hackers in general rank above average in terms of paranoia. That and most also seem to think the world is black and white and runs like code instead of being all woolly like it is in reality.
Have you looked at a list of GDPR fines? There are small-time people getting hit with ridiculous sums (not millions, but still egregious).
While they could, I'm pretty sure that's already illegal, probably in multiple ways.
In the same way that you could be sued for anything, I'm sure you could also be dragged to court for things like that under this law... And probably under existing laws, too.
That doesn't mean you'll lose, though. It just means you're out some time and money and stress.
>In the same way that you could be sued for anything,
The risk and cost imbalance is much more extreme than that of a lawsuit.
I'm confident that, were I sufficiently motivated, I could upload a swathe of incriminating material to a website and cover my tracks within a couple of hours, doing damage that potentially costs the site operator £18M with no risk to myself -- not even my identity would be revealed. OTOH, starting a lawsuit at the very least requires me to pay for a lawyer's time, my face to appear in the court -- and if the suit is thrown out, I'll need to pay their court costs, too.
>While they could, I'm pretty sure that's already illegal, probably in multiple ways
Heh, welcome to the internet where the perpetrator and the beneficiary can be in different jurisdictions that make enforcement on the original bad actors impossible.
For example, have a friend in China upload something terrible to a UK site and then 'drop the dime' to a regulator in the UK. The UK state can easily come after you but will find it nearly impossible to go after the international actor.
Might the author be overreacting a bit to this new law? As I understand it, it doesn't put that much of an onerous demand on forum operators.
Then again, maybe he's just burnt out from running these sites and this was the final straw. I can understand if he wants to pack it in after so long, and this is as good a reason as any to call it a day.
Though, has no-one in that community offered to take over? Forums do change hands now and then.
> Might the author be overreacting a bit to this new law
As I read it, no; it's worth a read to see for yourself, though.
> it doesn't put that much of an onerous demand on forum operators.
It doesn't until it does; the issue is the massive amount of work needed to cover the "what if?".
It's not clear that it doesn't apply, and so it will be abused; that's how the internet works: DMCA, YouTube strikes, domain strikes, etc.
> Then again, maybe he's just burnt out from running these sites and this was the final straw. I can understand if he wants to pack it in after so long, and this is as good a reason as any to call it a day.
Possibly, worth asking.
> Though, has no-one in that community offered to take over? Forums do change hands now and then.
Someone else taking over doesn't remove the problem, though there might be someone willing to assume the risk.
If it is anything like GDPR enforcement in the UK, the 'what if' situation is Ofcom writing to you asking you to comply.
GDPR enforcement is bullshit because it was made by people who don't know what they are doing and didn't understand how much effort would be needed to actually prove there is a problem.
Honestly, same could be said for this one, it reads less like an attempt at making the internet better and more like a technical sounding PR stunt with sneaky power encroachment thrown in.
"We just need you to uses your government ID to sign in because of the children, we have a long track record of competent execution, maintenance and accountability, we are 100% not going to use this for other ...reasons"
It's the same governmental "Trust me bro, think of the children" they always throw out.
Outside of the intelligence agencies, the UK government is absolutely diabolical at anything technical: chronically over budget (because the original budget was decided by someone in an office with no actual experience managing an IT project) on projects outsourced to corrupt friends who siphon the money away. And not just IT; all projects.
They pay atrociously for the level of skill required for the positions advertised, so they get middle-of-the-road staff. That isn't a problem in itself; middle of the road is the backbone of IT projects.
The problem arises when you add actively bad project management, through either incompetence or outright maliciousness, and throw in some glacial, bureaucracy-laden processes that didn't work when they were drafted 40 years ago, let alone now,
and you get an entire industry of corruption and mediocrity.
/rant
Anyway, I mean, sure, you can take the lacklustre GDPR enforcement and use that to make decisions going forward. I wouldn't personally, because I don't think a single data point is a good basis for risk assessment.
DMCA, youtube copyright strikes, domain strikes, bank transaction complaints/chargebacks, all are mechanisms used to attack internet based businesses.
Do they serve a purpose? Debatable. Are they misused on a regular basis? Absolutely.
This isn't a "the sky is falling"; this is a "they have put into law the ability to drop the sky on me just because they (the government, or disgruntled internet denizens) feel like it".
It's up to you to decide how likely you think that is and plan accordingly.
There was a story very recently about the whole of itch.io going down because of some overzealous rent-seeking bullshit middleman (hired by rent-seeking bullshit artist FunkoPop)
> Might the author be overreacting a bit to this new law? As I understand it, it doesn't put that much of an onerous demand on forum operators.
As the manager of a community where people meet in person, I understand where he is coming from. Acting like law enforcement puts one in a position to confront dangerous individuals without authority or weapons. It is literally life-endangering.
>Might the author be overreacting a bit to this new law?
How much money should he spend on a lawyer to figure this out for him?
Would you be willing to risk personal liability for your interpretation of this law? Obviously I would not.
Author uses they/them pronouns.
Who asked?
Op uses they/them pronouns.
Yes, I think so. The chances of some well-meaning forum being taken down by this law are zero.
That's simply not true when you see the owners of these forums throwing in the towel because of the compliance requirements.
I threw in the towel long ago, before any of these laws with massive fines were put in place. People were already running scripts to upload CSAM and then automatically report their own CSAM to registrars, DNS providers, CDNs, etc. This was around the time a couple of alternatives to Akamai were growing really fast, and I have my suspicions about who these people are, but either way I got rid of my forums and stopped lurking here. It has been a wonderful 9-year vacation.
The whole government page at https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c... has an off-putting and threatening tone, celebrating how wonderful it is that online spaces will be tied in bureaucratic knots. Disgraceful.
I host a Mastodon server in the US. I almost wish I could get a legal threat from the UK so that I could print it and hang it on my wall as a conversation piece.
I have zero legal connection to the UK and their law doesn't mean jack to me. I look forward to thoroughly ignoring it, in the same way that I thoroughly ignore other dumb laws in other distant jurisdictions.
UK, look back on this as the day -- well, another day -- when you destroyed your local tech in favor of the rest of the world.
The hacker in me is real grumpy about all this, and believes that they’re nannying a whole lot of the dumb superficial stuff while pushing serious malfeasance underground.
But they make a good point: if you exclude the smaller providers, that’s where the drugs and CSAM and the freewheeling dialog go. Assuming it’s their policy goal to deter these categories of speech, I’m not sure how you do that without a net fine enough to scoop up the 4chans of the world too.
It’s not the behavior of a confident, open, healthy society, though…
> if you exclude the smaller providers, that’s where the drugs and CSAM and the freewheeling dialog go.
- Bad actors go everywhere now.
- £18 million fines seem like a fairly unhinged cannon to aim at small websites.
- A baseless accusation is enough to trigger a risk of life-changing fines. Bad actors don't just sell drugs and freewheel; they also falsely accuse.
£18M is the maximum fine.
As a quick cheat sheet: laws targeting CSAM are always just tools to go after other things.
CSAM is absolutely horrible, but CSAM laws don't stop CSAM (busts primarily happen through group defections).
Instead it's just a form of tarring, in this case of disliked speech, by associating it with the most horrible thing anyone can think of.
The cure is worse than the disease.
It is pretty clear that many European countries, EU or not, do not want individuals hosting websites. Germany has quite strict rules regarding hosting, the EU has again and again proposed legislation that makes individuals hosting sites very hard and the UK doing similar things is no surprise.
These governments only want institutions to host web services. Their rules are openly hostile to individuals. One obvious benefit is much tighter control: having a few companies with large, registered sites gives the government control.
It is also pretty clear that the public at large does not care. Most people are completely unaffected and rarely venture outside of the large, regulated platforms.
> do not want individuals hosting websites
It's more about "accepting and publishing arbitrary content".
But, in practice, how hard is it to host a website anonymously? Or off-shore?
>But, in practice, how hard is it to host a website anonymously? Or off-shore?
Obviously it is trivial, but so is shoplifting.
Both are illegal and telling people to commit crimes is not helpful.
What's illegal in hosting a website anonymously? I don't even have to provide my personal info to register a domain name, I can run it on an IP-address
>What's illegal in hosting a website anonymously?
In Germany it is straight up illegal. The UK law has provisions where a service has to name a responsible person or report specific things to the government. Obviously those cannot be accomplished anonymously. In any case, hosting a website anonymously doesn't work if you want to work within the law; any lawsuit against you will identify you.
> I don't even have to provide my personal info to register a domain name, I can run it on an IP-address
Which is totally irrelevant. I can also go into a store and take something without paying. The question is whether that is legal or not and what you need to do to keep it legal.
Please consider working with Archive Team and/or the Internet Archive to preserve the content of the site.
you don't need archiveteam to save databases you already have. just dump the PII columns and put 'em up for torrent.
I'd argue that despite technically being higher fidelity, the SQL dump is less useful than the archive team style http request dump, because the latter fits into the Wayback Machine while the former would require extra steps to meaningfully access the data.
(Unless of course someone is resurrecting the site)
Usually ArchiveTeam will do both where we can. For example we do wikis that are shutting down with both ArchiveBot (recursively download the site, saving in WARC format and uploading to web.archive.org) and Wikibot (download MediaWiki/DokuWiki source text exports in a way that can be restored later, upload to archive.org).
https://wiki.archiveteam.org/index.php/ArchiveBot https://wiki.archiveteam.org/index.php/Wikibot
I’m not a lawyer but I wouldn’t recommend posting a dump of the PII columns lol
goddamn the English language
FTR, the admin is now talking to ArchiveTeam and is willing to help reduce barriers to archiving the site to IA, e.g. removing rate limits, allow-listing AT user agents, etc.
An insightful comment on this from an American context, but about basically the same problem [0]
> Read the regs and you can absolutely see how complying with them to allow for banana peeling could become prohibitively costly. But the debate of whether they are pro-fruit or anti-fruit misses the point. If daycares end up serving bags of chips instead of bananas, that’s the impact they’ve had. Maybe you could blame all sorts of folks for misinterpreting the regs, or applying them too strictly, or maybe you couldn’t. It doesn’t matter. This happens all the time in government, where policy makers and policy enforcers insist that the negative effects of the words they write don’t matter because that’s not how they intended them.
> I’m sorry, but they do matter. In fact, the impact – separate from the intent – is all that really matters.
[0] https://www.eatingpolicy.com/p/stop-telling-constituents-the...
That's an excellent article. Another quote I found especially relevant:
>Every step that law takes down the enormous hierarchy of bureaucracy, the incentives for the public servants who operationalize it is to take a more literal, less flexible interpretation. By the time the daycare worker interacts with it, the effect of the law is often at odds with lawmakers’ intent.
Put another way, everyone in the chain is incentivized to be very risk averse when faced with a vague regulation, and this risk aversion can compound to reach absurd places.
I hope you've spoken to a good lawyer briefly to understand the practical realism of your legal fears. Understanding the legal system involves far more than just literally reading text.
Maybe he did. Or maybe that is the first step on the very path that the OP doesn't want to walk.
Doing enough research to write this post, though, is already more work than quickly calling a lawyer.
> as a forum moderator you are known, and you are a target
I want to emphasize just how true this is, in case anyone thinks this is hyperbole.
I managed a pissant VBulletin forum, and moderated a pretty small subreddit. The former got me woken up at 2, 3, 4am with phone calls because someone got banned and was upset about it. The latter got me death threats from someone who lived in my neighborhood, knew approximately where I lived, and knew my full name. (Would they have gone beyond the tough-guy-words-online stage? Who knows. I didn't bother waiting to find out, and resigned as moderator immediately and publicly.)
100%
I used to run a moderately sized forum for a few years. Death threats, legal threats, had faeces mailed to my house, someone found out where I worked and started making harrasing calls/turning up to the office.
I don't run a forum anymore. For what I feel are obvious reasons.
I, too, can confirm all this. Way back in the day, I hosted a largish forum and moderated it. It grew to be a pain to moderate, and it became costly to run as well, in short, it was no longer fun like it was initially, and I walked away from it.
> I managed a pissant VBulletin forum, and ... got me woken up at 2, 3, 4am with phone calls because someone got banned and was upset about it.
I home-hosted a minecraft server and was repeatedly DDoS'd. Don't underestimate disgruntled 10yo's.
Facts: my friend ran a Minecraft server, and his hosting provider once told him it was the most DDoS'd server in the entire datacenter.
I used to be a moderator for the French forum the OP mentions in their post and helped maintain the previous website. I threw in the towel long before they migrated to Microcosm. I didn't even ask to be a moderator but somehow ended up as one when the other "hired members" tried to distance themselves from moderation. It is so thankless. Some trolls were constantly derailing threads, were repeatedly banned but kept creating new accounts, and joined other forums where I was a simple member to stalk me and gather more information in order to publish stuff about me or my family online.
People seem to forget that the more legislation there is around something, the more it is only feasible to do if you are a corporate person. Human persons just don't have the same rights or protections from liability.
It also raises the barrier to entry for newcomers, ensuring established large players continue to consolidate power, since they have the means to deflect and defend themselves from these regulations (unless there are specific carve-outs in place, of course).
This effectively makes censorship much simpler for the government: no need to chase down a million little sites, just casually lean on the few big ones remaining.
Spying too! Way fewer taps required.
This is something the EU got right for once in the DMA/DSA: It only applies starting from a certain, large size - if you're that big, you can afford the overhead.
It's not like there are laws that are more lenient with non-profits or with tiny companies right?
The EU's digital markets act is one that got that right and I love it. But it's the exception to the rule. The vast majority of such laws are for the benefit of the corporations themselves, despite any ostensible purposes. And this is definitely in that latter category.
"glad that EU overregulation doesn't hamper the freedom of the United kingdom any more."
what can we do about this creeping totalitarian surveillance plutocracy?
sweet were the 1990s, with a dream of information access for all.
little did we know we were the information being accessed.
srry
very un-HN-y.. maybe it's just the time of the year but this really pulls me down currently.
Also a lot of other EU regulations do the same.
Sometimes it's explicitly mentioned but oftentimes it's behind "appropriate and proportionate measures"
But most don't. GDPR, for example. It's pretty wack that random people coming to my neighborhood BBQ can demand I give them the backyard surveillance camera recording or force me to delete it (a metaphor for a personal website's logs). Such rules make perfect sense for a corporation but none when applied to a human person and context.
We should make the laws for our digital spaces for human person use cases first, not corporate person use cases. Even if it's in the sense of trying to protect humans from corporations.
Good news:
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A...
2. This Regulation does not apply to the processing of personal data: (c) by a natural person in the course of a purely personal or household activity;
That does not include security cameras: https://www.gov.uk/government/publications/domestic-cctv-usi...
From that link:
Most European countries have laws about recording spaces that are not your own. They typically predate the GDPR by decades. AFAIK they are not harmonized, except a tiny bit by the GDPR. If I understand it well, this is a big difference from the USA, where you can mostly record the public space and create databases of what everyone does in public. In Europe (even outside the EU), there is a basic expectation of privacy even in public spaces. You are allowed to make short-term recordings, do journalism, and have random people accidentally wander in and out of your recording. Explicitly targeting specific people or long-term recording is somewhere between frowned upon and flat-out illegal.
Indeed, or VATMOSS, which made all small merchants move their e-commerce to Amazon, Gumroad (and Paddle) to avoid the complexity.
Is VATMOSS more complicated for small merchants in the EU than it is for foreign merchants that sell online into the EU?
I work for the latter kind of merchant, and "complexity" is not a word I would associate with VATMOSS. Here is what we've had to do to deal with VATMOSS:
• Register with the tax authority in a country that was part of VATMOSS. We registered with Ireland. We did this online via the Irish tax authority's web site. It took something like 15-30 minutes.
• Collect VAT. VAT rates are country wide and don't change very often so it is easy to simply have a rate table in our database. No need to integrate any third party paid tax processing API into our checkout process.
Once a month I run a script that uses a free API from apilayer.com to get the rates for each country and tell me if any do not match the rates in our database, but that's just because I'm lazy. :-) It's not much work to just manually search to find news of upcoming VAT rate changes.
• At the start of each quarter we have to report how much we sold and how much VAT we collected for each country. I run a script I wrote that generates a CSV file with that data from our database. We upload it to the Irish tax authority's web site and send them the total VAT. They deal with distributing the data and money to the other countries.
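For illustration, the two scripts described above could be sketched roughly as follows in Python (the rate-feed shape and the report layout are my assumptions; the real apilayer response format and the tax authority's upload format will differ):

```python
import csv
import io

# Local standard VAT rate table (illustrative values only).
LOCAL_RATES = {"IE": 23.0, "DE": 19.0, "FR": 20.0}

def find_rate_mismatches(remote_rates, local_rates=LOCAL_RATES):
    """Compare a fetched {country: rate} mapping against our local table
    and return {country: (local, remote)} for any rates that differ."""
    return {
        cc: (local_rates[cc], rate)
        for cc, rate in remote_rates.items()
        if cc in local_rates and local_rates[cc] != rate
    }

def quarterly_report_csv(rows):
    """rows: iterable of (country_code, net_sales, vat_collected).
    Returns CSV text; the real upload format is defined by the tax
    authority, so treat this layout as a placeholder."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["country", "net_sales", "vat_collected"])
    for cc, net, vat in rows:
        writer.writerow([cc, f"{net:.2f}", f"{vat:.2f}"])
    return buf.getvalue()
```

The point of the sketch is how little machinery is involved: a static dict for rates, a monthly diff against a feed, and a quarterly CSV dump, rather than a paid tax-calculation API in the checkout path.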
It was a bit more complicated before Brexit. Back then we made the mistake of picking the UK as our country to register with. Instead of putting things online with a web-based workflow like Ireland did, the UK made OpenOffice versions of their paper forms available for download. You could download those, edit them to contain your information, and then upload them.
If you read the guidance:
https://www.ofcom.org.uk/siteassets/resources/documents/onli...
It amounts to your basic terms of service. It means that you'll need to moderate your forums and prove that you have a policy for moderation (basically what all decent forums do anyway). The crucial thing is that you need to record that you've done it and reassessed it, and prove "you understand the 17 priority areas".
It's similar to what a trustee of a small charity is supposed to do each year for due diligence.
Yep super simple. You just have to make individual value judgements every day on thousands of pieces of content for SEVENTEEN highly specific priority areas. Then keep detailed records on each value judgement such that it can hold up to legal scrutiny from an activist court official. Easy peasy.
No, not at all. You need to consider your service's risks against those seventeen categories once, and then review your assessment at least every year.
From the linked document above: "You need to keep a record of each illegal content risk assessment you carry out", "service providers may fully review their risk assessment (for example, as a matter of course every year)"
And links to a guidance document on reviewing the risk assessment[1] which says: "As a minimum, we consider that service providers should undertake a compliance review once a year".
[1] https://www.ofcom.org.uk/siteassets/resources/documents/onli...
> You just have to make individual value judgements every day on thousands of pieces of content
That's simply not true.
If you read that guidance, it wants you to have a moderation policy covering 17 specific priority areas. You need to prove that you have thought about it; you need a paper trail showing that you have a policy and that you follow it. You _could_ be issued with an "information notice", which you have to comply with, but you could already get one of those under RIPA, as a communications provider.
This is similar to running a cricket club or a scout club.
When running a scout association, each lesson could technically require an individual risk assessment covering every piece of equipment. The hall needs to be safe, and you need to prove that it's safe. There's also GDPR, safeguarding, background checks, and money laundering rules.
> hold up to legal scrutiny from an activist court official
It's not the USA: activist court officials require a functioning court system. Plus, common law has the concept of "reasonable". A moderated forum will be of a much higher standard of moderation than Facebook/Twitter/TikTok.
Any competent forum operator is already doing all of this (and more) just without the government-imposed framework. Would the OP allow CSAM to be posted on their website? No. Would the OP contact the authorities if they caught someone distributing CSAM on their website? Yes. Forum administrators are famous (to the point of being a meme) for their love of rules and policies and procedures.
What is your evidence that the record keeping described by the parent is routine among competent forum operators?
The record keeping requirements described by the parent are completely wrong: https://news.ycombinator.com/item?id=42436626
A risk assessment is not the same as a record of each decision.
https://russ.garrett.co.uk/2024/12/17/online-safety-act-guid... has a more comprehensive translation into more normal English.
You will need to assess the risk of people seeing something from one of those categories (for specialist forums, mostly low), think about algorithms showing it to users (again, for forums that's pretty simple), and then have a mechanism to allow people to report offending content.
You also need to take proportionate steps to stop people posting such material in the first place (pretty much the same as spam controls, plus banning offenders).
The perhaps harder part is allowing people to complain about takedowns, but adding a subforum for that is almost certainly proportionate[1].
[1] untested law, so not a guarantee
1) "record keeping requirements described by the parent are completely wrong:"
2) "Any competent forum operator is already doing all of this [this = record keeping requirements described by the parent]".
These two assertions seem to conflict (unless good forum OPs are doing wrong record keeping). Are you willing to take another stab at it? What does good forum op record keeping look like?
Me saying they don't need to do what pembrook claims, and aimazon saying they already do it, are not conflicting assertions. I didn't assert that competent forum operators are doing everything the new law requires. If you're asking me to "take a stab" at convincing you that forum operators are doing the hyperbolic FUD that pembrook posted, I won't. Take a stab at convincing you that they are already doing some large subset of what the law actually calls for, okay; I suspect internet forum operators already don't want their forums to become crime cesspits, or be taken over by bots or moderators running amok, and that will cover quite a lot of it.
For comparison imagine there was a new law against SQL Injection. Competent forum operators are already guarding against SQL Injection because they don't want to be owned by hackers. But they likely are not writing down a document explaining how they guard against it. If they were required to make a document which writes down "all SQL data updates are handled by Django's ORM" they might then think "would OfCom think this was enough? Maybe we should add that we keep Django up to date ... actually we're running an 18 months old version, let's sign up to Django's release mailing list, decide to stay within 3-6 months of stable version, and add a git commit hook which greps for imports of SQL libraries so we can check that we don't update data any other way". They are already acting against SQL injection but this imaginary law requires them to make it a proper formal procedure not an ad-hoc thing.
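A toy version of that imaginary check might look like this (the banned module list and the "ORM only" policy are assumptions made up for the illustration, not a real compliance requirement):

```python
import pathlib
import re

# Raw SQL drivers we don't want imported outside the ORM layer
# (illustrative list; adjust to your own stack).
BANNED = re.compile(r"^\s*(?:import|from)\s+(sqlite3|psycopg2|MySQLdb)\b")

def find_direct_sql_imports(paths):
    """Return (path, line_number) pairs where a file imports a raw SQL
    driver. A pre-commit hook could run this over staged .py files and
    fail the commit on any hit, turning the ad-hoc rule into a recorded,
    enforced procedure."""
    hits = []
    for p in paths:
        lines = pathlib.Path(p).read_text().splitlines()
        for i, line in enumerate(lines, 1):
            if BANNED.match(line):
                hits.append((str(p), i))
    return hits
```

The formal procedure is the same code either way; what changes is that it is written down, wired into the workflow, and auditable.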
> "What does good forum op record keeping look like?"
Good forum operators already don't want their forums to become crime cesspits because that will ruin the experience for the target users and will add work and risk for themselves. So they will already have guards against bot signups, guards against free open image hosting, guards against leaking users' private and personal information. They will have guards against bad behaviour, such as passive moderation where users can flag and report objectionable content, or active moderation where mods read along and intervene. If they want to guard against moderators power-tripping, they will have logs of moderation activities such as editing post content and banning accounts. There will be web server logs and CMS/admin tool logs, which will show signups, views, edits. They will likely have activity graphs and alerts if something suddenly becomes highly popular or spikes bandwidth use, so they can look at what's going on. If they contact the authorities there may be email or call logs of that contact, and there will be mod message records from users, likely not all in one place. If a forum is for people dealing with debt and bankruptcy, they might have guards against financial scams targeting users of their service, such as a sticky post warning users or a banned-words list for common scam terms; second-hand sales site https://www.gumtree.com has a box of "safety tips" prominently on the right warning about common scams.
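As one concrete illustration, an append-only moderation log of the kind described above can be very small. This is a sketch; the field names and JSON-lines format are my assumptions, not any particular forum software's schema:

```python
import json
import time

def log_mod_action(logfile, moderator, action, target, reason, now=time.time):
    """Append one moderation action as a JSON line, giving an append-only
    audit trail of who did what, to whom, and why."""
    entry = {
        "ts": now(),
        "moderator": moderator,
        "action": action,   # e.g. "ban", "edit_post", "delete_post"
        "target": target,   # user name or post id
        "reason": reason,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

One line per action in a flat file is already enough to answer "what did we do and when" if a regulator or an aggrieved user ever asks.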
Larger competent forums with multiple paid (or volunteer) employees would likely already have some of this formalised and centralised just to make it possible to work with as a team, and for employment purposes (training, firing, guarding against rogue employees, complying with existing privacy and safety regulations).
Yes I think the new law will require forum operators to do more. I don't think it's unreasonable to require forum operators once a year to consider "is your forum at particular risk of people grooming children, inciting terrorism, scamming users, etc? If your site is a risk, what are you doing to lower the chance of it happening, and increase the chance of it being detected? And can you show OfCom that you actually are considering these things and putting relevant guards in place?".
(Whether the potiential fines and the vagueness/clarity are appropriate is a separate thing).
Is the trustee of a small charity on the hook for £18,000,000 in minimum fines?
The maximum fines are 10% of "qualifying worldwide revenue", or £18M, whichever is larger. This is an exercise in stopping companies from claiming tiny revenues when they're actually much larger, rather than fining genuinely tiny companies (or individuals) a ridiculous multiple of their value (or wealth).
Plenty of things in UK law attract "an unlimited fine", but even that doesn't lead to people actually being fined amounts greater than all the money that's ever existed.
Sounds like a legally risky environment. One of the biggest things we've understood about the world over the last century has been that commerce flourishes under an environment of legal certainty.
Trustees for small charities can be personally liable for unlimited amounts.
GDPR, safeguarding, liability for the building you operate in, money laundering. There are lots of laws you are liable under.
Most people don't have time to wade through that amount of bureaucratic legalese, much less put it into practice.
I run just over 300 forums, for a monthly audience of 275k active users
I can't imagine one person running over 300 forums with 275,000 active users. That gives you an average of eight minutes a week to tend to the needs of each one.
I used to run a single forum with 50,000 active users, and even putting 20 hours a week into it, I still didn't give it everything it needed.
I know someone currently running a forum with about 20,000 active users and it's a full-time job for him.
I don't understand how it's possible for one person to run 300 forums well.
With 1 or 2 competent mods, a forum can be very low-maintenance.
When I was running the 50,000 user forum, I had five. Mods are great, but I still can't grok 300+ forums all being done well. He'd need hundreds of mods.
I seriously doubt a large portion of those 300+ forums have tens of thousands of users.
He is running the infra and maintaining the code. Each of those forums has its own moderators; they are basically his customers.
I think what he fears is that he has no control over how these individual forums moderate their content, and how liable he would be as the hosting admin.
Op uses they/them pronouns.
Sorry, genuine mistake. I haven't seen mention of their pronouns.
Remember when Omegle shut down recently? https://web.archive.org/web/20231109003559/https://omegle.co...
It seems that some people are convinced that the benefits of having strangers interact with each other are not worth the costs. I certainly disagree.
Omegle was widely known to be full of underage children and overage child sexual predators.
If I designed a site for 14-year-old girls to sext with 30-year-old men it would be rightfully shut down.
If I designed a site as a fun chat site but in actual reality it became a sexting site for 14-year-old girls with adult men, should it be shut down?
I ran hosted forums for almost two decades, 4mm monthly users, etc., and can attest to the death threats and DDoS attempts (I was a very early customer of Cloudflare, which basically saved us — thanks Matthew!)
The stories… people get really personally invested in their online arguments and have all sorts of bad behavior that stems from it.
It's insane that they never carved out any provisions for "non big-tech".
I feel like the whole time this was being argued and passed, everyone in power just considered the internet to be the major social media sites and never considered that a single person or smaller group will run a site.
IMO I think you're going to get two groups of people emerging from this. One group will just shut down their sites to avoid running afoul of the rules, and the other group will go the "go fuck yourself" route and continue to host anonymously.
> I feel like the whole time this was being argued and passed, everyone in power just considered the internet to be the major social media sites and never considered that a single person or smaller group will run a site.
Does this shock you? I can't recall a time when a politician discussing technology was anything better than cringeworthy at best and, at worst, completely incompetent and factually wrong.
Off the top of my head Oregon Senator Ron Wyden. I’m sure there are others. Millennials are in office now.
> Off the top of my head Oregon Senator Ron Wyden. I’m sure there are others
If we exclude politicians whose tech awareness is curated by lobbyists, Ron Wyden may be the entire list.
Ok, did you perform this audit of all US politicians? Or are you just spreading FUD?
I don't think FUD is on the table at this stage in the discussion. And I'm not sure why you chose a confrontational stance for your question.
But for an answer, I've done what folks do - spent decades carefully listening to legislators (and judges!) reveal their expertise in the fields I work and interact with.
Ron Wyden aside, authentic technical competency from legislators is so uncommon it stands out. Glaringly. What technical acumen we do get pretty much always rhymes with lobbyists' talking points.
I expect my perspective to be boringly familiar here.
And AFAIK, we don't have any other Ron Wydens serving in Congress or coming onboard.
That is, someone with the basic technical understanding to foresee reasonable downstream consequences of the laws they vote on. Not someone with a minimal technical awareness that was crafted to be a lobbyist's tool.
I will be genuinely grateful if someone would correct me here.
> Millennials are in office now.
The most stupid influencers, at least in my country.
> Millennials are in office now.
So? Tons of millennials barely understand technology too. I'd say a politician being one makes the odds they know tech marginally better, but I still interact with people of my generation that barely know what a filesystem is, let alone how to make one, or why it's important.
Well, it means they came of age at the turn of the millennium. So the whole online culture thing. Presumably we'd have a pretty good idea what's going on, having lived the whole thing.
The joke used to be that Boomers don't understand the internet, even though they invented it.
Based on that experience I guess it should be no surprise that now Millennials don't understand the web even though we were born on web 1.0, grew up on web 2.0, and created web 3.0.
> It's insane that they never carved out any provisions for "non big-tech".
That would be insane, and it's not true. You have to consider the risks and impacts of your service, and scale is a key part of that.
I think it's really important to actually talk about what's in the requirements, and if you think something that has gone through this much scrutiny is truly insane (rather than just a set of tradeoffs you're on the other side of), then it's worth asking whether you've understood it. Maybe you have and lots of other people are extremely stupid, or maybe your understanding of it is off. If it's important to you in any way, it probably makes sense to check, right?
I'm sure Big Tech helped to write the bill.
> It's insane that they never carved out any provisions for "non big-tech".
There are only 13 provisions that apply to sites with fewer than 7 million users (10% of the UK population).
7 of those are basically having an inbox where people can make a complaint, and a process to deal with complaints.
1 is having a 'report' button for users.
2 say you will provide a 'terms of service'.
1 says you will remove accounts if you think they're run by terrorists.
The OP is blowing this out of proportion.
You are obviously rewriting a lot of the law, and ignoring that the penalty seems to still be "up to 18 million pounds". So no, there is a deliberate bias against smaller sites.
> You are obviously rewriting a lot of the law
Feel free to address any specific points. Have you looked at the Ofcom guidance?
> penalty seems to still be "up to 18 million pounds".
Fines "up to" a certain amount allow flexibility in punishment, enabling courts to consider the specific circumstances of each case, such as the severity of the offence and the offender's financial situation. This discretion ensures that penalties are proportional and fair, avoiding undue hardship while still serving as a deterrent.
You cannot write into legislation specific fines for every possible scenario; this is how UK legislation works. Suggesting you need to shut down a cycling forum because you don't have 18 million in the bank is ludicrous.
Mishandling personal data has a maximum fine of £18 million too, yet small/medium/large businesses still exist...
> So no, there is a deliberate bias against smaller sites.
I'm saying there is a deliberate bias in favour of smaller sites: smaller sites only have 13 minor provisions, whereas larger ones have 30+.
> It's insane that they never carved out any provisions for "non big-tech".
Very little legislation does.
Two things my clients have dealt with: VATMOSS and GDPR. The former was eventually fixed with a much higher ceiling for compliance, but not before causing a lot of costs and lost revenue to small businesses. Under GDPR, a small business or non-profit that just keeps simple lists of people (customers, donors, members, parishioners, etc.) has to put effort into complying even though it holds a relatively small number of people's data and does not use it outside the organisation. The rules are the same as for a huge social network that buys and sells information about hundreds of millions of people.
Do you know a small business that has got into trouble with GDPR?
You can filter this list to see 200+ GDPR fines assigned to sole proprietors, the smallest of small businesses, individuals that haven't even registered a separate entity for their business:
https://www.enforcementtracker.com/
They're only cataloging the (2500+) publicly known ones, most of which have a link to a news article. As an example: some guy in Croatia emailed a couple websites he thought might be interested in his marketing services, and provided a working opt-out link in his cold emails. One of them reported the email to the Italian Data Protection Authority who then put him through an international investigation and fined him 5000 euro.
"Assuming here that the reasons expressed in the aforementioned document have been fully recalled, [individual] was charged with violating articles 5, par. 1, letter a), 6, par. 1, letter a) of the Regulation and art. 130 of the Code, since the sending of promotional communications via e-mail was found to have been carried out without the consent of the interested parties. Therefore, it is believed that - based on the set of elements indicated above - the administrative sanction of payment of a sum of €5,000.00 (five thousand) equal to 0.025% of the maximum statutory sanction of €20 million should be applied."
It's worth noting that each country has a different approach to GDPR enforcement (which arguably defeats the point of it but that's another discussion).
The UK tends to be a lot more (IMO) reasonable in its approach than some other European countries. Italy tends to be one of the strictest, and likes to hand out fines, even to private individuals for things like having a doorbell camera. The UK has only fined one person on that basis, and it was more of a harassment case rather than just simply that they had a camera.
ICO and Ofcom aren't generally in the business of dishing out fines unless it's quite obviously warranted.
To clarify, I'm not interested in this, because it doesn't answer the question at all. I don't want a Googled answer, I want personal experience.
For instance, I know of a company that flouted GDPR and got multiple letters off the ICO trying to help them with compliance before finally, months later, they ended up in court and got a very small fine.
Edit: it is not cool to edit your post after I replied to make it look more reasonable
They do not get into trouble because they have spent the money and the time on compliance, which is an unfair burden.
Also, it is not just small businesses; it is non-profits too.
Yes. $30k in compliance costs from a pissed off ex-employee and malicious gdpr requests.
Any more details? What information did the employee request that cost money to fulfil? Interesting that it was in dollars?
> What information did the employee request that cost money to fulfil?
Employees cost money, and so do attorneys. When people won't limit scope, that can require extensive manual review.
You've spent 20+ posts misinforming about compliance costs in this thread alone so forgive me if I don't believe this is anything like a good faith query. If you know people who operate companies, it's easy to find cases.
$ because I'm American.
How is it insane? The target is non big-tech. Do you think Facebook cares they have to hire a couple of people to do compliance?
I'm ignoring the comments, they seem to be all about the posters themselves.
I have no knowledge of your site, but I'm still sad to see it having to shut down.
This is what bad legislation does, punish small communities!
What we need is some entity set up in the United Arab Emirates, Ukraine, the Democratic Republic of the Congo, anywhere outside of where this law will matter. Sites turned over to locals, legally and in other ways.
The thing though is how to finance it and how to provide stewardship for the sites going forward.
Running sites like the one this post is about is not profitable. Nor is it too resource intensive.
Are you Velocio? Thanks for all the hard work!
Sad that LFGSS will probably become just another channel on one of the big platforms. RIP.
yup, and thank you for the kind words
I wonder how much of the risk could be mitigated by simply disabling the private message function of microcosm? Surely having a report button helps with moderating the "public facing part" of a forum?
Having said that, thanks for all the work you have done. I was (and maybe still am) a member of lfgss although I mostly lurked once in a long while without logging in and barely commented over the years.
It is sad to see all online communities slowly migrate to discord, reddit and other walled gardens.
That's terrible. Hopefully the users can make backups (of at least what has already been posted, if not of their ongoing social connections) before the shutdown. It's good that you can provide such notice. Are you providing tarballs?
It's much more responsible to put this whole thing into some nonprofit trust format and hand it over to someone with the time and energy to handle it. This also would not exclude you from volunteering.
I can understand them not wanting the due diligence burden (even if only self-imposed rather than required for any official reason) of picking a successor in this way, or the admin burden of setting up “some non-profit trust format”.
Also depending on the terms agreed to when people signed on and started posting, it might be legally or morally difficult because transferring the data to the control of another party could be against the letter or the spirit of the terms users agreed to. Probably not, but I wouldn't want to wave such potential concerns off as “nah, it'll be fine” and hoping for the best.
Even leaving a read-only version up, so a new home could develop with the old content remaining for reference, isn't risk free: the virtual-swatting risk that people are concerned about with this regulation would be an issue for archived content as much as live stuff.
At least people have a full three months notice. Maybe in that time someone can come up with a transfer and continuation plan the current maintainer is happy with, if not the users at least have some time to try to move any social connectivity based around the site elsewhere.
I did say it's more responsible, which I get is a values judgment.
Okay, I'm putting up a new BBS in the US that is only going to be accessible via SSH, modern terminals only... UK users will be more than welcome.
I've been wanting to play with remote modern terminals and Ratatui anyway.
Or just use a news server and create a tilde with gopher/irc and so on, then federate it with the tildeverse.
Sad news. I enjoy LFGSS a lot. Thank you for your work.
> The act only cares that is it "linked to the UK" (by me being involved as a UK native and resident, by you being a UK based user), and that users can talk to other users... that's it, that's the scope.
So basically, is this act a ban on individual communication through undermoderated platforms?
It's a de facto ban on small forums.
This seems like a classic "Don't interrupt your adversary when they are making a mistake" situation.
The EU and UK have been making these anti-tech, anti-freedom moves for years. Nothing could be better if you are from the US. Just hoover up talent from their continent.
Have you seen how difficult US immigration is to navigate? It's impossible for most people, and about to get even harder soon.
Even if US immigration were more liberal, moving is very costly (financially, emotionally, psychologically). Injustice anywhere is a threat to justice everywhere.
In addition to the cookie privacy pop-over when viewing that site, I (as an American) am just amazed how regulated the internet is in Europe compared to the USA.
Is there an argument why we would want it any other way?
> In addition to the cookie privacy pop-over when viewing that site
I don't know where you're seeing that as the site does not have such things. The only cookies present are essential and so nothing further was needed.
The site does not track you, sell your data, or otherwise treat you as a source of monetisation. Without such things, conforming with cookie laws is trivial... you are conformant by just collecting nothing that isn't essential to providing the service.
For most of the sites only a single cookie is set, for the session, and for the few served via Cloudflare, Cloudflare's cookies get set too.
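For what it's worth, an "essential cookies only" setup really is this small. A minimal sketch in Python (hypothetical names and values, not Microcosm's actual code): build the one Set-Cookie value a session needs, with the attributes that keep it from being useful for tracking, and nothing else.

```python
# Sketch: a single essential session cookie. Purely illustrative.
from http.cookies import SimpleCookie

def session_cookie_header(session_id: str) -> str:
    """Build the lone Set-Cookie value an essential-only site would emit."""
    cookie = SimpleCookie()
    cookie["session"] = session_id
    morsel = cookie["session"]
    morsel["httponly"] = True   # not readable from JavaScript
    morsel["secure"] = True     # only sent over HTTPS
    morsel["samesite"] = "Lax"  # not sent on cross-site requests
    morsel["path"] = "/"
    return morsel.OutputString()

print(session_cookie_header("abc123"))
```

Because nothing here identifies the user across sites or sessions beyond login itself, there is nothing non-essential to ask consent for, which is why no banner is needed.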
Why we would want it any other way than what? It's not clear to me whether you see the added European regulation as positive or negative.
I'm really sad this stuff is happening. For me the hobby sites are by far the best part of the internet. All the commercial stuff gets enshittified. I hardly use any commercial social media or forums anymore.
I don't believe this kind of regulation will do anything but put the real criminals more underground while killing all these helpful community initiatives. It's just window dressing for electoral purposes.
as a lurker here and at lfgss, just wanted to say lfgss exposed me to so much that i'm thankful for (not only fg/ss but also in cycling culture + more)
so thanks for all that buro9! <3
I'm wondering, would putting the forum behind an auth wall 'solve' this 'problem'? Forum users already have accounts, and sign-up could be kept not too difficult for new users. Content would then not be accessible to unauthed users.
Or another thought, distribute it only through VPN, OpenVPN can be installed on mobiles these days (I have one installed on my Android). Make keys creation part of registration process.
Can’t you just ip ban the offending country?
UK sucks
Why not hand it over to someone else who would take the risk?
Seems a bit megalomaniacal.
"I'm not interested in doing this any more. Therefore I'll shut it down for everyone"
Because then they're responsible, at least socially, for the things the new admin does.
This way, people have been given plenty of advance notice and can start their own forums somewhere else instead. I'm sure each of the 300 subforums already has some people running them, and they could do the above if they actually cared.
I find it hard to believe someone will take over 300 forums out of the goodness of their hearts and not start making it worse eventually, if not immediately.
> Because then they're responsible, at least socially, for the things the new admin does.
Nonsense.
If you hand someone a ton of power and they abuse it, you can expect, at the very least, the people affected by that abuse to blame you for handing it to the wrong person.
And since social media works the way it does, you can also expect a ton of unaffected people to also pick up their pitchforks and join in without having any real clue what's going on.
And that's assuming there isn't some law somewhere that really does put you on the hook for it.
300 forums is a lot of power.
It makes a lot more sense to expect the sub-admins of those forums to start their own communities elsewhere than for just hand power to a single person over all of them.
> 300 forums is a lot of power.
Yes, that's possibly 100 middle-aged men you could urge into battle!
Specious reasoning.
The due diligence of picking a successor could be a large job, and depending upon the licence/terms people agreed to when they signed up and posted, it might not even be legally possible.
You're just making stuff up.
Incorrect.
Does everyone else remember when GDPR came out and everyone running a website was extradited to Europe and fined a billion pounds?
I don't understand this decision. Running a website as an individual is a liability risk for all sorts of reasons for which there are simple (and cheap) mitigations. Even if you believe this legislation is a risk, there are options other than shutting down. The overreaction here is no different than when GDPR came in, and we all collectively lost our minds and started shutting things down and then discovered there was zero consequence for mom-and-pop websites. I assume this isn't a genuine post and is actually an attempt at some sort of protest, with no intention of actually shutting down the websites. Or, more likely, they're just old and tired and ready to move on from this period of their life, running these websites.
the real risk I see is that as it's written, and as Ofcom are communicating, there is now a digital version of a SWATing for disgruntled individuals.
the liability is very high, and whilst I would perceive the risk to be low if it were based on how we moderate... the real risk is what happens when one moderates another person.
as I outlined, whether it's attempts to revoke the domain names with ICANN, or fake DMCA reports to hosting companies, or stalkers, or pizzas being ordered to your door, or being signed up to porn sites, or being DOX'd, or being bombarded with emails... all of this stuff has happened, and happens.
but the new risk is that there is nothing about the Online Safety Act or Ofcom's communication that gives me confidence that this cannot be weaponised against myself, as the person who ultimately does the moderation and runs the site.
and that risk changes even more in the current culture war climate, given that I've come out, and that those attacks now take a personal aspect too.
the risk feels too high for me personally. it's, a lot.
> the real risk I see is that as it's written, and as Ofcom are communicating, there is now a digital version of a SWATing for disgruntled individuals.
I'm sorry, what precisely do you mean by this? The rules don't punish you for illegal content ending up on your site, so you can't have a user upload something then report it and you get in trouble.
Yes you can https://www.ofcom.org.uk/siteassets/resources/documents/onli...
A forum that isn't proactively monitored (approval before publishing) is in the "Multi-Risk service" category (see page 77 of that link), and the "kinds of illegal harm" include things as obvious as "users encountering CSAM" and as nebulous as "users encountering Hate".
Does no-one recall Slashdot and the https://en.wikipedia.org/wiki/Gay_Nigger_Association_of_Amer... trolls? Such activity would make the site owner liable under this law.
You might glibly reply that we should moderate, take it down, etc... but we, is me... a single individual who likes to go hiking off-grid for a vacation and to look at stars at night. There are enough times when I could not respond in a timely way to moderate things.
This is what I mean by the Act providing a weapon to disgruntled users, trolls, those who have been moderated... a service providing user generated content in a user to user environment can trivially be weaponised, and it will be a very short amount of time before it happens.
Forum invasions by 4chan and others make this extremely obvious.
> A forum that isn't proactively monitored (approval before publishing) is in the "Multi-Risk service" category (see page 77 of that link),
That's not true, you'd need to conclude you're at a medium or high risk of things happening and consider the impact on people if they do.
> and as nebulous as "users encountering Hate".
But users posting public messages can easily fit into the low risk category for this, it's even one of their examples of low risk.
edit - moved across now the comment is alive
> Not sure why the reply buro9 gave is dead
Oh I do... the link... HN must have a word based deny list
Ahh that makes sense.
> Not sure why the reply buro9 gave is dead
I vouched for it, so it should be visible now.
I used to frequent the forum about 15 or so years ago. This guy is very level headed and has been around the block a lot. Therefore I don't believe this is purely performative.
I like and respect the OP and their work. I do not think this is consistent with their previous levelheadedness.
edit: removed unintentional deadnaming
As I said I haven't frequented the forum for years, so maybe things have changed but I highly doubt this is a knee jerk reaction.
thank you for removing the deadname.
A fair number of sites hosted and operated outside the European Union reacted to GDPR by instituting blocks of EU users, many returning HTTP 451. Regardless of whether you believe GDPR is a good idea or not (that's beyond the scope of this comment), the disparity in statutory and regulatory approaches, the widely varying (often poor) levels of 'plain language' clarity in obligations, and the inconsistent enforcement all lead to entirely understandable decisions like this, and to more of a divided internet.
Thank you to those who have tirelessly run these online communities for decades, I'm sorry we can't collectively elect lawmakers who are more educated about the real challenges online, and thoughtful on real ways to solve them.
>A fair number of sites hosted and operated outside the European Union reacted to GDPR by instituting blocks of EU users, many returning HTTP 451.
My outlook on doing this is that this is not the way to do it because these things exist:
- EU citizens living in non-EU countries (isn't GDPR supposed to apply to EU citizens worldwide?)
- EU citizens using VPN with exit node to/IP address spoofing a non-EU country
Either comply with GDPR or just don't exist, period.
China and Russia also have "interesting" data protection / "let's protect children" laws. Some of them are also formulated in the same way as GDPR, so a VPN doesn't help. Why should they be ignored? (other than "but it's a DIFFERENT thing, the EU is the good one")
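Mechanically, the 451 blocks described above are usually just a country lookup in front of the site. A hedged sketch (the lookup table is a stand-in; real deployments use a GeoIP database or a CDN-provided country header, and as noted above, VPNs defeat it):

```python
# Sketch of an RFC 7725 "451 Unavailable For Legal Reasons" geo-block.
# country_for_ip is a stand-in for a real GeoIP lookup (e.g. a MaxMind DB);
# the addresses below are documentation IPs, not real users.
BLOCKED_COUNTRIES = {"GB"}  # hypothetical policy: block UK visitors

def country_for_ip(ip: str) -> str:
    demo_table = {"203.0.113.7": "GB", "198.51.100.9": "US"}
    return demo_table.get(ip, "??")  # unknown IPs fall through to allowed

def handle_request(ip: str) -> tuple:
    """Return (status, body) for a request from the given IP."""
    if country_for_ip(ip) in BLOCKED_COUNTRIES:
        return 451, "Unavailable For Legal Reasons"
    return 200, "Welcome to the forum"
```

Whether to fail open or closed on an unknown country is itself a policy choice; this sketch fails open, which is exactly the gap the VPN objection points at.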
What are the simple and cheap mitigations you have in mind?
Don't run a website personally, set up a separate legal entity. The UK is one of the easiest places in the world to do this and has well-understood legal entities that fit the model of a community-operated organisation (i.e: "community interest company"). The fact that the OP is running such a large community as an individual is bonkers in the first place, independent of this new act.
Personally, I think it's bonkers that an individual can't run an online forum.
Personally, I think it's bonkers that you think "I don't have time to comply with the law so I shouldn't have to" is a reasonable position.
People who think this law is reasonable are bonkers.
People who go for reductio ad ad-hom are making low quality internet arguments.
Restauranteurs can't say they don't have time to comply with the law. Construction companies can't. Doctors can't. Why should online service providers be able to?
I agree but that ship sailed a decade ago. There's no additional risk with the new bill, it's more of the same. If there are concerns about liability because of this new bill then there should be concerns about liability already.
I sympathise with the OP because at some point everyone becomes too old to deal with the headaches of running a community. I have no opposition to their choice to shut down the forum. I just don't believe liability as a result of the new bill is the reason.
> I just don't believe liability as a result of the new bill is the reason.
It seems like OP is commenting on this thread; you can accuse them of lying directly, if you'd like.
but there is additional risk and liability, because the new act creates more work in order to be compliant (which is what increases the liability), and it increases the risk of being attacked.
there's never been an instance of any of the proclaimed things that this act protects [...] people from, so he should be safe, right?
but despite this, he is already being attacked, and those attacks will not just continue but they are likely to increase because the attack surface has become larger.
It raises the cost and hassle involved from "I need a cheap hosting package" to I need to do paperwork, keep and file accounts, etc.
Are you claiming that setting up a CIC removes individual liability for wrongdoing? So, I set up a CIC for running forums, with $0 of assets and negligible running costs, then in the event of a fine I'm scot free?
Yes. A CIC is just a limited company with some additional community interest obligations. You can set up a limited company to shield yourself from liability (i.e: if your website is sued by a user, your personal assets aren't at risk) and only in exceptional cases (where serious lawbreaking is involved) could you be held personally liable.
Rightly or wrongly, limited companies in the UK provide a high degree of protection for wrongdoing. Defrauding HMRC out of hundreds of thousands of pounds and suffering no consequence is happening day in day out. An Ofcom fine is nothing by comparison.
1. Thanks for this very helpful information about how some seemingly quite simple legal manoeuvring can be used to dodge 99% of this law.
2. Doesn't the fact that simple legal manoeuvring can be used to dodge 99% of this law make the law (and laws like it) farcical on its face? Merely an elaborate set of extra hoops that only serves to punish the naive, while increasing everyone's compliance costs?
The law is designed to target large companies that aren't seen as doing enough to prevent harm on the internet. Setting up a CIC doesn't dodge the law, the legal entity (the CIC) remains accountable for conduct on the website. Setting up a CIC removes the risk from the individual website operator because they're no longer operating the website, instead the website is operated by a separate legal entity. The website could still be held accountable for violating the law but the consequence would be the CIC is fined, not the individual behind it. The same principle as any limited liability company.
In the U.S. we have the concept of "piercing the corporate veil", whereby if you can prove that an LLC (effectively a corporate entity created to shield the owner's liability) is a flimsy legal device whose only intent is to skirt laws like this one, you are able to go after the LLC owner personally anyway.
Does the UK have a similar concept?
In a case like the forums in question though you wouldn't set up the LLC so you can skirt the law. You'd set up the LLC and make a good faith attempt to comply with the law.
It may turn out that it is too much work to comply and so you might still need to shut down, but with the LLC you've got a lot more leeway to try without personal risk.
Yes it has the concept. But like the previous poster said, it can only be used in very serious criminal wrongdoing.
>the consequence would be the CIC is fined
Does this in practice mean that the original human person would have to pay that fine? What would the consequences likely be for the original human person?
If those consequences remain severe, then it's not a simple legal manoeuvre after all. This reduces farcicality, but also means there's no way for an individual to safely run this kind of website.
If those consequences round to zero, my next question would be: Can a large company spin up a CIC just to shield itself in the same way? (If so, it seems the farce would be complete.)
In practice the person would not have to pay the fine. The company would. If the company had no cash to pay it then it could be forced into insolvency.
No, a large company can't spin up a CIC to run a business website (because it is not a community interest), but it doesn't need to: it is already a limited liability company. However this is not a farce; the limited liability applies to the shareholders, not the company. The company gets fined, and has to pay the fine or risk having its assets seized... then the shareholders have lost their company. The liability of the shareholders is limited to the amount they invested, i.e. the shareholders can't lose any more than they put in. So if the fine was more than the company can afford, the shareholders lose their company, but don't have to pay the rest.
It is not a farce, because losing a profit earning company is bad for a shareholder
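That limited-liability arithmetic can be made concrete with a toy calculation (numbers made up for illustration; this ignores the drop in company value when a solvent company pays a fine):

```python
# Toy model of limited liability as described above: the company pays what it
# can, and a shareholder's personal loss is capped at their invested stake.
def settle_fine(company_assets: float, fine: float, stake: float):
    """Return (amount the company pays, shareholder's personal loss)."""
    paid = min(company_assets, fine)
    insolvent = fine > company_assets
    # If the fine exceeds assets the company folds; the shareholder loses
    # their stake but owes nothing towards the shortfall.
    loss = stake if insolvent else 0.0
    return paid, loss

# A hobby-forum company with £500 in the bank facing an £18m fine:
# the company is wiped out, but the founder's loss is their £100 stake.
print(settle_fine(company_assets=500.0, fine=18_000_000.0, stake=100.0))
```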
Thanks for the explanation. However, there's one thing I've now realised I'm not clear on:
Could a company create a non-CIC sub-company (with ~$0 in assets) to own the website, and thereby shield shareholders of the original company? (If so, I think farcicality is conserved.)
Yes companies do this kind of thing to try and shield themselves from risk all of the time.
However it probably wouldn't work for a profit-seeking company in this case. Big Corp owns Web Corp, and Web Corp owns the site. Which company is operating the site? If it is Web Corp, then when Web Corp gets fined, you lose your site. This is a problem for a profit-seeking company, because it has lost its value. If Big Corp owned the site, and Web Corp operated the site, you may be OK. Your accountancy costs just went through the roof though. Not sure about this law, but some compliance laws treat the group as one whole entity to stop this sort of thing.
Since this applies to laws in general, are you arguing that corporations are a farce? I may be inclined to agree.
Edit: answering your other point, the company could not have no assets; if it owns the site then it has the site as an asset, and if it runs the site then it will have cash, etc.
I see, thanks.
>So when Web Corp gets fined, you lose your site.
My mind immediately goes in the direction of "Maybe you lost that server, but just buy a new one and change some DNS entries", which isn't free but a lot less than £18M. But maybe there are protections against this kind of scheming? I'd like to think there were.
>If Big Corp owned the site, and Web corp operated the site, you may be OK.
I don't follow -- if Big Corp owns the site, won't it lose everything?
>Since this applies to laws in general, are you arguing that corporations are a farce? I may be inclined to agree.
I think I am actually. They do seem like a way to get something for (almost) nothing (and they seem like they were probably engineered to be this way deliberately).
> Maybe you lost that server
Everything, ownership of the domain, codebase, digital assets for instance.
> I don't follow -- if Big Corp owns the site, won't it lose everything
Good question. If Big Corp owns the domain, codebase, IP, etc. and lets Web Corp operate a site using those assets, Big Corp is not responsible for Web Corp's transgressions.
A simpler analogy: Big Corp owns a pub and rents it to Web Corp. Web Corp plays music too loud, stays open too late, gets fined, and loses its alcohol licence. Web Corp is insolvent, but Big Corp still owns the pub.
How would that work if someone set up a CIC, used it to rent a VPS and did some grey-hat hacking activities?
If the person committed a criminal offence under the Computer Misuse Act then they would face criminal sanctions, personally.
The Online Safety Act gives Ofcom the power to levy regulatory fines, not criminal sanctions, so it is very different.
Assuming you regard the cost of keeping and filing accounts and other paperwork, annual registration fees, etc. as negligible, yes.
Keeping accounts: the cost depends on whether you make any money. If you do, then you would have to keep accounts even if not a CIC!
Filing accounts: £15. An online form will ask you for your balance sheet summary only, unless you are very large.
One-off registration: £65
Annual confirmation statement: £34
So depends on your perspective I suppose.
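For anyone tallying it up, the statutory fees quoted above are easy to sum. A quick sketch; only the fee figures come from the comment above, and accountancy time is excluded:

```python
# Summing the Companies House fees quoted above (GBP).
ONE_OFF_REGISTRATION = 65
ANNUAL_FILING = 15          # filing accounts
ANNUAL_CONFIRMATION = 34    # confirmation statement

def cic_statutory_cost(years: int) -> int:
    """Total statutory fees after `years` years of running a CIC."""
    return ONE_OFF_REGISTRATION + years * (ANNUAL_FILING + ANNUAL_CONFIRMATION)

print(cic_statutory_cost(1))  # 114
print(cic_statutory_cost(5))  # 310
```

So roughly £114 in the first year and about £49 a year thereafter, before any paid accountancy help.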
There are plans to remove the small company P&L exemption, and new rules on verifying directors' IDs; costs are going up too. It looks like a CIC can only file online if submitting full accounts. https://find-and-update.company-information.service.gov.uk/a...
You have to keep accounts if you are a business, even if not incorporated. A company has to keep accounts if it has any assets (e.g. a domain) or any financial transactions (e.g. paying for hosting).
You will also probably have to file a tax return. You have to keep a register of shareholders.
In fact, if you are definitely not making a profit, a standard Ltd (or maybe a company limited by guarantee) might be simpler than a CIC, as all a CIC does is add restrictions and extra regulation. https://assets.publishing.service.gov.uk/media/5a7b800640f0b...
>You have to keep accounts if a business even if not incorporated
Indeed, so this cost is not relevant to the decision to set up a CIC or not
It is relevant in this case. The thread is about moving an activity that is not a business into a company (and for some reason a CIC in particular). If you did it in your own name you would not have to keep accounts.
As jimnotgym explained, you don't have to 'keep accounts' in any onerous sense. You just need to keep a rough track of the business's income and expenses (which any sensible person would be doing anyway). No-one at HMRC is going through the accounts of very small businesses with a fine-tooth comb. You just tell them how much money you made and pay the taxes on it.
> You just need to keep a rough track of the business's income and expenses (which any sensible person would be doing anyway)
We are not talking about a business here. The whole problem is that these are things that people are doing as essentially voluntary work.
What you're saying would be true in a different context, but this is not a business. I do not know whether you find it hard to grasp that some people will put a lot of effort into something for motives other than profit.
Right, so the 'accounts' are correspondingly simple. Essentially just a list of things the company has purchased.
Has anyone offered to take these over for you?
Hopefully you guys can somehow fight this. Have you contacted any big news sites about it? Also, I think it's likely there are going to be a lot of judicial reviews and legal challenges to this. I don't see how it will hold up under the ECHR.
[dead]
[dead]
[flagged]
Ah, yes, “just” run every comment from 275k users through an error-prone system, while paying for every API call, to host something they already do at a loss.
Ah yes, "just" run every new comment through an AI system which costs peanuts for a binary response, and then covers them for having a moderation policy.
> which costs peanuts
Peanuts aren’t free. Buying many peanuts adds up. And again, they are already operating at a loss. Additionally, it does not cover them when the system inevitably makes a mistake, especially considering that the OP’s fear is precisely that a disgruntled user would target them, meaning it would be a matter of time before someone would bypass the LLM.
Bypassing, errors, etc. is all process stuff that can be explained to authorities. And what do the cheap AI models cost, like $5 for a few million tokens? Hardly going to break the bank.
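The "peanuts" claim can be sanity-checked with some back-of-envelope arithmetic. The only figure below taken from this thread is the ~$5-per-million-token price; the comment volume and tokens-per-comment are illustrative assumptions, not LFGSS's actual numbers:

```python
# Back-of-envelope LLM moderation cost. Only the price comes from the
# thread above; the volume figures are assumptions for illustration.
PRICE_PER_MILLION_TOKENS = 5.00   # USD, cheap-model tier (quoted above)
COMMENTS_PER_DAY = 10_000         # assumed volume across ~300 forums
TOKENS_PER_COMMENT = 200          # assumed: comment text plus a short prompt

def monthly_moderation_cost(comments_per_day: int = COMMENTS_PER_DAY,
                            tokens_per_comment: int = TOKENS_PER_COMMENT,
                            price_per_million: float = PRICE_PER_MILLION_TOKENS) -> float:
    """Estimated USD cost of scoring every comment for 30 days."""
    tokens_per_month = comments_per_day * tokens_per_comment * 30
    return tokens_per_month / 1_000_000 * price_per_million

print(f"${monthly_moderation_cost():.2f} per month")  # $300.00 per month
```

Under these assumptions that is about $300 a month: cheap for a funded business, but a real recurring cost for a site already run at a loss.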
Still an overreaction.
[flagged]
[flagged]
> Block UK users
The site is primarily focused on London/UK biking enthusiasts.
> Make a forum that is only for UK users
That is the forum for UK users.
> Just ignore the law and fight it
The linked post mentions that the fines for failure to comply start at £18 million. I'd understand not wanting to take that risk.
> Setup the forums on bulletproof hosting which ignore such silly laws.
I think this is the most viable strategy, but even then the site owner incurs risks through, e.g., ownership of the domain or considerable participation.
£18m is a maximum, not a minimum, but your point stands.
The laughable thing is believing that Ofcom has any budget to prosecute anyone, let alone a small website.
>Any monies donated in excess of what is needed to provide the service through to 16th March 2025 will be spent personally on unnecessary bike gear or astrophotography equipment, but more likely on my transition costs as being transfemme I can tell you there is zero NHS support and I'm busy doing everything DIY (reminder to myself, need to go buy some blood tests so I can guess my next dosage change)... Not that I imagine there will be an excess, but hey, I must be clear about what would happen if there were an excess.
I would argue the honorable thing to do, in the event excess monies remain, would be to donate them to a charity. Using them for personal ends, whatever the details, is wrong because that's not what the donations were for.
Welcome to British humour, and - it's spelled correctly.
I haven't read the act and am not going to, but for a community of this size I'm pretty sure having a flag/report button would do the trick, and to go the extra mile, a very cheap LLM generating a "dodgy content" score on every message would be pretty trivial. Deleting the whole site seems a bit of a knee-jerk reaction.
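The flag/report plus scoring pipeline described above could be sketched like this. Everything here is hypothetical: `llm_dodgy_score` is a crude keyword stand-in for a real LLM call, and the threshold and queue design are made-up assumptions, not how any real forum does it:

```python
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 0.7  # assumed: scores at or above this go to a human

def llm_dodgy_score(text: str) -> float:
    """Placeholder for an LLM returning a 0..1 'dodgy content' score.
    A real version would send `text` to a cheap model and parse its
    response; this keyword check merely stands in for that call."""
    dodgy_words = {"scam", "threat", "illegal"}
    hits = sum(word in text.lower() for word in dodgy_words)
    return 1.0 if hits else 0.0

@dataclass
class ModerationQueue:
    pending_review: list = field(default_factory=list)
    published: list = field(default_factory=list)

    def submit(self, comment: str) -> str:
        """Score a new comment; publish it or hold it for human review."""
        if llm_dodgy_score(comment) >= REVIEW_THRESHOLD:
            self.pending_review.append(comment)
            return "held"
        self.published.append(comment)
        return "published"

queue = ModerationQueue()
print(queue.submit("Lovely ride along the canal today"))  # published
print(queue.submit("That offer is an illegal scam"))      # held
```

The point of the design is that the model never deletes anything itself; it only routes borderline comments into a human review queue, which is the "process stuff" a moderation policy could describe.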