Those investigating it must not be owned or controlled in any way by Google, and they should be allowed to rule not only that the account be restored, but that the user be compensated (maybe up to $5,000).
If the investigation doesn't go the way the user hoped, they should be able to take it to the courts, and Google should be ready to make all relevant data available.
This article reinforces my inclination to move away from email that isn't hosted by me. Maybe hosting your own email is just a weird way of applying security by obscurity (I host on a VPS, so I don't fully control the hardware my email runs on), but having hosted my own email for more than a decade now, I feel it's the right thing to do. For some time I've wondered whether publishing my Docker images for Postfix, OpenDKIM, Dovecot, and OpenVPN would help anyone; maybe I'll put some time into polishing them.
To be fair, with very few exceptions, every email you have is also on another server somewhere. The large majority of them are likely on servers owned by one of the big boys.
On the bright side, if you own it, at least an AI can't shut you down and ruin your day.
I remember, maybe in 2008 (I might be off by a few years, and it's too late at night to check my sources), Yahoo had a breach which resulted in emails falling into hackers' hands. That got me thinking: AI scanning your email is certainly not a good thing, but having somebody take out a loan in your name because you happened to have a scan of your ID in one of your emails could end with much worse consequences. (I was one of those dummies who kept a scan of their ID in their inbox; once I learned about the breach, I was running to invalidate it.) That was the lesson I needed to stop using Google/Yahoo/similar. I'm too small (and unknown) to be attacked (though who knows?), but Yahoo was big enough that the harvest was probably worth it.
Yeah, I've hosted it since 2011. Of course, not everyone is interested in learning this, but it's certainly not beyond your reach. Especially now, there are so many online resources to help you grade the quality of the emails you send, check whether your server is misconfigured, etc.
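Those grading tools mostly boil down to fetching and parsing your DNS TXT records. As an illustration (a toy sketch, not a real deliverability checker; the domain and record below are made up), this is roughly what parsing a DMARC policy looks like:

```python
def parse_dmarc(txt: str) -> dict:
    """Parse a DMARC TXT record like 'v=DMARC1; p=reject; rua=mailto:...'
    into a tag -> value dict."""
    tags = {}
    for part in txt.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# Hypothetical record you'd fetch from _dmarc.<yourdomain> via DNS:
record = "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
policy = parse_dmarc(record)
print(policy["p"])  # quarantine
```

A real checker would also verify SPF alignment and DKIM signatures, but the `p=` tag alone (none/quarantine/reject) tells you a lot about how strict your setup is.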
This case refers to alleged lawbreaking that is fairly easy for people to reason about morally (CSAM), were it proved to be true.
The question in my mind is: what are the limits of this snooping? Discussing out-of-state abortions in a location where they're outlawed? Conducting a homosexual relationship in a repressive state?
This title is straight-up incorrect, and the article says as much. Can we try not to spread misinformation about an event like this? It is very serious and raises lots of important questions about how we can effectively rein in the pseudo-judicial systems that tech companies appear to be evolving.
See the correction posted at the top of the article:
> Correction: An earlier draft of this story misstated a technical detail; Mark didn’t email his photo to his doctor; rather, he took the photo with his phone and the image was automatically synched to his Google Photos account, triggering a scan.
A more accurate title is:
Google will contact the police based on their analysis of your Google Photos.
This seems overly pedantic. Yes, the article only discusses photos. However, it's not a far stretch (nor hard to imagine) that this same technology is running on every Google-enabled service.
If your photos are being scanned, the system is compromised. Perhaps the author intended to produce the chilling effect necessary for people to understand this. There is every reason to believe your entire life on the Google platform is compromised and available (freely!) to any government agency that wants it.
Unfortunately, people won't realize how dangerous this is until a skin-toned photo of something entirely benign gets marked as CSAM by a robot, and the first they hear of it is the police letting them know they're wanted for questioning. What a dystopian world we live in.
Mixing up text and images isn't really pedantic. One is reading my thoughts, one is monitoring my captured moments. They're both bad, but very different. Even the author issued a correction at the top of the article, but should have updated the title to reflect the correction.
> However, it's not a far stretch (nor hard to imagine) this same technology is running on every Google-enabled service.
OK, but then write those articles, with proof. Until then, just keep journalism as accurate as possible. When we lower the bar so much that we're writing about things that we can imagine being true, it's just fantasy land.
> If your photos are being scanned the system is compromised.
I can have a Gmail account without having a Google Photos account, so no, not everyone who sees the "Gmail will call the cops on you" headline has a reason to panic.
> Perhaps the author intended to produce a chilling effect necessary for people to understand this.
That's really irresponsible fear-mongering and I doubt was done by this author since they were responsible enough to post a correction at the top of the article. The truth is the scariest thing because it's real. Fantasy isn't scary because it's fiction.
When I took a digital forensics course about ten years ago, the instructor (who'd been in the field since the 80s) said that in the US, if one finds suspected CSAM on a device they're examining for any reason (forensics, tech support, etc.) the only legally-advisable choice is to immediately suspend all work and contact the FBI, because not doing so can be considered a felony.
He also said that even if one does that, it's likely that all of the equipment that was ever connected to the device will be confiscated along with it, and might not be returned for a very long time. I.e. if you run a small forensics shop, have at least two of each piece of hardware in different locations so the confiscation doesn't put you out of business.
So while I feel like Google's reaction was extreme, and they should have unlocked his account after being provided with confirmation by the doctor, I suspect they were just doing their best to comply with US law.
This is essentially a dupe of https://news.ycombinator.com/item?id=32538805
The title is incorrect (as the article itself notes): neither Gmail nor email contents were the trigger. Rather, Google Photos automatically uploaded a pediatric medical photo, flagged it as CSAM, and set a law-enforcement reporting process in motion.
Yeah.
I think this represents two issues. One: AI is not really AI but a litany of conditionals that poorly reflect what we as humans can determine fairly quickly. Unless the process is unquestionably objective, and possibly quantitative in its outcome, I just don't believe full automation will be free of these events.
Google as a business automates everything. They are out over their skis on how much "getting it perfect" is possible and are completely accepting of "close enough". I don't think this mindset is unique in the corporate world, and unless revenue is impacted heavily, they will not be incentivized to incur the heavy cost of making the end user whole. The majority of revenue is ad-based, meaning B2B. Users are a bucket of data, not the people they are selling to.
It's like complaining that a cattle farmer doesn't treat his cows kindly enough... the farmer would think you're nuts, even if the masses might agree.
As it relates to this specific type of content, I'd rather see something bad happen to good people than something good happen to bad. Maybe it's unpopular, and I'd hate to be the person on the receiving end, but unless we can have truly objective AI, I don't think we will be free of these types of problems. In the meantime there has to be someone in the background fixing these cases, and Google has shown clearly with YouTube that unless you are making enough noise, they aren't listening.
"but unless we can have truly objective AI, I don't think we will be absent of these types of problems."
You don't need "objective AI." In this case, you just need a process that goes to a human when it is contested. Sure, make the user pay a certain amount of money to get an investigation by a third party, or to have all the information provided to a court. But if Google made a mistake, and that mistake caused a tort, Google is going to have to pay out.
The appropriate thing to happen here is something along these lines:
Google's AI tells the person their account is cancelled, but they can contest it for $100. The person contests it, and a human is put on the case and investigates. That person determines that Google made a mistake, and Google will restore the account quickly, and pay the person $200 for the hassle.
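For what it's worth, the flow being proposed is simple enough to sketch as a tiny state machine. Everything here is hypothetical: the fee, the payout, and the `Appeal` class just encode the numbers from this comment, not anything Google actually implements:

```python
from dataclasses import dataclass, field

CONTEST_FEE = 100      # assumed fee the user pays to open an appeal
MISTAKE_PAYOUT = 200   # assumed compensation when Google was wrong

@dataclass
class Appeal:
    account_id: str
    state: str = "terminated"  # terminated -> contested -> restored/upheld
    payments: list = field(default_factory=list)

    def contest(self):
        """User pays the fee; a human reviewer is assigned."""
        if self.state != "terminated":
            raise ValueError("nothing to contest")
        self.payments.append(("user_fee", CONTEST_FEE))
        self.state = "contested"

    def resolve(self, google_was_wrong: bool) -> str:
        """Human reviewer rules; restore and compensate on a mistake."""
        if self.state != "contested":
            raise ValueError("no open appeal")
        if google_was_wrong:
            self.payments.append(("google_payout", MISTAKE_PAYOUT))
            self.state = "restored"
        else:
            self.state = "upheld"
        return self.state

appeal = Appeal("mark@example.com")
appeal.contest()
print(appeal.resolve(google_was_wrong=True))  # restored
```

The point of modeling it this way is that every transition is auditable: who paid what, who ruled, and in which order.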
While I like the spirit of this idea, wouldn't the reviewer be incentivized to say "no" to as many claims as possible? This is akin to how health insurance reviews work and, at least in the US, that never goes poorly...
No, it should not be Google doing the review, it has to be a non-biased third party. This should apply to a lot of things, including app store stuff. Google doesn't make the decision, but they do get a share of the fee (if they don't have to pay out), since it is a hassle for them and they should be protected against frivolous complaints.
And by the way, while I'm sure Google should have to restore the account in this case, I'm not so sure Google should have to compensate even the investigation fee. Because honestly, it is a tad clueless to take photos of a kid's privates and let them sync to the cloud. He didn't do anything truly "wrong" (in the molestation/child-porn sense), but he still should have known it wasn't a good idea, so paying $100 for his mistake and being without his account for a week sounds about the right "punishment."
I think the reality is that it technically hits a legal grey area. The whole "dating for years and one kid turns 17 while the other is 16" scenario (or whatever the boundary case may be)... this letter-versus-spirit-of-the-law argument.
It is, by definition, distribution of this material, and building software edge-case exceptions that allow certain situations through is something I can't imagine anyone being willing to sign their name to endorse.
It seems from a heartless management perspective that the simplest decision is to walk away from the whole situation, wash their hands of it, and accept this as collateral damage.
The human can't work for Google.
Yes I agree. (I think I said "third party")
My mistake. It was an incomplete thought.
I meant to say specifically that until we can define things from an objective and not subjective stance, AI will not fare better.
That's an interesting proposition on the solution. I feel I see some potentially undesirable effects similar to the legal system but perhaps better than what exists currently.
> Google says they won't give Mark his account back because they found another "problematic" image in his files
Whoever is responsible for this decision and the resulting communication, I have nothing constructive to say. Sincerely, fuck you.
I can't believe they went through this person's account to find justification for their first mistake. This is intolerable.
Seems like we're hearing only one side of the story
Let's hear the other.
The author describes the other side as best they can:
> Google says they won't give Mark his account back because they found another "problematic" image in his files: "a young child lying in bed with an unclothed woman." Mark doesn't know which picture they mean (he no longer has access to any of his photos), but he thinks it was probably an intimate photo he captured of his son and wife together in bed one morning ("If only we slept with pajamas on, this all could have been avoided.").
This could describe my wife breastfeeding our 6 month old, you know, an activity generally requiring some nudity.
A bunch of people need to be fired at this point in the story.
This raises the question: if you have a picture of a woman breastfeeding, is that going to be considered child exploitation?
The world is very complicated, and our ML models aren't great at dealing with out-of-distribution (OOD) or infrequent samples. Mistakes are going to happen, and that's okay, but we need to ensure that we have mechanisms that can quickly remedy those mistakes. There are countless people who have had far less recourse because their stories did not make international headlines. Your ability to go viral shouldn't be what determines your ability to get mistakes fixed.
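One standard mechanism for this is confidence-gated human-in-the-loop review: the model never triggers an irreversible action on its own, and its uncertain middle band goes to a person. A minimal sketch (the thresholds, score, and labels are invented for illustration, not how any real pipeline is tuned):

```python
AUTO_THRESHOLD = 0.99    # assumed: even above this, a human verifies first
REVIEW_THRESHOLD = 0.70  # assumed: below this, don't flag at all

def route(score: float) -> str:
    """Decide what to do with a model's likelihood score for illegal content.

    Everything between the thresholds goes to a human instead of an
    automatic report -- the model's uncertain middle is exactly where
    medical photos and breastfeeding pictures tend to land.
    """
    if score >= AUTO_THRESHOLD:
        return "human_review_then_report"  # high confidence, still verified
    if score >= REVIEW_THRESHOLD:
        return "human_review"
    return "no_action"

print(route(0.995))  # human_review_then_report
print(route(0.85))   # human_review
print(route(0.10))   # no_action
```

The design choice worth noting: no branch returns "auto_report". The model's confidence only decides how urgently a human looks, never whether a life-altering action fires unreviewed.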
That is a funny question, because YouTube, a.k.a. Google, has hundreds of breastfeeding videos that hardly seem educational but rather provocative in nature. You can go down that rabbit hole yourself if you choose, but there are uploaders who post dozens and dozens of these breastfeeding videos, which doesn't seem to have any purpose but to show off breasts.
I can testify that while most of those were originally legit, at least a few of those in circulation originated as original content on paid porn sites. Fetish category: lactation. Off the top of my head, Katarina Hartlova, Nadine Jansen, and 'FTV Erika' were all on there with bullshit 'educational' subtitles.
I mean, there is a big irony in that you can find Nirvana's Nevermind album cover on Google image search too. So it looks like Google has double standards.
That is OK. Feminist stuff is allowed; otherwise Google would be labelled bigoted and racist. That scares them even more than law enforcement.
And that's all we'll ever hear. Good luck trying to find an actual person to talk to.
I mean, I'm not against hearing more information, but barring that quote being a complete misrepresentation of the truth, I really can't see there being a way this could become any less terrible for Google.
Don't use Gmail. Google's motto these days is really "be evil every day; a day without evil is a day wasted." Use Tuta. Use Proton. Use PGP. Use Telios. Pretty much anything except a Google product. Heck, even Apple iCloud offers more privacy than Google has ever achieved.
Personally, I think companies beyond some level of market share should be regulated the way we regulated rail and telecom companies. Just as a common carrier cannot discriminate against customers, tech monopolies shouldn't either. There should also be clear processes, perhaps initially through Google, but eventually through the courts. EULA opt-outs from basic rights should carry less than zero weight.
Discussed in the article. Don't think there's value to repeating it here.
There is plenty of value discussing it here. You don't need to join in if you don't want to.
The big thing for me, at least, beyond phoning the police, is that losing your Gmail account is arguably as bad as having the police called on you in the first place.
No email -> no banking -> no rent -> no house -> ??
At least Google services can be replaced by alternatives, though perhaps painfully. Running from the police doesn't generally end well.
What does this have to do with running from the police? That isn't part of this story.
> losing your Gmail account is as bad as having the police called on you in the first place.
Gmail won't chase you and throw you in a cell if you run from them. The police will.
Again, what does this have to do with this story? Having the police called on you over something like this is not going to result in the police chasing you, so it has nothing to do with this story. It's just utterly unrelated whataboutism.
very painfully.
Gmail being one of the worst. I've been using my own domain + Fastmail for years, and from time to time I still find stuff tied to my old Gmail account. Right now I have forwarding set up, but if it got disabled overnight I would still run into issues.
All the rest I can survive without, but that took some doing. And I started years ago.
PhotoStructure (https://photostructure.com/) lacks a lot of features Google Photos has, but one of those missing features is calling the police on you. I make tiny hardware that runs it very well and supports the project too: https://pibox.io/order/photostructure
Google photos is easily the best product I cannot use - maybe TikTok is a close second. But sometimes it’s worth being that stubborn nerd and fighting for an alternative - even if the alternative is a distant second.
The page you link to has zero information about anything called "photostructure" other than the fact that it's in the URL. I have no idea what thing you are trying to promote.
re: PiBox
It's a Raspberry Pi that is ready to go 'out of the box' so you can install software immediately rather than fiddling with it.
I believe the PiBox OP linked is intended to be a PiBox that comes pre-installed with their software, Photostructure, so you can run your own instance easily without fiddling with a Raspberry Pi.
> I believe the PiBox OP linked is intended to be a PiBox that comes pre-installed with their software
...and yet the page for that pibox product makes no mention of photostructure at all, let alone saying it's pre-installed. I'm just trying to point this out. The linked page would appear to be a complete non sequitur if not for the fact that I can see '/photostructure' in the URL.
You are correct: PhotoStructure is not preinstalled on PiBoxes.
PhotoStructure installs via a couple clicks in their app store, though--it's super easy. Barely an inconvenience.
(FWIW I'm the author of PhotoStructure)
It's the landing page for the product. You can click features for more info on the features. Pricing for more info on the pricing. It has pretty much everything you need.
The post was edited. It was originally only a link to https://pibox.io/order/photostructure that says exactly nothing about photostructure.
That seems to be a bad link for PhotoStructure, since it doesn't describe it at all.
> Photostructure (https://photostructure.com/) lacks a lot of features google photos has - but one of those missing features is calling the police on you.
Yet. They will be forced to implement it.
Howdy! Author of PhotoStructure here.
Your photos and videos never touch our servers, _by design_.
I've worked for several large tech companies that hosted personal content, and seen how _utterly miserable_ it is to try to comply with the morass of local customs and laws.
I have no willingness or desire (or sufficient revenue) to spin up a safety and security team: PhotoStructure will always be entirely self-hosted.
Because PhotoStructure is always self-hosted, the onus falls on the user to comply with any local laws, just like any other software tool.
That said, my security.txt has a canary.
The first paragraph I read there should start with: "PhotoStructure is..."
Disgusting behavior. IMHO, the second action by Google is even worse than the first.
Some busybody at Google acting as morality police over what they see as "problematic" behavior.
Time to backup my Google content and get a dumb camera.
Finally, a way to get a real person on the phone at Google!
The author took a few liberties with the title and "call" was one of them.
The police said that Google "contacted" them, which could mean an email, an online form submission, a paper letter, etc., not necessarily a phone call.
Sorry “call” is still correct in all the cases you mentioned, at least colloquially in the US.
Born in the US, and having lived in many different states over many decades, I've never heard any reporter or crime documentary use "call" as a synonym for "email", "send a letter", or "file a complaint".
To “call on” someone, maybe. But this says “call the cops”.
Maybe I’m just getting old, but anytime I’ve heard anyone say they “called the cops”, they dialed them via phone.
As in “contact”.
You have to consider that everything you have on any platform can be taken away at any time for whatever reason and there is nothing you can do about it.
With that in mind you can decide if the convenience is worth the risk of losing it all. For some things it will be but not for others. You should ensure that everything you cannot afford to lose has at least one backup that you fully control.
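To make that concrete, here's a minimal sketch of the "backup you fully control" step (the function name and paths are illustrative; point it at something like an unpacked Google Takeout export):

```python
import shutil
from pathlib import Path

def mirror(src: str, dst: str) -> int:
    """Copy every file under src into dst, preserving the directory tree.

    Returns the number of files copied. A minimal sketch only: a real
    backup should also be versioned and stored off-site.
    """
    src_path, dst_path = Path(src), Path(dst)
    count = 0
    for f in src_path.rglob("*"):
        if f.is_file():
            target = dst_path / f.relative_to(src_path)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 also preserves timestamps
            count += 1
    return count
```

Run it against a second disk you own, and the platform can no longer take the data away from you, only the convenience.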
"You have to consider that everything you have on any platform can be taken away at any time for whatever reason and there is nothing you can do about it"
Well there is something you can do about it, which is involve courts and/or legislation.
I mean your advice may be good for individuals who haven't (yet) run into this, but I think the discussion should be more "why is this acceptable and what can we do about it?"
Each of these stories is a little push. In sum it feels like I've been shoved hard away from Google. It's taken several years now for me to separate from so many of their services, and the last big fearsome jump will be to leave Android behind. It will mean keeping a second phone, because work is still thoroughly engoogled. But stories like this make me want to complete the divorce and de-slime my personal phone. And then wonder if it's worth changing jobs.
Engoogled or enandroided? LineageOS could help you.
This is how I envision this should be handled in a better world where corporations aren't allowed to just say "you get what you pay for" as they screw over their customers. (and yes, users of ad-supported products are customers)
If an account is terminated, it is done in a way it can be brought back if the user thinks it was unwarranted.
The user should be told that if they want to contest it, they need to pay a certain amount of money to a third party, to pay for the time of the people who do investigate. ($100?)
Those investigating it are not owned by or controlled in any way by Google. Those investigating it are allowed to rule not only that the account be restored, but that the user is compensated (maybe up to $5000).
If the investigation doesn't go the way the user hoped, they should be able to use the courts and Google should be ready to make all available data available.
This article reinforces my feelings about moving away from email that isn't hosted by myself. Maybe hosting your own email is just a weird way of applying security by obscurity (since I host on a VPS, I don't fully control the hardware my email runs on), but having hosted my own email for more than a decade now, I feel it's the right thing to do. For some time I've wondered whether publishing my docker images for postfix, opendkim, dovecot and openvpn would help anyone; maybe I'll put some time into polishing them.
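For the curious, that kind of stack boils down to something like this docker-compose sketch (the image names, ports, and volume paths here are illustrative assumptions, not my actual images):

```yaml
# Minimal self-hosted mail stack sketch; images/paths are placeholders.
services:
  postfix:
    image: example/postfix:latest   # SMTP: sending and receiving
    ports:
      - "25:25"     # server-to-server SMTP
      - "587:587"   # authenticated submission from mail clients
    volumes:
      - maildata:/var/mail
  dovecot:
    image: example/dovecot:latest   # IMAP: mailbox access for clients
    ports:
      - "993:993"   # IMAPS
    volumes:
      - maildata:/var/mail          # maildir shared with postfix
volumes:
  maildata:
```

The hard part isn't the containers; it's DNS, TLS, and reputation, which is where the decade of experience comes in.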
To be fair, with very few exceptions, every email you have is also on another server somewhere. The large majority of them are likely on servers owned by one of the big boys.
On the bright side, if you own it, at least an AI can't shut you down and ruin your day.
I remember, maybe in 2008 (I might be off by a few years and it's too late at night to check my sources), Yahoo had a breach that put emails into hackers' hands. This got me thinking: AI scanning your email is certainly not a good thing, but having somebody take out a loan in your name because you happened to have a scan of your ID in one of your emails could end with much, much worse consequences. (I was one of those dummies who kept a scan of their ID in their inbox; once I learned about the breach I was running to invalidate it.) That was the lesson I needed to stop using Google/Yahoo/similar. I'm too small (and unknown) to be attacked (though who knows?), but Yahoo was big enough that the harvest was probably worth it.
It takes a lot of time and skill to successfully (and securely!) host email – and if someone thinks that's not true, they are probably skipping steps.
Even a barebones email host is out of reach of most people.
Yeah, I've hosted it since 2011. Of course, not everyone is interested in learning this, but it's certainly not beyond your reach. Especially now, there are so many online resources that help you grade the quality of the emails you send, check whether your server is misconfigured, etc.
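As a concrete example, most of those deliverability checks come down to publishing DNS records like these (the domain, selector, IP, and key below are illustrative placeholders, not my real records):

```
; Illustrative SPF / DKIM / DMARC records for outbound mail reputation.
example.com.                  IN TXT "v=spf1 ip4:203.0.113.10 -all"
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<public-key>"
_dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

The online graders mostly just verify that these exist, match what your server actually signs and sends from, and that your reverse DNS lines up.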
This case concerns alleged lawbreaking (CSAM) that, were it proved true, is fairly easy for people to reason about morally.
The question in my mind is what the limits of this snooping are. Discussing out-of-state abortions somewhere they're outlawed? Conducting a homosexual relationship in a repressive state?
Joke's on them, I don't even read my email, let alone write any
Time to lawyer up. Looks like a fun case.
What's more fun than a working man facing off against the legal team of one of the most powerful multinationals in the world?
Exactly this. In a pay-to-play legal system this person would only be risking more by litigating.
I get that we have our ideals here, but our legal system does not necessarily lead to justice - especially when asymmetric imbalances are in play.
IANAL but I would not lawyer up against a huge corporation like this and expect to keep my money.
wtf. anybody know in which countries google acts like this? is it only the US?
This title is straight-up incorrect and the article says as much. Can we try not to spread misinformation about an event like this? It is very serious and raises lots of important questions about how we can effectively rein in the pseudo-judicial systems that tech companies appear to be evolving.
How is it incorrect? They proactively contacted the police and reported possible CSAM?
Emails were not involved.
I see, I missed that on my first read through.
See the correction posted at the top of the article:
> Correction: An earlier draft of this story misstated a technical detail; Mark didn’t email his photo to his doctor; rather, he took the photo with his phone and the image was automatically synched to his Google Photos account, triggering a scan.
A more accurate title is: Google will contact the police based on their analysis of your Google Photos.
So "~Gmail~ Google will call the cops on you based on the content of your ~emails~ Google Photos"? It seems like a rather small correction to me.
This seems overly pedantic. Yes, the article only discusses photos. However, it's not a far stretch (nor hard to imagine) this same technology is running on every Google-enabled service.
If your photos are being scanned, the system is compromised. Perhaps the author intended to produce a chilling effect necessary for people to understand this. There is now reason to believe your entire life on the Google platform is compromised and available (freely!) to any government agency that wants it.
Unfortunately, people won't realize how dangerous this is until a skin-tone photo of something entirely benign gets marked as CSAM by a robot, and you won't know until the police let you know they want you in for questioning. What a dystopian world we live in.
> This seems overly pedantic.
Mixing up text and images isn't really pedantic. One is reading my thoughts; the other is monitoring my captured moments. They're both bad, but very different. The author issued a correction at the top of the article, but should have updated the title to reflect it too.
> However, it's not a far stretch (nor hard to imagine) this same technology is running on every Google-enabled service.
OK, but then write those articles, with proof. Until then, just keep journalism as accurate as possible. When we lower the bar so much that we're writing about things that we can imagine being true, it's just fantasy land.
> If your photos are being scanned the system is compromised.
I can have a Gmail account without having a Google Photos account, so no, not everyone who sees the "Gmail will call the cops on you" headline has a reason to panic.
> Perhaps the author intended to produce a chilling effect necessary for people to understand this.
That's really irresponsible fear-mongering, and I doubt it was this author's intent, since they were responsible enough to post a correction at the top of the article. The truth is the scariest thing because it's real. Fantasy isn't scary because it's fiction.
I'm glad Google called the cops, someone needed to investigate a possible crime
When I took a digital forensics course about ten years ago, the instructor (who'd been in the field since the 80s) said that in the US, if one finds suspected CSAM on a device they're examining for any reason (forensics, tech support, etc.) the only legally-advisable choice is to immediately suspend all work and contact the FBI, because not doing so can be considered a felony.
He also said that even if one does that, it's likely that all of the equipment that was ever connected to the device will be confiscated along with it, and might not be returned for a very long time. I.e. if you run a small forensics shop, have at least two of each piece of hardware in different locations so the confiscation doesn't put you out of business.
So while I feel like Google's reaction was extreme, and they should have unlocked his account after being provided with confirmation by the doctor, I suspect they were just doing their best to comply with US law.