Juerd a year ago

Very simple script to check which PNG files have trailing data: https://gist.github.com/Juerd/40071ec6d1da9d4610eba4417c8672...

I hope that people who host forums, image boards, chat applications, etc., will delete or fix potentially vulnerable images before anyone uses them maliciously.

One way to repair a vulnerable image is to use `optipng -fix`.
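
For the curious, the check is basically: a valid PNG ends with a fixed 12-byte IEND chunk, so anything after those bytes is trailing data. A rough Python sketch of the same idea (illustrative only, not the actual gist):

    import sys

    PNG_MAGIC = b"\x89PNG\r\n\x1a\n"
    IEND = b"\x00\x00\x00\x00IEND\xaeB`\x82"    # zero-length IEND chunk plus its CRC

    for path in sys.argv[1:]:
        with open(path, "rb") as f:
            data = f.read()
        if not data.startswith(PNG_MAGIC):
            continue                            # not a PNG
        end = data.find(IEND)                   # the first IEND marks the real end of the image
        if end != -1 and len(data) > end + len(IEND):
            print(f"{path}: {len(data) - end - len(IEND)} trailing bytes")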

johdo a year ago

>Google was passing "w" to a call to parseMode(), when they should've been passing "wt" (the t stands for truncation). This is an easy mistake, since similar APIs (like POSIX fopen) will truncate by default when you simply pass "w". Not only that, but previous Android releases had parseMode("w") truncate by default too! This change wasn't even documented until some time after the aforementioned bug report was made.

Reading about this silent API change makes me feel like I'm losing brain cells. What's going on with the processes behind Android's development?
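
For anyone who hasn't run into this particular footgun: here is the difference sketched at the POSIX level in Python (the file name and contents are made up; this only mirrors the truncate-vs-no-truncate behaviour, it's not the Android API itself):

    import os

    with open("demo.bin", "wb") as f:
        f.write(b"0123456789")                   # pretend this is the original image

    # What fopen("...", "w") does, and what parseMode("w") used to do: truncate on open.
    fd = os.open("demo.bin", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    os.write(fd, b"abc")
    os.close(fd)
    print(os.path.getsize("demo.bin"))           # 3 - the old contents are gone

    with open("demo.bin", "wb") as f:
        f.write(b"0123456789")                   # reset

    # What the changed behaviour amounts to: same "w", but no O_TRUNC.
    fd = os.open("demo.bin", os.O_WRONLY | os.O_CREAT)
    os.write(fd, b"abc")
    os.close(fd)
    print(os.path.getsize("demo.bin"))           # 10 - "abc3456789", the old tail survives

Write fewer bytes than the file already holds and the tail of the old contents is still sitting there, which is exactly what leaks here.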

  • Jasper_ a year ago

    This is par for the course for Android, having had to work with it at the hardware enablement level. Google will refactor and break everything in a dot release, and then swear up and down that their code is perfect, even as you point them directly to the commit that caused the issue.

    This is clearly unacceptable, but I've seen so much worse.

    • BoorishBears a year ago

      > and then swear up and down that their code is perfect

      Even worse when they don't do this and just flat out admit they broke a use case intentionally and label it Won't Fix, because the team that implemented the breaking change is also the team that triages the issue. See logcat access being completely broken in Android 13: https://www.reddit.com/r/tasker/comments/wpt39b/dev_warning_...

    • yazzku a year ago

      That's completely fucked up. Arrogance much?

  • BoorishBears a year ago

    I have some pretty unique insight into this since I work with AOSP a lot and have worked with a few engineers on Android's core system apps:

    Google's engineers working on Android at a system level regularly break basic functionality in the "userspace"*. Google's engineers working on Android apps get early access to the Android versions and work through the resulting bugs, bubbling them back up until they get fixed.

    (*userspace being used loosely here, it's all userspace vs being in a kernel, but it's interfaces that are implemented at the OS level and consumed at the app level)

    Like, Google is large enough that I'm sure someone will take offense at the implication that such questionable engineering takes place there, but this isn't a story I've heard just once. People working on apps that ship in every GMS-enabled Android image have confirmed this completely bizarre setup on multiple separate occasions.

    • saagarjha a year ago

      Of course, this issue did not get fixed in Google’s apps.

      • BoorishBears a year ago

        I think that just proves how stupid of a process it is?

        You're relying on your internal teams' relatively limited coverage to catch contract-breaking changes. And sure, in this case internal is Google, meaning there are some pretty widely used apps acting as filters... but relative to the millions of apps out there, they're not likely to catch all the regressions.

        They must have automated testing, but at the point where it's just accepted that things will break and your own engineers regularly have to "convince" your OS team that they broke things, you know something is wrong.

  • bananapub a year ago

    ? It sounds like you didn't look at the actual commit that changed it. It was an overly elaborate refactoring gone wrong, not someone explicitly and clearly deleting a "case 'w'" or whatever.

    • johdo a year ago

      It sounds like you don't have any understanding of sound engineering if you think modifying the default behavior of this kind of API with no fanfare is okay because "we are elaborately refactoring", whether there was a specific intent or not.

      This is absolutely horrifying.

Groxx a year ago

>IMHO, the takeaway here is that API footguns should be treated as security vulnerabilities.

Yeah, especially in this case, due to changing defaults and similar-but-differently-behaving APIs.

Defaults really suck sometimes. But so does not having any. And so many things can become security issues when used just so.

:/

  • olliej a year ago

    See that's not what happened here. It wasn't that the API had a footgun (I'll leave out "is this API actually good"). It was that someone decided that changing core API behaviour after that library had shipped was acceptable - and it isn't.

    That's why shipping a new API requires a lot of time investment in the design of the API: once an API is shipped you can't just change the behavior dramatically.

SergeAx a year ago

This is literally the hacker news. I wish HN had more content like this instead of the endless topics swirling around money.

GistNoesis a year ago

Along the same lines as aCropalypse, there is the classic mistake beginners make when redacting jpg documents with a black box.

It does leak info from inside the box due to jpg compression artifacts.

Here is the proof of concept I quickly wrote to show why this isn't safe: https://github.com/unrealwill/jpguncrop

  • zamadatix a year ago

    I'm not following the "uncrop" portion of jpguncrop. Sure, the images are different, but unlike aCropalypse it's not clear how the image is supposed to be reconstructed from the data without already having knowledge what the data was. All this says is "yep, there was something other than a black square there before" and it's fair to say you could figure that out by the presence of the black square in the first place.

    • GistNoesis a year ago

      The two cropped files are different. They are built deterministically, therefore you know whether the redaction was of a red circle or of a blue circle.

      Typically that applies to things like a redacted pdf lossily compressed as an image, for which you already have a few candidate words (and you can bruteforce). You try them one at a time and see whether the compression artifacts match.

      The uncropping algorithm is pretty straightforward in theory: remove jpg artifacts, fill the cropped region with a candidate image portion x, compress, compare the produced artifacts to the cropped image's artifacts, then try a neighbor candidate x+eps*N(0,1) and optimize (aka random search) - see the sketch at the end of this comment.

      The artifacts are related to Fourier coefficients so the distance between artifacts isn't too irregular.

      The "remove jpg artifacts" step can range from really simple to really hard depending on the class of problem you have.

      If the background image is something digitally generated (like our example of red and blue circles here) or a pdf file, you can get the uncompressed version without mistakes.

      If the background image is something like a photo then you need a little finesse: you need to estimate the compression level, then run a neural network outside the cropped region (4 different images: above, below, left and right of the cropped region) that removes the artifacts (something like https://vanceai.com/jpeg-artifact-removal/ but fine-tuned to your specific compression level), so you can estimate the artifacts. Then you search for the image inside the region (possibly with a neural net prior, but that increases the probability of data hallucination) such that the jpg artifacts of the compressed reassembled image are close to the jpg artifacts of your cropped image.
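
      A very rough sketch of that random-search loop (Pillow + numpy; everything here is illustrative, and it ignores the second compression pass applied to the redacted file):

          import io
          import numpy as np
          from PIL import Image

          def jpeg_roundtrip(rgb, quality=75):
              # Re-encode an RGB array as JPEG and decode it again, reproducing its artifacts.
              buf = io.BytesIO()
              Image.fromarray(rgb.astype(np.uint8)).save(buf, format="JPEG", quality=quality)
              return np.asarray(Image.open(io.BytesIO(buf.getvalue())), dtype=np.float64)

          def score(candidate, background, box, target, quality):
              # Paste the candidate into the redacted box, compress, and compare what leaks
              # *outside* the box with the artifacts of the actual redacted image.
              x0, y0, x1, y1 = box
              trial = background.copy()
              trial[y0:y1, x0:x1] = candidate
              artifacts = jpeg_roundtrip(trial, quality)
              outside = np.ones(target.shape[:2], dtype=bool)
              outside[y0:y1, x0:x1] = False
              return float(np.mean((artifacts[outside] - target[outside]) ** 2))

          def random_search(background, box, target, quality=75, steps=5000, eps=8.0):
              x0, y0, x1, y1 = box
              best = np.full((y1 - y0, x1 - x0, 3), 128.0)        # start from flat grey
              best_score = score(best, background, box, target, quality)
              for _ in range(steps):
                  cand = np.clip(best + eps * np.random.randn(*best.shape), 0, 255)
                  s = score(cand, background, box, target, quality)
                  if s < best_score:
                      best, best_score = cand, s
              return best, best_score

      Here `background` would be your de-artifacted estimate of the image around the box, `box` the redacted rectangle, `target` the decoded redacted jpg as an array, and `quality` your estimate of the original compression level.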

      • bastawhiz a year ago

        There is far less information in the artifacts than in the part of the image that was redacted. And of course, you need to replicate the process of redacting the original image, which requires you to know about the algorithm that encoded the image. It also, if I'm not misunderstanding, requires you to have a version of the redacted image without the artifacts from the redacted section (or a second identical version with a different redacted graphic, like the red and blue circles).

        Do you have a proof of concept of the technique you're describing? Otherwise I remain skeptical.

      • zamadatix a year ago

        I get the approach; I'm just saying your demonstration doesn't show any proof of actually uncropping the image, rather proof that if you have the original image and encoding settings the artifacts will match. This alone does not imply the approach can be used to uncrop, it just shows the JPEG encoder used is deterministic. It's like if I used the original 2048-bit RSA key to prove RSA is insecure by saying "and then you can just brute force it". I haven't actually shown that brute-force generating the original key to create a matching output is something anyone can be expected to actually do, should be something that's worried about, or means using RSA keys is a "classic mistake beginners make"; rather, I've just demonstrated RSA is deterministic and look a bit alarmist.

        One important piece overlooked by not doing an actual demonstration of blind reconstruction is that lossy compression is naturally not a 1:1 mapping of original sources to compressed sources. If it were, it'd be lossless compression! This means your theorized recovery process ends early, as it assumes that the first match is the only match and therefore the original. Had the claims been fully demonstrated instead of assumed to be the same as recreation, this would have been accounted for.

        In practice JPEG blocks represent an 8x8x3 = 192 byte / 1536 bit original source. From that we take the lossy compression and get some small number of representing bits back (which is encoder and settings dependent), let's say 154 (1/10th size) for discussion. Of that, some number of bits is going to be used to accurately encode the black square; let's say 1/2, just to be generous to the extraneous noise, and let's also be generous and say the re-encode from original JPEG to obfuscated JPEG was nearly lossless (i.e. "100" quality) to help the numbers further. We're now at 154/2 = 77 bits representing 1536, in the generous case. 2^1536/2^77 = 2^1459 ≈ 1.6 * 10^439 possible original matches. Even assuming the image is largely losslessly compressible, the numbers still don't come out in good favor. For most cases, like the pictures example, this rules out finding the original being practical without some additional guidance that hasn't been demonstrated either (and maybe there is some other guidance! It's just not shown what that would be or why we should assume it exists).
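
        Spelled out in Python (back-of-the-envelope, using the generous numbers above):

            import math

            block_bits = 8 * 8 * 3 * 8     # one uncompressed 8x8 RGB block: 1536 bits
            leaked_bits = 77               # the generous estimate from above
            print((block_bits - leaked_bits) * math.log10(2))   # ~439.2, i.e. ~10^439 candidates per block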

        What would be a really good demo is focusing on something minimal (e.g. text instead of images) and regenerating the source material without knowing it beforehand. E.g. it's safe to assume the typeface of a document that was screenshotted and later obfuscated, and now all you need to do is show that one understandable string of text has artifacts that match a short whiteout/blackout on the document. This is still an extremely hard problem but information theory tells us it might be (not that it is) at least reasonably possible.

        • GistNoesis a year ago

          You can use the well known approach like https://github.com/bishopfox/unredacter

          I am just showing that jpg compression casts a digital shadow on itself that can be outside of the cropped area, which many people find unintuitive.

          Matching the shape of this digital shadow (which is not very small - the whole highlighted pixel region contains some info), combined with an educated prior on the missing portion (which must provide a way to represent the hidden portion in fewer bits than you can get from the shape of the shadow), can help you recover the data.

          • Retr0id a year ago

            You're getting downvoted a lot for some reason, but I just thought I'd say that I think this technique has potential.

            It'll be fiddly to exploit (like the "unredacter" you linked), but I think you could get a text-recovery PoC going. Try to craft an "ideal scenario" (for an attacker - i.e. ideal censor-bar placement and font size), see if you can exploit it in practice, and build from there.

          • zamadatix a year ago

            > You can use the well known approach like https://github.com/bishopfox/unredacter

            If it's so easy and already done, why not show it actually reconstructing real examples like that page? The reality is there is no existing well-known approach for recovering data in the exact way you laid out. Similar approaches, sure, but they rely on different data sources (such as downsampled versions of text) instead of artifacts. That it is feasible in the easy case does not mean it is feasible in the hard case; why should the two be expected to hold the exact same amount of information about the source? Even if they do, the method of reconstruction will surely be different.

            > I am just showing that jpg compression casts a digital shadow on itself that can be outside of the cropped area, which many people find unintuitive.

            That I agree with 100%, and it's a great thing to show; it's just a bit overreaching to then name it "jpguncrop" with a description referring to aCropalypse instead. Maybe it could lead to something like that, but what's in the repository does not match the name or description.

            > Matching the shape of this digital shadow (which is not very small - the whole highlighted pixel region contains some info), combined with an educated prior on the missing portion (which must provide a way to represent the hidden portion in fewer bits than you can get from the shape of the shadow), can help you recover the data.

            Great, you have a theory - demonstrate! I'll be the first to comment on how clever, persevering, and well executed the work was if you post a thread showing it works. Until that point though it isn't so just because you think it would work.

            Here is a great test case https://i.imgur.com/sAEpfQ6.jpg. The font size should be small enough that data leaks out of the obfuscated area from the characters. I'm optimistic this blind reconstruction case is feasible given the additional constraints compared to the circle test but it would probably take a decent chunk of new work.

yazzku a year ago

Great find.

"The end result is that the image file is opened without the O_TRUNC flag, so that when the cropped image is written, the original image is not truncated. If the new image file is smaller, the end of the original is left behind."

RIP

ericpauley a year ago

I would assume that any image reformatting or exif stripping by online platforms would protect against this. Yet another good reason to include this when developing apps.

  • olliej a year ago

    This isn't an exif issue.

    This isn't a metadata issue.

    An underlying IO library changed its behavior so that instead of truncating a file when opened with the "w" mode (as fopen and similar have always done, and this API did originally), it left the old data there. If the edited image is smaller than the original file, then the tail of the original image is left in the file. There is enough information to just directly decompress that data and so recover the pixel data from the end of the image.

    You're not necessarily recovering the edited image data, just whatever happens to be at the end of the image. If you are unlucky (or lucky depending on PoV) the trailing data still contains the pixel data from the original image - in principle the risk is proportional to how close to the bottom right of the image the edits were (depending on image format).

    • ericpauley a year ago

      Not saying it is. Sensible exif stripping (re-serialization) also has the upside of removing trailing data, which would prevent this.

      • olliej a year ago

        No, the whole point of this bug is that more filtering or stripping would not fix/prevent it. The problem is not some kind of "trailing data in memory" issue.

        The bug is you say "write to this file", which is meant to erase the existing file if one exists, but the underlying library either had a serious regression or intentionally broke API compatibility, and changed the behavior to not erase the existing data. Your exif stripping + reserialization would write the new data down and the trailing data from the original file would still be present: i.e. exactly what is happening in this bug.

        No amount of processing in memory, no amount of reserialization, no amount of data filtering prevents this bug. The bug occurs at the point of IO, because the IO is meant to have erased the original file, and it did not, so if you write fewer bytes to the destination file than were present in file being overwritten the tail of the overwritten file remains and is leaked.

        To make it very clear that this is not an error in processing the image: if you opened "image1.png" (or whatever format), edited it, and then saved it over a different file that already exists, say "image2.png", and then sent image2.png to someone, this bug would allow the recipient to extract the trailing data of the original image2.png; it would not show any information about the original image1.png.

        • ericpauley a year ago

          This is not the case when the exif stripping happens on the service side (by online platforms, as in my original comment). Yes, anything happening before save is useless because the trailing data is kept. But if a service (e.g., Facebook) then does exif stripping via re-serialization, the trailing data is lost.

          • olliej a year ago

            Server side filtering isn't relevant. A user editing or removing things from their photo does not expect that data to exist on the image uploaded to a server.

            • creatonez a year ago

              The uppermost comment in this thread is making the suggestion to use server-side filtering just in case something goes wrong with end-user software. So that's why other commenters were using this assumption and ignoring the software itself.

  • Retr0id a year ago

    EXIF stripping won't necessarily catch it (but probably would in most instances - depends on how you do it), but reformatting or reencoding will.

    • ericpauley a year ago

      I’m guessing most exif stripping would deserialize the image and write a new file, so unless that has the same bug as this (overwriting the existing file without truncation), it ought to work?

      • jsheard a year ago

        Discord strips EXIF but the author was still able to unredact the images they'd posted there.

        Some implementations of EXIF stripping might help, but it's not guaranteed.

        • Retr0id a year ago

          Discord doesn't strip EXIF from PNGs, only JPEGs

          • jsheard a year ago

            Seriously? What's the reasoning behind that?

            • Retr0id a year ago

              It's rare to see PNGs in the wild containing EXIF data; it's a feature that's only been in the spec since ~2017. I'm actually looking for one to double-check my statement about Discord, but I can't find any.

              Edit: I made my own. I can confirm that the exif chunk was not stripped. https://cdn.discordapp.com/attachments/541730746805649476/10...

          • rtldg a year ago

            That's interesting. I've seen a couple of rotated PNGs before which I assumed were caused by Discord stripping the EXIF and orientation data. Found a PNG like that without EXIF from May 2022 so I wonder if Discord stopped stripping or if it was stripped on the person's device somewhere.

      • Retr0id a year ago

        A naive approach to stripping EXIF from a PNG would be to parse up to the start of the first eXIf chunk, discard the contents of that chunk, and then include the rest of the file verbatim without actually parsing anything.
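
        Something like this hypothetical stripper (purely illustrative, not any real implementation) - note how anything that happens to follow IEND passes straight through:

            import struct

            def strip_exif_naive(data: bytes) -> bytes:
                out = bytearray(data[:8])             # PNG signature
                pos = 8
                while pos + 8 <= len(data):
                    length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
                    chunk_end = pos + 12 + length     # length + type + payload + CRC
                    if ctype == b"eXIf":
                        # Drop this chunk, then copy the remainder of the file verbatim -
                        # including any trailing bytes after IEND.
                        return bytes(out) + data[chunk_end:]
                    out += data[pos:chunk_end]
                    if ctype == b"IEND":
                        # No eXIf found; the rest (trailing data included) passes through too.
                        return bytes(out) + data[chunk_end:]
                    pos = chunk_end
                return bytes(out)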

        But yes, a more sensibly coded EXIF stripper would deserialise and reserialise. Unfortunately I am no longer able to assume that programmers will behave sensibly.

        Edit: Also, the PNGs generated by Markup don't contain EXIF in the first place, so an EXIF stripper could reasonably decide that no changes are necessary at all.

        • ericpauley a year ago

          Does anyone take this “naive” approach in practice? Any good image sanitization I’ve seen is equivalent to taking a screenshot of the image, re-serializing pixel contents but ignoring anything else. Any reputable service (e.g., Gmail) must take this approach to prevent proliferation of possible image-based malware.

          As you noted above Discord doesn’t sanitize PNGs. This exposes a failing on their end as well, as large services taking input from users should sanitize images to protect both senders and recipients.

zorlack a year ago

It'd be so interesting to collect aCropalypse-affected images. Maybe you could build a crop-suggester out of it...

Not that I'd want to maintain custody of such a dataset...

Wingman4l7 a year ago

Why the hell is this exploit being fully provided for use via a handy-dandy web interface? An image /cleanup/ tool is one thing... this is very irresponsible.

  • andersa a year ago

    I wonder if hiding the tool would help. Anyone interested could simply archive and hoard potentially interesting images until such a tool emerges later. So in reality, it would change nothing, only slightly delay the images being extracted.

    The only thing I can think of that would have made a real difference is to send a tool to fix the images to all image hosting platforms in advance. But which ones do you trust?

    • batmanthehorse a year ago

      I think making this tool readily available right now is going to result in a lot of people being doxxed who otherwise wouldn't be.

      Some people would just lose interest if there weren't an easy tool immediately available, and it would also give potential victims or image hosts more time to fix or delete vulnerable pics.

      • seba_dos1 a year ago

        Making such a tool is trivial. Someone would have done it already; all you need to do is point people's attention at the issue.

        • Wingman4l7 a year ago

          Just because making something that enables novice-computer-user bad actors is trivial does not somehow make doing so ethical.

          What if we just... thought twice about making it this easy?

          • seba_dos1 a year ago

            Since bad actors were likely going to provide such a tool really soon anyway, providing it along with the announcement so that potential victims can check their files probably wasn't a bad idea.

  • moosedev a year ago

    That was my first thought when I clicked on the website link in the Twitter thread -- expecting a disclosure/high-level info page in the fashion of the last decade of big-deal exploits with cute names -- and found only a tool the tweet author (not OP, but apparently working with him?) built that runs in-browser, requires no knowledge/setup, and appears to enable recovery of cropped-out image data at scale by even non-technical users. Jeez.

    Edit: I find myself wryly weighing this against the ongoing unleashing of LLMs upon the world. Both have shades of clever people prioritizing being and demonstrating clever at the cost of... other stuff. On the bright side, it is distracting me from facepalming at the underlying Pixel bug.

    • noirscape a year ago

      The bug is so simplistic (yet also damaging) that you can't really do a high-level info page for it. Google Markup doesn't truncate the file properly before writing new data to it (due to a mixture of bad coding and a bad Android API change in Android 10).

      All the tool seems to do is just read out whatever comes after the end of the PNG and then supply the missing data to construct an image that can be rendered.

  • alwayslikethis a year ago

    If you send me more information than you intended, nothing stops me from just looking at it.

    • Wingman4l7 a year ago

      Of course not -- but you still have to put in the effort to "just look at it". They set the bar on that effort extremely low, taking an exploit that required expertise to deploy and putting it in the hands of anyone who can operate a web form.

  • jasonmp85 a year ago

    Google is irresponsible (current, not past tense, is and was always).

    Everything after that is fair game.

boosteri a year ago

Thought it was common knowledge that the proper way to redact things is to mask them physically, then redo the photo/scan of the item.

  • easrng a year ago

    then you get the printer tracking dots :)

  • bastawhiz a year ago

    Sadly I don't have a second phone that I can keep with me to photograph my phone with, so I'm stuck with cropping

    • mfcl a year ago

      Screenshot, crop... then screenshot again! :D

wildylion a year ago

Randy definitely should make a cool panel-escape-vulnerability xkcd piece about this.

Waterluvian a year ago

EXIF metadata is useful but we strip it when we post an image because it’s also a security vulnerability.

Image edit metadata also seems like an incredibly useful feature. Do we just strip it as well?

  • Retr0id a year ago

    Since you read the article beforehand, you know that this comment is entirely orthogonal to the vulnerability in question.

    • Waterluvian a year ago

      I think it’s okay to talk about the core issue that leads to that. From the linked tweet it looks like there’s edit data stored in the image, allowing the original to be recovered?

      Do you have a specific concern to warrant your comment?

      • Retr0id a year ago

        It's not the core issue, and it's misleading to suggest that it is. I suggest reading the aptly named "Root Cause Analysis" section of the linked article.

        • Waterluvian a year ago

          I’m trying to follow the article. So it’s not the image format specifically that is holding on to the blacked out pixels, it’s the compression method that the image format uses, or more specifically, how Google’s code is handling that work?

          Is this possibly a helpful feature or is it really just a terrible hack/bug that has no practical use holding on to a sort of edit history inside a PNG?

          I would love a way to track some level of history in a commonly supported image format (but of course being aware of needing to strip it when appropriate)

          • Retr0id a year ago

            It's neither a feature nor a hack, it's simply a bug related to missing the O_TRUNC flag when opening the file for modification. No deliberate attempt was made to "hold onto" any data.

            • Waterluvian a year ago

              I feel like we’re talking past each other. I’ll find my answers elsewhere. Thanks for your time!

              • tslater2006 a year ago

                My (limited) understanding is that if you have, say, a 5MB file, and you open it for writing and write 3MB, you might expect the file to be 3MB, but... if you didn't specify the truncate flag (the bug here), the file is still actually the 5MB it was. The image appears cropped because the relevant metadata has the new sizes etc., but that 2MB of extra data is still there by mistake. This can be used to recover some of the original image.
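
                Roughly (sizes scaled down, file name made up):

                    import os

                    with open("img.png", "wb") as f:
                        f.write(b"A" * 5_000_000)        # the "original", 5 MB
                    with open("img.png", "r+b") as f:    # opened for writing, but not truncated
                        f.write(b"B" * 3_000_000)        # the "cropped" version, 3 MB
                    print(os.path.getsize("img.png"))    # still 5,000,000 - the last 2 MB survive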

                • Waterluvian a year ago

                  Aha! Thanks.

                  So this behaviour is likely unfit to be used as a feature, but could in some cases be used as a clever hack to, in a way, preserve some edited (cropped) data.

                  • Retr0id a year ago

                    No, it is not suitable as a clever hack. It doesn't work reliably or well enough for that.

              • andybak a year ago

                With all due respect, it's simply that you haven't properly understood the article.

                You sound like you're probably fairly smart but I suspect you're rushing this one and commenting before you've properly grasped the topic.