samwillis 3 years ago

The WASM is only 311kb, so for an image-heavy site this can easily be offset by the savings on image sizes. It seems quite quick too; are there any benchmarks?

There isn't any source in the repository for the WASM; that's slightly worrying, as it's difficult to confirm the Apache License really applies. I assume you are using an existing JPEG XL decoding lib?

(Edit - Source here: https://github.com/GoogleChromeLabs/squoosh/tree/dev/codecs/...)

Are you doing any progressive decoding while it downloads? I believe that is one of the features of JPEG XL.

Anyone wanting a good overview of JPEG XL, there is a good one here: https://cloudinary.com/blog/how_jpeg_xl_compares_to_other_im...

Aside: This is a great example of how awesome Mutation Observers are; they are the foundation of so many nice new "minimal" front-end tools like HTMX and Alpine.js.
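
For anyone who hasn't used them, the core pattern is tiny. A rough sketch (not the actual jxl.js source; `handleJxlImage` is just a placeholder):

```ts
// Watch the DOM for newly added <img> elements that point at .jxl files and
// hand them to a decode step. handleJxlImage is only a placeholder here;
// jxl.js would post the URL to a Web Worker running the WASM decoder.
function handleJxlImage(img: HTMLImageElement): void {
  console.log("would decode", img.src);
}

const observer = new MutationObserver((mutations) => {
  for (const mutation of mutations) {
    for (const node of mutation.addedNodes) {
      if (node instanceof HTMLImageElement && node.src.endsWith(".jxl")) {
        handleJxlImage(node);
      }
    }
  }
});

observer.observe(document.documentElement, { childList: true, subtree: true });
```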

  • lucideer 3 years ago

    > this can easily be offset by the savings on image sizes

    Maybe, but I wouldn't say "easily". Only if you're considering bandwidth in isolation as your only bottleneck & neglecting client-side processing. The combination of it being WASM & being run in a Web Worker should mitigate that a lot, but it's still going to be a non-zero cost, particularly in terms of RAM usage for any page with a reasonably "large" image perf problem to solve.

    On top of that, the 311kb download & execution/IO processing, while small, is an up-front blocking perf cost, whereas loading images directly is at least entirely parallelised (small images especially render immediately, before the WASM would've even downloaded).

    It's a really cool project that'd definitely improve perf. for many cases. I'd just be reluctant to say "easily offset".

    The other factor here is the overhead of abstraction, maintenance, and likely developer mistakes due to the added code complexity of your new client-side decoding pipeline.

    • rektide 3 years ago

      Such a pity that host-origin isolation has fucked over the would-be ways to make this good. We could have a couple of CDN modules that get loaded/forked in (something WASM is increasingly good at), or even shoved into an invisible processor iframe, but because we are (rightly) freaked out about privacy, everyone's gonna re-load & re-instantiate the necessary modules independently.

      I always thought the vision of the web as cooperating, participating pieces of code was so awesome and was going to lead to vast efficiency savings. Tons of modules would just be available, ambiently running. We've spent so long making modules on the web possible at all. But just before prime time, we cancel every single ounce of win & savings by imposing huge host-origin isolation domains, all to avoid letting a host know a user already had some code. Because that indeed could be tracked. I get it, it makes sense, but my god, what a last-minute rug pull on such a long saga of the industry & my own maturation.

  • samwillis 3 years ago

    To answer my own question, they have extracted the decoder from the Google Chrome Labs "Squoosh" project [0]; all the sources are here:

    https://github.com/GoogleChromeLabs/squoosh/tree/dev/codecs/...

    It's under an Apache license, so the stated license is correct.

    0: https://squoosh.app

    • haolez 3 years ago

      I don't know why, but the design of Squoosh's page is very appealing to me. Must be some design witchcraft :)

  • BiteCode_dev 3 years ago

    Maybe on desktop, but on mobile I don't think the benefit would be that clear cut, because mobile devices are slower and heavy computation drains the battery pretty fast.

  • Jyaif 3 years ago

    > The WASM is only 311kb

    That's gargantuan.

    • pizza 3 years ago

      I'd agree with that if it were about a JPEG decoder - JPEG is a simple algorithm. But JPEG XL is really quite complex. Like kinda really complex. It is better described as a mixing pot of lots of different, individually impressive contributions to image compression accumulated over years of open-source iteration, rather than a succinctly describable algorithm. It would be better to call it a hybrid of the existing flif/fuif/lodepng/pik/brunsli/guetzli/gritibanzli/butteraugli/gralic projects, and it's also using libhwy for cross-platform SIMD. 311 kb, to run that via JS, is amazing, imo.

    • samwillis 3 years ago

      Not if you have 10mb of images on the page... at that point it makes a significant saving.

      • Jyaif 3 years ago

        Yes, it will save bandwidth in many cases, but it doesn't change the fact that 300KB is huge for an image decoder.

        • lifthrasiir 3 years ago

          If you are going to leverage vectorization for performance (and WebAssembly does support 128-bit vector instructions), an increase in binary size is pretty much inevitable.

      • yakubin 3 years ago

        Which is 1-2 full-sized photos from a 30Mpx camera. More if it’s just thumbnails.

  • JyrkiAlakuijala 3 years ago

    libjxl project members currently have a 174 kB WASM decoder.

    Making it reflect only libjxl-tiny functionality should bring it down to 25-50 kB if my guesswork is correct (libjxl-tiny is about 10x smaller than full-blown libjxl).

DrNosferatu 3 years ago

An awesome variation would be to use this inside PDF files instead of web pages.

Some PDF newspaper subscriptions (often sent by email) have very poor quality photos. I suppose the intent is to keep the already-large file size (multiple MB) down. Having the newspaper photos in JPEG XL - or even AVIF - would be a great upgrade.

PS: And no, I don't think the poor-quality photos are a deliberate, forced "VHS-style Macrovision" degradation to minimize financial losses on easy-to-copy content - the same articles are also partially available online.

  • niutech 3 years ago

    PDF 1.5 supports the JPXDecode filter, which is based on the JPEG 2000 standard.

    • aidenn0 3 years ago

      I use jpeg-2k in a PDF for encoding comics; it's better than any pre-JXL compressor I tried for such images; HEIC and webp both blur out the stippling and hatching at much higher bitrates than j2k. JXL soundly defeats it though and is much easier to use (I have to use different quantization settings for greyscale and colour images with openJPEG, but the default "lossy" setting with libjxl never creates a larger file than j2k).

      So for me, at least, I'd like jxl to make it into PDF.

    • DrNosferatu 3 years ago

      But no JPEG XL, AVIF or HEIF! I think AVIF would be the most useful.

cosarara 3 years ago

Is it possible to fall back to a jpeg if the browser does not support js, wasm, or web workers? With a <picture> element, maybe?

I did some tests on my own server and found that for some reason it's quite fast when running on https, but super slow on insecure http. Not sure why that is; maybe the browser disallows something that's important here for insecure connections.

  • niutech 3 years ago

    Yes, you can use <picture> to fall back to JPEG/PNG/WebP.

    • cosarara 3 years ago

      In my test (https://cosarara.me/misc/jxljs/), it simply falls back to JPEG even if the jxl.js library is loaded (scroll down to "Image with fallback").

      • niutech 3 years ago

        This is because JXL.js doesn't support `<picture>` tags; there is no rationale for WASM-decoding JXL when you can provide a fallback in natively decoded JPEG/PNG/WebP.

        • cosarara 3 years ago

          The rationale would be that it might be more efficient to WASM-decode JXL than to download a JPEG, especially in pages with lots of images, but I would want a fallback if the browser does not support WASM.
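
          Something along these lines is what I have in mind (just a sketch; the data-jxl-src attribute is hypothetical, not part of jxl.js):

          ```ts
          // Keep the plain JPEG as the default src and only swap in the .jxl URL
          // when the browser can actually run the WASM decoder in a worker.
          const canRunDecoder =
            typeof WebAssembly === "object" &&
            typeof WebAssembly.instantiate === "function" &&
            typeof Worker === "function";

          if (canRunDecoder) {
            document.querySelectorAll<HTMLImageElement>("img[data-jxl-src]").forEach((img) => {
              img.src = img.dataset.jxlSrc as string; // hypothetical attribute carrying the .jxl URL
            });
          }
          ```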

Retr0id 3 years ago

> Then the JPEG XL image data is transcoded into JPEG image

Why not PNG?

  • samwillis 3 years ago

    JPEG is significantly quicker to encode/decode than PNG.

    Plus I believe there may be a particularly fast route from JPEG XL -> JPEG, as you can go JPEG -> JPEG XL without having to decode to pixels. That then lets you take advantage of the browser/hardware JPEG decoder.

    • Retr0id 3 years ago

      That fast route would only be possible for a subset of JXLs, though.

      With control over codec parameters (turning down/off zlib compression), PNG can definitely encode/decode faster (in software). There might be a gap in the market for such an implementation, though; perhaps I should make it.

      • vanderZwan 3 years ago

        I'm guessing the author suspects the most common early adoption of JXL will be losslessly re-encoded JPEGs. In other words: the very subset you mention, right?

        Having said that, the author seems very open to suggestions and contributions - I suggested using canvas instead of a data URL-based PNG a week ago and within a day they had implemented a version of it.

        edit: the other reason might be having smaller image sizes in the cache.
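
        For reference, the canvas route is roughly this (a minimal sketch assuming the decoder worker hands back raw RGBA pixels and dimensions; not the exact jxl.js code):

        ```ts
        // Paint decoded RGBA pixels straight into a <canvas> instead of
        // re-encoding them and going through a data URL.
        function paintDecoded(pixels: Uint8ClampedArray, width: number, height: number): HTMLCanvasElement {
          const canvas = document.createElement("canvas");
          canvas.width = width;
          canvas.height = height;
          const ctx = canvas.getContext("2d");
          if (!ctx) throw new Error("2D canvas context unavailable");
          // pixels.length must equal width * height * 4 (RGBA)
          ctx.putImageData(new ImageData(pixels, width, height), 0, 0);
          return canvas;
        }
        ```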

    • pornel 3 years ago

      Uncompressed PNG can be quick to encode. There are also dedicated encoders like fpng that are an order of magnitude faster than zlib.

  • DaleCurtis 3 years ago

    I think you can skip the transcode step entirely in favor of transmux with BMP too.
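
    Roughly the idea (just a sketch, not something jxl.js does today): wrap the decoded RGBA pixels in an uncompressed 32-bit BMP header so the browser can display them without a second encode step.

    ```ts
    // Build an uncompressed 32 bpp BMP around raw RGBA pixels.
    // Note: BMP stores channels as BGRA, and alpha handling varies by browser.
    function rgbaToBmp(pixels: Uint8ClampedArray, width: number, height: number): Blob {
      const headerSize = 14 + 40;            // BITMAPFILEHEADER + BITMAPINFOHEADER
      const imageSize = width * height * 4;  // 32 bpp, so no row padding is needed
      const buf = new ArrayBuffer(headerSize + imageSize);
      const view = new DataView(buf);

      // BITMAPFILEHEADER
      view.setUint8(0, 0x42);                           // 'B'
      view.setUint8(1, 0x4d);                           // 'M'
      view.setUint32(2, headerSize + imageSize, true);  // total file size
      view.setUint32(10, headerSize, true);             // offset to pixel data

      // BITMAPINFOHEADER
      view.setUint32(14, 40, true);        // info header size
      view.setInt32(18, width, true);
      view.setInt32(22, -height, true);    // negative height = top-down row order
      view.setUint16(26, 1, true);         // colour planes
      view.setUint16(28, 32, true);        // bits per pixel
      view.setUint32(30, 0, true);         // BI_RGB (no compression)
      view.setUint32(34, imageSize, true);

      // Pixel data, converted from RGBA to BGRA
      const out = new Uint8Array(buf, headerSize);
      for (let i = 0; i < imageSize; i += 4) {
        out[i] = pixels[i + 2];
        out[i + 1] = pixels[i + 1];
        out[i + 2] = pixels[i];
        out[i + 3] = pixels[i + 3];
      }
      return new Blob([buf], { type: "image/bmp" });
    }
    ```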

  • niutech 3 years ago

    The main reason is that JPEG takes less space in the cache.

jbverschoor 3 years ago

Great, but it won't work on iPhones with Lockdown Mode enabled.

  • niutech 3 years ago

    Lockdown mode disables WebAssembly.

    • jbverschoor 3 years ago

      Exactly. Not sure if that's actually a security feature, or just something to pre-empt AppStore-less apps.