The WASM is only 311kb, so for an image-heavy site this can easily be offset by the savings on image sizes. It seems quite quick too; are there any benchmarks?
There isn't any source in the repository for the WASM; that's slightly worrying, as it's difficult to confirm the Apache License really applies. I assume you are using an existing JPEG XL decoding lib?
(Edit - Source here: https://github.com/GoogleChromeLabs/squoosh/tree/dev/codecs/...)
Are you doing any progressive decoding while it downloads? I believe that is one of the features of JPEG XL.
For anyone wanting an overview of JPEG XL, there is a good one here: https://cloudinary.com/blog/how_jpeg_xl_compares_to_other_im...
Aside: This is a great example of how awesome Mutation Observers are, they are the foundation of so many nice new "minimal" front end tools like HTMX and Alpine.js.
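For anyone who hasn't played with them: the core hook for this kind of tool is just a MutationObserver watching for new image elements. A rough sketch of the idea (illustrative only, not jxl.js's actual code; the worker filename and message shape are made up):

```js
// Illustrative sketch, not jxl.js's actual implementation: watch the DOM for
// newly added <img> elements pointing at .jxl files and hand their bytes to a
// decoder running in a Web Worker (worker filename is hypothetical).
const decoder = new Worker('jxl-decoder-worker.js');

function queueForDecode(img) {
  fetch(img.src)
    .then(res => res.arrayBuffer())
    .then(buf => decoder.postMessage({ src: img.src, data: buf }, [buf]));
}

const observer = new MutationObserver(records => {
  for (const record of records) {
    for (const node of record.addedNodes) {
      if (node.nodeName === 'IMG' && /\.jxl(\?.*)?$/i.test(node.src)) {
        queueForDecode(node);
      }
    }
  }
});

observer.observe(document.documentElement, { childList: true, subtree: true });
```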
> this can easily be offset by the savings on image sizes
Maybe, but I wouldn't say "easily". Only if you're considering bandwidth in isolation as your only bottleneck & neglecting client-side processing. The combination of it being WASM & being run in a Web Worker should mitigate that a lot, but it's still going to be a non-zero cost, particularly in terms of RAM usage for any reasonably "large" image-perf problem being solved.
On top of that, the 311kb download & execution/IO processing, while small, is an up-front blocking perf cost, whereas loading images directly will at least be parallelised entirely (small images especially rendering immediately, before the WASM would've even downloaded).
It's a really cool project that'd definitely improve perf. for many cases. I'd just be reluctant to say "easily offset".
The other factor here is the overhead of abstraction, maintenance, and likely developer mistakes due to the added code complexity of your new client-side decoding pipeline.
Such a pity that host-origin isolation has fucked over the would-be ways to make this good. We could have a couple of CDN modules that get loaded/forked in (something wasm is increasingly good at), or even shoved into an invisible processor iframe, but because we are freaked out about privacy (rightly), everyone's gonna re-load & re-instantiate the necessary modules independently.
I always thought the vision of the web as cooperating, participating pieces of code was so awesome, and was going to lead to vast efficiency savings. Tons of modules would just be available, ambiently running. We've spent so long making modules on the web possible at all. But just before prime time, we cancel every single ounce of win & savings by imposing huge host-origin isolation domains, all to avoid letting a host know a user already had some code cached. Because that indeed could be tracked. I get it, it makes sense, but my god, what a last-minute rug pull on such a long saga of the industry & my own maturation.
To answer my own question, they have extracted the decoder from the Google Chrome Labs "Squoosh" project [0]; all the sources are here:
https://github.com/GoogleChromeLabs/squoosh/tree/dev/codecs/...
It's under an Apache license, and so the correct license applies.
0: https://squoosh.app
> all the sources are here:
https://github.com/GoogleChromeLabs/squoosh/tree/dev/codecs/...
Did you check? There aren't really any sources there, either.
Having already spent some time looking into this to track down what, exactly, you'd need to patch if you wanted to make modifications, it looks like it's using the decoder from the JPEG XL reference implementation, libjxl (which is available under a BSD license):
<https://github.com/libjxl/libjxl>
I don't know why, but the design of Squoosh's page is very appealing to me. Must be some design witchcraft :)
Maybe on desktop, but on mobile I don't think the benefit would be that clear-cut, because mobile CPUs are slow and heavy computation drains the battery pretty fast.
> The WASM is only 311kb
That's gargantuan.
I'd agree with that if it were about a JPEG decoder - JPEG is a simple algorithm. But JPEG XL is really quite complex. Like, really complex. It's better described as a melting pot of lots of different, individually impressive contributions to image compression, accumulated over years of open-source iteration, than as a succinctly describable algorithm. It would be better to call it a hybrid of the existing flif/fuif/lodepng/pik/brunsli/guetzli/gritibanzli/butteraugli/gralic projects, and it's also using libhwy for cross-platform SIMD. 311 kb to run that via JS is amazing, imo.
Not if you have 10mb of images on the page... at that point it makes a significant saving.
Yes, it will save bandwidth in many cases, but that doesn't change the fact that for an image decoder 300KB is huge.
If you are going to leverage vectorization for performance (and WebAssembly does support 128-bit vector instructions), it's pretty much inevitable that binary size will increase.
Which is 1-2 full-sized photos from a 30Mpx camera. More if it’s just thumbnails.
The libjxl project members currently have a 174 kB wasm decoder.
Making it reflect only libjxl-tiny functionality should bring it to 25-50 kB if my guesswork is correct (libjxl-tiny is about 10x smaller than full-blown libjxl).
An awesome variation would be to use this inside PDF files instead of web pages.
Some PDF newspaper subscriptions (often sent by email) have very poor quality in the contained photos. I suppose the intent is to keep the already big file size (multiple MB) down. Having the newspaper photos in JPEG XL - or even AVIF - would be a great upgrade.
PS: And no, I don't think the poor-quality photos are a deliberate, forced "VHS-style Macrovision" degradation to minimize financial losses on easy-to-copy content - the same articles are also partially available online.
PDF 1.5 supports the JPXDecode filter, based on the JPEG 2000 standard.
I use jpeg-2k in a PDF for encoding comics; it's better than any pre-JXL compressor I tried for such images; HEIC and webp both blur out the stippling and hatching at much higher bitrates than j2k. JXL soundly defeats it though and is much easier to use (I have to use different quantization settings for greyscale and colour images with openJPEG, but the default "lossy" setting with libjxl never creates a larger file than j2k).
So for me, at least, I'd like jxl to make it into PDF.
But no JPEG XL, AVIF or HEIF! I think AVIF would be optimally useful.
Is it possible to fall back to a jpeg if the browser does not support js, wasm, or web workers? With a <picture> element, maybe?
I did some tests on my own server and found that for some reason it's quite fast when running on https, but super slow on insecure http. Not sure why that is, maybe the browser disallows something that's important here for insecure connections.
Yes, you can use <picture> to fall back to JPEG/PNG/WebP.
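For reference, the standard markup looks like this (nothing jxl.js-specific; a browser that doesn't recognise the `image/jxl` type just skips that source):

```html
<picture>
  <!-- Browsers with native JPEG XL support pick this source -->
  <source srcset="photo.jxl" type="image/jxl">
  <!-- Everyone else gets the plain JPEG -->
  <img src="photo.jpg" alt="Photo">
</picture>
```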
https://cosarara.me/misc/jxljs/ - in my test, it simply falls back to JPEG even if the jxl.js library is loaded (scroll down to "Image with fallback").
This is because JXL.js doesn't support `<picture>` tags - there is no rationale for WASM-decoding JXL when you can provide a fallback in natively-decoded JPEG/PNG/WebP.
The rationale would be that it might be more efficient to WASM-decode JXL than to download a JPEG, especially in pages with lots of images, but I would want a fallback if the browser does not support WASM.
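You could gate the whole thing on feature detection; a rough sketch of the idea (not something jxl.js does out of the box, and the script name is just the one from this discussion):

```js
// Sketch: only pull in the WASM decoder when the browser actually supports
// WebAssembly and Workers; otherwise leave the native <img> fallbacks alone.
const canDecodeInWasm =
  typeof WebAssembly === 'object' &&
  typeof WebAssembly.instantiate === 'function' &&
  typeof Worker === 'function';

if (canDecodeInWasm) {
  const script = document.createElement('script');
  script.src = 'jxl.js';
  document.head.appendChild(script);
}
```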
> Then the JPEG XL image data is transcoded into JPEG image
Why not PNG?
JPEG is significantly quicker to encode/decode than PNG.
Plus I believe there may be a particularly fast route from JPEG XL -> JPEG, as you can go JPEG -> JPEG XL without having to decode to pixels. That then lets you take advantage of the browser/hardware jpeg decoder.
That fast route would only be possible for a subset of JXLs, though.
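For reference, that subset is what you get from libjxl's lossless JPEG recompression, roughly like this (treating it as a sketch - I haven't double-checked the defaults across libjxl versions):

```
# Recompress an existing JPEG into JPEG XL without decoding to pixels
cjxl photo.jpg photo.jxl

# Later, reconstruct the original JPEG bitstream from that .jxl
djxl photo.jxl photo-roundtrip.jpg
```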
With control over codec parameters (turning down/off zlib compression), PNG can definitely encode/decode faster (in software). There might be a gap in the market for such an implementation, though; perhaps I should make it.
I'm guessing the author suspects the most common early adoption of JXL will be losslessly re-encoded JPEGs. In other words: the very subset you mention, right?
Having said that, the author seems very open to suggestions and contributions - I suggested using canvas instead of a data URL-based PNG a week ago and within a day they had implemented a version of it.
edit: the other reason might be having smaller image sizes in the cache.
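For anyone curious, the canvas approach is essentially this (a sketch of the general technique, not the author's actual patch):

```js
// General technique, not the actual jxl.js change: once the worker hands back
// raw RGBA pixels, paint them into a canvas instead of building a PNG data URL.
function showDecodedPixels(imgEl, pixels, width, height) {
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext('2d');
  // `pixels` is assumed to be a Uint8ClampedArray of RGBA values
  ctx.putImageData(new ImageData(pixels, width, height), 0, 0);
  imgEl.replaceWith(canvas); // or canvas.toBlob(...) to keep an <img>
}
```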
Uncompressed PNG can be quick to encode. There are also dedicated encoders like fpng that are an order of magnitude faster than zlib.
I think you can skip the transcode step entirely in favor of transmuxing to BMP, too.
The main reason is that JPEG takes less space in the cache.
Very novel implementation I would say!
Is there any information on how the jxl_dec.js/wasm files are created? Is it originally a C++ decoder that was compiled to wasm with LLVM?
See the Makefile in https://github.com/GoogleChromeLabs/squoosh/tree/dev/codecs/...
Great, but it won't work on iPhones with Lockdown Mode enabled.
Lockdown mode disables WebAssembly.
Exactly. Not sure if that's actually a security feature, or just something to preempt App Store-less apps.