Guzba 2 years ago

I recently helped work on a new open source JPEG decoder in Nim. (Over here on GitHub: https://github.com/treeform/pixie/blob/master/src/pixie/file...)

This video was extremely helpful to understand the "why" of all the things the spec was trying to explain. It made a huge difference in us being able to get things working.

We talk a bit about JPEG and actually writing our decoder in Nim here: https://www.youtube.com/watch?v=vYwD7OynFcg

Overall, our conclusion is that JPEG has some extremely cool and really smart ideas for compressing images, but the binary file format itself has some very painful parts (progressive mode and restart markers, to name a couple).

  • Sesse__ 2 years ago

    The amazing thing is how well JPEG performs for something that is pretty simple and worked (although very slowly) on 1992 hardware. (I don't mind restart markers, BTW, but stuffing definitely was a mistake.) Look at the state of the art in video codecs in 1992 versus today, then consider that we still make new image formats that can beat JPEG only on PSNR (not perceived quality), or in very narrow niches like super-low bit rates. As the quote goes, “it's like alien technology from the future”.
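
    (For anyone unfamiliar with the stuffing complaint: in JPEG's entropy-coded stream, every 0xFF data byte is followed by a stuffed 0x00 so it can't be confused with a marker, and decoders have to strip those back out. A minimal Python sketch of the decoder side; the function name and the lack of marker handling are my own simplifications:)

```python
def unstuff(entropy_coded: bytes) -> bytes:
    """Remove JPEG byte stuffing: every 0xFF data byte in the
    entropy-coded stream is followed by a stuffed 0x00 so it can't
    be mistaken for a marker. Minimal sketch, no marker handling."""
    out = bytearray()
    i = 0
    while i < len(entropy_coded):
        b = entropy_coded[i]
        out.append(b)
        if b == 0xFF and i + 1 < len(entropy_coded) and entropy_coded[i + 1] == 0x00:
            i += 1  # skip the stuffed zero byte
        # a 0xFF followed by anything else would be a real marker
        # (e.g. a restart marker); a real decoder branches here
        i += 1
    return bytes(out)

print(unstuff(bytes([0x12, 0xFF, 0x00, 0x34])).hex())  # → "12ff34"
```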

    JPEG XL appears to finally be getting there, with a meaningful improvement. But still nothing like 3x. Also perhaps AVIF, but current encoders have problems with rough texture on high bitrates.

    • TacticalCoder 2 years ago

      It's the same with the even older CD audio format. Sure, it's lossless and uncompressed, but stereo 16-bit 44.1 kHz was a stroke of genius. In 1980.

      Some may argue for lossy audio formats (as if the nineties called and wanted their 20 GB HDD back) while others may argue for SACD, 24-bit 96 kHz and whatnot, but the fact remains: there are engineers out there who came up with the CD audio format in 1980, and it's still in use to this day.

      I legally and bit-perfectly rip my CDs to FLAC files, and it still boggles my mind that it's basically the format from 1980 (FLAC is lossless, so you can re-burn an identical CD, which you can then, if you fancy, re-rip to the exact same bit-perfect WAV or FLAC files; rinse & repeat as many times as you want).

      Speakers definitely got better. DACs are ubiquitous now. Amps probably got better too. But 16-bit 44.1 kHz stereo has lived on for 42 years (40 commercially). Soon half a century.

      "It's like alien technology from the future" indeed.

      • zRedShift 2 years ago

        FLAC compression, although lossless, is not nearly as straightforward as raw PCM/WAV/AIFF. It uses LPC (linear predictive coding) followed by the usual residual entropy/RLE coding (but without a quantization stage, since it's lossless). There's also an optimization for stereo input where both channels are very similar: the encoder losslessly converts them to a mid channel and a side channel, where the side-channel values are very small and lend themselves well to RLE/entropy coding.
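
        To make the mid/side trick concrete, here is a minimal Python sketch of a lossless mid/side transform (function names are made up; real FLAC works on its own frame layout and bit depths). The parity that the mid shift drops is always recoverable from the side channel, so the round trip is exact:

```python
def ms_encode(left, right):
    """Lossless mid/side transform: mid keeps the (floored) average,
    side the difference. Sketch, not FLAC's actual frame format."""
    mid = [(l + r) >> 1 for l, r in zip(left, right)]
    side = [l - r for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    """Invert ms_encode exactly: (l + r) has the same parity as
    (l - r), so the bit lost by the >> 1 lives on in side."""
    left, right = [], []
    for m, s in zip(mid, side):
        total = (m << 1) | (s & 1)   # reconstruct l + r exactly
        left.append((total + s) >> 1)
        right.append((total - s) >> 1)
    return left, right

print(ms_decode(*ms_encode([-5, 1000], [2, -1000])))  # → ([-5, 1000], [2, -1000])
```

The side values are near zero whenever the channels are similar, which is exactly what makes them cheap to entropy-code afterwards.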

        As far as the xiph.org audio codecs go, however, Opus is the real magnum opus (pun obviously intended). SILK (the LPC part, donated by Skype) + CELT + a DNN (used since libopus v1.3 to detect whether the input is speech or music and tune the two codecs accordingly). It's quite complex, and I feel like some of its parts are only really understood by the original authors (or at least were when they wrote them a decade and a half ago). The SILK encoder in particular exists only as the donated implementation plus the high-level details in its RFC, whereas CELT has a plethora of documentation/articles and an independent encoder re-implementation in ffmpeg. Reverse engineering the SILK encoder code and making a video similar to the one in the OP (or at least an article/blog post) could be a fun activity.

        • Sesse__ 2 years ago

          Opus feels like it solved the problem of audio compression. Even if someone came out with a codec that gave the same quality at half the bitrate, I don't think I would care much; I just want Opus in all my devices, everywhere. :-) It's good enough along pretty much all axes, except, of course, universal support.

          • zRedShift 2 years ago

            If we're talking about wish-lists, encoding performance on low-power IoT devices, maybe? It has decent SIMD support on ARM/x86 and tweakable complexity settings, but if your device is weaker than an ESP32, you'll be hard-pressed to encode audio in real time, even on the lowest complexity.

            The new kids on the block in the speech encoding/real-time communications space (Google Lyra, Microsoft Satin) have fancy AI models and promise decent quality at ultra-low bitrates (3-6 kbps), but they don't look like they're any easier to run on microcontrollers.

            • lifthrasiir 2 years ago

              How about Codec 2 [1]? I think it delivers comparable performance to Lyra etc. while not using ML, and it has multiple ESP32 ports already. It might even be usable on less powerful devices.

              [1] https://www.rowetel.com/?page_id=452

      • vanderZwan 2 years ago

        > as it the nineties called and wanted their 20 GB HDD back

        I think that was the early 2000s. The nineties were the era of the CD-ROM storing huge games that did not fit on your HDD.

vanderZwan 2 years ago

Out of curiosity: why did they decide to go with a zig-zag pattern instead of grouping the same frequencies of each block together? That way you'd end up with even longer runs of zeros.

edit: is that perhaps how progressive jpegs work and is that why they typically compress better?
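
For context, JPEG's zig-zag scan walks each 8x8 coefficient block along anti-diagonals, ordering coefficients roughly from low to high frequency so the trailing zeros cluster into long runs. A small Python sketch (not taken from any particular codec):

```python
def zigzag_order(n=8):
    """Order the coordinates of an n x n DCT block by anti-diagonals,
    alternating direction on each diagonal: JPEG's zig-zag scan.
    Low-frequency coefficients come first, so the high-frequency
    zeros end up in one long run at the tail."""
    order = []
    for d in range(2 * n - 1):
        diag = [(i, d - i) for i in range(n) if 0 <= d - i < n]
        if d % 2 == 0:
            diag.reverse()  # even diagonals run bottom-left to top-right
        order.extend(diag)
    return order

print(zigzag_order()[:6])  # → [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```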

GeertJohan 2 years ago

The QOI format is relatively easy to understand; it's explained in a single-page PDF. I found it very interesting, knowing nothing about image encoding/compression.

https://qoiformat.org/

  • vanderZwan 2 years ago

    It's great but it's an entirely different beast, being both lossless and designed to be fast and minimal above everything else. It works by minimizing the overhead of encoding deltas and run-lengths.
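
    As a toy illustration of the run-length half of that idea (this is not the actual QOI bitstream, whose opcodes and byte layout are defined in its spec; just the core trick):

```python
def rle_pixels(pixels):
    """Emit a run length whenever a pixel repeats the previous one,
    and the raw pixel otherwise. Illustrative sketch only; QOI also
    has delta and indexed-color ops on top of this."""
    out, run, prev = [], 0, None
    for px in pixels:
        if px == prev:
            run += 1
        else:
            if run:
                out.append(("RUN", run))
                run = 0
            out.append(("PIXEL", px))
            prev = px
    if run:
        out.append(("RUN", run))
    return out

print(rle_pixels([7, 7, 7, 9]))  # → [('PIXEL', 7), ('RUN', 2), ('PIXEL', 9)]
```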

Traubenfuchs 2 years ago

I was disheartened right at the beginning, at the color space conversion: why is there no green chrominance?

Is the chrominance quantization done for both red and blue chrominance?

  • vanderZwan 2 years ago

    The calculated red and blue chrominance values can be both positive and negative. The latter may sound a bit ridiculous in isolation, but combined with luminance it works out: green is whatever is left after you subtract the red and blue contributions from the luminance.
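
    Concretely, with the JFIF/BT.601-style constants (a Python sketch; the coefficients are the commonly quoted approximations, and real codecs clamp to 0-255 and use fixed-point math):

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range RGB -> YCbCr, JFIF style (chroma offset by 128)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 + 0.564 * (b - y)   # blue-difference chroma
    cr = 128 + 0.713 * (r - y)   # red-difference chroma
    return y, cb, cr

def green_from_ycbcr(y, cb, cr):
    """Recover R and B from the chroma differences, then solve the
    luminance equation Y = 0.299 R + 0.587 G + 0.114 B for G."""
    b = y + (cb - 128) / 0.564
    r = y + (cr - 128) / 0.713
    return (y - 0.299 * r - 0.114 * b) / 0.587

print(round(green_from_ycbcr(*rgb_to_ycbcr(10, 200, 50)), 6))  # → 200.0
```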

  • astrange 2 years ago

    You don’t need it. Y has all the info you need.

diogenes_of_ak 2 years ago

Oh! Hey!

I just did something like this! I was messing around and used SVD to write a compressor. Do I dare share my feeble GitHub here lol

  • gabsens 2 years ago

    Sounds similar to performing PCA and keeping only the top eigenvectors
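
    For the curious, the SVD version of that idea fits in a few lines (a Python sketch, assuming numpy is available and the image is a 2-D grayscale array):

```python
import numpy as np

def svd_compress(img, k):
    """Keep only the top-k singular triplets of a grayscale image:
    storing u[:, :k], s[:k], vt[:k, :] instead of the full matrix
    is the compression. Sketch of the SVD/PCA-style idea above."""
    u, s, vt = np.linalg.svd(np.asarray(img, dtype=float), full_matrices=False)
    return u[:, :k], s[:k], vt[:k, :]

def svd_decompress(u, s, vt):
    """Rebuild the best rank-k approximation from the stored factors."""
    return u @ np.diag(s) @ vt

# a rank-1 "image" survives k=1 compression with no loss at all
img = np.outer(np.arange(1, 5), np.arange(1, 7)).astype(float)
approx = svd_decompress(*svd_compress(img, 1))
print(np.allclose(img, approx))  # → True
```

For a real photo you'd pick k well below min(width, height) and accept the (Frobenius-optimal) error; the block structure of JPEG's DCT tends to beat whole-image SVD in practice, but it's a fun comparison.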

    • IIAOPSW 2 years ago

      Any sufficiently good compression algorithm is inherently also a dimensionality reduction algorithm, reducing the representation down to as few dimensions as needed. This simple fact is suggestive of an origin to a number of cognitive abilities. Out of a system which evolves only for condensed representation, the ability to discern patterns in the represented things emerges for free.

  • texaslonghorn5 2 years ago

    Please do!

    • diogenes_of_ak 2 years ago

      Debating it - my GitHub account is basically my name… so yah… debating the subsequent loss of anonymity

      • unnouinceput 2 years ago

        Then create another one, and share that one. It's not like there is a limit of how many GitHub accounts you can have.