constexpr a year ago

Hello! I made this. People are talking about not wanting pictures to be initially blurry before they finish loading. I understand that too, and I'm not sure how I feel about it myself (I could go either way).

But for what it's worth, I actually made this for another use case: I have a grid of images that I want to be able to zoom really far out. It'd be nice to show something better than the average color when you do this, but it would be too expensive to fetch a lot of really small images all at once. ThumbHash is my way of showing something more accurate than a solid color but without the performance cost of fetching an image. In this scenario you'd only ever see the ThumbHash. You would have to zoom back in to see the full image.

  • franciscop a year ago

    I have a bigger budget and would love higher quality. Would it be easy to adapt the code to output 50-100 byte strings in a similar fashion (2x-4x), or would it be a complete rewrite? I read the JS code but unfortunately I'm really unfamiliar with low-level byte manipulation and could not make heads or tails of it.

    • Scaevolus a year ago

      Try very small webp images. 32x32 quality 5 webp images are around 100-150 bytes.

      • franciscop a year ago

        Given the difference in quality seen, I'd guess ThumbHash at 50 bytes would be similar quality to webp at 100-150 bytes: so basically 1/3rd of the size, or 3x the quality for "the same price". The webp examples are way worse quality, and almost 2x the size.

  • efsavage a year ago

    Nice job, a material improvement over the aforementioned BlurHash.

    A nice CSS transition for when the image loads would be the cherry on top ;)

  • fiddlerwoaroof a year ago

    This is cool. Do you happen to know if the thumbhash string has other uses? Perhaps grouping images by similarity or something?

    • bityard a year ago

      That's a whole field of study on its own, called perceptual hashing. I surveyed these a while back for amusement and the TL;DR is that all immediately obvious approaches tend to have particularly bad corner cases.

      https://en.wikipedia.org/wiki/Perceptual_hashing

    • mholt a year ago

      I just wrote a quick function to compare visual similarity using the thumbhash, and simply adding up the difference at each byte position seems to work really well! (As long as the images are the same aspect ratio. I want to do more tests.)
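      Something like this minimal sketch of the byte-distance idea (a hypothetical helper, not part of the ThumbHash library; assumes both hashes have the same length):

```javascript
// Sum the absolute difference at each byte position of two ThumbHash
// payloads. Lower scores mean more visually similar images.
function hashDistance(a, b) {
  if (a.length !== b.length) throw new Error("hashes must be the same length");
  let total = 0;
  for (let i = 0; i < a.length; i++) {
    total += Math.abs(a[i] - b[i]);
  }
  return total;
}

// Identical hashes score 0; more divergent bytes score higher.
const h1 = new Uint8Array([147, 25, 119, 13]);
const h2 = new Uint8Array([147, 30, 119, 10]);
```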

    • 8n4vidtmkvmk a year ago

      I was thinking about that too. Can't answer the question, but I did come across this just the other day: https://github.com/cloudinary/ssimulacra2 Supposedly good for comparing image similarity. Might depend on your use case; I think it's geared towards image quality more so than finding similar photos.

  • Lorin a year ago

    Are you Evan? Thanks so much for your work in open source - your GitHub avatar is easily recognized! :)

    • jchw a year ago

      No kidding. Just to consider a single project, my total time saved using esbuild instead of pure JS bundlers could probably be measured in wall-clock months, and I really like the way it's written: simple, approachable, handwritten lexer/parser, etc. The way that esbuild organizes its lexer state has infected my handrolled lexers, too.

  • javier2 a year ago

    It is very cool. I want to use it for exactly this: as a placeholder in a large grid of small images!

jiggawatts a year ago

Blurring images or doing any sort of maths on the RGB values without first converting from the source-image gamma curve to "linear light" is wrong. Ideally, any such generated image should match the colour space of the image it is replacing. E.g.: sRGB should be used as the placeholder for sRGB, Display P3 for Display P3, etc...

Without these features, some images will have noticeable brightness or hue shifts. Shown side-by-side like in the demo page this is not easy to see, but when replaced in the same spot it will result in a sudden change. Since the whole point of this format is to replace images temporarily, then ideally this should be corrected.
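A minimal sketch of the correction being described, using the standard sRGB transfer functions (the averaging helper is just illustrative):

```javascript
// Convert an sRGB-encoded channel value (0..1) to linear light.
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Convert a linear-light value (0..1) back to sRGB encoding.
function linearToSrgb(c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

// Averaging in linear light gives a brighter (and perceptually correct)
// result than naively averaging the gamma-encoded values.
function averageLinear(a, b) {
  return linearToSrgb((srgbToLinear(a) + srgbToLinear(b)) / 2);
}
```

For example, averaging pure black and pure white this way yields roughly 0.74 in sRGB, not 0.5, which is why naive blurring visibly darkens highlights.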

As some people have said, developers often make things work for "their machine". Their machine on the "fast LAN", set to "en-US", and for their monitor and web browser combination. Most developers use SDR sRGB and are blithely unaware that all iDevices (for example) use HDR Display P3 with different RGB primaries and gamma curves.

A hilarious example of this is seeing Microsoft use Macs to design UIs for Windows which then look too light because taking the same image file across to a PC shifts the brightness curve. Oops.

  • eyelidlessness a year ago

    > A hilarious example of this is seeing Microsoft use Macs to design UIs for Windows which then look too light because taking the same image file across to a PC shifts the brightness curve. Oops.

    (Showing my age I’m sure) I distinctly remember how frustrating this was in the bad old days before widespread browser support for PNG [with alpha channel]. IIRC, that was typically caused by differences in the default white point. I could’ve sworn at some point Apple relented on that, eliminating most of the cross platform problems of the time. But then, everything was converging on sRGB.

    • duskwuff a year ago

      I think you're thinking about gamma correction. Before 2009, Apple used a display gamma of 1.8, so images displayed differently than they did on Windows systems (which used a gamma of 2.2).

  • astrange a year ago

    > Most developers use SDR sRGB and are blithely unaware that all iDevices (for example) use HDR Display P3 with different RGB primaries and gamma curves.

    This shouldn't matter as long as everything is tagged with the real colorspace; it'll get converted.

    If you forget to do that you can have issues like forgetting to clip floating point values to 1.0 and then you get HDR super-white when you expected 100% white.

  • btown a year ago

    Do any of the prior-art approaches, or any others, do this correctly?

    • telios a year ago

      BlurHash kind of does, but also doesn't, as I believe the canvas image API respects the colorspace of the loaded image; you can see this by generating a blurhash from a (non-sRGB) image and comparing it between the browser and server implementations.

munro a year ago

I hate these blurry image thumbnails; I much prefer some sort of hole, and just waiting for a better thumbnail (look at YouTube for this, or basically any site). I'd much rather see engineers spending more time making the thumbnails load faster (improving their backend throughput, precaching thumbnails, better compression, etc). The blurry thumbnails have 2 issues 1) trick person into thinking they're loaded, especially if there's a flicker before the blurry thumbnails are displayed!!! so then the brain has to double back and look at the new image. 2) have a meaning that content is blocked from viewing

  • crazygringo a year ago

    I think they're great, and it's not much different from progressive image loading that's been around for decades. Images going from blurry to sharp was a big thing back in the 1990's over dial-up AOL and MSN.

    > I'd much rather see engineers spending more time making the thumbnails load faster

    Generally it's a client-side bandwidth/latency issue, not something on the server. Think particularly on mobile and congested wi-fi, or just local bandwidth saturation.

    > The blurry thumbnails have 2 issues 1) trick person into thinking they're loaded

    I've never found myself thinking that -- a blurry-gradient image seems to be generally understood as "loading". Which goes all the way back to the 90's.

    > 2) have a meaning that content is blocked from viewing

    In that case there's almost always a message on top ("you must subscribe"), or at least a "locked" icon or something.

    These blurry images are designed for use in photos that accompany an article, grids of product images, etc. I don't think there's generally any confusion as to what's going on, except "the photo hasn't loaded yet", which it hasn't. I find they work great.

    • deadbeeves a year ago

      >it's not much different from progressive image loading that's been around for decades

      Progressive images suck. PNG's implementation is particularly awful, as you have to use increasing amounts of brainpower to tell whether it has finished loading or not.

  • layer8 a year ago

    What I find more of an issue cognitively is that they entice you to try to discern their contents, but of course they are too blurry to really see anything, and they trigger the subliminal feeling that you forgot to put your glasses on. So they attract your attention while typically not providing much useful information yet. A non-distracting neutral placeholder is generally preferable, IMO. Even more preferable would be for images to load instantly, as many websites somehow manage to do.

  • Eduard a year ago

    > The blurry thumbnails have 2 issues 1) trick person into thinking they're loaded

    But is _not_ showing blurry thumbnails during image loading any better in that regard?

    - an empty area would give the false impression there isn't any image at all

    - a half-loaded image would give the false impression the image is supposed to be like that / cropped

    - if e.g. the image element doesn't have explicit width and height attributes, and its dimensions are derived from the image's intrinsic dimensions, there will be jarring layout shifts

    > 2) have a meaning that content is blocked from viewing

    For you maybe. And even if so, so what? Page context, users' familiarity with the page, and the full images eventually appearing will make sure this is at most a temporary and short-lived false belief.

  • Aachen a year ago

    When browsing DCIM via sshfs, I'd kill to use this instead of having to wait for it to read every 3 MB image and generate a thumb to show me. I don't have a problem with the current thumbs themselves; it's the status quo of waiting 30 seconds for a page of images to get thumbed that's so terrible. I'd probably love the improvement if it used ThumbHash server-side^W phone-side instead of small PNGs or whatever it does today.

  • imhoguy a year ago

    I think these issues can be solved by just rendering a spinner or "loading" text on top of the blurred image.

    • actionfromafar a year ago

      Maybe... but solving distraction with more distraction feels off somehow.

    • 8n4vidtmkvmk a year ago

      Definitely don't want 100 spinners over a grid of images.

jjcm a year ago

FWIW, this is Evan Wallace, cofounder of Figma and creator of ESBuild. The dude has an incredible brain for performant web code.

transitivebs a year ago

I open sourced a version of what Evan calls the "webp potato hash" awhile back: https://github.com/transitive-bullshit/lqip-modern

I generally prefer using webp to BlurHash or this version of ThumbHash because it's natively supported and decoded by browsers – as opposed to requiring custom decoding logic which will generally lock up the main thread.

  • kurtextrem a year ago

    Small heads-up, you might want to look at the PRs (I've opened the single open PR) :D

    Take a look here: https://github.com/ascorbic/unpic-placeholder. It was recently created by a Principal Engineer at Netlify and essentially server-side-renders BlurHash images, so that they don't tax the main thread. Maybe the same can be done for ThumbHash (I've opened an issue in that repo to discuss it)

  • eyelidlessness a year ago

    FWIW, it can almost certainly be moved off the main thread with OffscreenCanvas, but that has its own set of added complexities.

    Edit: word wires got crossed in my brain

emptysea a year ago

What I’ve seen Instagram and Slack do is create a really small JPEG and inline it in the API response. They then render it in the page and blur it while the full-size image loads.

Placeholder image ends up being about 1KB vs the handful of bytes here but it looks pretty nice

Everything is a trade off of course, if you’re looking to keep data size to a minimum then blurhash or thumbhash are the way to go

  • codetrotter a year ago

    Yep. I also remember a blog post from a few years ago about how fb removed some of the bytes in the JPEG thumbnails, because those bytes would always be the same in the thumbnails they created, so they kept those bytes separate and just added them back in on the client side before rendering the thumbnails

    • jws a year ago

      As you get to small image sizes, the relative size of the quantization table and the Huffman encoding table becomes significant. You can easily just pick a standard version of these, leave them out of the image (which is no longer a valid JPEG), and then put them in at the destination to make a valid JPEG again.

      I'm not finding a source, but I think I remember the tables being in the 1-2kB range.

      Cheap MJPEG USB cameras do this. Their streaming data is like JPEG but without the tables since it would take up too much bandwidth.
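      A rough sketch of that trick (all names hypothetical, and the shared prefix is abbreviated; real quantization/Huffman tables run to hundreds of bytes):

```javascript
// Every thumbnail shares the same JPEG header/table bytes, so the server
// strips that fixed prefix once and the client splices it back in before
// decoding. The bytes below are just the start of a JPEG (SOI + DQT
// marker) as a stand-in for the full shared header.
const SHARED_PREFIX = new Uint8Array([0xff, 0xd8, 0xff, 0xdb]);

// Server side: drop the shared bytes, keep only the per-image payload.
function stripPrefix(jpegBytes) {
  return jpegBytes.slice(SHARED_PREFIX.length);
}

// Client side: concatenate prefix + payload to get a valid JPEG again.
function restoreJpeg(payload) {
  const out = new Uint8Array(SHARED_PREFIX.length + payload.length);
  out.set(SHARED_PREFIX, 0);
  out.set(payload, SHARED_PREFIX.length);
  return out;
}
```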

    • degenerate a year ago

      I both love this level of optimization (for the novelty) and hate it (knowing all the work involved is being done with JS, on my machine, costing more CPU time than the bandwidth saved)

      • explaininjs a year ago

        The claim being that the act of concatenating some bytestrings together is a massive time sink?

        • 8n4vidtmkvmk a year ago

          Decode from base64 json, concat, convert to a Blob, createObjectURL and feed back into an img? That's about the simplest way I can think to do it. Either that or blip it onto a canvas. Or convert it back to base64 in the form of a data-URI.
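          A sketch of that decode path; the browser-only continuation is left as comments since it needs a DOM:

```javascript
// Unpack a base64 string (e.g. pulled out of a JSON payload) into raw
// bytes. atob yields a binary string; charCodeAt extracts each byte.
function base64ToBytes(b64) {
  const bin = atob(b64);
  const bytes = new Uint8Array(bin.length);
  for (let i = 0; i < bin.length; i++) bytes[i] = bin.charCodeAt(i);
  return bytes;
}

// Browser-only continuation (hypothetical names):
//   const blob = new Blob([base64ToBytes(payload.thumb)], { type: "image/png" });
//   img.src = URL.createObjectURL(blob);
```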

          • eyelidlessness a year ago

            I just recently did some work in a much less sophisticated area of this. For folks who want to efficiently juggle image data into and out of canvas, ImageBitmap is a little-known web API that does it very well. Unlike most web APIs dealing with binary data, you can use it synchronously, and it's vastly more efficient than the base64/Blob/object-URL juggling.

            • 8n4vidtmkvmk a year ago

              That was the 'blip it onto the canvas' approach :D I discovered that ImageBitmap type just recently too, it's been very handy for the little project I'm working on. My data still comes as base64 strings in JSON though.

              • eyelidlessness a year ago

                Yeah you’re gonna have to async once to get the binary thing canvas works well with, but if it’s even remotely stable you can just memoize shuttling ImageBitmap back in as needed. For my use case it was just literally redrawing the same background image things get drawn on top of with an internal data structure that translates to canvas API. Switching from async calls to ImageBitmap basically made the difference between “this flickering is making me question medical facts about myself” and “I can see it lag if I’m trying really hard”. Which isn’t great but it’s more than good enough for what I was trying to solve.

          • explaininjs a year ago

            No reason to transfer the image data in JSON, just read the response as an ArrayBuffer. Pass over that and write it into a dataURI with the needed modifications in a single go.

            • 8n4vidtmkvmk a year ago

              How are you going to do that? There's other things in the payload. If it was just the image in the response you could read it as binary, but remember we want lots of tiny images coupled with titles and other metadata. You can use something other than JSON but it'll need some kind of structure that needs parsing.

              • explaininjs a year ago

                I don't see how it would ever need to be more than a single pass over the data. Sure the logic might be mildly complicated, but nothing CPU intensive.

  • Doxin a year ago

    As far as I know just about any file format other than JPEG is better at this. If I recall correctly you basically want to go with GIF if your thumbnail has less than 255 pixels total.

    • emptysea a year ago

      Just tested this, and the JPEG ends up being 690 bytes while the GIF ends up being 2 KB

      • Doxin a year ago

        That can't possibly be right. What size image are you using here?

  • nonethewiser a year ago

    > Everything is a trade off of course, if you’re looking to keep data size to a minimum then blurhash or thumbhash are the way to go

    Isn’t that optimizing for load speed at the expense of data size?

    I mean the data size increase is probably trivial, but it’s the full image size + placeholder size and a fast load vs. full image size and a slower load.

martin-adams a year ago

This is nice, I really like it.

It reminds me of exploring the SVG loader using potrace to generate a silhouette outline of the image.

Here's a demo of what that's like:

https://twitter.com/Martin_Adams/status/918772434370748416?s...

  • Dwedit a year ago

    Crazy idea, combine that with the technique in the submission. Use the chroma as-is, and average the traced luma with the blurry thumbnail luma.

Scaevolus a year ago

Nice! This would probably do even better if the color space was linear-- it should reduce how much the highlights (e.g. the sun) are lost.

a-dub a year ago

ThumbHash? seems more like MicroJPEG maybe? hash implies some specific things about the inputs and outputs that are definitely not true!

cool idea to extract one piece of the DCTs and emit a tiny low-res image though!

  • stcg a year ago

    I agree. Calling it a hash function feels off.

    I do not expect a hash function's output to be used to 'reverse' to an approximation of the input (which is the primary use here). That being easy is even an unacceptable property for cryptographic hash functions, which to me are hash functions in the purest form.

    I would rather call this extreme lossy compression.

  • Aeolun a year ago

    I think this is very much a one way operation, which would imply some form of hash?

    • a-dub a year ago

      regular (non-secure) hash functions do two things: they compress (very lossily) and they make things that are near each other in their domain (inputs) map to things that are far apart in their codomain (outputs).

      the first condition is satisfied, but the second is definitely not!

      • mholt a year ago

        > they make things that are near each other in their domain (inputs) map to things that are far apart in their codomain (outputs).

        This is describing only a specific subset of hash functions. *Cryptographic* hash functions map inputs to outputs with high and uniform dispersion.

        So you are talking about cryptographic hashes, but different hash functions can have different properties.

        ThumbHash is absolutely a hash function, which is "any function that can be used to map data of arbitrary size to fixed-size values, though there are some hash functions that support variable length output." (https://en.wikipedia.org/wiki/Hash_function)

        • a-dub a year ago

          sure. but think about it this way: most real world data is actually highly structured. in the space of bits that aren't encrypted, the real stuff lives in a very, very small subspace and good hash functions seek to avoid collisions in their output.

      • shakna a year ago

        Perceptual hashing wouldn't seem to satisfy your second requirement there, either.

        • a-dub a year ago

          it does, it just works on estimates of percepts rather than bits.

          you can think of a perceptual hash as two functions. a perceptual function that maps differing collections of bits that appear the same or similar to the same bits, and then a traditional hash function to ensure that these intermediate values get shuffled.

nawgz a year ago

On the examples given, it definitely looks the best of all of them, and seems to be as small as or smaller than the alternatives

I'm not really sure I understand why all the others are presented in base83, though, while this uses binary/base64. Is it because EvanW is smarter than these people, or were they trying to exploit some characteristic of base83 I don't know about?

  • derefr a year ago

    Unlike b64-encoding, b83-encoding is nontrivial in CPU time (it's not just a shift-register + LUT), so you don't want to be doing it at runtime; you want to pre-bake base83 text versions of your previews, and then store them that way, as encoded text. Which means that BlurHash does that on the encode side, but more importantly, also expects that on the decode side. AFAIK none of the BlurHash decode implementations accept a raw binary; they only accept base83-encoded binary.

    While the individual space savings per preview is small, on the backend you might be storing literally millions/billions of such previews. And being forced to store pre-baked base83 text has a lot of storage overhead compared to being able to store raw binaries (e.g. Postgres BYTEAs) and then just-in-time b64-encoding them when you embed them into something.

  • tolmasky a year ago

    It appears that only BlurHash is using base83. I imagine the base83 encoding is being used in the table because that is what the library returns by default.

    As to why everyone else uses base64, I figure it's because base64 is what you'd have to inline in the URL since it's the only natively supported data URL encoding.

    In other words, in order to take advantage of the size savings of base83, you would have to send it in a data structure that was then decoded into base64 on the page before it could be placed into an image (or perhaps the binary itself). Whereas the size savings of the base64 can be had "with no extra work" since you can inline them directly into the src of the image (with the surrounding data:base64 boilerplate, etc.) Of course, there are other contexts where the base83 gives you size savings, such as how much space it takes up in your database, etc.

    • miohtama a year ago

      When encoded images are 20-30 bytes, a few bytes saved by the encoding seems irrelevant. But of course it depends on the context.

  • nosequel a year ago

    BlurHash looks not at all accurate in the examples given. Some are not even close. I wouldn't use it on that fact alone.

    • telios a year ago

      Oddly, it looks like colorspace issues, as I've had these issues intermittently with BlurHash.

attah_ a year ago

Cool tech, but I feel that for all even remotely modern connection types, placeholders like this are obsolete and do nothing but slow down showing the real thing.

  • NickBusey a year ago

    And this is why everything is slow and terrible: we developers use fast machines on fast connections, and assume everyone else does.

    Travel to some far flung parts of the world, and see if your hypothesis holds true.

    • tshaddox a year ago

      I'm not really sure that sentiment applies here. People on slow or unreliable connections probably aren't going to rejoice that they get to see blobs of color for a while until the full images load, all for the cost of waiting longer for the full images and loading more total data at the end of the ordeal.

      • nextaccountic a year ago

        > all for the cost of waiting longer for the full images and loading more total data at the end of the ordeal.

        Thumbhash is tiny (like, 28 bytes base64) and can be embedded in the html itself

        Replacing all your CSS class names with one- or two-letter names would save multiple times this amount even on tiny websites, but I don't see anybody doing that

        • tshaddox a year ago

          CSS class names probably don't benefit much unless you're on such low end hardware that you're worried about client gzip costs (and at some point we have to determine some reasonable floor to client capabilities). These embedded thumbnails would presumably be essentially incompressible.

    • attah_ a year ago

      Well, yes and no. There is lots of waste for sure, but this is really just not warranted. Honestly, I'm not sure it ever was. Even 15-20 years ago actual pictures loaded just fine. And don't get me started on text placeholders..

    • xwdv a year ago

      If I was in a far flung part of the world I wouldn’t be perusing content rich bandwidth intensive websites.

      I’d be smoking a cig sitting on a worn mattress in the highest floor of an abandoned apartment building typing on a late 90s ThinkPad, negotiating prices on stolen credit card lists through encrypted IRC channels and moving illicit files on SSH servers based in foreign countries.

      • abraae a year ago

        I'm in a far flung part of the world and a lot of my time is spent in very normal places like the AWS console, not doing any of those cool and dangerous sounding things.

        • xwdv a year ago

          Maybe you haven’t been flung far enough.

      • popcalc a year ago

        When you go full lain...

  • nedt a year ago

    When Facebook did this years ago (plus their slim version of the web page), they mentioned India as one important use case. Huge country, a lot of people, but not even remotely a "modern connection". Other "remote" countries like Australia also have worse network performance. Of course you could say you don't care about APAC, but that's not how websites should be built.

  • crazygringo a year ago

    My modern connection is a mobile network where speed very much comes and goes depending on where I am.

    There's nothing obsolete about phones on mobile networks.

  • jbverschoor a year ago

    Until you don't have that connection somewhere.. Plus it will still work when your CDN / image processing server is having troubles.

  • mkmk a year ago

    One pervasive source of slow connections, even in well-developed places, is mobile devices as they travel in a car or public transit.

jurimasa a year ago

This may be a super dumb question but... how is this better than using progressive jpegs?

  • derefr a year ago

    1. If the thing that's going to be loaded isn't a JPEG, but rather a PNG, or WebP, or SVG, or MP4...

    2. These are usually delivered embedded in the HTML response, and so can be rendered all at once on first reflow. Meanwhile, if you have a webpage that has an image gallery with 100 images, even if they're all progressive JPEGs, your browser isn't going to start concurrently downloading them all at once. Only a few of them will start rendering, with the rest showing the empty placeholder box until the first N are done and enough connection slots are freed up to get to the later ones.

  • matsemann a year ago

    You call an API -> it returns some JSON with content and links to images -> you start a new request to load those images -> only when they're partially loaded (i.e. on request 2) will you see the progressive images starting to form.

    With this: You call an API -> it returns some json with content and links to images and a few bytes for the previews -> you immediately show these while firing off requests to get the full version.

    So I'm thinking quicker to first draw of the blurry version? And works for more formats as well.

  • eyelidlessness a year ago

    A few things that immediately come to mind:

    - you can preload the placeholder but still lazy load the full size image

    - placeholders can be inlined as `data:` URLs to minimize requests on initial load, or to embed placeholders into JSON or even progressively loaded scripts

    - besides placeholder alpha channel support, it also works for arbitrary full size image formats

  • pshc a year ago

    Looks smoother, transparency, data small enough to inline in the HTML or JSON payload, supports not just JPEGs but also PNGs, WebPs, GIFs.

    IMO I don't really care for a 75%-loaded progressive JPEG. Half the image being pixelated and half not is just distracting.

IvanK_net a year ago

I think they should simply use four patches of BC1 (DXT1) texture: https://en.wikipedia.org/wiki/S3_Texture_Compression

It allows storing a full 8x8 pixel image in 32 Bytes (4 bits per RGB pixel).
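A quick back-of-envelope check of those numbers (hypothetical helper): BC1 stores each 4x4 pixel block in 8 bytes (two 16-bit endpoint colors plus 2 bits per pixel of indices), i.e. 4 bits per pixel overall.

```javascript
// Compute the BC1/DXT1 payload size for a given image, rounding each
// dimension up to whole 4x4 blocks at 8 bytes per block.
function bc1Bytes(width, height) {
  const blocksX = Math.ceil(width / 4);
  const blocksY = Math.ceil(height / 4);
  return blocksX * blocksY * 8;
}
```

An 8x8 image is four blocks, giving the 32 bytes mentioned above.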

  • 8n4vidtmkvmk a year ago

    > 4 bits per RGB pixel

    That sounds inferior. From the article:

    > ThumbHash: ThumbHash encodes a higher-resolution luminance channel, a lower-resolution color channel, and an optional alpha channel.

    You want more bits in luminance. And you probably also don't want sRGB.

  • TheRealPomax a year ago

    If it's really that simple, looking forward to your github repo that gives folks the JS and Rust libraries to do that =)

quechimba a year ago

Very nice, I just saw the Ruby implementation[1]. This looks useful! Right now I'm making 16x16 PNGs and this looks way better. I might attempt making a custom element that renders these.

[1] https://github.com/daibhin/thumbhash

detrites a year ago

Anyone know why the first comparison image is rotated 90 degrees for both ThumbHash and BlurHash versions? Is this a limitation of the type of encoding or just a mistake? All other comparison images match source rotation.

  • constexpr a year ago

    That's the only image with a non-zero EXIF orientation. Which probably means you're using an older browser (e.g. Chrome started respecting EXIF orientation in version 81+, which I think came out 3 years ago?). You'd have to update your browser for it to display correctly.

eis a year ago

The results are pretty impressive. I wonder if the general idea can be applied with a bit more data than the roughly 21 bytes in this version. I know it's not a format that lends itself to being configurable. I'd be fine with placeholders that are, say, around 100-200 bytes. Many times that seems enough to actually let the brain roughly know what the image will contain.

kitsunesoba a year ago

I'm a big fan of anything that can make networked experiences a little smoother. When you're having to deal with less-than-amazing connections, pages full of loading spinners and blank spots get old fast.

Also, love that this comes with a reference implementation in Swift. Will definitely keep it in mind for future projects.

renewiltord a year ago

These are quite terrific. I really like these because I hate movement on page load. This one looks pretty good too.

TheRealPomax a year ago

It looks like this has a bias for vertical banding that blurhash doesn't have, is that intentional?

spankalee a year ago

For these ultra-small sizes, I think I would go with Potato WebP since you can render it without JS, either with an <img> tag or a CSS background. I think it looks better too.

  • Dwedit a year ago

    The potato WebP had the headers stripped off. You need JS to put the headers back on.

kamikaz1k a year ago

I don't understand why it is only for <100x100 images. Isn't the blurring useful for larger images? what's the point of inlining small ones?

  • emptysea a year ago

    Probably because the algorithm is really slow and you’re already producing a really small image so scaling your original image down before isn’t too much work

    Blurhash is really slow on larger images but quick with small <500x500 images

    • usrusr a year ago

      Thanks, even if I'm not the one who asked. I guess the longer version of your answer would be "if you have larger input, just downscale first". And for the quality demands here, you wouldn't even miss antialiasing; just sampling every nth pixel of an n00xn00 image should really be good enough.
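      A minimal sketch of that every-nth-pixel downscale (nearest-neighbor on a flat RGBA buffer; function and parameter names are hypothetical):

```javascript
// Downscale by sampling: for each output pixel, copy the nearest source
// pixel's four RGBA bytes. No filtering or averaging is done.
function downsample(rgba, width, height, outW, outH) {
  const out = new Uint8Array(outW * outH * 4);
  for (let y = 0; y < outH; y++) {
    const srcY = Math.floor((y * height) / outH);
    for (let x = 0; x < outW; x++) {
      const srcX = Math.floor((x * width) / outW);
      const src = (srcY * width + srcX) * 4;
      const dst = (y * outW + x) * 4;
      out[dst] = rgba[src];         // R
      out[dst + 1] = rgba[src + 1]; // G
      out[dst + 2] = rgba[src + 2]; // B
      out[dst + 3] = rgba[src + 3]; // A
    }
  }
  return out;
}
```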

      I wonder if it might be nice to allocate some more bytes to the center than to the edges/corners? Or is this already done?

clumsycomputer a year ago

love these types of optimizations... blurhash seems to be giving me more pleasant results than thumbhash on the few examples i ran through it! thumbhash seems to over-emphasize/crystallize parts of the image and results in a thumbnail that diverges from the source in unexpected ways.

either way this is awesome, and thanks for sharing

ed25519FUUU a year ago

First of all, I love the idea and I think it's very creative.

As for my impression, I don't think the blurry images are impressive enough to justify loading an additional 32 kB per image. I think the UX will be approximately the same with a 1x1 pixel image that's just the average color of the picture, but I can't test that out.

  • ninkendo a year ago

    I think you’re 3 orders of magnitude off here, it’s ~30 bytes for each image, not kilobytes.

  • javier2 a year ago

    It's around 20 bytes per image, not kB.

mavci a year ago

I think WhatsApp also uses a similar method for sent pictures and videos.

NoMoreBro a year ago

A single file with a few functions, so it seemed like a good test to convert it to some other languages with GPT-4 (I tried Python and Ruby). Unfortunately, my access to GPT-4 is limited to the 2k version, and the first function is 4,500 tokens (800 minified, but losing names, comments, and probably the quality of the conversion).

With some language-independent tests in such a repository, you might be able to semi-automatically convert the code into different languages, and continue with code scanning and optimizations.

Anyway: very nice work!