Firefox simply ignores height declarations that resolve to a value greater than exactly 17895697px. What’s this value? Just a smidgeon under 2³⁰ sixtieths of a pixel, a sixtieth being Firefox’s layout unit. (It’s the last whole pixel before 2³⁰ sixtieths, which is 17,895,697.06̅ pixels, 4⁄60 more.) I presume Firefox is using a 32-bit signed integer, and reserving another bit for something else, maybe overflow control.
Five years ago, Firefox would ignore any CSS declarations resolving like that, but somewhere along the way it changed so that most things now clamp instead, matching WebKit-heritage behaviour. But height is not acting like that, to my surprise (I thought it was).
WebKit-heritage browsers use a 1⁄64 pixel layout unit instead. Viewed in that light, the 2²⁵ − 1 pixel limit is just the last whole pixel that fits under 2³¹ − 1 layout units (the signed 32-bit maximum), a less surprising number.
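The arithmetic checks out; here’s a quick sketch, using the unit sizes described above:

```python
# Gecko counts layout in "app units", 60 per CSS pixel.
APP_UNITS_PER_PX = 60
firefox_limit_px = (2**30) // APP_UNITS_PER_PX
print(firefox_limit_px)                              # 17895697
print(2**30 - firefox_limit_px * APP_UNITS_PER_PX)   # 4 app units short of 2**30

# WebKit/Blink count in 1/64-px layout units in a signed 32-bit integer.
LAYOUT_UNITS_PER_PX = 64
print((2**31 - 1) // LAYOUT_UNITS_PER_PX == 2**25 - 1)  # True: 33,554,431 px
```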
IE had the same behaviour as Firefox used to, but with a much lower limit, 10,737,418.23 pixels (2³⁰ − 1 hundredth pixels), which was low enough to realistically cause problems for Fastmail, all you needed was about 200,000 messages in a mailbox. I’ve written about that more a few times, https://news.ycombinator.com/item?id=42347382, https://news.ycombinator.com/item?id=34299569, https://news.ycombinator.com/item?id=32010160.
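As a rough back-of-the-envelope check (the 53 px row height here is an assumed figure for illustration, not Fastmail’s actual one):

```python
# IE's limit: 2**30 - 1 hundredths of a pixel.
ie_limit_px = (2**30 - 1) / 100
print(ie_limit_px)              # 10737418.23

# With a mail-list row height of, say, 53 px, the list
# overflows the limit at roughly:
print(int(ie_limit_px // 53))   # 202592 rows, i.e. about 200,000 messages
```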
Firefox's units are quite smart, actually. 60 is divisible by 3, 4, 5 and 6, so they're quite future-proof for the day we have displays with devicePixelRatio = 6.
One of the factors for choosing 60 was its nice divisors, perfectly handling these scaling factors (and also, I rather suspect, though it wasn’t pointed out, 1.25×, 1.5×, and 2.5×). https://wiki.mozilla.org/Mozilla2:Units#Proposal describes the reasons, including others. It’s a nice piece of history to read.
But we’re unlikely to get mainstream devices beyond 3×.
CSS’s reference pixel has a visual angle of about 77″ <https://www.w3.org/TR/css-values-3/#reference-pixel>. Human eyes cap out at 28″ <https://www.swift.ac.uk/about/files/vision.pdf#page=2>. This means 2.75× is the absolute limit of human resolution at a device’s nominal viewing distance, and you have to come in closer for it to even be possible to resolve a single device pixel. 6× means “you can’t even detect a single pixel until you halve the distance between eye and surface”.
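Using the two figures just cited:

```python
reference_pixel_arcsec = 77   # CSS reference pixel, about 77 arcseconds
acuity_arcsec = 28            # approximate limit of human visual acuity

# Device-pixel ratio beyond which a single device pixel is unresolvable
# at the nominal viewing distance:
print(reference_pixel_arcsec / acuity_arcsec)   # 2.75
```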
Are there contexts in which this could make sense? Definitely: an art display that’s intended to be viewed at a distance of 10 metres, but where you can also get up close to see details; it’d be nice if you could barely make out the pixels at 1 metre.
But mainstream devices? I don’t think anyone’s gone beyond 3×, and I don’t think they will. 3× is at the transition from “diminishing returns” to “no further returns are even possible, going further makes things strictly worse” (increasing monetary, power, memory and processing costs).
As it stands, even the benefits of choosing 60 have turned out pretty slim; the battleground and constraints have shifted so that divisibility isn’t as important as I think it was expected to be. And it has costs, too; there’s a reason WebKit shifted to 64 after a bit, even though it doesn’t even do 1.5× perfectly, which was probably the most common factor other than 1× at the time, and is possibly even more common now.
The Sumerians were on to something: https://en.m.wikipedia.org/wiki/Sexagesimal
In engineering, the units used to be the foot (′), inch (″), line (‴) and point (⁗), where each unit was 1/12 the size of the previous one. The European typographic point used to be 1/144 of an inch. Watch components are measured in lignes (lines).
For those curious, in WebKit, this stems from the use of the LayoutUnit (https://github.com/WebKit/webkit/blob/main/Source/WebCore/pl...) for most computed length values. LayoutUnits use a fixed point representation where the smallest unit is 1/64 of a pixel. https://trac.webkit.org/wiki/LayoutUnit is a bit old, but has some good information on the topic.
This 2^25-1 pixel limit makes perfect sense - with 1/64 pixel precision, that's exactly 2^31-1 layout units (the max value of a signed 32-bit integer).
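A minimal Python sketch of that saturating fixed-point idea (illustrative only, not WebKit’s actual LayoutUnit code):

```python
UNITS_PER_PX = 64            # 1/64-px fixed point
INT32_MAX = 2**31 - 1
INT32_MIN = -2**31

def to_layout_units(px: float) -> int:
    """Convert pixels to layout units, saturating at the int32 range."""
    return max(INT32_MIN, min(INT32_MAX, round(px * UNITS_PER_PX)))

def to_px(units: int) -> float:
    """Convert layout units back to pixels."""
    return units / UNITS_PER_PX

# An absurdly large height clamps instead of overflowing:
print(to_px(to_layout_units(9999999999999999)))   # 33554431.984375
```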
It's the same in Chrome, I think.
Blink only forked from WebKit after layout units settled at 1⁄64.
Only slightly related: Netscape used to assume the layout during load was infinite, and would resolve elements to their sizes as those became known, which meant that usually nothing showed until everything had loaded, in a world of divs and tables.
IE4 did the opposite: it assumed everything was sized zero and filled in sizes as they became known. This let it load objects and render them as the page's objects loaded, so it appeared substantially faster at loading than Netscape.
Early engineering decisions like this can make or break a company. I'm not saying this was the only challenge Netscape had, but it was one that really drove the need to build the Gecko layout engine. Despite some wildly misleading yet famous blog posts written by a Microsoft engineer discussing what happened internally at Netscape, which he couldn't possibly know about, and which he got basically upside down and self-serving……
As noted by another comment here [0], when you use a virtual DOM/canvas-based "infinite" data grid such as Glide Data Grid [1] or TanStack Virtual [2], you get the performance/usability of native scrollbars because, under the hood, both of those libraries create scrollable divs with a very large height. I.e., you're scrolling a big empty div, and a viewport into the "infinite" grid is actually drawn on the canvas.
But this does fall apart for very very large grids, as you get close to the height limit described in this article.
For a project that I'm working on, I ended up re-implementing scrollbars, but it's super janky - and even more so on mobile where you lose the "flick"/inertia/momentum touch gestures (or you have to re-implement them at your peril).
Are there any good tricks/libraries to tackle this? Thanks!
[0] https://news.ycombinator.com/item?id=44825028
[1] https://github.com/glideapps/glide-data-grid
[2] https://tanstack.com/virtual/latest
Since it seems unlikely that a single scroll gesture passes the threshold, and also the scrollbar thumb probably is invisible (either intentionally or due to the extreme height): maybe an "infinite scroll" paginated stack of virtual lists would be enough? I mean a dumb "load more" implementation that swaps out the main container once you reach the end/start of each "item" (virtual lists themselves)?
If that doesn't help, maybe check out this fun post (no native scrolling experience):
https://everyuuid.com
https://eieio.games/blog/writing-down-every-uuid/
great point re: Nolen's site -- I've collab'ed with him on https://eieio.games/blog/talk-paper-scissors/, I should have remembered that! :-)
it's not crazy to stack virtual lists... at that point, I might also just see that the user is near/at the end of the list, and just swap out the content completely and place them back at scrolling position y:0 or something
for sure, I shouldn't make perfect the enemy of good here. thanks for the ideas!
Oh, I'm glad I wasn't talking past you :) That uuid site and the post about it were genius, and haha, I haven't checked the RPS-via-phone-number app yet.
Sounds equally fun! Just like the uuid one, also seems very worth bookmarking for a fitting moment
> Chrome and Safari both get very close to 2²⁵ − 1 (33,554,431), with Safari backing off from that by just 3 pixels, and Firefox by 31.
Typo, the last browser in this sentence should be "Chrome", right?
This article triggered flashbacks to 20 years ago and using all sorts of crazy CSS to hack the browser to do what you needed it to. Nowadays CSS is (usually) a remarkable land of sanity in comparison.
It was also a fun time to interview. Can you write a CSS snippet that will make a box red in IE5, Blue in IE6+, and is Yellow in Firefox but black in Netscape? CSS these days is no longer an adventure — they just tend to work.
I always grab the CSS reset. Also, the purple story.
Blink, Chrome's rendering engine, was forked from WebKit (Safari) in 2013.
Since WebKit already supported the infinity value at the time of the fork [0], it's highly probable that they share the same underlying code. This would explain their similar behavior.
[0]: https://caniuse.com/?search=infinity
You’re confusing the JavaScript keyword Infinity, which has been around forever, with the CSS keyword infinity, which has only been around for 2–3 years <https://caniuse.com/mdn-css_types_calc-keyword_infinity>.
And actually I’d be quite surprised if the infinity keyword is even relevant here; I would expect the same results if you changed each calc(infinity * 1px) to 9999999999999999px. Firefox (and, if you go far enough back, IE) will still ignore overly large height declarations, and WebKit-heritage browsers will clamp it.
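A toy model of the two strategies (the function names and structure are purely illustrative, not browser code; the limits are those discussed elsewhere in the thread):

```python
FIREFOX_MAX_PX = 17_895_697   # beyond this, Firefox ignores the declaration
WEBKIT_MAX_PX = 33_554_431    # beyond this, WebKit-heritage engines clamp

def firefox_height(declared_px, previous_px):
    """Ignore: an over-limit declaration leaves the previous value in effect."""
    return previous_px if declared_px > FIREFOX_MAX_PX else declared_px

def webkit_height(declared_px):
    """Clamp: an over-limit declaration is capped at the maximum."""
    return min(declared_px, WEBKIT_MAX_PX)

print(firefox_height(9999999999999999, 0))   # 0: the huge height is ignored
print(webkit_height(9999999999999999))       # 33554431: clamped
```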
Thanks for the clarification. However, the number is likely encoded in binary format, which is why you're seeing 2²⁵ in the article.
Is the almost-24-bit limit related to the fact that 32-bit floats have 24 bits of significand? (cf. https://en.wikipedia.org/wiki/Single-precision_floating-poin...)
No. Everyone has always used fixed-point numbers for layout; see my comment for further details.
Now SVG… try defining a view box up where float precision drops, and interesting things happen.
Lots of things get weird in edge cases with SVGs. E.g., one thing that keeps biting me is that Firefox always snaps x="X" y="Y" coordinates on <use> elements to the nearest CSS pixel (even if the pixel size is very large), but allows transform="translate(X,Y)" to keep its full subpixel precision. I've taken to double-checking all my SVGs at 3000% zoom just to spot all the seams that implementations can create.
But it's specifically 25 bits, not 24?
As in n+1 == n once you go past 2^24 on a float and here they are at 2^25-1. So it doesn't quite make sense as a reason to me.
There's a post above saying that browsers divide pixels into 64ths, which accounts for 6 bits and puts this limit at precisely that of a signed 32-bit integer. That makes much more sense than the significand of a float.
The correct answer. In particular, up to 2^24 a float32 behaves like a regular integer, which can be important in some cases. Above that value, the float starts to skip integer values, giving strange behaviour such as (n+1)+1 not equal to n+2.
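That skipping is easy to demonstrate by round-tripping values through IEEE-754 single precision:

```python
import struct

def f32(x: float) -> float:
    """Round a Python float to the nearest representable float32."""
    return struct.unpack('f', struct.pack('f', x))[0]

n = 2.0**24
print(f32(n + 1) == n)                      # True: 2**24 + 1 rounds back to 2**24
print(f32(f32(n + 1) + 1) == f32(n + 2))    # False: (n+1)+1 != n+2
```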
But why are 32b floats relevant? JS, the language famously representing ints as floats, uses 64b floats. Who puts ints into 32b floats and cares about precision loss?
I didn't say JavaScript anywhere in my comment. No relation to JavaScript. Rendering is typically done in tiles, in GPU, and the best precision that can be used across all GPUs is float32. Some GPUs don't implement float64.
Layout engines are not implemented in Javascript.
Plenty of embedded GPUs do coordinates that way. But I doubt they're running browsers.
What's the actual use case for "infinity" in CSS?
According to the spec:
These constants are defined mostly to make serialization of infinite/NaN values simpler and more obvious, but can be used to indicate a "largest possible value", since an infinite value gets clamped to the allowed range. It’s rare for this to be reasonable, but when it is, using infinity is clearer in its intent than just putting an enormous number in one’s stylesheet.
Final boss of z-index
What about `infinity + 1`? ;)
That’s the z-index of your nose.
z-index = i_love_you;
pow(infinity, infinity) is aliased to no_i_love_you_more in CSS5
Isn't the final boss of z-index the top layer?
w buffer cheatcode unlocked.
Cantor has entered the chat.
We use it in Tailwind CSS v4 for pill-shaped borders, as opposed to what everyone has done historically, which is to pick some arbitrary huge value. No real practical benefit, just satisfyingly more "correct".

Selling pixels at a dollar each to make an Infinity Dollar Website™.
This website is pure art. I enjoyed visiting.
As befits the author of CSS: The Definitive Guide.
Kinda related - our product, rowzero.io, is a browser-based spreadsheet with a 2 billion row limit. We initially built the client as anyone would, using a div per cell. We tried to use an off-screen div to take advantage of the browser's native scrollbars but ran into document height limits. Firefox's was 6M pixels iirc. The solution was to do rendering in canvas and draw the scrollbars ourselves.
Firefox’s limit is 17,895,697 pixels. Others have a limit less than twice as high, so given you’re aiming for a value way higher than that, it’s not a browser-specific issue, except insofar as Firefox ignores rather than clamping, so you have to detect Firefox and clamp it manually.
In Fastmail’s case (see my top-level comment), making the end of a ridiculously large mailbox inaccessible was considered acceptable. In a spreadsheet, that’s probably not so, so you need to do something different. But frankly I think you needed to use a custom scrollbar anyway, as a linear scrollbar will be useless for almost all documents for anything except returning to the top.
Rendering the content to a canvas, however, is not particularly necessary: make a 4-million-pixel-square area, hide its scrollbars, render the outermost million pixels of each edge at their true edge positions, and when you’re anywhere in the middle (e.g. 1.7 billion rows in), render starting at 2 million pixels; if the user scrolls a million pixels in any direction, recentre (potentially disrupting scrolling inertia, but that’s about it). That’s basically perfect, allowing native rendering and scrolling interaction, meaning better behaviour and lower latency.
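A sketch of that recentring logic (the numbers and names are hypothetical, and end-of-document handling is left out):

```python
CENTER = 2_000_000   # px: parked scroll offset mid-document
EDGE = 1_000_000     # px: drift allowed before recentring

def on_scroll(scroll_top: int, virtual_origin: int) -> tuple[int, int]:
    """virtual_origin is the virtual-document offset that container offset
    CENTER currently maps to. Once the user drifts EDGE px from CENTER,
    fold the drift into virtual_origin and snap the container back."""
    drift = scroll_top - CENTER
    if abs(drift) >= EDGE:
        return CENTER, virtual_origin + drift
    return scroll_top, virtual_origin

# Scrolling down a million px from the parked centre triggers a recentre:
print(on_scroll(3_000_000, 50_000_000))   # (2000000, 51000000)
print(on_scroll(2_400_000, 50_000_000))   # (2400000, 50000000): no recentre yet
```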
Is it completely client side? Why does it have a 2 billion row limit? Where are the limitations coming from?
Do you even need to have one scroll pixel == one screen pixel (or even one scroll pixel == one spreadsheet row)? At the point of 2 billion rows, the scrollbar really falls apart and just jumping to an approximation of the correct location in the document is all anyone can hope for.
> came across a toot
> For the sake of my aching skullmeats
> My skullmeats did not thank me for this
> I hear you asking.
> Maybe if I give my whimpering neurons a rest
> I’d be much happier if someone just explained it to me; bonus points if their name is Clarissa.
> my gray matter needs a rest and possibly a pressure washing
There is a software engineering equivalent of the "theater kids" meme, and this is it. There's an abundance of exuberance in this writing.
I'm glad we have such diversity in perspective. It's a nice change from the usual terseness, though I'm not sure I could read in this style for an extended period.
Totally refreshing to me. Imagine -- a techie who doesn't take himself too seriously!
"skullmeats" definitely made me wince.
The interaural nerve node.
It works for this context, which is about an experiment that is fundamentally a bit silly. 10 years ago this would've been all memes, and 20 years ago it would've been performatively elaborate insults. I'll definitely take it over those.
btw "toot" is the de facto standard jargon for "post" on Mastodon-based social media, so that sentence isn't actually an example of this silliness.