Here is a non-medium article with the same content: https://risingwave.com/blog/the-case-for-hybrid-cache-for-ob...
I wonder to what degree it’s actually necessary to explicitly manage memory spilling to disk like this. Want unified interface over non durable memory + disk? There already is one: memory with swap.
Materialize.com switched from explicit disk cache management to “just use swap” and saw substantial performance improvement. https://materialize.com/blog/scaling-beyond-memory/
To get good performance from this strategy your memory layout already needs to be optimized to pages boundaries etc to have good sympathy for underlying swap system, and you can explicitly force pages to stay in memory with a few syscalls.
Hi, foyer's author here. Page cache and swap are indeed good general-purpose strategies and are continuously evolving. There are several reasons why foyer manages memory and disk itself rather than relying directly on these mechanisms:
1. Leverage asynchronous capabilities: Foyer exposes async interfaces, so while waiting on IO and other operations a worker can still perform other tasks, increasing overall throughput (see the sketch after this list). If swap is used, a page fault causes a synchronous wait, blocking the worker thread and degrading performance.
2. Fine-grained control: As a dedicated cache, foyer understands better than a general-purpose system like the operating system's page cache which data should be cached and which should not. This is also why foyer has supported direct I/O since day one, to avoid duplicating the page cache's work. Foyer can use its own strategies to decide earlier when data should be cached or evicted.
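A minimal sketch of point 1, using tokio and a toy two-tier cache rather than foyer's real API (all names here are illustrative): the disk read is awaited, so the executor thread stays available to other tasks, whereas touching a swapped-out page would stall the whole thread inside the kernel.

    use std::{collections::HashMap, path::PathBuf, sync::Arc};
    use tokio::{fs, sync::RwLock};

    // Toy stand-in for a hybrid cache: a memory tier plus a directory of
    // values spilled to disk.
    struct TinyHybridCache {
        memory: RwLock<HashMap<String, Arc<Vec<u8>>>>,
        disk_dir: PathBuf,
    }

    impl TinyHybridCache {
        async fn get(&self, key: &str) -> std::io::Result<Option<Arc<Vec<u8>>>> {
            // Memory hit: no I/O at all.
            if let Some(v) = self.memory.read().await.get(key) {
                return Ok(Some(v.clone()));
            }
            // Memory miss: read from disk. The .await yields this worker
            // thread back to the executor while the read is in flight,
            // unlike a page fault on swapped memory, which blocks it.
            match fs::read(self.disk_dir.join(key)).await {
                Ok(bytes) => {
                    let value = Arc::new(bytes);
                    self.memory.write().await.insert(key.to_string(), value.clone());
                    Ok(Some(value))
                }
                Err(e) if e.kind() == std::io::ErrorKind::NotFound => Ok(None),
                Err(e) => Err(e),
            }
        }
    }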
> There already is one: memory with swap.
Actually, there's already two. The other one: just read from disk (and let OS manage caches).
I prefer this abstraction as it is more widely supported (I’ve had to deploy to hosts that intentionally kill processes when they swap) and it results in development that assumes access may be at disk speed. When you rely on swap, I often see developers assume everything is accessible at memory speed and then be surprised when swap causes sudden degradation.
Yeah I mean if you have a latency sensitive workload, you don’t want page faults and swapping to give hidden latency spikes - it kills your P99 latencies
Yeah, but you can pin important pages in RAM with the mlock family of syscalls, plus friends like move_pages for explicit page management. This is what Materialize does, as far as I understand it.
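For reference, a minimal sketch of that pinning approach in Rust via the libc crate (illustrative only, not Materialize's actual code). Note that RLIMIT_MEMLOCK or CAP_IPC_LOCK bounds how much a process may lock:

    // Pin a buffer's pages in RAM with mlock(2) so they are never swapped out.
    fn pin_in_ram(buf: &[u8]) -> std::io::Result<()> {
        // Safety: the pointer and length describe memory owned by `buf` and
        // valid for the duration of the call.
        let rc = unsafe { libc::mlock(buf.as_ptr() as *const libc::c_void, buf.len()) };
        if rc == 0 {
            Ok(())
        } else {
            Err(std::io::Error::last_os_error())
        }
    }

    fn main() -> std::io::Result<()> {
        // E.g. an index you never want to take a major page fault on.
        let hot_index = vec![0u8; 1 << 20];
        pin_in_ram(&hot_index)?;
        // ... use hot_index; munlock(2) or process exit undoes the pinning.
        Ok(())
    }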
TIL https://kubernetes.io/blog/2025/03/25/swap-linux-improvement...
I use Foyer in-memory (not hybrid) in ZeroFS [0] and had a great experience with it.
The only quirk I’ve experienced is that in-memory and hybrid modes don’t share the same invalidation behavior. In hybrid mode, there’s no way to await a value being actually discarded after deletion, while in-memory mode shows immediate deletion.
[0] https://github.com/Barre/ZeroFS
This is interesting. I would be curious to try a setup where I keep a local hybrid cache and transition blocks to deep storage for long-term archival via S3 rules.
Some napkin math suggests this could be a few dollars a month to keep a few TB of precious data nearline.
Restore costs are pricy but hopefully this is something that's only hit in case of true disaster. Are there any techniques for reducing egress on restore?
The easy way to achieve that is to use the ZeroFS NBD server with ZFS L2ARC (L2ARC on local storage and the “main” pool on ZeroFS).
Interesting! What would be a typical use-case of ZeroFS? Could I use this to store my Immich and Jellyfin data on S3 so I don't need a disk?
If you don't mind paying about a dollar to stream one of your own movies as well as a couple of bucks per year to store it.
You don’t have to use AWS S3, any compatible implementation will work.
That should work!
The Jellyfin metadata would certainly be a fit but what about streaming video content i.e. sequential reads of large files with random access?
If you have the network that matches, it should be perfectly fine.
Has Medium stopped working on Firefox for anybody else? Once the page is finished loading, it stops responding to scroll events.
No idea; if I see a Medium link I just ignore it. Substack is heading the same way for me too: it seems to be self-promotion, shallow takes, and spam more than anything real.
The page loads a "subscribe to author" modal pretty quickly after it finishes loading. You may have partially blocked it, so you won't see the modal, but it still prevents scrolling.
Same here. Meanwhile I close a link/page as soon as I realize it's on medium.
Maybe you have an ad-blocker that just hides the popup but does not restore scrolling (scrolling is usually prevented when popups are visible)
Firefox has a lot of weird little pop-up ads these days. It seems like a very recent phenomenon. Is this actually Firefox doing this, or some kind of plug-in accidentally installed?
Hm, I haven't seen that. Perhaps it's worth reviewing your plugins
Thanks! I think it might have been notifications from futurism.com. I don't remember visiting that site or allowing notifications (on purpose anyway).
Same. Hit escape shortly after the page loads to stop whatever modal is likely blocking scroll from loading. I don't see the modal, so it's probably blocked by uBlock, but it still blocks scrolling.
I avoid medium where possible.
If I could pipe text content to my terminal with confidence, I would.
Seems ok for me on Firefox 143.0.1
Have you tried using reader mode?
Foyer is a great open source contribution from RisingWave
We built an S3 read-through cache service for s2.dev so that multiple clients could share a Foyer hybrid cache with key affinity: https://github.com/s2-streamstore/cachey
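For readers wondering what "key affinity" means here: each client deterministically routes a given key to the same cache node, so the fleet doesn't cache the same bytes N times. One common way to do that is rendezvous (highest-random-weight) hashing; here is a small illustration, which is not necessarily cachey's actual scheme:

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    fn score(node: &str, key: &str) -> u64 {
        let mut h = DefaultHasher::new();
        (node, key).hash(&mut h);
        h.finish()
    }

    // Every client that knows the node list picks the same owner for a key,
    // so repeated fetches of the same object hit the same hybrid cache.
    fn owner<'a>(nodes: &[&'a str], key: &str) -> Option<&'a str> {
        nodes.iter().copied().max_by_key(|node| score(node, key))
    }

    fn main() {
        let nodes = ["cache-a:9000", "cache-b:9000", "cache-c:9000"];
        println!("{:?}", owner(&nodes, "bucket/object.parquet#page-3"));
    }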
This looks really useful! Am I correct that there isn’t an S3 compatible API, just the “fetch” API?
Being able to set an S3 client’s endpoint to proxy traffic straight through this would be quite useful.
Yes, currently it has its own /fetch endpoint that then makes S3 GET(s) internally. One potential gotcha, depending on how you are using it: an exact byte "Range" header is always required so that the request can be mapped to page-aligned byte range requests on the S3 object. But with that constraint, it is feasible to add an S3 shim.
It is also possible to stop requiring the header, but I think it would complicate the design around coalescing reads – the layer above foyer would have to track concurrent requests to the same object.
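To make the constraint concrete, here is a toy example of mapping an exact Range onto page-aligned ranges; the 1 MiB page size and the function are illustrative, not cachey's actual constants or code:

    const PAGE_SIZE: u64 = 1 << 20; // assume 1 MiB cache pages

    /// Expand an inclusive byte range [start, end] to the page-aligned range
    /// that must be fetched so every touched page is read in full.
    fn align_to_pages(start: u64, end: u64) -> (u64, u64) {
        let aligned_start = (start / PAGE_SIZE) * PAGE_SIZE;
        // Round the exclusive end up to the next page boundary, then make it inclusive.
        let aligned_end = ((end / PAGE_SIZE) + 1) * PAGE_SIZE - 1;
        (aligned_start, aligned_end)
    }

    fn main() {
        // A request for bytes 1_500_000..=2_200_000 touches pages 1 and 2,
        // so the service would fetch bytes 1_048_576..=3_145_727 from S3.
        let (s, e) = align_to_pages(1_500_000, 2_200_000);
        println!("fetch bytes={s}-{e}");
    }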
Does S3 really have that high of a latency? So high that, if you ran a static file server on an EC2 instance, it would be faster than S3?
Yes, definitely. S3 has a time to first byte of 50-150ms (depending on how lucky you are). If you're serving from memory that goes to ~0, and if you're serving from disk, that goes to 0.2-1ms.
It will depend on your needs though, since some use cases won't want to trade off the scalability of S3's ability to serve arbitrary amounts of throughput.
In that case you run the proxy service load balanced to get the desired throughput, or run a sidecar process in each compute instance where the data is needed.
You are limited anyway by the network capacity of the instance you are fetching the data from.
S3 has a low-latency offering[0] which promises single digit millisecond latency, I’m surprised not to see it mentioned.
[0]: https://aws.amazon.com/s3/storage-classes/express-one-zone/
These are, effectively, different use cases. You want to use (and pay for) Express One Zone in situations in which you need the same object reused from multiple instances repeatedly, while it looks like this on-disk or in-memory cache is for when you may want the same file repeatedly used from the same instance.
Is it the same instance? RisingWave (and similar tools) are designed to run in production on a lot of distributed compute nodes for processing data, serving/streaming queries, and running control planes.
Even a single query will likely run on multiple nodes, with distributed workers gathering and processing data from the storage layer; that is the whole idea behind MapReduce, after all.
Also, aren't most people putting Cloudfront in front of S3 anyway?
For CDN use-cases yes, but not for DB storage-compute separation use-cases as described here.
Distributed Chroma, the open-source project backing Chroma Cloud, uses Foyer extensively:
https://github.com/chroma-core/chroma/blob/2cb5c00d2e97ef449...
"Zero-Copy In-Memory Cache Abstraction: Leveraging Rust's robust type system, the in-memory cache in foyer achieves a better performance with zero-copy abstraction." - what does this actually mean in practice?
Hi, foyer's author here. The "zero-copy in-memory abstraction" is compared to Facebook's CacheLib.
CacheLib requires entries to be copied into CacheLib-managed memory when they are inserted. That simplifies some design trade-offs, but it can hurt overall throughput when the in-memory cache is exercised more than the nvm cache. FYI: https://cachelib.org/docs/Cache_Library_User_Guides/Write_da...
Foyer only requires entries to be serialized/deserialized when writing to or reading from disk. The in-memory cache doesn't force a deep memory copy.
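To make that concrete, here is an illustration of the trade-off (not foyer's internals): a copy-in design owns raw cache memory and must copy the bytes on insert, while a handle-based design stores the typed value once and hands out cheap shared references, serializing only on the disk path.

    use std::{collections::HashMap, sync::Arc};

    // CacheLib-style: the cache owns a byte buffer, so inserting deep-copies
    // the value into cache-managed memory.
    struct CopyInCache {
        slots: HashMap<u64, Vec<u8>>,
    }

    impl CopyInCache {
        fn insert(&mut self, key: u64, value: &[u8]) {
            self.slots.insert(key, value.to_vec()); // memcpy into the cache
        }
    }

    // Handle-based: the value is allocated once; get() clones an Arc (a
    // reference-count bump), never the payload itself.
    struct HandleCache<V> {
        slots: HashMap<u64, Arc<V>>,
    }

    impl<V> HandleCache<V> {
        fn insert(&mut self, key: u64, value: V) {
            self.slots.insert(key, Arc::new(value));
        }
        fn get(&self, key: &u64) -> Option<Arc<V>> {
            self.slots.get(key).cloned() // no deep copy of V
        }
    }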
I see, thanks! I don't have much experience in Rust, aside from some pet projects. Which features of Rust's type system are needed to implement such behavior? (It's unclear to me why I wouldn't be able to do the same in, for example, C++.)
I think the article could use more on the cache invalidation and write-through (?) behavior. Are updates to the same file batched or written back to S3 immediately? Do you do anything with write conflicts, which one wins?
The article hints that cache invalidation is driven by the layers higher up the stack, relying on domain knowledge.
For example, the application may decide that all files are read-only, until expired a few days later.
Not clear about the write cache. My guess is that you will want some sort of redundancy when caching writes, so this goes beyond a library and becomes a service. Unless the domain level can absolve you of this concern by having redundancy elsewhere in the system (e.g. feed data from a durable store and replay if you lose some S3 writes).
Sounds exactly like AWS Storage Gateway, how does it compare?
Storage Gateway is an appliance that you connect multiple instances to, this appears to be a library that you use in your program to coordinate caching for that process.
How does this compare to S3 Mountpoint with caching?
S3 Mountpoint exposes a POSIX-like file system abstraction for use with file-based applications. Foyer appears to be a library that helps your application coordinate access to S3 (with a cache), for applications that don't need files and whose code you can change.
How does it compare to CacheLib?
Essentially CacheLib in Rust
> foyer draws inspiration from Facebook/CacheLib, a highly-regarded hybrid cache library written in C++, and ben-manes/caffeine, a popular Java caching library, among other projects.
https://github.com/foyer-rs/foyer
I haven’t used it yet but I have been looking for something like this for a long time. Kudos!
Very curious about a comparison between the S3 caching in rclone and similar tools vs. this one.
Interesting to compare against ZeroFs
Foyer is great!
[flagged]
Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
https://news.ycombinator.com/newsguidelines.html
> Cost is reduced because far fewer requests hit S3
I wonder. Given how cheap S3 GET requests are, you need a massive number of requests to make provisioning and maintaining a cache server cheaper than the alternative.
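As a rough sanity check, assuming S3 Standard GET pricing of about $0.0004 per 1,000 requests and ignoring data transfer:

    1,000,000 GETs            ≈ $0.40
    ~$50/month cache instance ≈ 125,000,000 avoided GETs/month to break even

So on request pricing alone the break-even point is indeed high.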