jitl 2 days ago

I wonder to what degree it’s actually necessary to explicitly manage memory spilling to disk like this. Want a unified interface over non-durable memory + disk? There already is one: memory with swap.

Materialize.com switched from explicit disk cache management to “just use swap” and saw a substantial performance improvement. https://materialize.com/blog/scaling-beyond-memory/

To get good performance from this strategy, your memory layout needs to be aligned to page boundaries and otherwise sympathetic to the underlying swap system, and you can explicitly force pages to stay in memory with a few syscalls.

  • MrCroxx 2 days ago

    Hi, foyer's author here. Page cache and swap are indeed good general-purpose strategies, and they are continuously evolving. There are several reasons why foyer manages memory and disk itself rather than relying directly on these mechanisms:

    1. Leverage asynchronous capabilities: Foyer exposes async interfaces so that while waiting for IO and other operations, the worker can still perform other tasks, thereby increasing overall throughput. If swap is used, a page fault will cause synchronous waiting, blocking the worker thread and resulting in performance degradation.

    2. Fine-grained control: As a dedicated cache system, foyer has a better understanding than a general-purpose system like the operating system's page cache of which data should be cached and which should not. This is also why foyer has supported direct I/O since day one, avoiding duplicating the page cache's work. Foyer can use its own strategies to decide earlier whether data should be cached or evicted.

  • hamandcheese 2 days ago

    > There already is one: memory with swap.

    Actually, there's already two. The other one: just read from disk (and let OS manage caches).

    • cowsandmilk 2 days ago

      I prefer this abstraction as it is more widely supported (I’ve had to deploy to hosts that intentionally kill when you swap) and results in development assuming that the access may be at disk speed. When you rely on swap, I often see developers assuming everything is accessible at memory speed and then surprised when swap causes sudden degradation.

      • rockwotj 2 days ago

        Yeah I mean if you have a latency sensitive workload, you don’t want page faults and swapping to give hidden latency spikes - it kills your P99 latencies

        • jitl 2 days ago

          Yeah, but you can pin important pages in RAM with the mlock family of syscalls, plus friends like move_pages for explicit page management. This is what Materialize does, as far as I understand it.

Eikon 2 days ago

I use Foyer in-memory (not hybrid) in ZeroFS [0] and had a great experience with it.

The only quirk I’ve experienced is that in-memory and hybrid modes don’t share the same invalidation behavior. In hybrid mode, there’s no way to await a value being actually discarded after deletion, while in-memory mode shows immediate deletion.

[0] https://github.com/Barre/ZeroFS

  • mmastrac 2 days ago

    This is interesting. I would be curious to try a setup where I keep a local hybrid cache and transition blocks to deep storage for long-term archival via S3 rules.

    Some napkin math suggests this could be a few dollars a month to keep a few TB of precious data nearline.

    Restore costs are pricey, but hopefully this is something that's only hit in case of true disaster. Are there any techniques for reducing egress on restore?

    • Eikon 2 days ago

      The easy way to achieve that is to use the ZeroFS NBD server with ZFS L2ARC (L2ARC on local storage and the “main” pool on ZeroFS).

  • oulipo2 2 days ago

    Interesting! What would be a typical use-case of ZeroFS? could I use this to store my Immich and Jellyfin data on S3 so I don't need disk?

    • dexterdog 2 days ago

      If you don't mind paying about a dollar to stream one of your own movies as well as a couple of bucks per year to store it.

      • Eikon 2 days ago

        You don’t have to use AWS S3, any compatible implementation will work.

    • Eikon 2 days ago

      That should work!

      • ofek 2 days ago

        The Jellyfin metadata would certainly be a fit but what about streaming video content i.e. sequential reads of large files with random access?

        • Eikon 2 days ago

          If you have the network that matches, it should be perfectly fine.

Mizza 2 days ago

Has Medium stopped working on Firefox for anybody else? Once the page is finished loading, it stops responding to scroll events.

  • stevekemp 2 days ago

    No idea, if I see a medium link I just ignore it. Substack is heading the same way for me too, it seems to be self-promotion, shallow-takes, and spam more than anything real.

  • overhead4075 2 days ago

    The page loads a "subscribe to author" modal pretty quickly after the page loads. You may have partially blocked it, so you won't see the modal but it still prevents scroll.

  • micw 2 days ago

    Same here. Meanwhile I close a link/page as soon as I realize it's on medium.

  • lukax 2 days ago

    Maybe you have an ad-blocker that just hides the popup but does not restore scrolling (scrolling is usually prevented when popups are visible)

    • osigurdson 2 days ago

      Firefox has a lot of weird little pop-up ads these days. It seems like a very recent phenomenon. Is this actually Firefox doing this, or some kind of plug-in accidentally installed?

      • duttish 2 days ago

        Hm, I haven't seen that. Perhaps it's worth reviewing your plugins.

        • osigurdson 2 days ago

          Thanks! I think it might have been notifications from futurism.com. I don't remember visiting that site or allowing notifications (on purpose anyway).

  • boldlybold 2 days ago

    Same. Hit escape shortly after the page loads to stop loading whatever modal is likely blocking scroll. I don't see the modal so it's likely blocked by ublock, but still stops scroll.

  • tomrod 2 days ago

    I avoid medium where possible.

    If I could pipe text content to my terminal with confidence, I would.

shikhar 2 days ago

Foyer is a great open source contribution from RisingWave

We built an S3 read-through cache service for s2.dev so that multiple clients can share a Foyer hybrid cache with key affinity: https://github.com/s2-streamstore/cachey

  • erikcw 2 days ago

    This looks really useful! Am I correct that there isn’t an S3 compatible API, just the “fetch” API?

    Being able to set an S3 client’s endpoint to proxy traffic straight through this would be quite useful.

    • shikhar 2 days ago

      Yes, currently it has its own /fetch endpoint that then makes S3 GET(s) internally. One potential gotcha, depending on how you are using it: an exact byte "Range" header is always required so that the request can be mapped to page-aligned byte-range requests on the S3 object. But with that constraint, it is feasible to add an S3 shim.

      It is also possible to stop requiring the header, but I think it would complicate the design around coalescing reads – the layer above foyer would have to track concurrent requests to the same object.

winter_blue 2 days ago

Does S3 really have that high of a latency? So high that, if you ran a static file server on an EC2 instance, it would be faster than S3?

  • huntaub 2 days ago

    Yes, definitely. S3 has a time to first byte of 50-150ms (depending on how lucky you are). If you're serving from memory that goes to ~0, and if you're serving from disk, that goes to 0.2-1ms.

    It will depend on your needs though, since some use cases won't want to trade off the scalability of S3's ability to serve arbitrary amounts of throughput.

    • manquer 2 days ago

      In that case you run the proxy service load-balanced to get the desired throughput, or run a sidecar/process on each compute instance where the data is needed.

      You are limited anyway by the network capacity of the instance you are fetching the data from.

  • reese_john 2 days ago

    S3 has a low-latency offering[0] which promises single digit millisecond latency, I’m surprised not to see it mentioned.

    [0]: https://aws.amazon.com/s3/storage-classes/express-one-zone/

    • huntaub 2 days ago

      These are, effectively, different use cases. You want to use (and pay for) Express One Zone in situations in which you need the same object reused from multiple instances repeatedly, while it looks like this on-disk or in-memory cache is for when you may want the same file repeatedly used from the same instance.

      • manquer 2 days ago

        Is it the same instance? RisingWave (and similar tools) are designed to run in production on many distributed compute nodes for processing data, serving/streaming queries, and running control planes.

        Even a single query will likely run on multiple nodes, with distributed workers gathering and processing data from the storage layer; that is the whole idea behind MapReduce, after all.

    • artursapek 2 days ago

      Also, aren't most people putting Cloudfront in front of S3 anyway?

      • hobofan 2 days ago

        For CDN use-cases yes, but not for DB storage-compute separation use-cases as described here.

Hixon10 2 days ago

"Zero-Copy In-Memory Cache Abstraction: Leveraging Rust's robust type system, the in-memory cache in foyer achieves a better performance with zero-copy abstraction." - what does this actually mean in practice?

  • MrCroxx 2 days ago

    Hi, foyer's author here. The "zero-copy in-memory abstraction" is compared to Facebook's CacheLib.

    CacheLib requires entries to be copied into CacheLib-managed memory when they are inserted. That simplifies some design trade-offs, but may affect overall throughput when the in-memory cache is involved more than the nvm cache. FYI: https://cachelib.org/docs/Cache_Library_User_Guides/Write_da...

    Foyer only requires entries to be serialized/deserialized when writing to or reading from disk. The in-memory cache doesn't force a deep memory copy.

    • Hixon10 20 hours ago

      I see, thanks! I don't have much experience in Rust, aside from some pet projects. Which features of Rust's type system are needed to implement such behavior? (It's unclear to me why I wouldn't be able to do the same in, for example, C++.)

ComputerGuru 2 days ago

I think the article could use more on the cache invalidation and write-through (?) behavior. Are updates to the same file batched or written back to S3 immediately? Do you do anything with write conflicts, which one wins?

  • DenisM 2 days ago

    The article hints that cache invalidation is driven by the layers higher up the stack, relying on domain knowledge.

    For example, the application may decide that all files are read-only until they expire a few days later.

    Not clear about the write cache. My guess is that you will want some sort of redundancy when caching writes, so this goes beyond a library and becomes a service. Unless the domain level can absolve you of this concern by having redundancy elsewhere in the system (e.g. feed data from a durable store and replay if you lose some S3 writes).

mystifyingpoi 2 days ago

Sounds exactly like AWS Storage Gateway, how does it compare?

  • huntaub 2 days ago

    Storage Gateway is an appliance that you connect multiple instances to; this appears to be a library that you use in your program to coordinate caching for that process.

jmpman 2 days ago

How does this compare to S3 Mountpoint with caching?

  • huntaub 2 days ago

    S3 Mountpoint is exposing a POSIX-like file system abstraction for you to use with your file-based applications. Foyer appears to be a library that helps your application coordinate access to S3 (with a cache), for applications that don't need files and you can change the code for.

freeloyer 2 days ago

How does it compare to CacheLib?

  • shikhar 2 days ago

    Essentially CacheLib in Rust

    > foyer draws inspiration from Facebook/CacheLib, a highly-regarded hybrid cache library written in C++, and ben-manes/caffeine, a popular Java caching library, among other projects.

    https://github.com/foyer-rs/foyer

nitishr 2 days ago

I haven’t used it yet but I have been looking for something like this for a long time. Kudos!

import 2 days ago

Very curious about a comparison between rclone's S3 caching (and similar tools) and this one.

k_bx 2 days ago

Interesting to compare against ZeroFS.

alongub 2 days ago

Foyer is great!

riquito 2 days ago

> Cost is reduced because far fewer requests hit S3

I wonder. Given how cheap S3 GET requests are, you need a massive number of requests before provisioning and maintaining a cache server becomes cheaper than the alternative.