hubabuba44 an hour ago

Why doesn't it mention landlock?

  • anacrolix 16 minutes ago

    because it's shit

    • nhanb 11 minutes ago

      By "it" do you mean landlock or fil-c? Also elaboration would be nice.

pornel 13 hours ago

There's a hybrid approach of C -> WASM -> C compilation, which ends up controlling every OS interaction and sandboxing memory access like WASM, while technically remaining C code:

https://rlbox.dev/

  • jart 12 hours ago

    WASM sandboxes don't do much to guarantee the soundness of your program. It can hose your memory all it wants; it just can only do so within the confines of the sandbox.

    Using a sandbox also limits what you can do with a system. With stuff like SECCOMP you have to methodically define policies for all its interactions. Like you're dealing with two systems. It's very bureaucratic, and the reason we do it is because we don't trust our programs to behave.

    With Fil-C you get a different approach. The language and runtime offer a stronger level of assurance your program can only behave, so you can trust it more to have unfettered access to the actual system. You also have the choice to use Fil-C with a sandbox like SECCOMP as described in the blog post, since your Fil-C binaries are just normal executables that can access powerful Linux APIs like prctl. It took Linux twenty years to invent that interface, so you'll probably have to wait ten years to get something comparable from WASI.
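
    For example (a sketch, not from the post), a couple of lines are enough for a Fil-C binary to sandbox itself:

      #include <linux/seccomp.h>
      #include <sys/prctl.h>

      /* Refuse to ever gain privileges, then enter strict seccomp:
         only read, write, _exit, and sigreturn remain available. */
      prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
      prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT);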

    • IshKebab 6 hours ago

      > It can hose your memory all it wants, it can just only do so within the confines of the sandbox.

      True, although as I understand it the WASI component model at least allows multiple fine-grained sandboxes, so it's somewhere in-between per-object capabilities and one big sandbox for your entire program. I haven't actually used it yet so I might be wrong about that.

      > so you'll probably have to wait ten years to get something comparable from WASI

      I think for many WASI use cases the capability control would be done by the host program itself, so you don't need OS-level support for it. E.g. with Wasmtime I do

        WasiCtxBuilder::new()
              .allow_tcp(false)
              .allow_udp(false)
              .allow_ip_name_lookup(false)
      
      But yeah a standard WASI program can't itself decide to give up capabilities.

      • pjmlp 5 hours ago

        WASI is basically CORBA, DCOM, and PDO for newer generations.

        Or if you prefer the bytecode based evolution of them, RMI and .NET Remoting.

        I don't see it going that far.

        The WebAssembly development experience on the browser mostly still sucks, especially the debugging part, and on the server it is another yet another bytecode.

        Finally, there is hardly any benefit over OS processes, talking over JSON-RPC (aka how REST gets mostly used), GraphQL, gRPC, or plain traditional OS IPC.

        • IshKebab an hour ago

          > hardly any benefit over OS processes, talking over JSON-RPC

          Hardly any benefit except portability and sandboxing, the main reasons WASM exists?

          • pjmlp an hour ago

            WASM is only portable if the only thing it does is heating up the CPU, given that everything else depends on the host.

  • pizlonator 13 hours ago

    That's a sandboxing technology but not a memory safety technology.

    You can totally achieve weird execution inside the rlbox.

    • ComputerGuru 13 hours ago

      Running ffmpeg compiled for wasm and watching as most codec selections lead to runtime crashes due to invalid memory accesses is fun. But, yeah, it’s runtime safety, so going to wasm as a middle step doesn’t do much.

      • pizlonator 12 hours ago

        > Running ffmpeg compiled for wasm and watching as most codec selections lead to runtime crashes due to invalid memory accesses is fun.

        For all you know that’s a bug in the wasm port of the codec.

        > it’s runtime safety

        So is Fil-C

        The problem with wasm is that an OOBA in one C allocation in the wasm guest can still give the attacker the power to clobber any memory in the guest. All that’s protected is the host. That’s enough to achieve weird execution.
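
        Concretely (a sketch, not from TFA), compile something like this to wasm and `secret` is just more bytes in the same linear memory:

          char secret[16] = "hunter2";
          char buf[16];

          void handle(int attacker_index, char attacker_byte)
          {
              /* Inside the wasm guest this out-of-bounds write can land anywhere in
                 linear memory, including `secret`; only the host outside the module
                 is protected. */
              buf[attacker_index] = attacker_byte;
          }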

        Hence why I say that wasm is a sandbox. It’s not memory safety.

        • ComputerGuru 11 hours ago

          I’m not disagreeing with anything you said about wasm?

      • pjmlp 4 hours ago

        Finally, reality is catching up with the WASM sales pitch about security and how much better it is than every other bytecode format introduced since 1958.

        • singpolyma3 2 hours ago

          Wasm was great because it was lightweight, easy to target from any language, and easy to create any custom interaction API with the host. That's becoming less true as they bolt on features no one needed (GC) and popularize standardized interfaces that contain the kitchen sink (WASI), but these things can still be treated as optional, so it can still be used for much more flexible use cases than Java or .NET.

          • pjmlp an hour ago

            Since 1958 (UNCOL) there have been more options than only Java or CLR MSIL.

    • zozbot234 6 hours ago

      Wasm now supports multiple modules and multiple linear memories per module, so it ought to be quite possible to compile C to Wasm in a way that enforces C's object access rules, much like CHERI if perhaps not Fil-C itself.

      • pjmlp an hour ago

        Some WebAssembly runtimes now do support those parts of the specification.

      • IshKebab 6 hours ago

        You wouldn't be able to get quite as fine-grained. One memory per object is probably horrifically slow. And I don't know about Fil-C, but CHERI at least allows capabilities (pointers with bounds) to overlap and subset each other. I.e. you could allocate an arena and get a capability for that, and then allocate an object inside that arena and get a smaller capability for that, and then get a pointer to a field in that object and get capability just for that field.
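
        Roughly, in CHERI C it looks something like this (a sketch using the cheriintrin.h helpers; I may have details wrong):

          #include <cheriintrin.h>
          #include <stdlib.h>

          struct obj { int field; char rest[60]; };

          void demo(void)
          {
              /* Capability whose bounds cover the whole arena. */
              char *arena = malloc(4096);
              /* Narrower capability covering just one object inside the arena. */
              struct obj *o = cheri_bounds_set(arena + 64, sizeof *o);
              /* Narrower still: a capability for a single field of that object. */
              int *field = cheri_bounds_set(&o->field, sizeof o->field);
              *field = 42;       /* in bounds */
              /* field[1] = 42;     out of bounds of the field capability: traps */
          }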

        • Findecanor 5 minutes ago

          Fil-C has like one "linear memory" per object and one capability to a whole object.

          But Fil-C has its own compiler, which does analysis passes to elide bounds checks where they are not needed, and it could do a better job than a WASM compiler because it works from the C source code, which has more information. Unlike WASM, but like CHERI, every pointer in memory is tagged and would lose its pointer status if overwritten by an integer, so it is still more memory-safe.

        • zozbot234 an hour ago

          One would probably just need to define WASM extensions that allow for such subsetting. Performance will probably be competitive with software implementations of CHERI (perhaps with varying levels of hardware acceleration down the road) which isn't that bad.

jagrsw 14 hours ago

The author has a knack for generating buzz (and making technically interesting inventions) :)

I'm a little concerned that no one (besides the author?) has checked the implementation to see if reducing the attack surface in one area (memory security) might cause problems in other layers.

For example, Filip mentioned that some setuid programs can be compiled with it, but it also makes changes to ld.so. I pointed this out to the author on Twitter, as it could be problematic. Setuid applications need to be written super-defensively because they can be affected by envars, file descriptors (e.g. there could be funny logical bugs if fd=1/2 is closed for a set-uid app, and then it opens something, and starts using printf(), think about it:), rlimits, and signals. The custom modifications to ld.so likely don't account for this yet?

In other words, these are still teething problems with Fil-C, which will be reviewed and fixed over time. I just want to point out that using it for real-world "infrastructures" might be somewhat risky at this point. We need unix nerds to experiment with it.

OTOH, it's probably a good idea to test your codebase with it (provided it compiles, of course) - this phase could uncover some interesting problems (assuming there aren't too many false positives).

  • jart 12 hours ago

    I've been doing just that. If there's a way to break fil-c we're gonna find it.

    • yjftsjthsd-h 8 hours ago

      Wishful thinking: Any possible chance that means you might make a Fil-C APE hybrid? It would neatly address the fact that Fil-C already needs all of its dependencies to also use Fil-C.

  • jacquesm 14 hours ago

    If you are really concerned you should do this and then report back. Otherwise it is just a mild form of concern trolling.

    • jagrsw 13 hours ago

      I checked the code, reported a bug, and Filip fixed it. Therefore, as I said, I was a little concerned.

      • jacquesm 13 hours ago

        Yes, but instead of remarking on the fact that the author has a pretty good turnaround time for fixing bugs (I wish all open source projects were that fast) and listens to input, you wrote a comment whose tone makes me come away with a negative view of the project, when in fact the evidence points to the opposite.

        It's a 'damning with faint praise' thing and I'm not sure to what degree you are aware of it but I don't think it is a fair way to treat the author and the project. HN has enough of a habit of pissing on other people's accomplishments already. Critics have it easy, playwrights put in the hours.

        • jagrsw 13 hours ago

          I understand your point, and I have the utmost respect for the author who initiated, implemented, and published this project. It's a fantastic piece of work (I reviewed some part of it) that will very likely play an important role in the future - it's simply too good not to.

          At the same time, however, the author seems to be operating on the principle: "If I don't make big claims, no one will notice." The statements about the actual security benefits should be independently verified - this hasn't happened yet, but it probably will, as the project is gaining increasing attention.

          • pizlonator 13 hours ago

            > "If I don't make big claims, no one will notice."

            I am making big claims because there are big claims to be made.

            > The statements about the actual security benefits should be independently verified - this hasn't happened yet

            I don't know what this means. Folks other than me have independently verified my claims, just not exhaustively. No memory safe language runtime has been exhaustively verified, save maybe Spark. So you're either saying something that isn't true at all, or that could be said for any memory safe language runtime.

            • jagrsw 12 hours ago

              To clarify the position, my concern isn't that the project is bad - it's that security engineering is a two-front war. You have to add new protections (memory safety) without breaking existing contracts (like ld.so behavior).

              When a project makes 'big claims' about safety, less technical users might interpret that as 'production ready'. My caution is caused by the fact that modifying the runtime is high-risk territory where regressions can introduce vulns that are distinct from the memory safety issues you are solving.

              The goal is to prevent the regression in the first place. I'm looking forward to seeing how the verification matures and rooting for it.

              • pizlonator 12 hours ago

                > without breaking existing contracts (like ld.so behavior)

                If you think that Fil-C regresses ld.so then get specific. Otherwise what you’re doing is spreading fear, uncertainty, and doubt for no good reason.

                Fil-C has always honored the setuid behavior provided by ld.so. There was a bug - since fixed - that the Fil-C runtime called getenv instead of secure_getenv.

                > When a project makes 'big claims' about safety, less technical users might interpret that as 'production ready'.

                Fil-C is production ready and already has production users.

          • jacquesm 13 hours ago

            I would suggest you re-read your comment in a week or so to see if by then you are far enough away from writing it to see how others perceive it. If it wasn't your intention to be negative, then maybe my non-native English is the cause of this, but even upon re-reading it that's how I perceive it.

            - You start off by commenting that the author has a knack for self-promotion and invention. My impression is that he's putting in a status report for a project that is underway.

            - You follow this up with something that you can't possibly know and use that to put the project down, whilst at the same time positioning yourself as a higher-grade authority because you are apparently able to see something that others do not, effectively doing that which you accuse the author of: self-promotion.

            - You then double down on this by showing that it was you who pointed out to the author that there was a bug in the software, which in the normal course of open source development is not usually enough to place yourself morally or technically above the authors.

            - You then in your more or less official capacity of established critic warn others to hold off putting this project to the test until 'adults' have reviewed it.

            - And then finally you suggest they do it anyway, with your permission this time (and of course now amply warned) with the implicit assumption that problems will turn up (most likely this will be the case) and that you hope 'there won't be too many false positives', strongly suggesting that there might be.

            And in your comment prior to this reply you do that once again, making statements that put words in the mouth of the author.

            • jagrsw 12 hours ago

              You're right, my tone was off.

  • pizlonator 14 hours ago

    Posts like the one I made about how to do sandboxing are specifically to make the runtime transparent to folks so that meaningful auditing can happen.

    > For example, Filip mentioned that some setuid programs can be compiled with it, but it also makes changes to ld.so. I pointed this out to the author on Twitter, as it could be problematic.

    The changes to ld.so are tiny and don’t affect anything interesting to setuid. Basically it’s just one change: teaching the ld.so that the layout of libc is different.

    More than a month ago, I fixed a setuid bug where the Fil-C runtime was calling getenv rather than secure_getenv. Now I’m just using secure_getenv.

    > In other words, these are still teething problems with Fil-C, which will be reviewed and fixed over time. I just want to point out that using it for real-world "infrastructures" might be somewhat risky at this point. We need unix nerds to experiment with.

    There’s some truth to what you’re saying and there’s also some FUD to what you’re saying. Like a perfectly ambiguous mix of truth and FUD. Good job I guess?

    • fc417fc802 9 hours ago

      Is it FUD? Approximately speaking, all software has bugs. Being an early adopter for security critical things is bound to carry significant risk. It seems like a relevant topic to bring up in this sort of venue for a project of this sort.

      • pizlonator 8 hours ago

        It’s like half FUD.

        The FUDish part is that the only actual bug bro is referring to got fixed a while ago (and didn’t have to do with ld.so), and the rest is hypothetical

    • walterbell 13 hours ago

      > a perfectly ambiguous mix of truth and FUD

      Congrats on Fil-C reaching heisentroll levels!

  • quotemstr 9 hours ago

    It's difficult for me to have a positive opinion of the author when he responds with dismissal and derision to concerns others have raised about Fil-C and memory safety under data races.

    The fact is that Fil-C allows capability and pointer writes to tear. That is, when thread 1 writes pointer P2 to a memory location previously holding P1, thread 2 can observe, briefly, the pointer P2 combined with the capability for P1 (or vice versa, the capability for P2 coupled to the pointer bits for P1).

    Because thread 2 can observe a mismatch between a pointer and its capability, an attacker controlled index into P2 from thread 2 can access memory of an object other than the one to which P2 points.

    The mismatch of pointer and capability breaks memory safety: an attacker can break the abstraction of pointers-as-handles and do nefarious things with pointers viewed instead as locations in RAM.

    On one hand, this break is minor and doesn't appear when memory access is correctly synchronized. Fil-C is plenty useful even if this corner case is unsafe.

    On the other hand, the Fil-C author's reaction to discourse about this corner case makes me hesitant to use his system at all. He claims Java has the same problem. It does not. He claims it's not a memory safety violation because thread 1 could previously have seen P1 and its capability and therefore accessed any memory P1's capability allowed. That's correct but irrelevant: thread 2 has P2 and it's paired with the wrong capability. Kaboom.

    The guy is technically talented, but he presents himself as Prometheus bringing the fire of memory safety to C-kind. He doesn't acknowledge corner cases like the one I've described. Nor does he acknowledge practical realities like the inevitability of some kind of unsafe escape hatch (e.g. for writing a debugger). He says such things are unnecessary because he's wrapped every system call and added code to enforce his memory model's invariants around it. Okay, is it possible to do that in the context of process_vm_writev?

    I hope, sincerely, the author is able to shift perspectives and acknowledge the limitations of his genuinely useful technology. The more he presents it as a panacea, the less I want to use it.

    • pizlonator 8 hours ago

      > Because thread 2 can observe a mismatch between a pointer and its capability, an attacker controlled index into P2 from thread 2 can access memory of an object other than the one to which P2 points.

      Under Fil-C’s memory safety rules, „the object at which P points” is determined entirely by the capability and nothing else.

      You got the capability for P1? You can access P1. That’s all there is to it. And the stores and loads of the capability itself never tear. They are atomic and monotonic (LLVM’s way of saying they follow something like the JMM).

      This isn’t a violation of memory safety as most folks working in this space understand it. Memory safety is about preventing the weird execution that happens when an attacker can access all memory, not just the memory they happen to get a capability to.

      > He claims Java has the same problem. It does not.

      It does: in Java, what object you can access is entirely determined by what objects you got to load from memory, just like in Fil-C.

      You’re trying to define „object” in terms of the untrusted intval, which for Fil-C’s execution model is just a glorified index.

      Just because the nature of the guarantees doesn’t match your specific expectations does not mean that those guarantees are flawed. All type systems allow incorrect programs to do wrong things. Memory safety isn’t about 100% correctness - it’s about limiting the fallout of incorrect execution to a bounded set of memory.

      > That's correct but irrelevant: thread 2 has P2 and it's paired with the wrong capability. Kaboom.

      Yes, kaboom. The kaboom you get is a safety panic, because a nonadversarial program would have had in-bounds pointers, and the tear that arises from the race causes an OOB pointer that panics on access. No memory safe language prevents adversarial programs from doing bad things (that’s what sandboxes are for, as TFA elucidates).

      But that doesn’t matter. What matters is that someone attacking Fil-C cannot use a UAF or OOBA to access all memory. They can only use it to access whatever objects they happen to have visibility into based on local variables and whatever can be transitively loaded from them by the code being attacked.

      That’s memory safety.

      > He doesn't acknowledge corner cases like the one I've described.

      You know about this case because it’s clearly documented in the Fil-C documentation. You’re just disagreeing with the notion that the pointer’s intval is untrusted and irrelevant to the threat model.

      • quotemstr 8 hours ago

        > The kaboom you get is a safety panic

        You don't always get a panic. An attacker who can get a program to access an offset he controls relative to P2 can access P1 if P2 is torn such that it's still coupled, at the moment of adversarial access, with P1's capability. That's dangerous if a program has made a control decision based on the pointer bits being P2. IOW, an attacker controlled offset can transform P2 back into P1 and access memory using P1's capability even if program control flow has proceeded as though only P2 were accessible at the moment of adversarial access.

        That can definitely enable a "weird execution" in the sense that it can let an attacker make the program follow an execution path that a plain reading of the source code suggests it can't.

        Is it a corner case that'll seldom come up in practice? No. Is it a weakening of memory safety relative to what the JVM and Rust provide? Yes.

        You are trying to define the problem away with sleight-of-hand about the pointer "really" being its capability while ignoring that programs make decisions based on pointer identity independent of capability -- because they're C programs and can't even observe these capabilities. The JVM doesn't have this problem, because in the JVM, the pointer is the capability.

        It's exactly this refusal to acknowledge limitations that spooks me about your whole system.

        • pizlonator 8 hours ago

          > An attacker who can get a program to access an offset he controls relative to P2 can access P1 if P2 is torn such that it's still coupled, at the moment of adversarial access, with P1's capability

          Only if the program was written in a way that allowed for legitimate access to P1. You’re articulating this as if P1 was out of thin air; it’s not. It’s the capability you loaded because the program was written in a way that let you have access to it. Like if you wrote a Java program in a way where a shared field F sometimes pointed to object P1. Of course that means loaders of F get to access P1.

          > That can definitely enable a "weird execution"

          Accessing a non-free object pointed to by a pointer you loaded from the heap is not weird.

          I get the feeling that you’re not following me on what „weird execution” is. It’s when the attacker can use a bug in one part of the software to control the entire program’s behavior. Your example ain’t that.

          > Is it a corner case that'll seldom come up in practice? No. Is it a weakening of memory safety relative to what the JVM and Rust provide? Yes.

          I don’t care about whether it’s a corner case.

          My point is that there’s no capability model violation and no weird execution in your example.

          It’s exactly like what the JVM provides if you think of the intval as just a field selector.

          I’m not claiming it’s like what rust provides. Rust has stricter rules that are enforced less strictly (you can and do use the unsafe escape hatch in rust code to an extent that has no equal in Fil-C).

          • lifis an hour ago

            I think his argument is that you can have code like this:

              user = s->user;
              if(user == bob)
                user->acls[s->idx]->has_all_privileges = true;
            
            And this happens:

            1. s->user is initialized to alice.

            2. Thread 1 sets s->idx to ((alice - bob) / sizeof(...)) and sets s->user to bob, but only the intval portion of the write has landed; the capability still points to alice.

            3. Thread 2 executes the if, which succeeds, and then unexpectedly gives all privileges to alice, since the bob intval plus the idx points back into alice while the capability is still for alice.

            It does seem like a real issue, although perhaps not very likely to be present and exploitable.

            Seems perhaps fixable by making pointer equality require that capabilities are also equal.

          • quotemstr 8 hours ago

            > Only if the program was written in a way that allowed for legitimate access to P1. You’re articulating this as if P1 was out of thin air; it’s not.

            My program:

              if (p == P2) return p[attacker_controlled_index];
            
            If the return statement can access P1, disjoint from P2, that's a weird execution for any useful definition of "weird". You can't just define the problem away.

            Your central claim is that you can take any old C program, compile it with Fil-C, and get a memory-safe C program. Turns out you get memory safety only if you write that C program with Fil-C's memory model and its limits in mind. If someone's going to do that, why not write instead with Rust's memory model in mind and not pay a 4x performance penalty?

            • pizlonator 8 hours ago

              > that's a weird execution for any useful definition of "weird".

              Weird execution is a term of art in the security biz. This is not that.

              Weird execution happens when the attacker can control all of memory, not just objects the victim program rightly loaded from the heap.

              > Your central claim is that you can take any old C program, compile it with Fil-C, and get a memory-safe C program.

              Yes. Your program is memory safe. You get to access P1 if p pointed at P1.

              You don’t get to define what memory safety means in Fil-C. I have defined it here: https://fil-c.org/gimso

              Not every memory safe language defines it the same way. Python and JavaScript have a weaker definition since they both have powerful reflection including eval and similar superpowers. Rust has a weaker definition if you consider that you can use `unsafe`. Go has a weaker definition if you consider that tearing in Go leads to actual weird execution (attacker gets to pop the entire Go type system). Java’s definition is most similar to Fil-C’s, but even there you could argue both ways (Java has more unsafe code in its implementation while Fil-C doesn’t have the strict aliasing of Java’s type system).

              You can always argue that someone else’s language isn’t memory safe if you allow yourself to define memory safety in a different way. That’s not a super useful line of argumentation, though it is amusing and fun

              • tialaramex an hour ago

                > Rust has a weaker definition if you consider that you can use `unsafe`

                I don't see it. Rust makes the same guarantees regardless of the unsafe keyword. The difference is only that with the unsafe keyword you the programmer are responsible for upholding those guarantees whereas the compiler can check safe Rust.

              • torginus 3 hours ago

                Sorry to intrude on the discussion, but I have a hard time grasping how to produce the behavior mentioned by quotemstr. From what I understand the following program would do it:

                    int arr1[] = {1, 2, 3, 4, 5};
                    int arr2[] = {10, 20, 30, 40, 50};
                    int *p1 = &arr1[1];  
                    int *p2 = &arr2[2];  
                    int *p = choose_between(p1,p2);
                
                    //then sometime later, a function gets passed p
                    // and this snippet runs
                    if (p == p2) {
                     //p gets torn by another thread
                     return p; // this allows an illegal index/pointer combo, possibly returning p1[1]
                    }
                
                Is this program demonstrating the issue? Does this execute under Fil-C's rules without a memory fault? If not, could you provide some pseudocode that causes the described behavior?

              • quotemstr 7 hours ago

                You may define "memory safety" as you like. I will define "trustworthy system" as one in which the author acknowledges and owns limitations instead of iteratively refining private definitions until the limitations disappear. You can define a mathematical notation in which 2+3=9, but I'm under no obligation to accept it, and I'll take the attempt into consideration when evaluating the credibility of proofs in this strange notation.

                Nobody is trying to hide the existence of "eval" or "unsafe". You're making a categorical claim of safety that's true only under a tendentious reading of common English words. Users reading your claims will come away with a mistaken faith in your system's guarantees.

                Let us each invest according to our definitions.

burakemir 11 hours ago

My trouble with separate categories "memory safety technology" and "sandboxing technology" is that something like WASM execution is both:

* Depending on how WASM is used, one gets safety guarantees. For example, memory is not executable.

* Privileges are reduced as a WASM module interacts with the environment through the WASM runtime and the embedder

Now, when one compiles C to WASM one may well compile things with bugs. A memory access bug in C is still a memory access bug, but its consequences can be limited in WASM execution. Whether fail-stop behavior is guaranteed actually depends on the code the C compiler generates and the runtime (allocation/deallocation, concurrency) it sets up.

So when we enumerate immediately available security options and count WASM as sandboxing, this is not wrong. But WASM being an execution environment, one could do a lot of things, including a way of compiling and executing C that panics when a memory access bug is encountered.

  • pizlonator 9 hours ago

    Wasm is just sandboxing.

    Say your C program has sensitive information in module A and a memory safety bug in module B. Running that program in wasm won’t prevent the attacker from using the bug in B to get read/write access to the data in A.

    In practice what the attacker will really do is use the memory safety bug to achieve weird execution: even without control over the program counter, the fact that a memory safety bug inside the wasm memory gives read write access to all of that memory means the attacker can make the program do whatever they want, subject to the wasm sandbox limits (ie whatever the host allows the wasm guest to do).

    Basically wasm amounts to a lightweight and portable replacement for running native code in a sufficiently sandboxed process

    • azakai 9 hours ago

      Your general point stands - wasm's original goal was mainly sandboxing - but

      1. Wasm does provide some amount of memory safety even to compiled C code. For example, the call stack is entirely protected. Also, indirect calls are type-checked, etc.

      2. Wasm can provide memory safety if you compile to WasmGC. But, you can't really compile C to that, of course...

  • pjmlp 4 hours ago

    "Depends on how it is used" is already a sign that WebAssembly isn't really as safe as it is being sold by many of its advocates, versus other bytecode formats.

    Like, C is actually really safe, it only depends on how it is being used.

    People only have to enumerate the various ways and tools to write safe code in C.

    Problem solved, or so we get to believe.

  • vlovich123 9 hours ago

    > including a way of compiling and executing C that panics when a memory access bug is encountered.

    WASM couldn’t do that because it doesn’t have a sense of the C memory model nor know what is and isn’t safe - that information has long been lost. That kind of protection is precisely what Fil-C is doing.

    WASM is memory safe in that you can’t escape the runtime. It’s not memory safe in that you can still corrupt memory within the program running inside the sandbox, which you can’t do with a memory safe language like Rust or Fil-C.

hurturue 15 hours ago

MicroVMs seem to be getting more popular.

I wonder how they fit into the picture.

  • pizlonator 14 hours ago

    Good point!

    It would require a bit of porting (since Fil-C currently assumes you have all of the Linux syscalls). But you could probably even lift some of the microVM’s functionality into Fil-C’s memory safe userland.

razighter777 14 hours ago

I hope this project gets more traction. I would love to see a memory safe battle tested sudo or polkit in my package manager without having to install a potentially workflow breaking replacement.

iberator 3 hours ago

Any wizard here? Fil-C vs Rust? Realistic comments without hype.

  • forgotpwd16 2 hours ago

    Fil-C introduces a garbage collector and can result in significant slowdowns in some cases. Its main reason for existing is making non-perf-sensitive C/C++ memory safe, not improving the language design. If you really want your stack to be C/C++ & Fil-C, then your competition includes D/Nim/Go/etc, not (just) Rust/Zig. Even if it magically made C/C++ memory safe with no downsides, your question basically boils down to C vs C++ vs Rust. Don't know about you, but I prefer somewhat larger binaries and some compiler brawling over programming 70s-style.

    • kokada an hour ago

      One thing that I am interested in is the performance of Fil-C compared to other compiled programming languages that are also GC'd, especially Go.

  • razighter777 an hour ago

    Both are great tech but solve the problem of safety differently. I would say Fil-C is great for non-performance-critical existing C programs (performance-wise, think somewhere between C and Go/Java, still very fast) where compatibility with the existing program / security is a big concern. Think ffmpeg, nginx, sudo.

    Fil-C:

    - You have a great existing C program that may have memory bugs, and you wanna make it safer.

    - Or you wanna write a new program in C, and be extra sure it's safe and don't mind a little performance penalty.

    - Or you wanna find subtle memory bugs by building your C program with Fil-C (ASan style) and disable it for performance in your release build.

    Rust is great when you want to build a new codebase from scratch, and have the time and patience to deal with the borrow checker. It also gives you some thread safety (which is different from memory safety), at the development-time cost of dealing with the borrow checker. Rust:

    - A new codebase where you need multithreading and safety, and want excellent performance

    - You need a broad ecosystem of existing packages

    - Your problem space benefits from a robust type system.

  • the-lazy-guy 2 hours ago

    Fil-C aborts your program if it detects unsafe memory operations. You very much can write code that is not memory safe; it will just crash. Also, it has significant runtime cost.
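
    For example (a sketch; per the Fil-C docs an out-of-bounds store like this panics at runtime instead of silently corrupting the heap):

      #include <stdlib.h>

      int main(void)
      {
          char *p = malloc(16);
          p[32] = 'x';   /* out of bounds: Fil-C's runtime check aborts here
                            rather than scribbling over a neighboring object */
          free(p);
          return 0;
      }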

    Rust tries to prevent you from writing memory-unsafe code. But it has official ways of overcoming these barriers (the "unsafe" keyword, which tells the compiler "trust me bro, I know what I'm doing") and some soundness holes. But because safety is proven statically by the compiler, it is mostly zero-cost. ("Mostly" because some things the compiler can't prove, and people resort to "unsafe" + runtime checks.)

    Two orthogonal approaches to safety. You could have Fil-C style runtime checks in Rust, in principle.

loeg 14 hours ago

Sort of similarly, I'd like to see more use of sandboxing in memory-safe language programs. But I don't see a ton of people using these OS primitives in, e.g., Rust or Go.

  • pornel 13 hours ago

    There's a need for some portable and composable way to do sandboxing.

    Library authors can't configure seccomp themselves, because the allowlist must be coordinated with everything else in the whole process, and there's no established convention for negotiating that.

    Seccomp has its own pain points, like being sensitive to libc implementation details and kernel versions/architectures (it's hard to know what syscalls you really need). It can't filter by inputs behind pointers, most notably can't look at any file paths, which is very limiting and needs even more out-of-process setup.
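
    For example, with libseccomp (just an illustration, not something built into any toolchain) the most you can say about file opens is a constraint on the flags register; the path itself sits behind a pointer the filter can never read:

      #include <fcntl.h>     /* O_WRONLY, O_RDWR */
      #include <seccomp.h>   /* libseccomp; link with -lseccomp */

      static void lock_down(void)
      {
          /* Anything not explicitly allowlisted kills the process. */
          scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL_PROCESS);
          seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0);
          seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
          seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);
          /* The filter sees only syscall numbers and raw argument registers:
             we can require openat to be read-only via its flags word, but we
             cannot inspect the path string that the pointer argument references. */
          seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(openat), 1,
                           SCMP_A2(SCMP_CMP_MASKED_EQ, O_WRONLY | O_RDWR, 0));
          seccomp_load(ctx);
      }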

    This makes seccomp sandboxing something you add yourself to your application, for your specific deployment environment, not something that's a language built-in or an ecosystem-wide feature.

    • silon42 4 hours ago

      To be properly useful as a sandbox, it would be nice to have a tool that would run another process/executable in a sandboxed environment.

      Basically a tool that would allow you to run flatpaks/AppImages/etc...

      Maybe firejail already does all that can be done without using a VM.

  • pizlonator 14 hours ago

    I think Rust is great for sandboxing because Rust has basically no runtime. This is one of the nice things about Rust!

    Go has the same problems I’m describing in my post. Maybe those folks haven’t done the work to make the Go runtime safe for sandboxing, like what I did for Fil-C.

    • loeg 13 hours ago

      Sure, but even just setuiding to a restrictive uid or chrooting would go a long way, even in a managed runtime language where syscall restrictions are more challenging.

Too 8 hours ago

Can someone give a tldr of what makes fil-c different from just compiling with clang’s address sanitizer?

Calling it memory safe is a bit of a stretch when all it does is convert memory errors to runtime panics, or am I missing something? I mean, that’s still good, just less than I’d expect given the recent hype of fil-c being the savior for making C a competitive language again.

  • integralid 6 hours ago

    ASan does not make your code memory safe! It is quite good at catching unintentional bugs/OOB memory writes in your code, and it is quite reliable (the authors claim no false positives), but it has false negatives, i.e. it won't detect everything. Especially if you're up against someone who tries to corrupt your memory intentionally.

    ASan works by (simplifying a lot) padding allocations and surrounding them with an untouchable "red zone". So with some luck even this can work:

      char *a = new char[100];
      char *b = new char[1000];
      a[500] = 0; // may end up in b
  • procaryote 7 hours ago

    If you can rely on memory errors panicking before the memory error can have an effect, you're memory safe. Memory safety doesn't require "can't crash".

    • Too 2 hours ago

      From a definition point of view that might be right, and it’s no doubt a good step up compared to continuing with tainted data. In practice, though, that is still not enough; these days we should expect a higher degree of confidence in our code before it’s run. Especially with the mountains of code that LLMs will pour over us.

    • seabrookmx 6 hours ago

      Exactly. Or Rust wouldn't be memory safe due to the existence of unwrap().

      Not that crashing can't be bad, as we saw with Cloudflare's recent unwrap-based incident.

      • brabel 10 minutes ago

        Even without unwrap, Rust could still crash on an out-of-bounds array access. And probably in more similar cases.

fragmede 12 hours ago

Which requirements does a full blown virtual machine not meet? By leaning on that as the sandbox, we get Qubes, but maybe I don't know what I'm talking about.

  • pizlonator 11 hours ago

    It’s true that a full blown VM is an excellent sandbox.

    The usual situation is like what chrome or OpenSSH want:

    - They want to be able to do dangerous things by design. Chrome wants to save downloads. Chrome wants to call rendering APIs. OpenSSH wants to pop a shell.

    - They want to deal with untrusted inputs. Chrome downloads things off the internet and parses them. OpenSSH has a protocol that it parses.

    So you want to split your process into two with privilege separation:

    - one process has zero privileges and does the parsing of untrusted inputs.

    - another process has high privilege but never deals with untrusted inputs.

    And then the two processes have some carefully engineered IPC protocol for talking to one another.
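
    In sketch form (hypothetical, just the shape of it, not any particular project's code):

      #include <sys/prctl.h>
      #include <sys/socket.h>
      #include <unistd.h>

      int main(void)
      {
          int sv[2];
          socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sv);   /* the IPC channel */

          if (fork() == 0) {
              /* Low-privilege half: parses untrusted input. */
              close(sv[0]);
              prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
              /* ... drop uid / chroot / install a seccomp filter here, then read
                 requests from sv[1] and write back only parsed, validated results ... */
              _exit(0);
          }

          /* High-privilege half: never touches untrusted input directly, only the
             structured messages coming back over sv[0]. */
          close(sv[1]);
          return 0;
      }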

    Could you run the deprivileged process in a VM for maximum security? Yeah that’s one way to do it. But it’s cleaner to run it as a normal process, ask the OS to sandbox it (deprivilege it), and then have a local domain socket (or whatever) that the two processes can use to communicate.

    If you used a VM for deprivileging then:

    - There’d be more overhead. Chrome wants to do this per origin per tab. OpenSSH wants to do it per connection. Maybe a VM is too much

    - You could put the whole browser into the VM but then you’d still need something outside it for saving files. And probably for talking to the GPU. You could run OpenSSH in the VM but then that defeats the purpose (you want to use it to pop a shell after all).

    - You can use vsocks and other things to communicate between host and guest but it’s much more gross than the options available when using traditional process sandboxing

    • integralid 6 hours ago

      Does it even work for the OpenSSH example? Pwning the parser process will let the attacker spoof arbitrary communication, which in the case of SSH lets them execute arbitrary commands. Or is there a smart way to work around that?

      • deredede 5 hours ago

        You can send arbitrary commands, but they will be rejected unless you provide valid credentials first.

  • creatonez 10 hours ago

    When it comes to VMs, most things are solved and have near native performance, but desktop graphics are not. Due to limitations in GPU architecture, you usually have to dedicate an entire GPU to the VM to have reasonable graphical acceleration. Qubes doesn't solve this either, IIRC the apps running in VMs are glued to the host with X11 forwarding without any acceleration support.

oofbey 7 hours ago

Nit: The word “orthogonal” should not mean merely “different”. It should mean “completely unrelated” if we are drawing a proper analogy from linear algebra. Orthogonal vectors have a dot product of zero. No correlation whatsoever. As ML and linear algebra terms spread to more common language, of course the terms will change their meaning. Just as “literally” now often means “figuratively”, I’m not going to die on this hill. But I will try to resist degradation of terms that have specific technical meaning.

So I would very much disagree with the statement that memory safety and sandboxing are orthogonal. They are certainly different. Linearly independent even. But with a fair amount of overlap.

  • puilp0502 3 hours ago

    But it's much easier to say "orthogonal" than "linearly independent", no? As you mentioned, I think the word "orthogonal" has already lost its meaning of "dot product equals zero", and bears the meaning of "linearly independent" (i.e. dim(N) > 1) in casual speech.

fooker 10 hours ago

What's the rationale for naming it after yourself?

  • nospice 8 hours ago

    Cough Linux cough

    You're getting cool tech for free. You can use it or not use it. Complaining about the name is weird.

    • mxey 6 hours ago

      Linus didn’t name Linux, someone else named it after him.

      • fooker 6 hours ago

        He did name git after himself though :)

  • yjftsjthsd-h 8 hours ago

    Another option: It's genuinely easier, for what amounts to namespacing reasons. Like, if I came up with a cool new C compiler, I'd probably name it ${MYNAME}cc just because that's an easy identifier that is very unlikely to have a collision and doesn't require me to spend time thinking of some name that is clever, unique, and accurately conveys what the project is about.

  • quotemstr 9 hours ago

    Throughout history, the prospect of glory has motivated people to do great deeds. Nothing wrong with it now.

  • airstrike 10 hours ago

    Pride is as good a reason as any other