latch 3 days ago

Author here.

I finally got it working. I had to flush both the encrypted writer and then the stream writer. There were also some issues with reading. Streaming works, but it always returns 0 on the first read because Writer.Fixed doesn't implement sendFile; after that first call it internally switches from streaming mode to reading mode (1), and then things magically work.
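
To illustrate the flush-ordering part, here's a rough C analogy (this is not the Zig API; the XOR "cipher" just stands in for the real encryption layer). Flushing only the outer stream would leave bytes stranded in the inner buffer:

  #include <stdio.h>

  /* Hypothetical inner "encrypting" writer that buffers plaintext and only
     emits (XOR-"encrypted") bytes into the underlying stream when flushed. */
  struct enc_writer {
      FILE *out;               /* the outer stream writer */
      unsigned char buf[16];
      size_t len;
  };

  static void enc_flush(struct enc_writer *w)
  {
      for (size_t i = 0; i < w->len; i++)
          w->buf[i] ^= 0x5A;               /* stand-in for real encryption */
      fwrite(w->buf, 1, w->len, w->out);   /* hand ciphertext to the stream writer */
      w->len = 0;
  }

  static void enc_write(struct enc_writer *w, const char *data, size_t n)
  {
      for (size_t i = 0; i < n; i++) {
          if (w->len == sizeof(w->buf))
              enc_flush(w);
          w->buf[w->len++] = (unsigned char)data[i];
      }
  }

  int main(void)
  {
      FILE *f = fopen("out.bin", "wb");
      if (!f)
          return 1;

      struct enc_writer w = { .out = f, .len = 0 };
      enc_write(&w, "hello, world", 12);

      enc_flush(&w);   /* 1. flush the inner (encrypting) layer into the stream */
      fflush(f);       /* 2. then flush the outer stream writer itself          */

      fclose(f);
      return 0;
  }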

Currently trying to get compression re-enabled in my websocket library.

(1) https://github.com/ziglang/zig/blob/47a2f2ddae9cc47ff6df7a71...

  • dchest 3 days ago
    • jenadine 2 days ago

      That's why I like RAII.

      • AndyKelley 2 days ago

        It's a bug to flush (fallible operation) in a destructor (infallible operation).

        • oconnor663 2 days ago

          I know you've thought carefully about these issues, but still it can't be that simple, can it? Closing a file or a socket is a fallible operation too.

          • AndyKelley 2 days ago

            Wrong.

            • MrResearcher 2 days ago

              Why is he wrong?

              Here's an excerpt from the close(2) syscall description:

              RETURN VALUE
                     close() returns zero on success. On error, -1 is returned, and errno is set to indicate the error.

              ERRORS
                     EBADF  fd isn't a valid open file descriptor.

                     EINTR  The close() call was interrupted by a signal; see signal(7).
              
                     EIO    An I/O error occurred.
              
                     ENOSPC
                     EDQUOT On NFS, these errors are not normally reported against the first write which exceeds the available storage space, but instead against a subsequent
                            write(2), fsync(2), or close().
              
                     See NOTES for a discussion of why close() should not be retried after an error.
              
              It obviously can fail due to a multitude of reasons.

              • AndyKelley 2 days ago

                It's unfortunate that the original authors of this interface didn't understand how important infallibility is to resource deallocation, and it's unfortunate that NFS authors didn't think carefully about this at all, but if you follow the advice of the text you pasted and read the section about how you can't retry close() after an error, it is clear that close is, in fact, a fundamentally infallible operation.

                • MrResearcher a day ago

                  If the flush (syscall) fails, it's not possible to recover in user space, so the only sensible option is to abort() immediately. It's not even safe to perror("Mayday, mayday, flush() failed"); you must simply abort().

                  And, the moment you start flushing correctly: if(flush(...)) { abort(); }, it becomes infallible from the program's point of view, and can be safely invoked in destructors.
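
                  A minimal sketch of that idea in C, using stdio (the wrapper name is made up for illustration):

                    #include <stdio.h>
                    #include <stdlib.h>

                    /* Infallible from the program's point of view: either the buffered
                       data reaches the OS, or the process aborts. Safe to call from
                       cleanup paths / destructors. */
                    static void flush_or_abort(FILE *f)
                    {
                        if (fflush(f) != 0)
                            abort();
                    }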

                  File closure operations, on the other hand, do have legitimate reasons to fail. In one of my previous adventures, we were asking the operator to put the archival tape back and then re-issuing the close() syscall, with the driver checking that the tape was inserted and passing control to the mechanical arm for further positioning of the tape, all of that in drivers running in kernel space. The program actually had to retry close() syscalls, and kept asking the operator to handle the tape (there were multiple scenarios for how the operator could proceed).

                  • jcalvinowens a day ago

                    If the tape drive failed close() in a way that did not deallocate the file descriptor, that was just straight up a bug.

                    Retrying close() is dangerous: if the file descriptor was successfully deallocated, it might have already been re-allocated by another thread. I'd guess the program you're describing was single-threaded, though (it can still bite you there too).

                  • zozbot234 a day ago

                    > In one of my previous adventures, we were asking the operator to put the archival tape back, and then re-issuing the close() syscall, with the driver checking that the tape is inserted and passing the control to the mechanical arm for further positioning of the tape, all of that in the drivers running in the kernel space.

                    Why can't the OS itself do the prompting in this case, as part of processing the original close()? MS-DOS had its (A)bort/(R)etry/(I)gnore prompt for failing I/O operations, and AmigaOS could track media labels and ask the user to "insert $MEDIA_LABEL in drive".

                    • MrResearcher a day ago

                      Because DOS relied on BIOS interrupts for I/O, like int 10h here for console output:

                        mov si, GREETINGS_STRING
                        print_loop:
                          lodsb                  ; Load next byte into AL, advance SI
                          cmp al, 0              ; Check for null terminator
                          je done
                      
                          mov ah, 0Eh            ; BIOS teletype output
                          mov bh, 0              ; Page number = 0
                          mov bl, 07h            ; Light gray on black in text mode
                          int 10h                ; Print character in AL
                      
                          jmp print_loop
                        done:
                          ...
                      
                        GREETINGS_STRING db "Hello, BIOS world!", 0
                      
                      And Linux doesn't rely on the BIOS for output I/O; it provides a TTY subsystem, and programs use devices like /dev/tty for I/O. Run $ lspci in your console: which of those devices should the kernel use for output? The kernel wouldn't know, and the BIOS is no longer of any help.

                      • zozbot234 a day ago

                        > which of those devices should the kernel use for output?

                        Whatever facility it uses for showing kernel panics, perhaps. Though one could also use IPC facilities such as dbus to issue a prompt in the session of whatever user is currently managing that media device.

                • jcalvinowens a day ago

                  Yeah, close() can't fail, but it can return an error. It's kind of odd.

                  How could one fix that though? It seems pretty unavoidable to me because write() is more or less asynchronous to actual disk I/O.

                  You could add finalize() which is distinct from close(), but IMHO that's even more confusing.
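
                  One common mitigation at the application level (a generic POSIX sketch, not tied to any particular language's stdlib) is to call fsync() before close(), so delayed write-back errors surface from a call you can actually handle, and close() is left to do nothing but release the descriptor. It doesn't resolve the interface question, but it's the usual way applications surface those delayed errors today:

                    #include <fcntl.h>
                    #include <unistd.h>

                    /* Returns 0 on success, -1 on error (errno set by the failing call). */
                    static int write_file_durably(const char *path, const void *data, size_t len)
                    {
                        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
                        if (fd < 0)
                            return -1;

                        const char *p = data;
                        while (len > 0) {
                            ssize_t n = write(fd, p, len);
                            if (n < 0) {                /* write errors are caught here... */
                                close(fd);
                                return -1;
                            }
                            p += n;
                            len -= (size_t)n;
                        }

                        if (fsync(fd) < 0) {            /* ...or here, before the fd goes away */
                            close(fd);
                            return -1;
                        }

                        /* close() now only deallocates the descriptor; the data has already
                           been pushed toward stable storage as far as the OS can tell. */
                        return close(fd);
                    }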

  • tempodox 2 days ago

    Whatever happened to the principle of least surprise?

    • DrewADesign 2 days ago

      Unsurprisingly, it’s occasionally disregarded—seemingly when you least expect it.

  • j-krieger 2 days ago

    Going from the previous interface to whatever this is, is certainly something. Yeesh.

hardwaresofton 3 days ago

I'm not a Zig PM but the first obvious fix for the issues the OP wrote about is to write better documentation, including usage examples (the more the better, almost to a fault). Also doubles as a good time to reflect on whether the user is having to do too much.

If the tradeoff was absolute performance / avoiding load-bearing, performance-lowering abstractions, I think that goal was achieved, but DX may have gone out the window.

  • brabel 3 days ago

    You’re not familiar with Zig’s culture, I guess. Complain about the lack of documentation and be prepared for the flood of “just read the stdlib code” helpful comments by pretty much everyone who writes Zig right now. Because most APIs are just as hard to use as in this post (check things like HTTP and even basic file system operations), only the strongest survive.

    • virtualritz 3 days ago

      I do not think this is a viable excuse any more.

      I am now just editing docs that Claude Code writes for me. I am fanatical about developer docs (and I guess an exception, as I love writing them), but with a set of concise instructions for CC and some writing style examples I get 90% of the way there, sometimes 99%.

      If you believe you don't have time for the last 1-10%, you should not be in charge of writing any API used by anyone but yourself. Just my two cents.

      • aDyslecticCrow 3 days ago

        AI is great at block comments; there is no excuse. Add to that a small annotated usage example written by a human and this whole post would not have existed.

        Lack of docs also cripples AI's understanding, so future adoption becomes even bleaker.

        If an API or library developer didn't bother with even bare-minimum docs, my confidence in the library drops as well.

        Did they skip testing as well? Run the happy path for a day and call it good?

        This post soured my interest in Zig. It's now obvious to me why Rust took much of its market.

        • felixgallo 3 days ago

          Zig is just getting started and came way after rust.

          • oblio 2 days ago

            I like Zig, but if we look at the numbers, the difference probably has more to do with funding than anything:

            Zig (programming language) - First appeared 8 February 2016; 9 years ago

            Rust (programming language) - First appeared January 19, 2012; 13 years ago

            Also, Zig at this point isn't really a brand new language anymore. I have comments on their issues dating back to 2018, so it's been a very active language since at least then.

            • jorams 2 days ago

              Those are not comparable dates. The Zig "first appeared" date is a few months into development by Andrew in his spare time. The Rust "first appeared" date is after 3 years of development by Graydon in his spare time, followed by 3 years of development by a Mozilla-sponsored team of engineers.

          • aDyslecticCrow 2 days ago

            So they're gonna just finish up their standard lib and THEN spend a year doing nothing but docs for everything they made?

            Just getting started is an even bigger reason to have good docs, to clearly communicate how the libraries and APIs work!

            I wouldn't even read a pull request containing a new function if the creator didn't bother writing a short description and usage clarification.

            Getting started is a good excuse for limited libraries or support (same situation with Rust). But lack of even basic docs is not acceptable if you want user adoption.

      • mikepurvis 2 days ago

        As a senior dev I’m also passionate about docs and communication, and actually kind of love the process of agonizing over a bunch of rST files to get the perfect manual that’s both an on-boarding guide and also a reference of all the nooks and crannies.

        I think what I’ve come to realize is that when I feel a barrier toward doing that work, it’s a sign that I don’t actually like the underlying API. I don’t want to document it or craft examples or tutorials because in my mind the API sucks, it’s transitional, and documenting it in its current state is going to somehow lock in a bad interface and further increase the effort required to change and fix it up later.

      • throwawaymaths 2 days ago

        It's just not reasonable to expect volunteers to write eloquent documentation when it's likely it will just get flushed.

      • tel 2 days ago

        At the same time, if you want to use Claude to read the source and narrate how it works to you that’s trivial to do as a user.

      • foxes 2 days ago

        I don't want to read AI slop comments. If you can't be bothered writing docs, I can't be bothered learning to use your library.

        • BobbyJo 2 days ago

          Just because an AI produced it doesn't mean it is slop.

          • ioasuncvinvaer 2 days ago

            But it is a pretty good signal of low quality.

            • BobbyJo 2 days ago

              A signal is something you use to discern something in lieu of direct information. In the case of docs, just look at them: whether they suck or are good, it doesn't really matter where they came from. That being said, I'll take AI-generated docs over no docs, and no docs is very common.

              • ioasuncvinvaer 2 days ago

                But how do you know if you can trust the docs if they are AI generated?

                • BobbyJo 2 days ago

                  How do you know if you can trust them if they are human-generated? You trust the people. AI isn't going to jump in and just generate docs; a person has to prompt it, and you should expect that person to proofread and correct it. If the person turns out to be untrustworthy, you stop trusting them.

            • pixl97 2 days ago

              I mean, the quality of docs before AI was pretty low, and forums everywhere were filled with complaints about it, except in the case of a few exceptional pieces of software. By that logic, just having docs at all seems to be a signal of low quality.

              • ioasuncvinvaer 2 days ago

                No! Well-written docs are a sign of quality. Obviously, AI-generated docs make me question whether they are actually correct, which makes them useless.

    • keyle 3 days ago

      That would hurt adoption. I understand things move fast, but if you want people to make the switch beyond hello world, it has to be at a minimum cosy. Sending them to hell to find their own way out isn't a good move long term.

      I tried Zig a couple of times and I got that feeling: very powerful and clever language, but not really for me, I don't have the headspace, sorry. I need something I can debug after an 8-hour day job, a commute and having put the kids to bed. It better be inviting & fun! (Hi, C).

      • skupig 2 days ago

        Yeah, I had the same experience. I had an embedded project where I would need to use C libraries, and it seemed like a great excuse to try Zig, but it spat out a ton of esoteric errors I couldn't be bothered to figure out and I went back to Nim.

      • throwawaymaths 2 days ago

        after chasing some insane macro gymnastics which are there to make the c portable, i came to understand that zig is easier to debug than c

        • keyle 2 days ago

          I don't live in that universe! I thanos flicked it off!

    • littlestymaar 3 days ago

      The key problem with Zig nowadays is how much of its community and adoption is driven by anti-Rust sentiment. As a result, while Rust puts beginner onboarding and documentation at the center of its culture, as opposed to the “C neckbeard”'s culture, Zig is going the other way around.

      (Loris Cro being a key community figure isn't helping in any way, and it's a good reminder that if you don't clear bullies out of your community from the beginning, they will turn your entire community into a miserable place. And that's a shame because, from what I've seen, Andrew Kelley seems to be a very cool guy in addition to being very smart.)

      • kristoff_it 2 days ago

        > The key problem with Zig nowadays is how much of its community and adoption is driven by anti-Rust sentiment. As a result, while Rust puts beginner onboarding and documentation at the center of its culture, as opposed to the “C neckbeard”'s culture, Zig is going the other way around.

        Maybe, or maybe the fact that Zig is a small independent project with limited resources has also something to do with it, and this kind of shaming says less about Zig than you'd think.

        When I first joined the Zig project, Zig was still using the bootstrap compiler written in C++ that would not free memory (it took more than 4GB to compile the Zig compiler). Some people at the time were asking us to prioritize work on the package manager but Andrew rightfully wanted to prioritize rewriting the compiler instead. In hindsight this was the obviously right decision: a package manager implies that one can very easily add an order of magnitude more code to their project, stressing the performance of the compiler. If we had not prioritized core infrastructure over giving people what they wanted faster, today we would have people complaining that adding a single dependency to their project makes the build impossible to complete.

        The Zig project has a huge scope and we are a small independent organization. This makes us extremely nimble and efficient, but it does mean that we need to do things in the order that makes the most sense for the project, not for what the public wants.

        The fact that we develop in the open doesn't mean that the language is ready yet.

        People that already have the required domain knowledge (and who have a tolerance for breaking changes) will have the opportunity to be early adopters if they wish to do so, others will have to wait for Zig to become more mature. And we do make this clear in releases and all forms of public communication.

        We have gone a long way since the bootstrap compiler days, but we are still missing key infrastructure:

        - we have an x86_64 custom backend, but aarch64 is not complete yet
        - incremental compilation is showing that we can get instant rebuilds of large projects, but it has missing features and it doesn't work on all platforms yet
        - we need native fuzzing since AFL keeps regressing every time a new version of LLVM comes out
        - for the longest time we haven't had a strong I/O story; now we're finally working on it

        The time for paving the road for a new generation of programmers will come (it's in the ZSF mission statement btw), but first we need to finish the plumbing.

        • mirashii 2 days ago

          > Maybe, or maybe the fact that Zig is a small independent project with limited resources has also something to do with it

          Or, maybe it's this kind of redirection and evidence of a victim complex. Part of the reason that there's a patina of anti-Rust sentiment includes the dismissive attitude and swipes you, the VP of Community at the Zig Software Foundation, take towards Rust and Rust developers by writing about topics you aren't involved in and don't have a solid grasp of.

          https://kristoff.it/blog/raii-rust-linux/
          https://lobste.rs/s/hxerht/raii_rust_linux_drama#c_gbn1q8

          Or similarly, comments like this one in this thread: https://news.ycombinator.com/context?id=44994749

          • hitekker 20 hours ago

            Much of that bad blood comes from how Rust's leadership attacks other programming languages, online and offline.

            pcwalton infamously declared zig was "a massive step back for the industry" https://x.com/pcwalton/status/1568306598795431936?s=46&t=OCi.... He and the Rust Core Team had a big reputation for burning bridges. Even to this day, the new Rust leaders are happy to attack other memory safe languages like Go, declaring them "not memory safe" https://news.ycombinator.com/item?id=4467200

            I think Kristoff remembers these attacks, and crucially how very few voices within the Rust community push back against Rust supremacism.

            • zozbot234 16 hours ago

              > infamously declared zig was "a massive step back for the industry"

              > [Golang] "not memory safe"

              Both of these are entirely fair assessments, not "attacks". Golang really does have memory safety issues with concurrent code, and a memory-unsafe language like Zig is a step back even compared to Java/C#, let alone Rust.

              • hitekker 4 hours ago

                It's an entirely fair assessment within a framework of supremacism. "My language is the best. People who don't use it need to learn it. If they don't, they're bad programmers, maybe even bad people." It's an ugly spirit that no one is honest enough to admit to. But it's there. A few months ago, I saw a supposedly nice Rust leader calling SQLite "a terrible example of anything other than what you can accomplish when you pour enormous resources into a single C library."

                The end result is that Rust's leaders either avoid interacting with other languages, or engage in flamewars. I think it's a big reason why Java, the most popular and successful memory safe language in the world, has little-to-no formal contacts with the Rust team.

            • littlestymaar 4 hours ago

              > Rust supremacism.

              I can't believe you really wrote that.

          • littlestymaar 2 days ago

            Loris really isn't a person worth engaging with honestly, don't waste your time.

            • hardwaresofton 2 days ago

              Personal attacks like this have no place on HN.

              • littlestymaar 2 days ago

                Then Loris should have been perma-banned long ago. Until this is done, we'll have to warn people about engaging with him…

          • mirashii 2 days ago

            I just wanted to clarify that I point this out to say that there was a real opportunity in this thread to try to push back on the language wars perception. To me, good community management would be taking the opportunity to say:

              * Sorry you feel that way, we try to foster an inclusive environment that avoids language flamewars and negativity towards other languages
              * Some specific examples of policies that are enacted in community spaces towards that end (e.g. some large programming language discord servers have specific rules against low effort language bashing content and memes)
              * Take the opportunity to link back to positive things that the Zig team has said about Rust and places where folks have recommended it.
            
            Instead, we got a shallow dismissal and redirection, which unfortunately starts to look like a pattern, especially when coming from a person who's in a leadership position in the community being discussed.

        • adwn 2 days ago

          > […] Zig was still using the bootstrap compiler written in C++ that would not free memory […]

          That sounds strange. Modern C++ requires very little manual memory management, at least when you're writing something high-level like a compiler. C++11 had been out for years when development on Zig started. Were they writing C++ old-school as C-with-classes and malloc() everywhere? Why not use a more appropriate language for the first prototype of a compiler for a brand new language?

          • messe 2 days ago

            IIRC, it was a performance thing, and it's not an uncommon pattern in CLI tools. Freeing memory can actually cost you performance, so why not just let the OS clean up for you at exit(2)?

            • adwn 2 days ago

              > IIRC, it was a performance thing […]

              Why would you care about these kinds of micro-optimizations at that stage of development, when you don't even know what exactly you need to build? We're not talking about serious algorithmic improvements like turning O(n²) into O(n) here.

              > Freeing memory can actually cost you performance, so why not just let the OS clean up for you at exit(2)?

              Because a compiler is not some simple CLI tool with a fixed upper bound on resource consumption.

              • throwawaymaths 2 days ago

                > Why would you care about these kinds of micro-optimizations

                it turns out that compiler speed is bound by a bunch of things, and it's death by a thousand cuts. If you have a slow compiler, and it takes forever to compile your compiler, your language becomes sclerotic, no one wants to make changes, and your language gets stuck in shitty choices.

                > Because a compiler is not some simple CLI tool with a fixed upper bound on resource consumption

                yes. thats right. a compiler is complex and should use several different allocation strategies for different parts. if your language steers you towards using malloc for everything then your compiler (assuming it's bootstrapped) will suffer, because sometimes there are better choices than malloc.
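
                For readers unfamiliar with the idea, here is a minimal sketch of one such non-malloc strategy, a bump/arena allocator in C (not the Zig compiler's actual code): allocation is a pointer bump, and everything is released in one shot, or simply never, if you let the OS reclaim it at exit.

                  #include <stdlib.h>

                  /* Minimal bump arena: individual allocations are never freed;
                     the whole arena is released at once (or left to the OS at exit). */
                  struct arena {
                      char  *base;
                      size_t used;
                      size_t cap;
                  };

                  static int arena_init(struct arena *a, size_t cap)
                  {
                      a->base = malloc(cap);
                      a->used = 0;
                      a->cap  = cap;
                      return a->base ? 0 : -1;
                  }

                  static void *arena_alloc(struct arena *a, size_t n)
                  {
                      n = (n + 15) & ~(size_t)15;   /* keep allocations 16-byte aligned */
                      if (a->used + n > a->cap)
                          return NULL;              /* a real arena would grow or chain blocks */
                      void *p = a->base + a->used;
                      a->used += n;
                      return p;
                  }

                  static void arena_free_all(struct arena *a)
                  {
                      free(a->base);
                      a->base = NULL;
                      a->used = a->cap = 0;
                  }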

                • adwn 2 days ago

                  You're losing sight of the full picture: This "optimization" was such a hindrance that they had to rewrite the compiler before they could work on the ecosystem of their new language. From kristoff_it's comment [1]:

                  > When I first joined the Zig project, Zig was still using the bootstrap compiler written in C++ that would not free memory (it took more than 4GB to compile the Zig compiler). Some people at the time were asking us to prioritize work on the package manager but Andrew rightfully wanted to prioritize rewriting the compiler instead. In hindsight this was the obviously right decision: a package manager implies that one can very easily add an order of magnitude more code to their project, stressing the performance of the compiler. If we had not prioritized core infrastructure over giving people what they wanted faster, today we would have people complaining that adding a single dependency to their project makes the build impossible to complete.

                  [1] https://news.ycombinator.com/item?id=44994886

                  • messe 2 days ago

                    > they had to rewrite the compiler before they could work on the ecosystem of their new language.

                    I think you may have missed that the intention was always to rewrite the compiler to become self hosted. Improving the C++ implementation any more would've been pointless optimization.

                  • throwawaymaths 2 days ago

                    no. a global arena is not the right solution for every project. but believe it or not, the zig compiler does use arenas as one of its strategies (just not global arenas), so yes, access to that strategy for performance improvement is still important and absolutely being used in the zig compiler today.

      • pjmlp 2 days ago

        Not only that, the whole handmade movement puts me off.

        It is the anti-intellectualism from Go culture, gone wild against C++, Rust, Swift, anything modern, or even tools: using game engines versus doing the whole computer from scratch for a game.

        • kristoff_it 2 days ago

          Zig is not really a handmade project, case in point both Andrew and I are blocked on social media by the two gods of the handmade movement (casey and john) and, according to their die hard fans, Andrew gave a talk at the last handmade conference that caused the community to split apart (the reality is a bit more complex than this, but Andrew's talk is certainly one that you wouldn't see at their new "better software" conference).

          Andrew's talk is here (second event after the two people chatting while sitting on chairs): https://handmadecities.com/media/seattle-2024/hms-day-one/

          Here you can see a particularly funny (but also sad) reaction by one of these people https://drive.proton.me/urls/MB1EB4EF34#YZdvmAvBFp1C

          > using game engines versus doing the whole computer from scratch for a game

          That said you are doing yourself a disservice if you think that not using an engine to make a game is a form of "anti-intellectualism".

          • pjmlp 2 days ago

            Thanks for the clarification.

            It depends on the attitude. Not using an engine because one wants to learn the whole stack makes complete sense; after all, people need to learn how to make game engines, if nothing else for the next generation.

            Not using one out of spite, because of a "we do everything handmade over here" attitude, is a completely different matter.

          • brodo 2 days ago

            Here is another comment on the whole Handmade-Kerfuffle from one of the presenters:

            https://wiki.xxiivv.com/site/2025.html (the entry under 19b)

            It seems this was a right vs. left (or liberal) split.

            • deagle50 2 days ago

              more like old school lib vs "new left"

          • secondary_op 2 days ago

            Unfortunately for everyone, people like you tie their entire identity and income to specific technologies, movements, or communities—becoming deeply tribal in the process. When something or someone undermines their sense of security—even with constructive criticism—they react defensively, often to the point of ignorance.

            All this, combined with the fact that Zig, at best is still in beta quality and at worst amounts to a massive waste of everyone’s time, makes it unsurprising that people block you and simply refuse to engage with your loud community efforts, endless churn and crust tied to beta quality compiler.

            > Zig is not really a handmade project, case in point both Andrew and I are blocked on social media by the two gods of the handmade movement (casey and john) and, according to their die hard fans, Andrew gave a talk at the last handmade conference that caused the community to split apart (the reality is a bit more complex than this, but Andrew's talk is certainly one that you wouldn't see at their new "better software" conference).

            > Andrew's talk is here (second event after the two people chatting while sitting on chairs): https://handmadecities.com/media/seattle-2024/hms-day-one/

            > Here you can see a particularly funny (but also sad) reaction by one of these people https://drive.proton.me/urls/MB1EB4EF34#YZdvmAvBFp1C

            Regarding the links you posted:

            In the first, at 2:30:40, Andrew Kelley publicly calls out a specific author of a competing technology in an exaggerated, caricatured, and fabricated context.

            In the second video, yet another author of yet another competing technology directly points out this unapologetic and concerning behavior on Andrew Kelley's part.

            And now you—“VP of Community @ Zig Software Foundation”—assert your “righteous” stance by sharing these videos, while ironically pointing out that some of those same individuals (of competing technologies fame) block you on social media.

            Too bad that doing your job probably means being as loud and visible online as possible to spread the molecules of Zig no matter what.

      • ksec 2 days ago

        That is actually a valid point about expectations, given the demographic the language attracts.

        I am hoping to see comments from the Ghostty, Bun, or TigerBeetle devs.

        On another point that is worth mentioning, and which I hope Andrew will at least put out publicly: IMO Zig isn't anti-Rust, but it did attract the type of people who are not too happy with Rust. I don't remember a single time Zig came out to bash anything about Rust. It isn't anti-anything at all. Its goal is to be a decent C replacement, a very high-level assembly language, aiming at data-oriented design with some interesting features and extremely fast compilation speed. (Which was the reason I was interested in it in the first place. I miss Turbo Pascal.)

        Zig reminds me of the old school, traditional projects. It isn't perfect for everything, it never claims to be, if you like it, use it. If not, there are plenty of options out there.

        At least Ghostty Dev seems to be enjoying it almost every day.

        • littlestymaar 2 days ago

          I don't think Andrew is anti-Rust. But Loris, whose HN profile says he's “vp of community of the zig software foundation”, is as anti-Rust as you can get: he's pathologically obsessed with Rust and has spent a significant amount of energy outright insulting Rust maintainers here or on Twitter (when he's not busy writing anti-Rust rants on his blog).

          But as you say, there's no reason why Zig ought to be anti-Rust: both languages are fresh attempts at low-level programming, both highly opinionated and with very different philosophies and trade-offs, and both languages can cohabit peacefully (I've heard good things about using the Zig toolchain for cross-compilation of C dependencies in Rust projects, so the existence of Zig has already had a positive impact on the Rust ecosystem).

          • geodel 7 hours ago

            Huh, it all started with a Rust developer ceaselessly picking on Zig for being memory-unsafe on social media. It would be one thing to write some technical post and be done with it, but this obsession with Zig by a Rust dev was infuriating, to put it mildly. The same dev was very anti-Go in the past and would post endless comments on Go here and elsewhere.

      • TUSF 2 days ago

        > anti-Rust sentiment

        I have not seen much of any anti-Rust sentiment in the community. There's a lot of people in the community who do Rust, like Rust, and work on Rust projects. If the Zig community has an anti-anything sentiment, it's against C++.

        • littlestymaar a day ago

          Tell that to the “VP of community @ zig software foundation”, who spends a significant fraction of his time insulting Rust maintainers or mere users here or on Twitter when he's not busy writing anti-Rust rants on his blog.

          (You won't have to seek long in his HN history to find an instance of such behavior)

    • hardwaresofton 3 days ago

      Yeah, thinking about this attitude positively, maybe it’s a feature — if only hard core people can comfortably figure it out, you get higher quality contributions?

      Not trying to imply that’s an explicit goal (probably instead just a resource problem), but an observation

      • ksec 3 days ago

        I think it is a trade-off between Zig's development speed and documentation. It is pre-1.0, extreme beta mode, with lots of breaking changes.

        Generally speaking I think it is the right trade-off for now. Purely inferring from Andrew's and the Zig team's online character, as I don't know them in person, I think they do care a lot about DX, things like compiling speed and tools. So once 1.0 comes, I won't be surprised if it has extremely good documentation as well.

        And I would argue, writing good, simple, clear, detailed documentation is actually harder than writing code itself.

        • brabel 3 days ago

          I've written many APIs. Never have I gotten one right without first writing lots of tests, finding the rough corners, improving it... and so on. Writing documentation after that is absolutely mandatory for the end result to be a high-quality API. As you write down how it is meant to work, you will definitely find things that don't really make sense, or that should not be as hard as they are (I think this post shows just such an API that hasn't gone through this process). IMHO documentation is NOT optional. The implementation is NOT how you mean for the API to be used.

        • hardwaresofton 3 days ago

          On the one hand, I totally get that pre 1.0 is the wild west (somewhat) and should be. The team is right in jealously guarding their ability to make changes.

          That said, others have pointed out that writing documentation and tests helps improve quality quite a bit, and in this case it would also increase usability. I think I'd agree with this stance, but there is no way I could make the statement that even most of the code I've written for public consumption had excellent documentation or examples. So I've got no leg to stand on there, just the armchair.

          > And I would argue, writing good, simple, clear, detailed documentation is actually harder than writing code itself.

          All the more reason why it must be done! A little silly, but from my armchair maybe it's one of those "start with the interface you want and work backwards" situations; the problem is that approach can be at odds with mechanical sympathy, and we know which side Zig lands on (and arguably should land on, based on its values).

        • LambdaComplex 3 days ago

          My quick skim of Wikipedia may not be telling the complete story, but it says the initial release was 9 years ago (February 2016). After nearly a decade, I would hope that things would be out of "extreme beta mode," but I guess this isn't the case?

          • ksec 3 days ago

            For most of its life it was simply a single-person, part-time project. Even to this day the team has nowhere near Rust's or Go's resources.

            • tialaramex 2 days ago

              In 2016, nine years ago, Andrew announced he'd been working on the new language "Zig" for a couple of months.

              In 2018, seven years ago, Andrew announced he'd go full-time on Zig and quit his paying job to live off donations instead.

              In 2020, so five years ago, Zig's 501(c)3 the ZSF was announced, to create a formal structure to hire more people in addition to the few already on Zig.

              So, "most of its time" is just not true. For "most of its time" Zig was a small, largely independently funded project for multiple people, for a tiny period it was a part-time project, and for a while after that it was solo, but those weren't the majority of its existence.

              • throwawaymaths 2 days ago

                yeah but i think you can count on one hand how many full time zig developers are paid by the foundation.

            • LambdaComplex 3 days ago

              Huh, I actually expected there to be a bigger team working on it. In that case: I'm really impressed.

          • HumanOstrich 3 days ago

            What's the benchmark for how long something can be pre-1.0? Seems like a nonsense argument.

            • Dylan16807 3 days ago

              It's the combination of pre-1.0 and having rapid development speed that is being questioned here. And it's a good question, not nonsense.

              If you keep up the development pace you're going to approach stability. Unless you're in a manic spiral of rewrites.

            • yxhuvud 3 days ago

              Something can be pre-1.0 as long as there are no stability guarantees.

            • pharrington 3 days ago

              There is no benchmark. As a species, we don't even know what a good programming language is, let alone how to reliably develop one. This stuff takes time, and we're all learning it together.

              I like to compare this to real world cathedral building. There are some cathedrals that are literally taking centuries to build! It's OK if the important, but difficult thing takes a long time to build.

              • Dylan16807 3 days ago

                Cathedrals are the opposite of extreme beta mode with lots of breaking changes.

                • pharrington 2 days ago

                  Yes. I guess what I meant is that cathedrals are a complex system that we know how to do, and still they take ages to build properly.

          • littlestymaar 3 days ago

            Same as Rust being almost a decade old when the first 1.0 was published.

            Making a programming language from scratch is a long endeavor when it's a one man project.

        • sshine 3 days ago

          Unstable APIs are a good example of something that's extremely valuable early on.

          They unarguably cause confusion for everyone as they change.

          But they let you choose the right abstractions that are going to stick for decades.

          If you're going to make a python2 -> python3 transition in your language, make sure it's X0 -> X1.

      • chrisandchris 3 days ago

        Contributions to the Zig language or contributions to software using Zig (the latter is the one the post is about as I understand)?

        If so, I believe Zig will stay within a niche. Lower entry barriers allow "script kiddies" to easily start with the language, and they eventually will become leading engineers. Only a few people tend to go straight for the highest practice without "playing around". IMHO that's the reason why PHP got so popular (it was not good back then, just very, very easy to start with).

        • hardwaresofton 3 days ago

          > Contributions to the Zig language or contributions to software using Zig (the latter is the one the post is about as I understand)?

          Yes.

          I think a contributor that really wanted to help the ecosystem would start in the stdlib and then start moving outwards. Even if it was LLM-assisted, I think it could be high value.

          IIRC Loris already has an engine for building websites with Zig, but making sure that every Zig library has docs (similar to rustdocs) might be a great start. It is incredibly useful to have a resource like rustdocs, both the tooling and the web sites that are easily browsable.

          Again, maybe everyone in the Zig ecosystem just has amazing editor setups and massive brains, but I personally really like the ease of browsing rustdoc.

          > If so, I believe Zig will stay within a niche. Lower entry barriers allow "script kiddies" to easily start withe language, and they eventually will become leading engineers. Only a few people tend to go straight for the highest practice without "playing around". IMHO the reason, why PHP got so popular (it was not good back then, just very very easy to start with).

          I agree, but I'd add that the niche they're aiming for is systems programming, so they're probably fine :). The average hacker there is expecting C/C++ or to be near the metal, and I think Zig is a great fit there. They're likely not going to convince people who write Ruby, but it feels reasonable for C hackers.

          Also I want to just be clear that I think Zig has a lot of motivating factors! They're doing amazing things like zig cc, unbelievably easy, "can't believe it's not butter" cross-compilation, their new explicit/managed I/O mechanism, explicit allocators as a default, comptime, better type ergonomics. It's a pretty impressive language.

          • flohofwoe 2 days ago

            > already has an engine for building websites with Zig, but making sure that every Zig library has docs

            Tbh, this sort of auto-generated docs from source code is not all that useful, since you get that same information right in the IDE via the language server.

            The important documentation part that's currently missing is how everything is supposed to work together in the stdlib, not the 'micro-documentation' of what a single type or function does. And for this sort of information it's currently indeed better to look at example code (e.g. the stdlib's testing code).

            IMHO it's way too early for this type of high-level documentation, since things change all the time in the stdlib. Putting much work into documenting concepts that are discarded again anyway doesn't make much sense.

            • BobbyJo 2 days ago

              Tests very often don't tell you the right way to use something, especially when you're talking about IO libraries. Examples themselves often don't even show the "correct" way, but rather just a way that will work in ideal circumstances.

      • rjh29 3 days ago

        I think it is intentional. They don't want to attract low-commitment beginners while the language is heavily changing (and explicitly in beta). Such people will ask questions and ask for documentation but contribute nothing.

      • DannyBee 2 days ago

        More likely you get people who have the same groupthink, which has both ups and downs. Their contribution quality is probably not that well correlated.

  • jakobnissen 3 days ago

    There is a cost to writing documentation - it takes time, which could be used to improve Zig in other areas. For code that is work-in-progress, it can make sense to not document until things are more settled.

    Of course documentation is good. But if you have to prioritize a new feature, a critical bugfix, or documentation, you often can't have it all.

    • simonask 3 days ago

      I tend to actually disagree with this attitude, because I see writing documentation as really effective "rubber-ducking". If it's hard and time-consuming to properly document, it's probably hard to use, so extra effort should be spent to actually justify the design, not least to yourself in 6 months. If you can't justify it, it's probably wrong.

      • kelnos 2 days ago

        This really struck a chord with me. Writing documentation is an act of explaining something to others. Explaining something to others is a great way to test your own understanding. If it's hard to explain to someone else, then maybe it's the wrong design.

        If you don't go through that exercise, you're much more likely to build confusing, difficult-to-use APIs.

        • throwawaymaths 2 days ago

          > you're much more likely to build confusing, difficult-to-use APIs.

          have not found this to be the case with zig in general. you could easily make the opposite argument, that documenting things (especially quirks) can give you license to build confusing APIs.

          • ioasuncvinvaer 2 days ago

            I think some documentation and a handful of examples would have helped the author of the article. How is the experience when porting to the new standard library improved by not including documentation?

          • CRConrad a day ago

            > > you're much more likely to build confusing, difficult-to-use APIs.

            > have not found this to be the case with zig in general.

            Dunno how general it is, but TFA we're discussing here contains an example, so AFAICS it seems at least a little too common.

      • pmarreck 2 days ago

        100%.

        Similar to how TDD forces you to first figure out the API of your code due to the test code being its first client.

    • crote 2 days ago

      On the other hand, not writing documentation also has a cost.

      Code is rarely written from start to finish in a single session. Would you rather spend 5 minutes before you do a git commit to write down some very basic documentation, or spend an hour rediscovering it when you pick up development two weeks later?

      Nobody is expecting extensive and well-polished documentation here, but is a "Flags X, Y, and Z are mandatory, buffer A is required: I tested with 10MB, smaller might be fine too" really too much to ask for?

      Is the time you saved by not writing that one line worth someone else spending several hours figuring out how to do a "hello world"?

    • 7bit 2 days ago

      Your argument implies that good documentation is not an improvement, which of course is wrong. Writing the docs is part of the task of improving the code. Why would you move on after half-assing the API, when you can add the docs and whole-ass it instead?

    • hardwaresofton 3 days ago

      This is a good point, but one could make a case for the usefulness of documentation in "thinking like a user", which is a valuable exercise.

      I do very much prefer moving fast though, so I get it, docs-later is obviously a very valid way of doing things.

      If someone is excited about Zig and wanted to make a difference I guess it’s now obvious where they could have outsized impact!

      • 7bit 2 days ago

        Docs later is an okay approach, when you build something in a closed environment, where the only users know the code inside out.

        But when working in open-source and your goal is to have people adopt your software, then it's a bad point and a lazy excuse.

    • aDyslecticCrow 3 days ago

      Then they should not have released the new API at all. Why release a half-finished library?

      • flohofwoe 3 days ago

        Zig as a whole is half-finished, should it be kept under wraps until it is ready?

        There's a reason for the 0.x version number, if you can't live with breaking changes, don't use Zig yet. It's as simple as that.

        • aDyslecticCrow 2 days ago

          Breaking changes and limited support are one thing. But docs are not something you add at the end. Code lacking docs and lacking tests should not be released.

          They target the embedded C market. Limited support and features are expected and common. But being lax with docs and testing is not acceptable.

          • flohofwoe 2 days ago

            > But being lax with docs and testing is not acceptable.

            First: Those two things are not the same. The Zig stdlib and compiler have tons of tests, and it's actually quite rare to stumble over implementation bugs.

            Second: I'd rather take no docs at all than per-function or per-type doc-headers which just repeat what the code does in natural language (and then may quickly get stale). Some parent wrote about generating docs with an LLM, which is just a laughably bad idea: even with perfect results you'll just get a reiteration of what the code below does, in imprecise human language. It would literally just be redundant noise.

            Documentation mostly makes sense at the systems level, to explain high-level concepts, not at the 'this function does this and that...' level; that information already exists in much more precise form in the source code below the doc header.

            But this sort of high level documentation doesn't make sense as long as a project is in the experimentation phase, and Zig is still one big experiment.

            • aDyslecticCrow a day ago

              I fundamentally disagree on code readability. A comment on the function header can save an hour of reverse-engineering for a complex library. Reading code is a last resort, and stale code docs are themselves a red flag.

              This post itself is a great example. The code is not self-explanatory unless you understand the library below. So if this were library code, you would have to reverse-engineer through two abstraction walls to understand it.

              And as for high-level documentation... it would be very helpful to have that DURING the development process, to discuss the implementation and quickly get people up to speed on the plan during code review. I'd write that before the code.

              I mention tests because they are 10x more work to write than docs. So if the developer skips out on docs, my trust in their testing is also destroyed.

      • sureglymop 3 days ago

        Because the language is not stable at this point and hasn't reached 1.0?

        Are you saying one should never make anything half finished available to the public? This post proves why it is valuable to do so, they are getting valuable feedback and a discussion on hacker news for free.

        • viraptor 2 days ago

          There's an alternative of being much more up front about the status. For example the project page doesn't really say it's unstable/experimental. It only says "Zig has not yet reached v1.0" on the "getting started" page, which doesn't really mean that much - for example Putty is still at 0.83 after 26 years.

          If the project invites me to use it "for maintaining robust, optimal and reusable software." without putting "super unstable, we don't even care about docs" on the same page... that's also saying something.

          • throwawaymaths 2 days ago

            AIUI the team has a general consensus on what is left to do before 1.0; there's even a tracking list of requirements on the GitHub.

            if you have that attitude about docs, then likely you are being gatekept from the project until it hits 1.0.

    • wordofx 3 days ago

      Na. It’s flat out laziness. Don’t make excuses. Either write docs or stop writing code.

  • pharrington 3 days ago

    I'm not a Zig developer, but I imagine one reason why the Zig documentation is so spartan is because the language is still young and constantly evolving. It's really hard to devote the time and energy to writing documentation when you know that what you've written will just be wrong at some uncertain point in the future.

  • ulbu 3 days ago

    i find that zig is too oriented at doling out directives for what not to do instead of just collecting and teaching variants of how and what to do. the lack of documentation on this interface is a sore case in point.

    • geon 3 days ago

      You can’t expect documentation this early. The new interface was just released.

      • viraptor 3 days ago

        In serious codebases docs are not an afterthought. There are lots of places where you're expected to add both a new interface and docs together.

        • rjh29 3 days ago

          It's pre-1.0 beta. Nothing has been 'released' yet.

          • viraptor 2 days ago

            That's ok. "It's an unstable, experimental, early version." is a valid explanation. GP put the lack of docs and the new interface together, which... isn't an excuse.

            • geon 2 days ago

              That Zig is pre 1.0 is a given. Not sure how you could take my statement as ”even post 1.0 releases don’t need docs.”

              • viraptor 2 days ago

                > Not sure how you could take my statement as ...

                I didn't.

      • speed_spread 2 days ago

        You absolutely should expect documentation this early. The chances of quality documentation being added as an afterthought are not good. If proper documentation isn't part of the original goals after that time, things aren't looking well. Nobody is going to come and just write docs for months.

        • geon 2 days ago

          Zig is open source, so the docs are pretty much a wiki. You can spend 5 minutes.

      • wordofx 3 days ago

        So Zig isn’t a serious language. It’s just some trash APIs thrown together.

        • maxbond 2 days ago

          It's a serious language, it's just not a stable language yet.

      • 7bit 2 days ago

        [flagged]

  • 0x696C6961 3 days ago

    Writing good docs/examples takes a lot of effort. It would be a waste considering the amount of churn that happens in zig at this point.

    • DannyBee 2 days ago

      Only in a world where features are made without thought. Documentation is not just for your users. Writing the development documentation helps you make better features in the first place.

      The zig users on this thread seem to not understand this, and all seem to think documentation is a thing you write later for users when everything settles down. Or is somehow otherwise "in the way" of feature and API development speed.

      That is a very strange view.

      If writing developer documentation is having a serious effect on your language feature velocity, you are doing something very wrong. Instead, writing it should make things move faster, because, at a minimum, others understand what you are trying to do and how it is going to work, and can help. Also, it helps you think through it yourself and whether what you are writing makes any sense, etc.

      Yes there are people who can do this all without documentation, but there are 100x as many who can't, but will still give high quality contributions that will move you along faster if you enable them to help you. Throwing out the ability to have these folks help you is, at a minimum, self-defeating.

      I learned this the hard way, because I'm one of those folks who can just stare at random undocumented messy code and know what it actually does, what the author was probably trying to do, etc., and it took years till I learned most people were not like this.

      • 0x696C6961 2 days ago

        You're conflating specs/RFCs with end-user documentation.

        • CRConrad a day ago

          No, AFAICS GP wasn't conflating anything. Why would you think they were?

    • dns_snek 2 days ago

      That's a false dichotomy, I'll take minimal bitrotten docs in a community wiki over no docs. There's no excuse not to at least have a stub that says "these features are evolving quickly but here are 10 open source projects that should serve as good references".

      Something - anything. As much as I like Zig, I dread returning to it after a few months of being out of the loop.

      • 0x696C6961 2 days ago

        It's not a false dichotomy, it's a bandwidth issue. I'd rather the core team focus on stuff like incremental compilation.

        • dns_snek 2 days ago

          "No documentation" and "Core devs 'wasting' time on writing high quality documentation" aren't the only two options, that's what a false dichotomy means.

          Other options include but are not limited to providing minimal, low effort examples, high-level overview, linking to projects using these features, linking to relevant tests, commits, or source code, setting up an official community wiki and encouraging people to contribute.

          • 0x696C6961 2 days ago

            "No documentation" was never presented as an option. Documentation exists. By treating "insufficient docs" as "no docs", you're the one making the false equivalency.

            • dns_snek 2 days ago

              > Documentation exists

              Where does a beginner go to learn how to use the package manager these days? It looks like they still won't find any clues in the "Learn" section of Zig's website.

              There's a promising page titled "Zig Build System" in there which references a "Package Management" section, but when you click on it, it doesn't exist!

              • 0x696C6961 2 days ago

                So in your mind, one missing section means "no documentation"? This isn't the checkmate you think it is, you're just moving goalposts.

                But to answer your question, it exists in the comments of the auto-generated build.zig.zon file

                • dns_snek 2 days ago

                  It's not just "one missing section", and we're not talking about some fringe language feature but about the build system, which is probably the most complex, important, and under-documented part of the language.

                  The official answer to complaints about missing documentation has always been "ask in Discord". Pretending that this isn't the case is just disingenuous.

                  > But to answer your question, it exists in the comments of the auto-generated build.zig.zon file

                  The comments document the file format of build.zig.zon, they don't tell you anything about how to actually use a dependency in your build system.

                  • kristoff_it 2 days ago

                    I've added more explanatory comments in the new template that ships with 0.15.1, also you might be interested in this video https://www.youtube.com/watch?v=jy7w_7JZYyw

                    • dns_snek 2 days ago

                      Thank you! I've learned how to use it well enough for my own needs by now, but I think it would be really helpful to beginners if links to this video and any other useful resources you know about were added to the "Zig Build System" page on your website.

                • CRConrad a day ago

                  > you're just moving goalposts.

                  Pretty funny, coming from someone who went from "false dichotomy" to "the false equivalency".

    • geon 3 days ago

      Yes. For now, that effort is better spent writing clear test cases that can serve to illustrate the intended usage.

      While tests aren’t quite as good documentation as actual documentation, they are guaranteed to not be out of date.

Galanwe 3 days ago

The Zig language itself is really good, but the standard library is very much a work in progress: constantly shifting, missing a lot of bits, overly abstracted in some places and too low-level in others.

I would say just stay away from the standard library for now and use your OS's APIs, unless you're willing to be a beta tester.

  • omgtehlion 2 days ago

    Yup, exactly how I use Zig most of the time: just use the good old OS APIs. It is very easy to use cImport when I'm too lazy to write Zig definitions )
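
    For instance, a rough sketch (assuming a POSIX target and that libc gets linked, e.g. zig build-exe main.zig -lc):

        const std = @import("std");

        // The OS API comes straight from the C header; no hand-written Zig bindings.
        const c = @cImport({
            @cInclude("unistd.h");
        });

        pub fn main() void {
            const pid = c.getpid();
            std.debug.print("running as pid {d}\n", .{pid});
        }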

  • norir 2 days ago

    From my perspective, Zig is trying to do far too many things to ever reach a baseline of goodness that I consider acceptable. They are, in my view, quite disrespectful to their users, whom they force to endure churn at the whims of their dictator. Now that enough people have bought in and accepted that a broken tool is OK so long as it hasn't been blessed with a 1.0, all of its clear flaws can be overlooked in the hope of the coming utopia (spoiler alert: that day will never arrive).

    Personally, I think it is wrong to inflict your experiments on other people and then, when you pull the rug out from underneath them, say: well, we told you it was unstable, you shouldn't have depended on us in the first place.

    I don't even understand what zig is supposed to be. Matklad seems to think it is a machine-level language: https://lobste.rs/s/ntruuu/lobsters_interview_with_matklad. This contrasts with the official language landing page: "Zig is a general-purpose programming language and toolchain for maintaining robust, optimal and reusable software." These two definitions are mutually incompatible. Moreover, zig is clearly not a general-purpose language, because there are plenty of programming problems where manual memory management is neither needed nor desirable.

    All of this confusion is manifest in zig's instability and bloated standard library. Indeed a huge standard library is incompatible with the claims of simplicity and generality they frequently make. Async is not a feature that can be implemented universally without adding overhead and indirection because of the fundamental differences in capabilities exposed by the various platforms. Again, they are promising a silver bullet even though their prior attempt, in which they publicly proclaimed function coloring to be solved, has been abandoned. Why would we trust them to get it right a second time?

    There are a very small number of assembly primitives that every platform provides and that are necessary to implement a compiler: load/store/mov/inc/jeq/jump and perhaps a few others. Luajit implements its parser in pure assembly, and I am not aware of an important platform that zig supports but luajit does not run on. I do the vast majority of my programming in lua and _never_ run into bugs in the interpreter. I truly cannot think of a single problem that I think zig would solve better than luajit. Even if one did exist, I could embed the zig code in my lua file, use lua to drive the zig compiler, and then call into the specialized code using the lua ffi. But the vast majority of code does not need to be optimized down to machine code badly enough to be worth putting up with all of the other headaches that adopting zig will create.

    The hype around zig is truly reaching LLM levels of disconnection from reality. Again, to believe in zig, one has to believe it will magically develop capabilities that it does not presently have and for which there is no actual plan to execute, beyond vague promises to just wait.

stop50 3 days ago

I have never understood libraries or interfaces that want me to allocate buffers for their type. I can't parse them (no need for the lib then) or write to them (it would probably break the exchange).

The weird interface of Go is probably due to the fact that some interfaces can be used to extend the writer, like the hijacker interface (ResponseWriter.(http.Hijacker)), and the request object is used multiple times with different middlewares interacting with it. In short: the request does not need to be extended, but the response can be a websocket, a wrapped TCP connection, or something else.

  • kelnos 2 days ago

    > I have never understood libraries or interfaces that want me to allocate buffers for their type.

    That doesn't seem that odd to me. It's a trade off: more flexibility, but more manual work. Maybe I have a buffer that I've allocated that I'm not using anymore (say I have a buffer pool) and want to use it again. If the type allocates its own behind the scenes, I can't do that. Or maybe I'm working in an environment where I need to statically allocate all of my resources up-front, and can't allocate later.

    The big downside is that if 90% of people are just going to allocate a buffer and pass it in, it sucks that 90% of people need to do more work and understand more minutiae when only 10% of the people actually need to. The holy grail is to give lots of flexibility, but make the simple/common case easy.

    A simple improvement to this interface might be to allow the caller to pass a zero-length buffer (or Zig's version of null), in which case the type would allocate its own buffer. Of course, there's still a documentation burden so people know they can do that. Another option could be a second constructor function that takes no buffer arguments at all, allocates the buffers itself, and passes them to the fully-flexible constructor function.
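
    Something like this, roughly (hypothetical type and function names, not anything from Zig's std):

        const std = @import("std");

        const Reader = struct {
            buf: []u8,
            owned: bool,

            // Fully flexible path: the caller supplies (and owns) the buffer.
            pub fn initBuffered(buf: []u8) Reader {
                return .{ .buf = buf, .owned = false };
            }

            // Convenience path for the common case: allocate the buffer internally.
            pub fn initAlloc(allocator: std.mem.Allocator, size: usize) !Reader {
                return .{ .buf = try allocator.alloc(u8, size), .owned = true };
            }

            pub fn deinit(self: *Reader, allocator: std.mem.Allocator) void {
                if (self.owned) allocator.free(self.buf);
            }
        };

    The 90% case calls initAlloc and never thinks about buffers; the 10% that needs pools or static allocation calls initBuffered.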

    • jeroenhd 2 days ago

      > If the type allocates its own behind the scenes, I can't do that.

      Isn't that the reason why Zig passes around allocators everywhere? If you're using a buffer pool, you should probably be handing out some kind of buffer pool allocator.

      Requiring all allocation to have happened before execution is still a good reason to pass buffers around, but I feel like the other situations you describe can be solved by just passing the right allocators.
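
      For example, a caller that has to pre-allocate everything up front can still hand the library a plain std.mem.Allocator backed by its own static storage. A minimal sketch:

          const std = @import("std");

          pub fn main() !void {
              // Every allocation comes out of this fixed buffer; nothing touches the heap.
              var storage: [4096]u8 = undefined;
              var fba = std.heap.FixedBufferAllocator.init(&storage);
              const allocator = fba.allocator();

              // Anything that accepts an Allocator now draws from the caller's buffer.
              const scratch = try allocator.alloc(u8, 128);
              defer allocator.free(scratch);
          }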

  • benreesman 3 days ago

    It's just a different convention like radians and degrees.

    You can lift/unlift in or out of arbitrary IO, in some languages one direction is called a mock, in other languages the opposite is called unsafeFoo.

    Andrew Kelley independently rediscovered on a live stream 30 years of the best minds in Haskell writing papers.

    So the future is Zig. He got there first.

    • pjmlp 3 days ago

      Only if the use-after-free story actually gets fixed, and not by repurposing what has already existed in the C and C++ ecosystems for the last 30 years, like PurifyPlus or the VC++ debug allocator.

      • benreesman 3 days ago

        If you mean running clang-tidy as a separate build step or ASAN in a different category than other soundness checks?

        Compute is getting tight, lots of trends, the age of C++ is winding down gracefully. The age of Zig is emerging deliberately, and the stuff in the middle will end up in the same historical trash bin as everything else in the Altman Era: the misfortunes of losing sight of the technology.

        • pjmlp 3 days ago

          I mean those and other ones, we already have enough unsafe languages as it is.

          The age of C++ is going great, despite all its warts and unsafety, thanks to compiler frameworks like GCC and LLVM, games industry, GPGPU and Khronos APIs.

          Even if C++ loses everywhere else, it has enough industry mindshare to keep being relevant.

          Same applies to C, in the context of UNIX clones, POSIX, Khronos, embedded.

          Being like Modula-2 or Object Pascal in safety, with a C-like syntax, isn't enough.

          • pron 2 days ago

            > we already have enough unsafe languages as it is

            By that logic, we definitely have enough safe languages as it is, as there are many more. But this safe/unsafe dichotomy is silly, and is coloured by languages that are unsafe in some particular ways.

            1. Memory safety is important because memory-safety violations are a common cause of dangerous security vulnerabilities. But once you remove out-of-bounds access, as Zig does, memory safety doesn't even make it to the top 5: https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html I.e. the same logic that says we should focus on safety would lead us to conclude we should focus on something else.

            2. Memory safety has a cost. To get it, you have to give up something else (there could even be a cost to correctness). That means that you have to consider what you're getting and what you're losing in the context of the domain you're targeting, which is not the same for all languages. C++, with its "zero-cost abstractions", believed it could be everything for everyone. That turned out not to be the case at all, and Zig is a very different language, with different goals, than C++ originally had.

            Given Zig's safety guarantees (which are stronger than C++'s), and given its goals (which are different from C++'s), the question should be what should we be willing to give up to gain safety from use-after-free given the language's goals. Would more safety be better if it cost nothing? Of course, but that's not an option. Even Java and Rust could prevent many more dangerous bugs - including those that are higher risk than use-after-free - if they had more facilities like those of ATS or Idris. But they don't because their designers think that the gains wouldn't be worth the cost.

            If you don't say what Zig programmers should give up to gain more safety, saying "all new languages should be memory-safe" is about as meaningful as saying we should write fewer bugs. That's a nice sentiment, but how and at what cost?

            • pjmlp 2 days ago

              We actually already have enough safe languages as well.

              I am a firm believer in the Xerox PARC vision of computing, and I think the only reasons we aren't there yet are politics, management's unwillingness to fund doing the right thing and push it into the market, the constant focus on shareholders and the next quarter, and, naturally, programming language religion.

              We were already headed in the right direction with languages like Modula-3 and Active Oberon, following up on Cedar's influences; unfortunately, that isn't how the industry went.

              • pron 2 days ago

                But software isn't developed for its own sake. It's built to serve some purpose, and it's through its purpose(s) that the selection pressures work. It's like Betamax fans saying that people were wrong to want a longer recording time than better picture quality. It's not enough to say that you like some approach better or to even claim that some untaken path would yield a more desirable outcome. You need to show that it actually works in the real world, with all of its complex variables. For example, in the nineties I worked on safety-critical software in Ada, but we ended up dumping it in favour of C++. It's not because we didn't recognise Ada's advantages, but because, in addition to those advantages over C++, it also had some very significant disadvantages, and in the end C++ allowed us to do what we were supposed to do better. Ada's build times alone made it so that we could write and run fewer tests, which hurt the software correctness overall more than it helped. We also ended up spending more time understanding the intricacies of the language, leaving us less time to think about the algorithm.

                • pjmlp 2 days ago

                  Ada was impacted by the tooling price and high demands on developer workstations.

                  Rational started as a company selling Ada Machines, which didn't have such issues with compilation times, but again it comes down to the reasons I listed for why the mainstream keeps ignoring such tools, until finally governments step in.

              • ksec 2 days ago

                > vision of Xerox PARC for computing

                What is that in relation to Zig and memory safety? Am I missing some context?

                • pjmlp 2 days ago

                  Smalltalk, Interlisp-D, and Mesa/Cedar as the languages for full graphical workstations.

                  Instead we got UNIX and C.

                  • pron a day ago

                    We also got Java and Python (and VB for a while), which means there is no intrinsic, irrational bias against those approaches. A romantic view of those languages tends to ignore their serious shortcomings at the time they were presented. It's like claiming the market was irrational when it preferred VHS to Betamax despite the latter's better quality, while neglecting to mention it had a worse recording time, which mattered more to more people. When comparing two things, it's not enough to mention the areas where X is better than Y; you also need to consider those where X is worse.

                    • pjmlp 16 hours ago

                      Yes we did, and unfortunately all of them made design mistakes that they keep trying to fix to this day.

                      Had Java actually been designed for where it stands today, instead of for set-top boxes and browser applets, retrofitting the value types, AOT compilation and low-level capabilities already available in Modula-3 and Eiffel wouldn't be such an engineering feat, one that is already at the 15-year mark.

                      Interestingly enough, the value classes approach is very reminiscent of how expanded classes work in Eiffel.

                      Likewise, money keeps being burned on getting the industry to adopt a JIT compiler in CPython, something it could already have been enjoying with Smalltalk and Lisp.

                      • pron 10 hours ago

                        Smalltalk had design mistakes aplenty. Comparing the downsides of the languages you like less with the upsides of the languages you like more is not a great way to understand why things are done the way they're done. Usually, it's not because people chose the worse options, but because each option has pros and cons, and at the time a choice is made, the chosen option's balance of pros and cons was more suitable to people's needs than the alternatives.

                        It may sometimes be the case that we say, oh, that particular feature of some unchosen language can be very useful for us now, but that is something very different from we should have chosen that other language - with all its pros and cons - back then.

            • tialaramex 2 days ago

              > there could even be a cost to correctness

              Notice that this cost, which proponents of Zig scoff at just like C++ programmers before them, is in fact the price of admission. "OK, we're not correct but..." is actually the end of the conversation. Everybody can already do "Not correct", we had "Not correct" without a program, so all effort expended on a program was wasted unless you're correct. Correctness isn't optional.

              • pron 2 days ago

                It isn't optional, and yet it's also not at any cost, or we'd all be programming in ATS/Idris. From those languages' vantage point, the guarantees Rust makes are almost indistinguishable from C. Yet no one says, "the conversation is over unless we all program in languages that can actually guarantee correctness" (rather than one that guarantees the lack of the eighth most dangerous software weakness). Why? Because it's too expensive.

                A language like Rust exists precisely because correctness isn't the only concern, as most software is already written in languages that make at least as many guarantees. Rust exists because some people decide they don't want to pay the price other languages take in exchange for their guarantees, but they can afford to pay Rust's price. But the very same reasoning applies to Rust itself. If Rust exists because not all tradeoffs are attractive to everyone, then clearly its own tradeoffs are not attractive to everyone.

                The goal isn't to write the most correct program; it's to write the most correct program under the project's budget and time constraints. If you can't do it in those constraints, it doesn't matter what guarantees you make, because the program won't exist. If you can meet the constraints, then you need to ask whether the program's correctness, performance, user-friendliness etc. are good enough to serve the software's purpose.

                And that's how you learn what software correctness researchers have known for a long time: sometimes increasing correctness guarantees has such unintuitive cost/benefit interactions that it may even end up harming correctness.

                There are similar unintuitive results in other disciplines. For example, in software security there's what I call the FP/FN paradox. It's better to have more FN (false negatives, i.e. let some attacks go through) than more FP (false positives, i.e. block interactions that aren't attacks) because FPs are more likely to lead to misconfiguration or even to abandonment of the security mechanism altogether, resulting in weaker security. So, in software security it's a well known thing that to get better security you sometimes need to make fewer guarantees or try less hard to stop all attacks.

                • Ygg2 2 days ago

                  > It isn't optional, and yet it's also not at any cost, or we'd all be programming in ATS/Idris.

                  In a better, saner world, we'd be writing Ada++, not C++. However, we don't live in a perfect world.

                  > The goal isn't to write the most correct program; it's to write the most correct program under the project's budget and time constraints.

                  The goal of ANY software engineer worth their salt should be minimizing errors and defects in their end product.

                  This goal can be reached by learning to write Rust; practice makes perfect.

                  If GC is acceptable or you need lower compilation times, then yes, go and write your code in C#, Java, or JavaScript.

                  • pron a day ago

                    > In a better, saner world, we'd writing Ada++ not C++.

                    As someone who worked on safety-critical air-traffic-control software in the nineties, I can tell you that our reasons for shifting to C++ were completely sane. Ada had some correctness advantages compared to C++, but also disadvantages. It had drastically slower build times, which meant we couldn't test the software as frequently, and the language was so complicated that we had to spend more time digging into the minutiae of the language and less time thinking about the algorithm (C++ was simpler back then than it is now). When Java became good enough, we switched to Java.

                    Build times and language complexity are important for correctness, and because of them, we were able to get better correctness with C++ than with Ada. I'm not saying this is universal and always the case, but the point is that correctness is impacted by many factors, and different projects may find achieving higher correctness in different ways. Trading off fewer use-after-free for longer build times and a more complex language may be a good tradeoff for the correctness of some projects, and a bad tradeoff for others.

                    > If GC is acceptable or you

                    BTW, a tracing GC - whose costs are now virtually entirely limited to a higher RAM footprint - is acceptable much more frequently than you may think. Sometimes, without being aware, languages like C, C++, Rust, or Zig may sacrifice CPU to reduce footprint, even when this tradeoff doesn't make sense. I would strongly recommend watching this talk (from the 2025 International Symposium on Memory Management), and the following Q&A about the CPU/footprint tradeoff in memory management: https://www.youtube.com/watch?v=mLNFVNXbw7I

                    • tialaramex a day ago

                      > a tracing GC is acceptable much more frequently than you may think

                      Two things here, the easy one first: The RAM trade off is excellent for normal sizes but if you scale enormously the trade off eventually reverses. $100 per month for the outfit's cloud servers to have more RAM is a bargain compared to the eye-watering price of a Rust developer. But $1M per month when you own whole data centres in multiple countries makes hiring a Rust dev look attractive after all.

                      As I understand it this one is why Microsoft are rewriting Office backend stuff in Rust after writing it originally in C#. They need a safe language, so C++ was never an option, but at their scale higher resource costs add up quickly so a rewrite to a safe non-GC language, though not free, makes economic sense.

                      Second thing though: Unpredictability. GC means you can't be sure when reclamation happens. So that means both latency spikes (nasty in some applications) and weird bugs around delayed reclamation, things running out though you aren't using too many at once, etc. If your only resource is RAM you can just buy more, but otherwise you either have to compromise the language (e.g. a defer type mechanism, Python's "with" construct) or just suck it up. Java tried to fix this and gave up.

                      I agree that often you can pick GC. Personally I don't much like GC but it is effective and I can't disagree there. However you might have overestimated just how often, or missed more edge cases where it isn't a good choice.

                      • pron a day ago

                        > The RAM trade off is excellent for normal sizes but if you scale enormously the trade off eventually reverses

                        I don't think you've watched the talk. The minimal RAM-per-core is quite high, and often sits there unused even though it could be used to reduce the usage of the more expensive CPU. You pay for RAM that you could use to reduce CPU utilisation and then don't use it. What you want to aim for is a RAM/CPU usage that matches the RAM/CPU ratio on the machine, as that's what you pay for. Doubling the CPU often doubles your cost, but doubling RAM costs much less than that (5-15%).

                        If two implementations of an algorithm use different amounts of memory (assuming they're reasonable implementations), then the one using less memory has to use more CPU (e.g. it could be compressing the memory or freeing and reusing it more frequently). Using more CPU to save on memory that you've already paid for is just wasteful.

                        Another way to think about it is consider the extreme case (although it works for any interim value) where a program, say a short-running one, uses 100% of the CPU. While that program runs, no other program can use the machine, anyway, so if you don't use up to 100% of the machine's RAM to reduce the program's duration, then you're wasting it.

                        As the talk says, it's hard to find less than 1GB per core, so if a program uses computational resources that correspond to a full core yet uses less than 1GB, it's wasteful in the sense that it's spending more of a more expensive resource to save on a less expensive one. The same applies if it uses 50% of a core and less than 500MB of RAM.

                        Of course, if you're looking at kernels or drivers or VMs or some sorts of agents - things that are effectively pure overhead (rather than direct business value) - then their economics could be different.

                        > Second thing though: Unpredictability. GC means you can't be sure when reclamation happens.

                        What you say may have been true with older generations of GCs (or even something like Go's GC, which is basically Java's old CMS, recently removed after two newer GC generations). OpenJDK's current GCs, like ZGC, do zero work in stop-the-world pauses. Their work is more evenly spread out and predictable, and even their latency is more predictable than what you'd get with something like Rust's reference-counting GC. C#'s GC isn't that stellar either, but most important server-side software uses Java, anyway.

                        The one area where manual memory management still beats the efficiency of a modern tracing GC (although maybe not for long) is when there's a very regular memory usage pattern through the use of arenas, which is another reason why I find Zig particularly interesting - it's most powerful where modern GCs are weakest.

                        By design or happy accident, Zig is very focused on where the problems are: the biggest security issue for low-level languages is out-of-bounds access, and Zig focuses on that; the biggest shortcoming of modern tracing GCs is arena-like memory usage, and Zig focuses on that. When it comes to the importance of UAF, compilation times, and language complexity, I think the jury is still out, and Rust and Zig obviously make very different tradeoffs here. Zig's bottom-line impact, like that of Rust, may still be too low for widespread adoption, but at least I find it more interesting.

                        > As I understand it this one is why Microsoft are rewriting Office backend stuff in Rust after writing it originally in C#

                        The rate at which MS is doing that is nowhere near where it would be if there were some significant economic value. You can compare that to the rate of adoption of other programming languages or even techniques like unit testing or code review. With any new product, you can expect some noise and experimentation, but the adoption of products that offer a big economic value is usually very, very fast, even in programming.

                        • tialaramex 13 hours ago

                          That talk is mainly a GC person assuring you and perhaps themselves that all this churn is actually desirable. While "Actually this was a terrible idea and I regret it" is a topic sometimes (e.g. Tony Hoare has done this more than once) the vast majority of such talks exist to assure us that the speaker was correct, or at worst that they made some brief mistake and have now corrected it. So there's nothing unexpected here, I would not expect something else from the GC maintainer.

                          The part you've mistaken for being somehow general and relevant to our conversation is about trade-offs within GC. So it's completely irrelevant rather than being, as you seemed to imagine, an important insight. It actually reminds me of the early 1940s British feedback on intelligence for the V2 rocket. British and American Scientists were quite sure that Germany could not develop such a rocket, toy rockets work but at scale this rocket cannot work.

                          You can buy those toys today, a model store or similar will sell you the basics so you can see for yourself, and indeed if you scale that up it won't make an effective weapon. However intelligence sources eventually revealed how the German V2 was actually fuelled. To us today having seen space rockets it's obvious, the fuel is a liquid not a solid like the toy, which makes fuel loading rather difficult but delivers enormously more thrust. The experts furiously recalculated and discovered that of course these rockets would work unlike the scaled up toy they'd been assessing before. The weapon was very real.

                          Anyway. The speaker is assuming that we're discussing how much RAM to use on garbage. Because they're assuming a garbage collector, because this is a talk about GC. But a language like Rust isn't using any RAM for this. "The fuel is liquid" isn't one of the options they're looking at, it's not what their talk is even about so of course they don't cover it.

                          > With any new product, you can expect some noise and experimentation, but the adoption of products that offer a big economic value is usually very, very fast, even in programming.

                          What you've got here is the perfect market fallacy. This is very, very silly. If you're young enough to actually believe it due to lack of first hand experience then you're going to have a rude awakening, but I think sadly it probably just means you don't like the reality here and are trying to explain it away.

                          • pron 11 hours ago

                            > That talk is mainly a GC person assuring you and perhaps themselves that all this churn is actually desirable. While "Actually this was a terrible idea and I regret it" ...

                            The speaker is one of the world's leading experts on memory management, and the "mistake" is one of the biggest breakthroughs in software history, which is, today, the leading chosen memory management approach. Tony Hoare has done this when his mistakes became apparent; it's hard to find people who say "it was a terrible idea and I regret it" when the idea has won spectacularly and is widely recognised to be quite good.

                            > Anyway. The speaker is assuming that we're discussing how much RAM to use on garbage. Because they're assuming a garbage collector, because this is a talk about GC. But a language like Rust isn't using any RAM for this. "The fuel is liquid" isn't one of the options they're looking at, it's not what their talk is even about so of course they don't cover it.

                            Hmm, so you didn't really understand the talk, then. You can reduce the amount of garbage to zero, and the point still holds. To consume less RAM - by whatever means - you have to spend CPU. After programming in C++ for about 25 years, this is obvious to me, as it should be to anyone who does low-level programming.

                            The point is that a tracing-moving algorithm can use otherwise unused RAM to reduce the amount of CPU spent on memory management. And yes, you're right, usually in languages like C++, Zig, or Rust we typically do not use extra RAM to reduce the amount of CPU we spend on memory management, but that doesn't mean that we couldn't or shouldn't.

                            > What you've got here is the perfect market fallacy.

                            A market failure/fallacy isn't something you can invoke to justify any opinion that isn't supported by empirical economics. A claim that the market is irrational may well be true, but it is recognised, even by those who make it, as something that requires a lot of evidence. You're saying that you know what the most important thing is and that you know the best way to get it, and then using the fact that most experts and practitioners don't support your position as evidence that it is correct. That's the Galileo complex: what proves I'm right is that people say I'm wrong. Anyway, a market failure isn't something that's merely stated; it's something that's demonstrated.

                            BTW, one of those times Tony Hoare said he was wrong? It was when he claimed the industry doesn't value correctness or won't be able to achieve it without formal proofs. One of the lessons the software correctness community drew from that is to stop claiming or believing we have found the universally best path to correctness, and that's why we stopped doing that in the nineties. Today it's well accepted that there can be many effective paths to correctness, and the research is more varied as well.

                            I started programming in 1988-9, I think, and there's been a clear improvement in quality even since then (despite the growth in size and complexity of software). Rust makes me nostalgic because it looks and feels and behaves like a 1980s programming language - ML meets C++ - and I get its retro charm (and Zig has it, too, in its Scheme-meets-C sort of way), but what we've learnt, what Tony Hoare has learnt, is that there are many valid approaches to correctness. Rust offers one approach, which may be attractive to some, but there are others that may be attractive to more.

                            • tialaramex 10 hours ago

                              > You can reduce the amount of garbage to zero, and the point still holds.

                              Nah, in the model you're imagining now the program takes infinite time. But we can observe that our garbage-free Rust program doesn't take infinite time; it's actually very fast. That's because your model is of a GC system - where ensuring no garbage really would need infinite time - and a language without GC isn't a GC with zero garbage; it's entirely different. That's the whole point.

                              More generally, a GC-less solution may be more CPU intensive, or it may not, and although there are rules of thumb it's difficult to have any general conclusions. If you work on Java this is irrelevant, your language requires a GC, so this isn't even a question and thus isn't worth mentioning in a talk about, again, the Java GC.

                              > A claim that the market is irrational may well be true

                              Which makes your entire thrust stupid. You depend upon the perfect market fallacy for your point there, the claim that if this was a good idea people would necessarily already be doing it - once you accept that's a fallacy you have nothing.

                              • pron 10 hours ago

                                > Nah, in the model you're imagining now the program takes infinite time.

                                Wat? Where did you get that?

                                > That's because your model is of a GC system - where ensuring no garbage really would need infinite time and a language without GC isn't a GC with zero garbage, it's entirely different, that's the whole point.

                                Except it's not different, and working "without garbage" doesn't take infinite time even with a garbage collector. For example, a reference-counting GC also has zero garbage (in fact, that's the GC Rust uses to manage Rc/Arc) and doesn't take infinite time. It does, however, sacrifice CPU for a lower RAM footprint. Have you studied the theory of memory management? The CPU/memory tradeoff exists regardless of what algorithm you use for memory management; it's just that some algorithms allow you to control that tradeoff within the algorithm, while others require you to commit to a tradeoff upfront.

                                For example, using an arena in C++/Rust/Zig is exactly about exploiting that tradeoff (Zig in particular is very big on arenas) to reduce the CPU spent on memory management by using more RAM: the arena isn't cleared until the transaction is done, which isn't minimal in terms of RAM, but requires less CPU. Oh, and if you use an arena (or even a pool), you have garbage, BTW. Garbage means an object that isn't used by the program, but whose memory cannot yet be reused for the next allocation.
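
                                In Zig terms the pattern looks roughly like this (a minimal sketch): nothing is freed individually during the transaction, and the whole arena is released in one cheap operation at the end.

                                    const std = @import("std");

                                    fn handleRequest(arena: std.mem.Allocator) !void {
                                        // Allocate freely; nothing here is freed one object at a time.
                                        const scratch = try arena.alloc(u8, 256);
                                        _ = scratch;
                                    }

                                    pub fn main() !void {
                                        var arena_state = std.heap.ArenaAllocator.init(std.heap.page_allocator);
                                        defer arena_state.deinit(); // the whole transaction's memory goes away at once

                                        try handleRequest(arena_state.allocator());
                                    }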

                                If you think low-level languages don't have garbage, then you haven't done enough low-level programming (and learn about arenas; they're great). There are many pros and many cons to the Rust approach, and it sure is a good tradeoff in some situations, but the biggest Rust zealots - those who believe it's universally superior - are those who haven't done much low-level programming and don't understand the tradeoffs it involves. It's also they who think that those of us who were there first picked up C++ and later abandoned it for some use-cases only because it wasn't memory-safe. That was one reason, but there were many others, at least as decisive. Rust fixes some of C++'s shortcomings but certainly not all; Zig fixes others (those that happen to be more important to me) but certainly not all. They're both useful, each in its own way, and neither comes close to being "the right way to program". Knowledgeable, experienced people don't even claim that, and are careful to point out that while they may genuinely believe in some universal superiority, they don't actually have the proof.

                                Whether you use a GC (tracing-moving as in Java, refcounting as in Rust's Rc or Swift, or tracing-sweeping as in Go), use arenas, or manually do malloc/free, the same principles and tradeoffs of memory management apply. That's because abstract models of computation - Turing machines, the lambda calculus, or the game of life, pick your favourite - have infinite memory, but real machines don't, which means we have to reuse memory, and doing that requires computational work. That's what memory management means. Some algorithms, like Rust's primitive refcounting GC, aim to reuse memory very aggressively, which means they do some work (such as updating free-lists) as soon as an object becomes unused so that its memory can be reused immediately, while other approaches postpone the reuse of memory in order to do less work. That's what tracing collectors and arenas do, and that's precisely why those of us who do low-level programming like arenas so much.

                                Anyway, the point of the talk is this: the CPU/memory tradeoff is, of course, inherent to all approaches to memory management (though not necessarily in every particular use case), and so it's worth thinking about the fact that the minimal amount of RAM per core on most machines these days is high. This applies to everything. Then he explains that tracing-moving collectors allow you to turn the tradeoff dial - within limits - within the same algorithm. Does it mean tracing-moving collection is always the best choice? No. But do approaches that strive to minimise footprint somehow evade the CPU/memory tradeoff? Absolutely not.

                                A lesson that a low-level programmer may take away from that could be something like, maybe I should rely on RC less and, instead, try to use larger arenas if I can.

                                > Which makes your entire thrust stupid. You depend upon the perfect market fallacy for your point there

                                Except I don't depend on it. I say it's evidence that cannot be ignored. The market can be wrong, but you cannot assume it is.

                                • tialaramex 6 hours ago

                                  > Wat? Where did you get that?

                                  The talk you're so excited about actually shows this asymptote. In reality a GC doesn't actually want zero garbage because we're trading away RAM to get better performance. So they don't go there, but it ought to have pulled you up short when you thought you could apply this understanding to an entirely unrelated paradigm.

                                  Hence the V2 comparison. So long as you're thinking about those solid fuel toy rockets the V2 makes no sense, such a thing can't possibly work. But of course the V2 wasn't a solid fuel rocket at all and it works just fine. Rust isn't a garbage collected language and it works just fine.

                                  GC is very good for not caring about who owns anything. This can lead to amusing design goofs (e.g. the for-each bug in C# until C# 5), but it also frees programmers who might otherwise spend all their time worrying about ownership to do something useful, which is great. However, it really isn't the panacea you seem to have imagined, though at least it is more widely applicable than arenas.

                                  > it's worth thinking about the fact that the minimal amount of RAM per core on most machines these days is high.

                                  Like I said, this means you are less likely to value the non-GC approach when you're small. You can put twice as much RAM in the server for $50 so you do that, you do not hire a Rust programmer to make the software fit.

                                  But at scale the other side matters - not the minimum but the maximum, and so whereas doubling from 16GB RAM to 32GB RAM was very cheap, doubling from 16 servers to 32 servers because they're full means paying twice as much, both as capital expenditure and in many cases operationally.

                                  > I say it's evidence

                                  I didn't see any evidence. Where is the evidence? All I saw was the usual perfect market stuff where you claim that if it worked then they'd have already completed it by some unspecified prior time, in contrast to the fact I mentioned that they've hired people to do it. I think facts are evidence and the perfect market fallacy is just a fallacy.

                                  • pron 5 hours ago

                                    > The talk you're so excited about actually shows this asymptote.

                                    Oh, I see the confusion. That asymptote is for the hypothetical case where the allocation rate grows to infinity (i.e. remains constant per core as we add more cores) while the heap remains constant. Yes, with an allocation rate growing to infinity, the cost of memory management (using any algorithm) also grows to infinity. His point was precisely that this is obvious, which is why certain benchmarks don't make sense: they increase the allocation rate but keep the heap constant.

                                    > So they don't go there, but it ought to have pulled you up short when you thought you could apply this understanding to an entirely unrelated paradigm.

                                    I'm sorry, but I don't think you understand the theory of memory management. You obviously run into these problems even in C. If you haven't then you haven't been doing that kind of programming long enough. Some results are just true for any kind of memory management, and have nothing to do with a particular algorithm. It's like how in computational complexity theory, certain problems have a minimal cost regardless of the algorithm chosen.

                                    > But at scale the other side matters - not the minimum but the maximum, and so whereas doubling from 16GB RAM to 32GB RAM was very cheap, doubling from 16 servers to 32 servers because they're full means paying twice as much, both as capital expenditure and in many cases operationally.

                                    I've been working on servers in C++ for over 20 years, and I know the tradeoffs, and when doing this kind of programming, seeing CPU exhausted before RAM is very common. I'm not saying there are never any other situations, but if you don't know how common this is, then it seems like you don't have much experience with low-level programming. Implying that the more common reason to need more servers is that RAM is exhausted first is just not something you hear from people with experience in this industry. You think that the primary reason for horizontal scaling is a dearth of RAM?? Seriously?! I remember that in the '90s or even early '00s we had some problems with not enough RAM on client workstations, but it's been a while.

                                    In the talk he tells memory-management researchers about the economics in industry, as they reasonably might not be familiar with them, but decision-makers in industry - as a whole - are.

                                    Now, don't get me wrong, low level languages do give you more control over resource tradeoffs. When we use those languages, sometimes we choose to sacrifice CPU for footprint and use a refcounting GC or a pool when it's appropriate, and sometimes we sacrifice footprint for less CPU usage and use an arena when it's appropriate. This control is the benefit of low level languages, but it also comes at a significant cost, which is why we use such languages primarily for software that doesn't deliver direct business value but is "pure overhead", like kernels, drivers, VMs, and browsers, or for software running on very constrained hardware.

                                    > I didn't see any evidence.

                                    The evidence is that in a highly competitive environment of great economic significance, where getting big payoffs is a way to get an edge over the competition, technologies that deliver high economic payoffs spread quickly. If they don't, it could be a case of some market failure, but then you'd have to explain why companies that can increase their profits significantly and/or lower their prices choose not to do so.

                                    When you claim some technique would give a corporation a significant competitive edge, and yet most corporations don't take it (at least not for most projects), then that is evidence against that claim because usually companies are highly motivated to gain an advantage. I'm not saying it's a closed case, but it is evidence.

                        • zozbot234 a day ago

                          > What you want to aim for is a RAM/CPU usage that matches the RAM/CPU ratio on the machine, as that's what you pay for.

                          This totally ignores the role of memory bandwidth, which is often the key bottleneck on multicore workloads. It turns out that using more RAM costs you more CPU, too, because the CPU time is being wasted waiting for DRAM transfers. Manual memory management (augmented with optional reference counting and "borrowed" references - not the pervasive refcounting of Swift, which performs less well than modern tracing GC) still wins unless you're dealing with the messy kind of workload where your reference graphs are totally unpredictable and spaghetti-like. That's the kind of problem that GC was really meant for. It's no coincidence that tracing GC was originally developed in combination with LISP, the language of graph-intensive GOFAI.

                          • pron a day ago

                            > It turns out that using more RAM costs you more CPU

                            Yes, memory bandwidth adds another layer of complication, but it doesn't matter so much once your live set is much larger than your L3 cache. I.e. a 200MB live set and a 100GB live set are likely to require the same bandwidth. Add to that the fact that tracing GCs' compaction can also help (with prefetching) and the situation isn't so clear.

                            > That's the kind of problem that GC was really meant for.

                            Given the huge strides in tracing GCs over the past ten and even five years, and their incredible performance today, I don't think it matters what those of 40+ years ago were meant for, but I agree there are still some workloads - not anything that isn't spaghetti-like, but specifically arenas - that are more efficient than tracing GCs (young-gen works a little like an arena but not quite), which is why GCs are now turning their attention to that kind of workload, too. The point remains that it's very useful to have a memory management approach that can turn the RAM you've already paid for to reduce CPU consumption.

                            Indeed, we're not seeing any kind of abandonment of tracing GC at a rate that is even close to suggesting some significant economic value in abandoning them (outside of very RAM-constrained hardware, at least).

                            • mwcampbell 8 hours ago

                              > (outside of very RAM-constrained hardware, at least)

                              I've spent much of my career working on desktop software, especially on Windows, and especially programs that run continuously in the background. I've become convinced that it's my responsibility to treat my user's machines as RAM-constrained, and, outside of any long-running compute-heavy loops, to value RAM over CPU as long as the program has no noticeable lag. My latest desktop product was Electron-based, and I think it's pretty light as Electron apps go, but I wish I'd had the luxury of writing it all in Rust so it could be as light as possible (at least one background service is in Rust). My next planned desktop project will be in Rust.

                              A recent anecdote has reinforced my conviction on this. One of my employees has a PC with 16 GB of RAM, and he couldn't run a VMware VM with 4 GB of guest RAM on that machine. My old laptop, also with 16 GB of RAM, had plenty of room for two such VMs. I didn't dig into this with him, but I'm guessing that his machine is infested with crap software, much of which is probably using Electron these days, each program assuming it can use as much RAM as it wants. I want to fight that trend.

                              • pron 2 hours ago

                                It's perfectly valid to choose RAM over CPU. What isn't valid is believing that this tradeoff doesn't exist. However, cloud deployments are usually more CPU-constrained than RAM constrained, so it's important to know that more RAM can be used to save CPU when significant processing is spent on memory management.

                            • zozbot234 a day ago

                              > The point remains that it's very useful to have a memory management approach that can turn the RAM you've already paid for to reduce CPU consumption.

                              That approach is specifically arenas: if you can put useful bounds on the maximum size of your "dead" data, it can pay to allocate everything in an arena and free it all in one go. This saves you the memory traffic of both manual management and tracing GC. But coming up with such bounds involves manual choices, of course.

                              It goes without saying that memory compaction involves a whole lot of extra traffic on the memory subsystem, so it's unlikely to help when memory bandwidth is the key bottleneck. Your claim that a 200MB working set is probably the same as a 100GB working set (or, for that matter, a 500MB or 1GB working set, which is more in the ballpark of real-world comparisons) when it comes to how it's impacted by the memory bottleneck is one that I have some trouble understanding also - especially since you've been arguing for using up more memory for the exact same workload.

                              Your broader claim wrt. memory makes a whole lot of sense in the context of how to tune an existing tracing GC when that's a forced choice anyway (which, AIUI, is also what the talk is about!) but it just doesn't seem all that relevant to the merits of tracing GC vs. manual memory management.

                              > we're not seeing any kind of abandonment of tracing GC at a rate that is even close to suggesting some significant economic value in abandoning them

                              We're certainly seeing a lot of "economic value" being put on modern concurrent GC's that can at least perform tolerably well even without a lot of memory headroom. That's how the Golang GC works, after all.

                              • pron a day ago

                                > It goes without saying that memory compaction involves a whole lot of extra traffic on the memory subsystem

                                It doesn't go without saying that compaction involves a lot of memory traffic, because memory is utilised to reduce the frequency of GC cycles and only live objects are copied. The whole point of tracing collection is that extra RAM is used to reduce the total amount of memory management work. If we ignore the old generation (which the talk covers separately), the idea is that you allocate more and more in the young gen, and when it's exhausted you compact only the remaining live objects (which is a constant for the app); the more memory you assign to the young gen, the less frequently you need to do even that work. There is no work for dead objects.

                                > when it comes to how it's impacted by the memory bottleneck is one that I have some trouble understanding also - especially since you've been arguing for using up more memory for the exact same workload.

                                Memory bandwidth - at least as far as latency is concerned - is used when you have a cache miss. Once your live set is much bigger than your L3 cache, you get cache misses even when you only want to read it. If you have good temporal locality (few cache misses), it doesn't matter how big your live set is, and the same is true if you have bad temporal locality (many cache misses).

                                > which, AIUI, is also what the talk is about

                                The talk focuses on tracing GCs, but it applies equally to manual memory management (as discussed in the Q&A; using less memory for the same algorithm requires CPU work regardless of whether it's manual or automatic).

                                > when that's a forced choice

                                I don't think tracing GCs are ever a forced choice. They keep getting chosen over and over for heavy workloads on machines with >= 1GB/core because they offer a more attractive tradeoff than other approaches for some of the most popular application domains. There's little reason for that to change unless the economics of DRAM/CPU change significantly.

                                • Ygg2 a day ago

                                  > It doesn't go without saying that compaction involves a lot of memory traffic

                                  It definitely tracks with my experience. Did you see Chrome on an AMD EPYC with 2TB of memory? It reached something like 10% memory utilization but over 46% CPU at around 6,000 tabs. Memory usage climbed steeply at first but was overtaken by CPU usage.

                                  • pron a day ago

                                    I have no idea what it's using its CPU on, whether it has anything to do with memory management, or what memory management algorithm is in use. Obviously, the GC doesn't need to do any compaction if the program isn't allocating, and the program can only allocate if it's actually doing some computation. Also, I don't know the ratio of live set to total heap. A tracing GC needs to do very little work if most of the heap is garbage (i.e. the ratio of live set to total memory is low), but any form of memory management - tracing or manual - needs to do a lot of work if that ratio is high. Remember, a tracing-moving GC doesn't spend any cycles on garbage; it spends cycles on live objects only. Giving it more heap (assuming the same allocation rate and live set) means more garbage and less CPU consumption, as GC cycles become less frequent.

                                    All you know is that CPU is exhausted before the RAM is, which, if anything, means that it may have been useful for Chrome to use more RAM (and reduce the liveset-to-heap ratio) to reduce CPU utilisation, assuming this CPU consumption has anything to do with memory management.

                                    There is no situation in which, given the same allocation rate and live set, adding more heap to a tracing GC makes it work more. That's why in the talk he says that a DIMM is a hardware accelerator for memory management if you have a tracing-moving collector: increase the heap and voila, less CPU is spent on memory management.

                                    That's why tracing-moving garbage collection is a great choice for any program that spends a significant amount of CPU on memory management, because then you can reduce that work by adding more RAM, which is cheaper than adding more CPU (assuming you're running on a machine that isn't RAM-constrained, like small embedded devices).

                  • CRConrad a day ago

                    > The goal of ANY software engineer worth their salt should be minimizing errors and defects in their end product.

                    ...to the extent possible within their project budget. Otherwise the product would — as GP already pointed out — not exist at all, because the project wouldn't be undertaken in the first place.

                    > This goal can be reached by learning to write Rust; practice makes perfect.

                    Pretty sure it could (at least) equally well be reached by learning to write Ada.

                    This one-note Rust cult is really getting rather tiresome.

                    • Ygg2 a day ago

                       > The goal of ANY software engineer worth their salt should be minimizing errors and defects in their end product.
                       >
                       > ...to the extent possible within their project budget.

                      Sure, but when other engineers discover that shit caused us many defects (e.g., asbestos as a fire insulator), they don't turn around and say, "Well, asbestos sure did cause a lot of cancer, but Cellulose Fibre doesn't shield us from neutron radiation. So it won't be preventing all cancers. Ergo, we are going back to asbestos."

                       And then you have team Asbestos and team Lead paint quarrelling over who has more uses.

                      That's my biggest problem. The cyclic, Fad Driven Development that permeates software engineering.

                      > Pretty sure it could (at least) equally well be reached by learning to write Ada.

                      Not really. Ada isn't that memory safe. It mostly relies on runtime checking [1]. You need to use formal proofs with Ada SPARK to actually get memory safety on par with Rust.

                       With SPARK you get two files for each subprogram, like .c/.h: one with the contract to be proven and one with the body. For example:

                           -- increment.ads: the specification with the contract (what gets proven)
                          procedure Increment
                              (X : in out Integer)
                          with
                            Global  => null,
                            Depends => (X => X),
                            Pre     => X < Integer'Last,
                            Post    => X = X'Old + 1;
                      
                           -- increment.adb: the body (the program)
                          procedure Increment
                            (X : in out Integer)
                          is
                          begin
                            X := X + 1;
                          end Increment;
                      
                      But you're way past what you call programming, and are now entering proof theory.

                      [1] https://ajxs.me/blog/How_Does_Adas_Memory_Safety_Compare_Aga...

            • Ygg2 2 days ago

              > Given Zig's safety guarantees (which are stronger than C++'s), and given its goals (which are different from C++'s), the question should be what should we be willing to give up to gain safety from use-after-free given the language's goals. Would more safety be better if it cost nothing?

               The problem with this statement is that without a memory safety invariant your code doesn't compose. Some code might assume no UAF while other parts allow it, and you'd have a mismatch. Just like the borrow checker is viral, so is unsafety.

              > If you don't say what Zig programmers should give up to gain more safety, saying "all new languages should be memory-safe" is about as meaningful as saying we should write fewer bugs.

              The goal of all engineering disciplines, including software, should be a minimization of errors and defects.

              Here is how engineering in any other non-computer science field takes place. You build something. See where it breaks; try to build it again given time and budget constraints. Eventually you discover certain laws and rules. You learn the rules and commit them to a shared repository of knowledge. You work hard to codify those laws and rules into your tools and practice (via actual government laws). Furthermore, you try to build something again, with all the previous rules, tools, and accumulated knowledge.

              How it works in tech. You build something. See where it breaks, say that whoever built it was a cream-for-brain moron and you can do it better and cheaper. Completely forget what you learned building the previous iteration. See where it breaks. Blame the tools for failure; remove any forms of safety. Project cancelled due to excessive deaths. Bemoan the lack of mental power in newer hires or lack of mental swiftness in older hires. Go to step 1.

               You'll notice a stark contrast between Engineering and Computer Tech. Computer tech is pop culture. It's a place where people wage wars about whether lang X or lang Y is better. How many times did the programming trend swing from static to dynamic typing and back? How many times did programming learn a valuable lesson, only for everyone to forget it until another language resurrected it decades later?

               Ideally, each successive language would bring us closer and closer to minimizing defects, with more (types of) safety and better guarantees. Is Rust a huge leap compared to Idris? No, but it's better than Ada at memory safety, that's for sure.

              But it's managed to capture a lot of attention, and it is a much stricter language than many others. It's a step towards ideal. And how do programmers react to it? With disgust and a desire for less safety.

              Sigh. I guess we deserve all the ridicule we can get.

              • pron a day ago

                > The problem with this statement is that without a memory safety invariant your code doesn't compose

                Yes, but that holds for any correctness property, not just the 0.0001% of them that memory safe languages guarantee. That's why we have bugs. The reason memory safety is a focus is because out-of-bounds access is the leading cause of dangerous vulnerabilities.

                > The goal of all engineering disciplines, including software, should be a minimization of errors and defects.

                 Yes, but practical minimisation, not hypothetical minimisation, i.e. how can I get the fewest bugs while keeping all my constraints, including budget. Like I said, a language like Rust exists because minimisation of errors is not the only constraint; if it were, there are already far more popular languages that do just as much.

                > You'll notice a stark contrast between Engineering and Computer Tech.

                 I'm not sure I buy this, because physical, engineered objects break just as much as software does, certainly when weighted by the impact of the failure. As to learning our lessons, I think we do when they are actually real. Software is a large and competitive economic activity, and where there's a real secret to more valuable software, it spreads like wildfire. For example, high-level programming languages spread like wildfire; unit tests and code review did, too. And when it comes to static and dynamic typing, the studies on the matter were inconclusive except in certain cases such as JS vs TS; and guess what? TS has spread very quickly.

                The selective pressures are high enough, and we see how well they work frequently enough that we can actually say that if some idea doesn't spread quickly, then it's likely that its impact isn't as high as its fans may claim.

                > And how do programmers react to it? With disgust and a desire for less safety.

                 I don't think so. In such a large and competitive economic activity, the assumption that the most likely explanation for something is that the majority of practitioners are irrational seems strange to me. Rust has had some measure of adoption and the likeliest explanation for why it doesn't have more is the usual one for any product: it costs too much and delivers too little.

                 Let's say that the value, within memory safety, between spatial and temporal safety is split 70-30; you know what? let's say 60-40. If I can get 60% of Rust's value for 10% of Rust's cost, that's a very rational thing to do. I may even be able to translate my savings into an investment in correctness that is more valuable than use-after-free.

                • Ygg2 a day ago

                   > Yes, but practical minimisation, not hypothetical minimisation, i.e. how can I get the fewest bugs while keeping all my constraints, including budget. Like I said, a language like Rust exists because minimisation of errors is not the only constraint; if it were, there are already far more popular languages that do just as much.

                  Rust achieves practical minimization, if not outright eradication, of a set of errors even in practice. And not just memory safety errors.

                   > Like I said, a language like Rust exists because minimisation of errors is not the only constraint; if it were, there are already far more popular languages that do just as much.

                  The reason Rust exists is that the field hasn't matured enough to accept better engineering practices. If everyone could write and think in pre/post/invariant way, we'd see a lot fewer issues.

                  > I'm not sure I buy this, because physical, engineered objects break just as much as software does, certainly when weighted by the impact of the failure.

                  Dude, the front page was about how Comet AI browser can be hacked by your page and ordered to empty your bank account. That's like your fork deciding to gut you like a fish.

                  > the assumption that the most likely explanation to something is that the majority of practitioners are irrational seems strange to me.

                  Why? Just because you are intelligent doesn't mean you are rational. Plenty of smart people go bonkers. And looking at the state of the field as a whole, I'd have to ask for proof it's rational.

                  • pron 20 hours ago

                    > Rust achieves practical minimization, if not outright eradication, of a set of errors even in practice. And not just memory safety errors.

                    It achieves something in one way, while requiring you to pay some price, while other languages achieve something in a different way, with a different cost. I've been involved with software correctness for many, many years (early in my career I worked on safety-critical, hard realtime software, and then practised and taught formal methods for use in industry), and there is simply no research, none, suggesting that Rust's approach is necessarily the best. Remember that most software these days - not the kind that flies aeroplanes or controls pacemakers, that's written mostly in C, but the kind that runs your bank, telecom supplier, power company, healthcare etc. but also Facebook - is written in languages that offer the same guarantees as Rust, for better or worse.

                    > If everyone could write and think in pre/post/invariant way, we'd see a lot fewer issues.

                    Except I've worked with formal methods in a far more rigorous way than Rust offers (well, it offers almost nothing), and in this field there is now the acknowledgement that software correctness can be achieved in many different ways. In the seventies there was a consensus about how to write correct software, and then in the nineties it all got turned around.

                    > That's like your fork deciding to gut you like a fish.

                    I don't think so, because everyone uses a fork but this is the first time I hear about Comet AI. The most common way to empty people's bank accounts is still by conning them.

                    > Why? Just because you are intelligent doesn't mean you are rational.

                     Intelligence has nothing to do with it. But "rational" here means "in accordance with reality", and software is a major competitive economic activity, which means that if some organisations act irrationally, there's both a strong motivation and an ability to take their lunch money. If they still have it, they're probably not behaving irrationally.

          • benreesman 3 days ago

            Haskell makes guarantees. Modern C++ makes predictions to within a quantifiable epsilon.

            Rust makes false promises in practical situations. It invented a notion of safety that is neither well posed, nor particularly useful, nor compatible with ergonomic and efficient computing.

            Its speciality is marketing, and we already know the bounding box on its impact or relevance. "Vibe coding" will be a more colorful and better remembered mile marker of this lousy decade in computers than Rust, which will be an obscurity in an appendix in 100 years.

            • simonask 3 days ago

              There is almost nothing accurate about this comment.

              "Makes predictions to within a quantifiable epsilon"? What in the world do you mean? The industry experience with C++ is that it is extremely difficult (i.e., expensive) to get right, and C++20 or newer does not change anything about that. Whatever "epsilon" you are talking about here surely has to be very large for a number bearing that sobriquet.

              As for the mindless anti-Rust slander... I'm not sure it's worth addressing, because it reflects a complete lack of the faintest idea about what it actually does, or what problem it solves. Let me just say there's a reason the Rust community is rife with highly competent C++ refugees.

              • Ygg2 2 days ago

                To be fair to GP, an error bar of 3±300 is still a quantifiable epsilon. Utterly useless, but quantifiable.

            • sshine 3 days ago

              > "Vibe coding" will be a more colorful and better remembered mile marker of this lousy decade in computers than Rust, which will be an obscurity in an appendix in 100 years.

              I doubt it.

              I'm teaching a course on C this fall. As textbook I've chosen "Modern C" by Jens Gustedt (updated for C23).

              I'm asked by students "Why don't you choose K&R like everyone else?"

              And while the book is from 1978 (ANSI C edition in 1988) and is something I've read joyously more than once, I'm reminded of how decades of C programmers have been doing things "the old way" because that's how they were taught. As a result, the world is made of old C programs.

              With this momentum of religiously rewriting things in Rust that we've seen in the last few years (how many other languages have rewritten OpenSSL and the GNU coreutils?), the amount of things we depend on that have incidentally been rewritten in Rust grows significantly.

              Hopefully people won't be writing Rust in 100 years. After all, 100 years ago mathematicians were programming mechanical calculators and analog computers, and today kids are making games. But I bet you a whole lot of infrastructure will still run Rust.

              In fact, anything that is convenient to Vibe code in the coming years will drown out other languages by volume. Rust ain't so bad for vibe coding.

              • pjmlp 3 days ago

                Kudos for going with modern C practices.

                There is a place to learn about the history of computing, and that is where the K&R C book belongs.

                Not only is it the old way, it is from the age of dumb C compilers, taking no advantage of all the stuff recent standards allow compiler writers to use to take optimizations to the next level, not always with the expected results.

                Maybe getting students to understand the ISO C draft is also an interesting exercise.

            • kelnos 2 days ago

              I hope in 100 years we're not using any of the languages available today. I do like Rust, and use it whenever it's appropriate, but it has its warts and sharp edges. Hopefully we'll come up with something better in the next century.

            • Ygg2 3 days ago

              > Rust makes false promises in practical situations. It invented a notion of safety that is neither well posed, nor particularly useful, nor compatible with ergonomic and efficient computing.

              Please stop. Rust's promise is very simple. You get safety without the tracing GC. It also gives you tools to implement your own safe abstraction on top of unsafe, but you are mostly on your own (miri, asan, and ubsan can still be used).

              Neither Rust nor Ada nor Lean nor Haskell can guarantee there are no errors in their implementations.

              Similarly, none of the listed languages can even try to show that their promises hold when a bad actor writes malicious code or designs malicious hardware. If you need that, you need to invent the Omniscient Oracle, not a program.

              I hate this oft repeated Nirvana fallacy. Yes, Rust is offering you a car with seatbelts and airbags. It is not offering a car that guarantees immortality in the event of a universe collapse.

              • zelphirkalt 2 days ago

                People state these things all the time about Rust's own implementation (or one of the other gazillion safe langs) potentially not being safe, but the difference from unsafe languages is that once a bug is fixed in the implementation of Rust, everyone profits. Everyone who uses the language and updates to a newer version, that is, which often requires no code changes or only minimal ones for a project. Now compare that with unsafe languages. Every single project needs to "fix" the same kind of safety issues over and over again. The language implementation can do almost nothing, except change the language to disallow unsafe stuff, which is not done because people like backwards compatibility too much.

                • Ygg2 2 days ago

                  > People state these things about Rust's own implementation (or one of the other gazillion safe langs) potentially not being safe all the time

                  Because it's technically true. The best kind of true!

                  Sorry, I meant to say the opposite of truth. Neither Rust nor Ada/SPARK, which use LLVM as a backend, can prove that they are correct if LLVM has bugs.

                  In the same way, I can't guarantee tomorrow I won't be killed by a rogue planet hitting Earth at 0.3c. So I should probably start gambling and doing coke, because we might be killed tomorrow.

                  > Every single project needs to "fix" the same kind of safety issues over and over again

                  I doubt that's the biggest problem. Each of the unsafe libraries in C/C++/Zig can be perfectly safe given invariants X and Y, respectively. What happens if you have two (or more) libraries with subtly non-compatible invariants? You get non-composable libraries. You end up with the reverse problem of the NPM world.

                  • tialaramex 2 days ago

                    To be fair, although LLVM has several pretty annoying bugs which result in miscompiling Rust (and C, and any other language capable of expressing the same ideas) and it sure would be nice if they fixed them, there are also Rust bugs that live in the Rust compiler itself and aren't LLVM's responsibility.

                    There are some scary soundness holes in Rust's compiler that will get patched eventually, but in principle you could trip them today. They're often "But why would anybody even do that?" problems, but it's technically legal Rust, and the compiler doesn't reject your program or even ICE; it just miscompiles your input, which is not what we want.

                    • Ygg2 2 days ago

                      > To be fair, although LLVM has several pretty annoying bugs which result in miscompiling Rust (and C, and any other language capable of expressing the same ideas) and it sure would be nice if they fixed them, there are also Rust bugs that live in the Rust compiler itself and aren't LLVM's responsibility.

                      Sure. Again, I didn't say there are no bugs in the Rust codebase ever, or that Rust will prevent all errors forever.

                      They are working on them, but a large chunk, ~50% (45 out of 103), are either nightly-only bugs (due to nightly-only features) or LLVM bugs that are difficult to coordinate/solve.

                      Some of them will probably be helped or solved by the new trait solver and by Polonius, the next-generation borrow checker. Granted, this still won't resolve all issues, or even all trait-related issues.

                      > There are some scary soundness holes in Rust's compiler that will get patched eventually but in principle you could trip them today.

                      In principle, yes. In principle you should cower in a corner, because there are many scary things in the world. From microbes, to insects, to malevolent proteins, to meteors, to gamma rays, to strangelets, to rogue black holes, to gravity entering a new stable state.

                      In practice, you don't. Because in practice we empirically adjust our expectations to the phenomena that occur, or have occurred. So a war is very likely, but being vaporized by a stray cosmic ray is not very likely.

                      And most of those soundness holes require either strange code or a weird combination of hardware, compilers, or flags. That's probably why most people don't encounter UB in everyday Rust.

        • wolvesechoes 2 days ago

          > The age of Zig is emerging deliberately

          Pet projects are nice, but slow down with the copium intake.

    • kllrnohj 2 days ago

      > So the future is Zig. He got there first.

      The future is many things, but a love letter to C is definitely not it.

      Zig is cute and a fun hobby project that might see above average success for a hobby project. But that's about it. It doesn't address the problems people actually have with C++, not like Rust or Swift do, and it certainly isn't going to attract any attention from the Java, JavaScript, C#, Python, etc... of the world.

      • davemp 2 days ago

        Zig is not a hobby project.

        > It doesn't address the problems people actually have with C++, not like Rust or Swift do

        Speak for yourself. Zig addresses pretty much all of my problems with C++:

        - terrible compile times
        - overly complex and still underpowered template/meta programming
        - fragmented/ancient build tools
        - lack of actually good enums/tagged unions
        - exceptions
        - outdated OOP baggage

        I don’t actually mind C++ that much, but Zig checks pretty much all of my boxes. Rust/Swift check some of these, but not all, and add a few of their own.

        > and it certainly isn't going to attract any attention from the Java, JavaScript, C#, Python, etc... of the world.

        Yeah of course. Zig isn’t trying to be a max ergonomics scripting language…

  • geon 3 days ago

    Isn’t the whole point of an external buffer that the function won’t need to allocate?

laserbeam 2 days ago

I… will not update my zig side projects to 0.15.x. I can see why Andrew wanted to release this, and I can appreciate getting the new Io in people’s hands… but it’s merely a few weeks after merging massive amounts of breaking changes on readers and writers.

For those working on the standard library, it’s a great thing. For someone like me who casually uses Zig, it feels like waiting for 0.16.0 for most of the IO dust to settle is the right thing to do.

  • debo_ 2 days ago

    If you're a language called Zig, I guess you sometimes have to Zag.

  • audunw 2 days ago

    Even Loris Cro, one of the Zig core members, mentioned in a very recent interview that he’ll wait for all the fallout of the IO changes to settle before updating and continuing work on his project using Zig.

    The future looks bright after that though. Both Andrew and Loris seem to think this is the last major breaking change, so I hope we can see a 1.0 not too long after. The biggest uncertainty now is what the consequences of (re-)introducing stack-less coroutines will be.

    • Galanwe 2 days ago

      > Both Andrew and Loris seem to think this is the last major breaking change, so I hope we can see a 1.0 not too long after.

      Well I mean the standard library still needs a _lot_ of work IMHO. Process management for instance is at a stage where it's barely usable for simple use cases.

noobermin 2 days ago

Having seen the posts about the new IO interface, I decided to steer clear of Zig. Looks like that fortuitous instinct was proven valid, as this looks more and more like the verbosity of pre-C++11, for different reasons but with a similar result.

This pattern (new language evolves to be as complex as the languages it was supposed to replace) seems familiar.

  • ivanjermakov 2 days ago

    As one of the post authors you mentioned, I don't think such posts should scare people away from trying it out. The Zig team is eager to discuss and break things if a better solution is found.

    Such an approach is not a great fit for those who treat Zig as a future job opportunity, but for personal and small-team projects it's _already_ a neat language with clear goals and great tooling.

jedisct1 3 days ago

The new I/O interface makes printing a simple “Hello, world!” more complicated, but once you get used to it, the design is actually very clean, versatile, and future-proof.

Since 0.15, though, I feel too dumb for Zig’s ArrayList.

  • simonask 3 days ago

    Is it future-proof though? Last I saw, it relied on some yet-to-be-determined design for compiling async variants of everything that uses IO, and it was still unclear whether it was possible at all to support dynamic dispatch.

    My info could be outdated - I don't follow Zig very closely, but I am curious.

    • kristoff_it 2 days ago

      The Reader/Writer changes are perfectly compatible with the upcoming async I/O stuff and you won't need to change any code that just deals with streams.

      No promises about potential future changes though :^)

  • throwawaymaths 2 days ago

    I think hello world is actually easier? getStdOut doesn't exist anymore and its equivalent is non-failable.
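
    For reference, the 0.15-style hello world looks roughly like this - a sketch from memory of the release-notes pattern, so treat the exact names as approximate; the idea is that you bring your own buffer and flush explicitly:

        const std = @import("std");

        pub fn main() !void {
            var stdout_buffer: [1024]u8 = undefined;
            // writer(&buffer) returns a File.Writer; its `interface` field is
            // the generic std.Io.Writer that print() and flush() live on.
            var stdout_writer = std.fs.File.stdout().writer(&stdout_buffer);
            const stdout = &stdout_writer.interface;
            try stdout.print("Hello, {s}!\n", .{"world"});
            try stdout.flush(); // nothing reaches the terminal until you flush
        }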

j1elo 2 days ago

> To convert the Stream.Reader to an std.Io.Reader, we need to call its interface() method. To get a std.io.Writer from an Stream.Writer, we need the address of its &interface field. This doesn't seem particularly consistent.

That made me think of how that change would be received in Go (it would probably be discarded). The way they approach changes is with extremely deep analysis, taking as much time as needed to avoid mistakes and reach a consistent solution (or as close as possible).

This has been my favorite for a while: https://github.com/golang/go/issues/45624

4 years to decide on something relatively minor that, right now, can be done with a bit of extra one-liner work. But things need to be well thought out. Inconsistencies are pointed out. Design concerns are raised. Actual code usage in the real world is taken into account... too slow for some people, but I think it's just as slow as it needs to be. The final decision is shaping up to be very nice.

  • josephg 2 days ago

    Rust is the same. It grinds my goat a little how many useful features are implemented - but only available in nightly rust. Things like generators.

    But when rust ships features to stable, they’re usually pretty well thought through. I’m impatient. But the rust language & compiler teams probably have the right idea.

    • hu3 2 days ago

      I beg to differ. Rust's async implementation is contentious and criticized often. Sometimes you just miss the mark despite pondering about it for a while. Same with Go.

      • josephg 2 days ago

        Yeah async rust is definitely the exception. That and Pin, which in my opinion totally missed the mark. The feature rust needs is the ability to have self reference in a struct. Pin is a hacky, inadequate half solution.

        • mwcampbell 2 days ago

          Can we all please stop complaining about async Rust and just acknowledge that the problem they were solving was both really hard and urgent for the success of the language (particularly given priorities among the most well-resourced tech companies in the late 2010s)? All things considered, I think they did a good job. I certainly wouldn't have done any better. And the feature was well thought through, with years of work before it shipped.

          • josephg a day ago

            The problem is really hard. But I think bringing it up is useful, because it motivates people to think about how they would solve it better. It’s certainly a question I’ve asked myself a bunch of times.

            If we all stopped complaining about memory safety in C and C++, we would never have gotten Rust in the first place. Rob Pike would never have let generics make it into Go if people didn’t spend years pestering him about it.

            I think I fail by taking for granted all the work that’s gone into languages like Rust. Lots of smart people have poured themselves into these discussions and designs. Thank you. I don’t say that enough.

            But yeah, async rust seems like one of those places where people arguing it out on github didn’t converge at a great design. It’s a really hard problem - really, combining a borrow checker with stack suspension is a CS research problem. I think it’s interesting and important to acknowledge that committees don’t always do great research. Sometimes you need a bunch of iterations of an idea. And a bright cookie or two who can work without justifying their designs.

            So no, I don’t think I’ll stop talking about it. Making async rust v1 was an incredible amount of work and I’m very grateful. But I also really want to seed the idea space so some bright sparks can think about what async rust v2 might look like.

  • dchest 2 days ago

    This wasn't exactly the case before Go 1.0 -- it changed quite rapidly, although in a less fundamental way (for example, removing semicolons, changing error types, etc.), and they usually provided an option to transform code automatically.

    It works that way now because they promised it would be stable after 1.0.

JaggerJo 3 days ago

Zig would be my go to language for low level stuff.

I think the fact that Zig can be used as a C/C++ cross compiler is brilliant.

eps 3 days ago

Sounds mostly like a documentation issue, or rather the lack thereof.

  • jeroenhd 2 days ago

    Because of how many parts of Zig change every other month, writing documentation doesn't seem to be a priority with Zig. The same way the Zig tutorial is just "here's a bunch of examples of Zig code" (that doesn't always compile on the latest version of the compiler), you're expected to read the source code for a lot of Zig standard library definitions.

    For a lot of simple calls, that works out pretty well, once you know all the tricks to Zig's syntax. A lot of requirements and implications are generally written out in very simple and short functions that are often logically named. Things like Allocators are pretty easy conceptually even if you probably don't want to write one yourself.

    It all breaks down when you start dealing with complex concepts. Zig's new I/O system looks a lot like Java's streams and wrappers and readers and writers, all wrapping around each other to make sending encrypted text over a secure channel as simple as output.write("hello").

    I think the new I/O system combined with the lack of documentation about how to use it was a mistake. I'm not even sure if expressing a typing system as complicated as this in the Zig standard library is a good idea. The entire language runs on clear, concise, short and readable methods, and the new system doesn't seem idiomatic in that way.

danieltanfh95 3 days ago

It's bad because they are mixing what was supposed to just be execution boundaries into the overall runtime engine, without making it explicit how to bridge between one and the other.

8s2ngy 3 days ago

I’m sorry, but any non-trivial Zig code gives me PTSD flashbacks of C. I don’t understand who Zig is targeting: with pervasive mutability, manual allocation, and a lack of proper sum types, it feels like a step back from languages such as Rust. If it is indeed a different way to write code, one that embraces default memory unsafety, why would I choose it over C, which has decades of work behind it?

Am I missing some context? I’d love to hear it.

  • sothatsit 3 days ago

    I love Zig precisely because it is so similar to C. Honestly, if you don't like C, I can totally understand why you wouldn't like Zig. But I love C, and I love Zig.

    Zig has become my go-to for projects where I would previously have reached for C, largely because Zig has such good compatibility with other C projects.

    Rust, on the other hand, is a completely different beast. It is very different from C, and it is far more complicated. That makes it harder to justify using, whereas Zig is a very easy choice as an alternative to using C itself.

    • simonask 2 days ago

      C is entirely as complicated as Rust, if your goal is to write correct software that doesn't crash all the time. It's only a syntactically simple language. Actually making anything interesting with it is _not_ simple.

      • kstenerud 2 days ago

        Quite right. I have 35 years of C under my belt. I can write it in my sleep.

        But even so, I can't for the life of me write C code that's as safe as Rust. There are just too many ways to make subtle little mistakes here and there, incrementing a typed pointer by a sizeof by mistake thinking it's a uintptr_t, losing track of ownership and getting a use-after-free, messing up atomic access, mutex deadlocks oh my...

        And that's with ALL warnings enabled in Clang. It's even worse with the default warnings.

      • sothatsit 2 days ago

        It depends on the project. Most of the projects I write in C are very simple, and getting them to work reliably is really not a problem at all.

        If you are writing more complicated or "interesting" programs, then I agree, C doesn't give you a good set of tools. But if all you are writing is small libraries or utility programs, C is just fine. In these cases, Rust feels like pulling out a sniper rifle to shoot a target a meter in front of your face (i.e., overkill).

        If you are writing complex, large, or very mission-critical programs, then Rust is great to have as a tool as well. But we don't have to take such a black and white view to think that Rust is always the best tool for the job. Or C or Zig or whatever languages for that matter.

      • uecker 2 days ago

        Most software I use daily on my Linux system is actually written in C and I can't remember any of it crashing in the last decade or so.

        • simonask 2 days ago

          Yeah, a ton of engineering hours went into making that happen.

          • uecker 2 days ago

            I also write C all the time, and it does not crash. There are certainly memory safety concerns with C, but there are also certainly many programmers that can write C code that does not crash all the time.

            • antonvs 2 days ago

              Survivor bias and selection bias. The list of CVEs tells a different story.

              • uecker a day ago

                Bias is a good keyword with respect to CVEs. As long as there is not much Rust code which is relevant to my daily life, I think this is not comparable. And the few Rust packages which have now ended up on my system see no regular security support because they pose a maintenance burden, so they actually make me less safe: https://www.debian.org/releases/trixie/release-notes/issues....

                But the original claim was that "C code crashes all the time" which is blatantly wrong.

                • simonask 16 hours ago

                  No, the original claim was that "writing C code that does not crash all the time is significantly harder/expensive", which I think is uncontroversial.

                  • uecker 4 hours ago

                    To be precise, it was "C is entirely as complicated as Rust, if your goal is to write correct software that doesn't crash all the time." and I think we wouldn't have this discussion if it were uncontroversial. As somebody writing C code and managing a team that writes C code, there is not a particularly high effort needed to make C software not crash. It may be too easy to write messy C code that crashes all the time, but this is not at all the same thing.

      • taminka 2 days ago
        • simonask 2 days ago

          I don't think a 100-line function signature is representative, but I will point out that the alternative is at least 100 lines of runtime checks instead. In both cases, what a nightmare.

        • jeroenhd 2 days ago

          Typing code from hell for sure, but how would you write an API with the same guarantees in C? Some kind of method specific struct that composes all other kinds of structs/unions to satisfy these requirements?

        • kelnos 2 days ago

          To me that's more an indictment of Diesel than of Rust. I've been using sea-orm for a project I'm working on, and my (generic) pagination function is a hell of a lot simpler and readable than that one.

        • viraptor 2 days ago

          This is an extremely generic interface to some meta magic DSL. It's complex but not really that complicated and yeah, it's going to be a bit long. But that's going to happen in every language where you rely on types for early validation.

        • lll-o-lll 2 days ago

          Yuck. I thought some of the signatures you end up with when building “Modern C++” in the Andrei Alexandrescu style were hairy, but this looks sick. Not in a good way.

          Probably does something cool for all that crazy though?

          • jeroenhd 2 days ago

            Every requirement on the types has a comment explaining why it's necessary.

            This is a generic method in the middle of some database DSL code that does a bunch of SQL operations in a type-safe manner. Code like this takes "SELECT ?+* FROM ?+* WHERE ? ORDER BY ?+* LIMIT ? OFFSET ?", specifically the limit and offset part, and returns a type that will always map to the database column. If the query is selecting a count of how many Foo each Baz references, this will map to a paginated Foo to Baz count type.

            The alternative is to manually write this stuff out in SQL, then manually cast the right types into the basic primitives, which is what a language like Zig probably does.

            You'll find similar (though sometimes less type-safe) complex code in just about any ORM/DSL, whether it's written in Java or PHP.

            I don't think you can accomplish this in C without some kind of recursive macro parser generating either structs or maybe function pointers on the fly. It'd be hell to make that stuff not leak or double free memory, though.

        • norskeld 2 days ago

          As a TypeScript developer experienced in type-level acrobatics, this looks just fine...

  • ozgrakkurt 3 days ago

    Compared to C:

    Discriminated unions, error handling, comptime, defer.

    Better default integer type casting, the ability to choose between ReleaseSafe/ReleaseFast

    And probably other things.
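
    As a tiny sketch of the error handling and defer/errdefer side of that list (an illustrative function, not from any real codebase):

        const std = @import("std");

        // Error unions make failure part of the signature, and errdefer ties
        // cleanup to the error path only - two things C doesn't give you.
        fn duplicateUpper(allocator: std.mem.Allocator, input: []const u8) ![]u8 {
            if (input.len == 0) return error.EmptyInput;
            const out = try allocator.alloc(u8, input.len);
            errdefer allocator.free(out); // runs only if we return an error below
            for (input, 0..) |c, i| {
                if (c >= 128) return error.NotAscii; // errdefer frees `out` here
                out[i] = std.ascii.toUpper(c);
            }
            return out;
        }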

    As for comparison to Rust, you do want very low level memory handling for writing databases as an example. It is extremely difficult to write low level libraries in Rust

    • simonask 3 days ago

      I think the argument is that it is also extremely difficult to write low level libraries in Zig, just as it is in C. You will just notice the difficulty at some later point after writing the code, potentially in production.

      • flohofwoe 3 days ago

        > low level libraries in Zig, just as it is in C

        Did you write any Zig code yet? In terms of enforced correctness in the language (e.g. no integer promotion, no implicit 'dangerous' casts, null-safety, enforced error handling, etc...) and runtime safety (range-, nullptr-, integer-overflow-checks etc...), Zig is much closer to Rust than it is to C and C++.

        It "just" doesn't solve static memory safety and some (admittedly important) temporal memory safety issues (aka "use-after-free"), but via a mix of compile errors and runtime checks it still makes it much harder to accidentally trigger memory corruption as a side effect in most situations that C and C++ let slip through. And you get ASAN/UBSAN automatically enabled in debug builds, a debug allocator which detects memory leaks and use-after-free for heap allocations (unfortunately not for stack allocations), and proper runtime stack traces - things that many C/C++ toolchains are still missing or don't enable by default.

        There is still one notable issue: returning a reference to stack memory from a function. This is something that many inexperienced Zig programmers seem to stumble into, especially since Zig's slice syntax looks so 'innocent' (slices look too similar to arrays, but arrays are values while slices are references - e.g. 'fat pointers'). IMHO this needs some sort of solution (either a compile-time error via watertight escape analysis, or at least some sort of runtime check which panics when trying to access 'stale' data on the stack) - and maybe giving slices their own distinct syntax that doesn't overlap with arrays might also help a bit.
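
        A minimal sketch of that footgun (a hypothetical function, just to illustrate the array-value vs slice-reference point):

            fn makeSlice(x: u8) []const u8 {
                const buf = [4]u8{ x, x, x, x }; // an array *value* in this stack frame
                return buf[0..]; // compiles today; the slice dangles once the frame is gone
            }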

        • simonask 2 days ago

          I mean, there's no question that Zig, also in its current state, is vast improvement over C or even C++ - for the "small stuff". It is much more pleasant to use.

          But there is still the "big stuff" - the things that have a fundamental, architectural impact. Things like: Will my program be multithreaded? Will I have many systems that interact? Will my program be maximally memory-efficient? Do I have the capacity (or money) to ensure correctness if I say "yes" to any of that?

          The most important consideration in any software project is managing architectural complexity. Zig is better, yes, but not a paradigm shift. If you say "yes" to any of the above, you are in the same world of pain (or expenses) as you would be in C or C++. This is the reason that Rust is interesting: It makes things feasible/cheap that were previously very hard/expensive, at a fundamental level.

          • flohofwoe 2 days ago

            > Zig is better, yes, but not a paradigm shift.

            But it doesn't have to be, and it shouldn't. Rust also isn't a paradigm shift; it "just" solved static memory safety (admittedly a great engineering feat), but other languages solved memory safety decades ago too, just with more of it happening at runtime.

            But in many other areas Rust copied too many bad ideas from C++ (and many of those "other things" Zig already does much better than Rust - but different people will have vastly different opinions about whether one solution is actually better than another - so discussions about those features usually run in circles).

            There shouldn't be a single language that solves all problems - this will just result in stagnation, it's much better to have many languages to pick from - and even if they just differ in personal opinions of the language author of how to do things or entirely subjective 'syntax sugar' features. Competition is good, monocultures are bad.

            > The most important consideration in any software project is managing architectural complexity

            No language will help you manage "architectural complexity" in any meaningful way except by imposing tons of arbitrary restrictions which then get in the way for smaller projects that don't need much "architecture". We have plenty of "Enterprise-scale languages" already (Java, C#, Rust, C++, ...); what we need is more languages for small teams that don't get in the way of just getting shit done, since the really interesting and innovative stuff doesn't happen in enterprise environments but in small 'basement and bedroom teams' :)

            • simonask 2 days ago

              You put "just" in scare quotes, but that word does a lot of heavy lifting there. Static memory safety is an extremely useful thing, because it enables you to do things competent programmers would never dare in C, C++, or Zig. Things like borrowing data from one thread's stack in another thread, or returning anything but `std::string` from a function. These things were simply not feasible before without a huge bulky runtime and GC.

              Keep in mind that Rust's definition of "memory safety" covers much more than just use-after-free, the most important being thread safety. It is a blanket guarantee of no undefined behavior in any code that doesn't contain the word `unsafe`. Undefined behavior, including data race conditions, is a major time sink in all non-hobby C or C++ projects.

              What bad ideas from C++ did Rust copy, in your opinion? I'm really not sure what you mean. Smart pointers? RAII?

              There are plenty of languages that enable quick iteration, prototyping, or "just getting shit done". If that's what you need, why not use them? I'm personally more concerned about the finished product.

              • Mawr 2 days ago

                You know, I'm beginning to understand why people complain about the "Rust Evangelism Strike Force". Can we discuss a language without the constant "But why not Rust instead?!", pretty please?

                • simonask 2 days ago

                  I did not bring up Rust.

                  What are you saying? Do you think I was sent here by some sinister cabal that organizes an effort to direct any discussion about programming towards Rust?

              • flohofwoe 2 days ago

                > You put "just" in scare quotes

                ...I don't use them as "scare quotes", it's more like in "can't you just..." - e.g. something that looks simple from the outside but is hard to do / complex on the inside - e.g. I do recognize the work that went into Rust's memory safety, but I question the effort that went into it compared to more traditional memory safety methods, especially when looking at the restrictions that Rust imposed on the programmer - it's a pretty hefty tradeoff (IMHO of course).

                > What bad ideas from C++ did Rust copy, in your opinion?

                Mainly doing things in the stdlib that should be built into the language (e.g. Option, Result, Box, Cell, RefCell, Arc, and probably a dozen other types and traits...), resulting in what I call 'bird droppings syntax' of too many chained function calls to get to the thing you actually want (.unwrap, .into_iter, .iter, .iter_mut, .as_ref, .as_mut, .expect, .unwrap_or_else, .unwrap_or_default, .ok, .ok_or_else, .and_then, .or_else ... like, wtf?). The absurd amount of `::` and `<>` in typical Rust code. The missing separation line between stdlib and language (like for-loops using the Iterator trait, or more obviously: operator overloading). The stdlib doing memory allocations behind your back through a global allocator - which we know from C++ do be a really bad idea for decades already... etc etc... I think most Rust programmers are blind towards those issues the same way that C++ programmers are blind towards C++ issues (which is another thing both language ecosystems have in common though).

                • simonask 2 days ago

                  I mean, there's no winning here. Either the language is too complex and does too many things, or it's not complex enough and relegates fundamental things to the standard library.

                  I don't think there is any substantial difference between `Option<Thing>` and `@Nullable Thing` or `Thing | null`, I don't think there's anything wrong with choosing `::` over `.` for namespace resolution (it means you can have local variables with the same name as a module), and you have to have some way to declare generic parameters.

                  Rust generally does not allocate behind your back, but custom allocators is a work in progress. The reason it takes time is precisely that they want to avoid the mistakes of C++. A couple of mistakes were already avoided - for example, async/await in Rust does not allocate behind your back, while C++ coroutines do.

                  • tialaramex 2 days ago

                    > I don't thing there is any substantial difference between `Option<Thing>` and `@Nullable Thing` or `Thing | null`

                    I object to @Nullable and similar arrangements as a magic special case. If the sum types only solved this one issue then it's a wash but they do a lot more too.

                      Either of the sum types, the concrete `Option<Thing>` or the ad hoc `Thing | null`, is OK because they're not magic; less magic is better.

                    • simonask 2 days ago

                      I’m confused. You seem to think Option is magical? It is not, and neither is Result. They are regular sum types defined in the standard library, nothing special about them.

                      • tialaramex 2 days ago

                        That's certainly not what I was trying to get across. Perhaps I wrote something confusing, for which I apologise, but, since we're here anyway...

                        Technically Option is actually magic, though it's for a subtle reason unconnected to the current topic. If you go read its source Option is a langitem, that is, Rust literally isn't allowed to exist without this type. Rust's core libraries are defined, so you shouldn't and most people never will use Rust without them, but the Rust language actually doesn't require that all of core exists, it does require a handful of types, functions etc. and those are called "langitems".

                        So, why is Option a langitem? Well, Rust's language has for-each loops. But what it actually does (there's literally a compiler step doing this) is treat those loops as if they'd been written as a slightly clunky loop { } block making and then consuming an Iterator. To do that, the IntoIterator and Iterator traits must exist, and further, since the return type of the next call needed by Iterator is an Option, the Option generic type must exist too!

                        Technically this type wouldn't have to be called Option, the Rust compiler doesn't care what it is called, but it must exist or else the compiler doesn't know how a for-each loop can work.

                        • simonask 2 days ago

                          Being a langitem means that the compiler can use it in desugaring, and `for ... in ...` happens to be syntactic sugar in Rust, just like `for (auto x: y) {}` is in C++. `Range` et al are also compiler builtins, because `a..b` desugars to that type. It doesn't imply that the type itself is a compiler builtin, for example.

                          Another example is the `Deref` trait, which roughly corresponds to `operator->()` or `operator*()`, or an implicit reference conversion. It is called implicitly so you can use `smart_ptr.foo` without special syntax for dereferencing.

                          I think I fundamentally don't understand why any of this is a problem?

                          • tialaramex a day ago

                            I think maybe your fundamental misunderstanding was just a misreading of what I actually wrote?

                            I don't like @Nullable (or the ? syntax used in C# for example) because they're a special case magic. They handle exactly one narrow idea (Tony's Billion Dollar Mistake, the "null" pointer) and nothing else.

                            I prefer sum types - both Option<T> and T | null are syntax for sum types. The former, which Rust has, is an explicit generic type, while the latter is an ad hoc typing feature. I don't have a strong preference between them.

                  • throwawaymaths 2 days ago

                    Don't the LLVM coroutines require C's malloc? This is part of the reason why Zig scrapped async. I would suspect Rust's async/await does allocate behind your back.

                    • steveklabnik 2 days ago

                      Rust does not. It also doesn’t use LLVM’s coroutines.

      • kristoff_it 2 days ago

        > I think the argument is that it is also extremely difficult to write low level libraries in Zig, just as it is in C.

        This has not been my experience at all in the ~6 years I've been writing Zig. I started with very little experience writing C (<1000 lines, all written while in university), and since day 1 Zig has been a tremendous improvement over it, even back when it was at version 0.4.0.

        • simonask 2 days ago

          Glad you're having a great time with it. :-)

          I'm informed by having shipped a lot of C++ code in my time, which has taught me a lot about actually delivering stable software. Being better than C is a very low bar here.

      • ozgrakkurt 2 days ago

        This is a subjective argument. You don’t know me, I don’t know you. There is no meaning in assuming anything.

        There is plenty of working software written with pretty much any language

    • kstenerud 2 days ago

      > It is extremely difficult to write low level libraries in Rust

      Really? I've not found it at all difficult to write low level libraries in Rust.

      • ozgrakkurt 2 days ago

        For me it was very difficult to make an io library on io_uring that is properly safe.

        Also using arena and other special allocators in different sections of the program. While maintaining hard memory limits for different sections of the program.

        These are possible to do in rust but it is very difficult for me so I decided to just not do it. Otherwise I have to spend 5x the normal amount of time to make sure the library cannot be misused by the user ever.

        This is pretty pointless since I’m the only one who is going to use that code anyway.

        Could be skill issue or w/e but I just find zig easier to make progress in

  • flohofwoe 3 days ago

    > a lack of proper sum types

    Do you consider Rust enums 'proper sum types'? If yes what are Zig's tagged unions missing?

    E.g.:

        const Variant = union(enum) {
            int: i32,
            boolean: bool,
            none,
    
            fn truthy(self: Variant) bool {
                return switch (self) {
                    Variant.int => |x_int| x_int != 0,
                    Variant.boolean => |x_bool| x_bool,
                    Variant.none => false,
                };
            }
        };
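
    And at the use site the switch is exhaustive with payload captures (a small sketch reusing the `Variant` type above):

        const std = @import("std");

        pub fn main() void {
            const v: Variant = .{ .int = 3 };
            // Adding a new variant to `Variant` turns every switch that
            // doesn't handle it into a compile error.
            switch (v) {
                .int => |n| std.debug.print("int: {d}\n", .{n}),
                .boolean => |b| std.debug.print("bool: {}\n", .{b}),
                .none => std.debug.print("none\n", .{}),
            }
        }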
  • kllrnohj 2 days ago

    Zig is for people who want to write C, that's really it. It's not a replacement for C++ or Rust or Go or Swift or anything "serious".

    As for why you would choose it over C, because C has too many problems for even the C lovers to ignore. Zig fixes a tiny amount of them, just enough to pretend it's not problematic, but not enough to be useful in any non-hobby capacity. Which is fine, very few languages do achieve non-hobby status after all.

  • LAC-Tech 3 days ago

    Zig is a systems programming language. I think that's probably who it's targeting.

    People do systems programming in Rust, but that's not really what most of the community is doing. And it's DEFINITELY not what the standard library is designed for.

    • konart 3 days ago

      >People do systems programming in rust, but that's not really what most of the community is doing.

      As someone who hasn't done any systems programming since university: wait, what?

      I was under the impression that this is exactly what people were doing with Rust (system apps, even the Linux kernel, no?).

      If not, what are they (most of the community) doing with Rust?

      • LAC-Tech 3 days ago

        Web servers, games, and applications, that sort of thing.

        Some people definitely do systems programming in it, but it's a minority. The std library is not set up for it at all; you need something like rustix, but even that results in very unidiomatic ("unsafe") Rust code.

        In Zig it's all in the std library by default. Because it's a systems programming language, first and foremost.

        • Ar-Curunir 2 days ago

          Rust is in the Linux kernel. Doesn’t get more systems than that…

        • porridgeraisin 2 days ago

          Actually, I was also under OP's impression... can you tell me a few specific problems with using Rust for systems programming? BTW, I have only ever done something that resembles systems programming in C.

    • simonask 3 days ago

      Which part of the Rust standard library are you referring to here?

      As far as I can tell, it contains many, many features that are irrelevant outside of systems programming scenarios with highly particular needs.

      • LAC-Tech 3 days ago

        Let me answer your question with a question - how do you memory map in Rust with the standard library?

        In Zig it's std.posix.mmap.

        • tialaramex 2 days ago

          Because Rust's standard library doesn't provide memory mapping, you will need to use platform-specific APIs.

          In Zig it's exactly the same, except that they decided to provide the POSIX platform-specific APIs, which, if you're using a POSIX system, is I guess useful, and otherwise it's dead weight.

          It's a choice. I don't think it's a good choice but it's a choice.

        • zelphirkalt 2 days ago

          Your question might hint at a questionable presumption. So let me answer your question with a question - does one have to memory map in Rust? Perhaps there are alternatives available in Rust that you are not considering.

        • simonask 2 days ago

          I think you are moving the goal posts. You use the `memmap2` crate (see the sketch after this comment), or the `libc` crate if you want to be reckless about it. The question was how the standard library gets in your way, not whether it includes everything you need.

          And I don't think that including every feature of every possible OS is a sensible position to have for a standard library. Or would you argue that it should also include things like `CreateWindowExW()`?

          If all you use is the Rust standard library, you can be reasonably sure that your program works on all platforms, and memory mapping is something that is highly platform specific, even among POSIX-likes. I would not like to attempt designing a singular cross-platform API for it that has to be maintained in perpetuity.

          (There are a few OS-specific APIs in the Rust standard library, mostly because they are required anyway for things like I/O and process management. But the limit has to be set somewhere.)
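
          A rough sketch of the memmap2 route mentioned above (assumes the memmap2 crate and an existing, readable file named "data.bin"):

              use memmap2::Mmap;
              use std::fs::File;

              fn main() -> std::io::Result<()> {
                  let file = File::open("data.bin")?;
                  // Unsafe because the mapping's contents can change if the file is modified elsewhere.
                  let map = unsafe { Mmap::map(&file)? };
                  println!("mapped {} bytes, first byte: {:?}", map.len(), map.first());
                  Ok(())
              }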

        • kryptiskt 2 days ago

              extern "C" mmap(addr:*mut c_void, length:c_size_t, prot:c_int, flags:c_int, fd:c_int, offset:c_ssize_t) -> *mut c_void;
          
          Piece of cake. Or you could install a crate with bindings if you are afraid of writing code yourself.
xmorse 2 days ago

If you are too dumb for it, I don't have any hope.