points by spinningslate 1 year ago

this. Erlang's concurrency support is one of those things you can't unsee. Going back to sequential-by-design languages (which is pretty much every other industrial-quality language bar Go[1]) just feels cumbersome:

C/C++/C#/Python/...: "You want concurrency? Sure. We have OS processes, and threads, and this cool new async doohickey. Pick whatever you fancy! Oh, but by the way: you can't use very many processes cos they're _really_ heavyweight. You can have lots more threads, but not too many, and beware corrupting the shared state. Async though, you can have _loads_ of things going on at once. Just, y'know, don't mix the colours up".

With Erlang/Elixir it's just:

"You want concurrency? Sure, here's Erlang processes. You can have millions of them. Oh, you need to communicate between them? Yep, no probs, messages and mailboxes. What's that? Error handling? Yep, got that covered too - meet the Supervisors"
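
A minimal Elixir sketch of that model - one spawned process, one message each way (no supervision tree here):

```elixir
# Spawn a lightweight BEAM process that waits for one message.
pid =
  spawn(fn ->
    receive do
      {:ping, from} -> send(from, :pong)
    end
  end)

# Message it, then pull the reply out of our own mailbox.
send(pid, {:ping, self()})

reply =
  receive do
    :pong -> :pong
  after
    1_000 -> :timeout
  end

IO.inspect(reply)
```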

--

[1] Counting Elixir as "Erlang" in this context given it also sits on the BEAM VM.

binary132 1 year ago

C++: zero-cost leaky abstraction with unlimited cognitive and development cost

Functional programming languages: Unlimited good abstractions of unknown cost

I don't feel like there's a great third option. Go is pretty good.

  • TwentyPosts 1 year ago

    I hate to be that guy, but if you want "C++ but also functional programming", then... well, Rust is in fact the language you're looking for.

    • binary132 1 year ago

      If anything, the cognitive load of Rust is worse than C++'s.

      • flembat 1 year ago

        Rust is certainly challenging; even Copilot finds it difficult to write correct Rust. It's fun to watch the AI fight the compiler.

        • binary132 1 year ago

          I guess the way I feel about using Rust is kind of the opposite of how I feel about using Go. Go has such plain and straightforward semantics that it's very easy to make simple, straightforward packages with it. It is actually even simpler than Python. In C++, it's a pain to get there, but those simple abstractions are still at least _possible_. In Rust, the nuances of the semantics are not only convoluted, but also very hard to encapsulate and abstract over. It's not so much that the abstractions leak as that the nuances of the semantics are inextricably and necessarily _part of the package API_.

          It could probably be argued that this is simply acknowledging reality, and that C++ lets you get away with murder if you want to. But maybe that’s what I want.

          I think a good feature for C++ would be a “safe” (or “verified”?) block or method decorator that could enforce some set of restrictions on semantics, e.g. single mutable borrow. The problem is that those functionalities are part of the compiler APIs and not standardized.

          The Rust compiler is also not standardized, but they can get away with it.

neonsunset 1 year ago

C# tasks are lightweight, and I'd expect per-task overhead to be significantly lower than Erlang's.

e.g.:

    var delay = Task.Delay(3_000);
    var tasks = Enumerable
        .Repeat(async () => await delay, 1_000_000)
        .Select(f => f());

    Console.WriteLine("Waiting for 1M tasks...");
    await Task.WhenAll(tasks);
    Console.WriteLine("Finished!");

edit: consider suggesting a comparable example in Erlang before downvoting :)

  • rdtsc 1 year ago

    Do they have isolated heaps and can they be preempted, even if they spin in an infinite loop doing some CPU intensive things?

    • neonsunset 1 year ago

      Tasks are not processes, and preempting them would be the wrong thing to do - and so would "isolated heaps", given the performance requirements .NET faces. You do want to share memory through concurrent data structures (which, e.g., channels are, despite what Go apologists say), and to easily await them when you want to.

      CSP, while nice on paper, has the same issues as e.g. partitioning in Kafka, just at a much lower level, where it becomes a critical bottleneck - you can't trivially "fork" and "join" flows of execution, which a well-implemented async model enables.

      It's not "what about x" but rather how you end up applying the concurrent model in practice, and C# tasks allow you to idiomatically mix in concurrency and/or parallelism in otherwise regular code (as you can see in the example).

      I'm just clarifying, regarding the parent comment, that concurrency in .NET is not like in Java/C++/Python (even if the latter shares some similarities, there are constraints inherent to Python itself).

      • rdtsc 1 year ago

        > and that would be a wrong thing to do, and so would be "isolated heaps" - you do want to share memory through concurrent data structures (which e.g. channels are despite what go apologists say), and easily await them when you want to.

        It depends on the context. In some contexts absolutely not. If we share memory, and these tasks start modifying global data or taking locks and then crash, can those tasks be safely restarted, can we reason about the state of the whole node any longer?

        > CSP, while is nice on paper

        Not sure if Erlang's model is CSP or the actor model (it actually started as neither), but it's not just nice on paper. We have nodes with millions of concurrent processes running comfortably; I know they can crash, and that I can restart various subsets of them safely. That's no small thing, and it's not just paper-theoretical.
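
        To make the "restart subsets safely" point concrete, a minimal Elixir sketch (`Worker` is a made-up module, just for illustration):

        ```elixir
        # one_for_one: if a child crashes, only that child is restarted;
        # its siblings and their state are untouched.
        defmodule Worker do
          use Agent
          def start_link(_), do: Agent.start_link(fn -> 0 end, name: __MODULE__)
        end

        {:ok, sup} = Supervisor.start_link([Worker], strategy: :one_for_one)
        IO.inspect(Supervisor.count_children(sup))
        ```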

        • neonsunset 1 year ago

          What value does isolated heap offer for memory-safe languages?

          Task exceptions can simply be handled via try-catch at the desired level. Millions of concurrently handled tasks is not that high a number for .NET's threadpool. It's one of many things that are a "nothingburger" in the .NET ecosystem yet somehow get sold as a major advantage in other languages (you can see it with other features too - Nest.js as a "major improvement" for back-end, when it just looks like something we had 10 years ago; "structured concurrency", which is simple task interleaving; etc.).

          It's a different, lower-level model, but it means you are not locked into one particular (even if good) way of doing concurrency, as you are in Erlang.
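
          To sketch the try-catch point: an exception thrown inside a task is rethrown at the await site, so that is where you recover (a minimal top-level-statements program; `Work` is a stand-in for some failing unit of work):

          ```csharp
          using System;
          using System.Threading.Tasks;

          string result;
          try
          {
              // The task faults; awaiting it rethrows the exception here.
              await Task.Run(Work);
              result = "no exception";
          }
          catch (InvalidOperationException ex)
          {
              result = $"recovered: {ex.Message}";
          }

          Console.WriteLine(result);

          // Placeholder for a unit of work that fails.
          static void Work() => throw new InvalidOperationException("boom");
          ```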

          • asabil 1 year ago

            GC determinism is one of the things you get. Another is non-cooperative asynchronous termination.

            • neonsunset 1 year ago

              Pretty much all efficient GC implementations are inherently non-deterministic, even if predictable.

              How can this improve predictability of GC impact?

              • asabil 1 year ago

                No global GC. Each Erlang process does its own GC, and that GC only happens when the process runs out of space (i.e. when its heap and stack meet).

                You can, for example, configure a process with enough initial memory that it never hits a GC at all; this is especially useful for a process that performs one specific task and then terminates. Once it terminates, the entire process memory is reclaimed in one step.
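
                For example (a sketch - `min_heap_size` is in machine words, and 10_000 is an arbitrary figure):

                ```elixir
                # Give the process a large initial heap up front so its one-off
                # job never triggers a GC pass; the whole heap is reclaimed in
                # one step when the process exits.
                pid =
                  :erlang.spawn_opt(
                    fn -> Enum.sum(1..100_000) end,
                    min_heap_size: 10_000
                  )
                ```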

                • neonsunset 1 year ago

                  There is no free lunch in software - the tradeoff here is binary serialization and/or data copying in place of simple function calls. The same goes for GC: an efficient GC comes with quite involved state, which adds to the cost of spawning. At that point, you might as well use a bump allocator or an arena. Either way, Gen0 in .NET (it is a generational GC) acts like one, STW pauses can be sub-millisecond and are pretty much a non-issue, given that you don't even need to allocate that often compared to many other high-level languages.

          • asa400 1 year ago

            Briefly, the tradeoff that Erlang's independent-process-heaps model makes is that garbage collection (and execution in general) occurs per process. In practical terms, this means you have lots of little garbage collections and far fewer "large" (think "full OS process heap") collections.

            This provides value in a few ways:

            - conceptually: it is very simple. i.e., the garbage collection of one process is not logically tied to the garbage collection of another.

            - practically: it lends itself well to low-latency operations, since the garbage collection of one process can happen concurrently with the normal operation of another process.

            Please note that I am not claiming this model is superior to any other. That is of course situational. I am just trying to be informative.

            This is a good post with more information, if you're interested: https://hamidreza-s.github.io/erlang%20garbage%20collection%...

        • neonsunset 1 year ago

          RE: locks and concurrently modified data-structures

          It comes down to the kind of lock being used. Scenarios that require strict data sharing handle this as they see fit - for recoverable states, the lock can simply be released in a `finally` block, and the synchronous/blocking `lock` statement does this automatically. All concurrent containers offered by the standard library either do not throw, or throw exceptions that indicate a wrong operation/failed precondition/etc. and can be recovered from (as can most exceptions in C#, in general).
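
          That is roughly the shape the `lock` statement expands to - `Monitor.Enter`/`Monitor.Exit` with the exit in a `finally` (a minimal sketch):

          ```csharp
          using System;
          using System.Threading;

          object gate = new();
          int shared = 0;

          Monitor.Enter(gate);
          try
          {
              shared++;            // mutate the protected state
          }
          finally
          {
              Monitor.Exit(gate);  // released even if the body throws
          }

          Console.WriteLine(shared);
          ```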

          This does not preclude the use of channel/mailbox and other actor patterns (after all, .NET has Channel<T> and ConcurrentQueue<T> or if you would like to go from 0 to 100 - Akka and Orleans, and the language offers all the tools to write your own fast implementation should you want that).

          Overall, I can see the value of switching to Erlang if you are coming from a platform/language with much worse concurrency primitives, but next to F# and C#, personally, Erlang and Elixir look like a sidegrade, as .NET applications tend to scale really well with cores even when implemented sloppily.

          • weatherlight 1 year ago

            Whether you run on one 96-core machine or on 96 individual single-core machines, the Erlang code is going to look pretty much the same.

  • solid_fuel 1 year ago

    I'm not really sure what this proves, since there aren't many good reasons to spawn 1 million processes that do nothing but sleep. A more convincing demonstration would be spawning 1 million state machines that each maintain their own state and process messages or otherwise do useful work. But examples of that on the BEAM have been around for years.

    So, in the interest of matching this code, I wrote an example that spawns 1_000_000 processes, each of which waits for 3 seconds and then exits.

    This is Elixir, but this is trivial to do on the BEAM and could easily be done in Erlang as well:

        #!/usr/bin/env elixir
        
        [process_count | _] = System.argv()
        
        count = String.to_integer(process_count)
        
        IO.puts "spawning #{count} processes"
        
        1..count
        |> Enum.map(fn _c ->
          Task.async(fn -> 
            Process.sleep(3_000) 
          end)
        end)
        |> Task.await_many()
    
    

    The default process limit is 262,000-ish for historical reasons but it is easy to override when running the script:

        » time elixir --erl "+P 1000001" process_demo.exs 1000000
        spawning 1000000 processes
        
        ________________________________________________________
        Executed in    6.85 secs    fish           external
           usr time   11.79 secs   60.00 micros   11.79 secs
           sys time   15.81 secs  714.00 micros   15.81 secs
    
    

    I tried to get dotnet set up on my Mac to run the code in your example and provide a timing comparison, but it has been a few years since I wrote C# professionally, and I wasn't able to quickly finish the boilerplate setup required to run it.

    Ultimately, although the BEAM performs quite well here imo, I think these kinds of showy-but-simple tests miss the advantage of what OTP provides: unparalleled introspection abilities on a running system in production. Unfortunately, it is more difficult to demonstrate the runtime tools in a small code example.

    • neonsunset 1 year ago

      The argument regarding representativeness is fair. But I think it is just as important for the basics to be fast, as they represent a constant overhead that most other code pays. There are edge cases where unconsumed results get optimized away, and other issues that make results impossible to interpret, and these must be accounted for; but there is also a risk of reducing the discussion to "no true Scotsman", which is not helpful in the pursuit of "how do we write fast concurrent code without unnecessary complexity".

      I have adjusted the example to match yours and be more expensive on .NET - the previous one spawned 1 million tasks waiting on the same asynchronous timer captured by a closure, each with its own state machine, but nonetheless as cheap as it gets: spawning an asynchronously yielding C# task still costs 96 B[0], even counting the state-machine box allocation (closer to 112 B in this case, iirc).

      To match your snippet, this now spawns 1M tasks that each await their own asynchronous timer (1M timers in total), approximately tripling the allocation traffic.

          var count = int.Parse(args[0]);
      
          Console.WriteLine($"spawning {count} tasks");
      
          var tasks = Enumerable
              .Range(0, count)
              .Select(async _ => await Task.Delay(3_000));
      
          await Task.WhenAll(tasks);
      

      In order to run this, you only need an SDK from https://dot.net/download. You can also get it from Homebrew with `brew install dotnet-sdk`, but I do not recommend daily-driving that kind of installation: Homebrew's use of a separate path sometimes conflicts with other tooling and breaks the .NET build system's discovery of SDK packs should you install another SDK in a different location.

      After that, the setup process is just

          mkdir CSTasks && cd CSTasks
          dotnet new console --aot
          echo '{snippet above}' > Program.cs
          dotnet publish -o .
          time ./CSTasks 1000000
      

      Note: AOT is used here to avoid spamming files, as the default publish mode is "separate file per assembly + host-provided runtime", which is not as nice to use (a historical default). Otherwise, its impact on code execution time is minimal. Keep in mind that the first AOT compilation will have to pull the IL AOT compiler from the NuGet feed.

      Once done, you can just nuke the `/usr/local/share/dotnet` folder if you don't wish to keep the SDK.

      Either way, thank you for putting together your comment - Elixir does seem like a REPL-friendly language[1], in many ways similar to F#. It would be impolite of me not to give it a try when you are willing to do the same for .NET.

      [0]: https://devblogs.microsoft.com/dotnet/performance-improvemen...

      [1]: there exist `dotnet fsi` as well as dotnet-script, which allow using F# and C# for shell scripts in a similar way, but I found the startup latency of the latter underwhelming even with the compilation caching it does. It's okay, but not the sub-100ms and sub-20ms you get with properly compiled JIT and AOT executables.