> In addition, we are exploring more seamless interoperability with C++ through Carbon, as a means to accelerate even more of our transition to MSLs.
Carbon doesn’t really care about memory safety except as a vague “in the future we’ll evolve the language somehow to add more memory safety” statement. The primary goal seems to be to parse code as quickly as possible. So I’m not sure I understand how it improves the MSL story. Or is it saying that interfacing Carbon with a MSL like Rust is easier so carbon is a better bridge to MSL than things like autocxx?
[Carbon team member here.] Carbon is still a small team, so it is going to take time to achieve all of our goals. Carbon will need to demonstrate both C++ interop and memory safety to be considered a successful experiment worth pushing to a 1.0 version. Once those are achieved, we do expect it will be easier to get C++ code to memory safety through Carbon, since that is the purpose it is being created for. The impedance mismatch between C++ and Carbon will be much lower than with Rust.
Parsing code quickly is merely one of those goals we can demonstrate today.
Is there a fleshed-out idea for memory safety? Is it a form of borrow checking or something else? And if I recall correctly, thread safety isn't a target at all, right? Not sure how to square that with memory safety as a goal, given the two are linked.
It's weird that he excludes data races from the memory safety even though data races can be in the other classes he calls out. Not sure I buy that it's not as relevant for security since use-after-free is a common side effect of data races.
I am also a little confused by it, but we'll see how it goes. Since everything is still subject to change, it's possible that this won't be the final word on the matter, so I'm not too concerned at this stage.
I would have thought carbon would have memory safety from the get go, but having a language where memory safety is one of the primary goals is meaningful in and of itself (as opposed to C++, where memory safety is one of many competing priorities).
I think we can all agree that whenever Rust comes up there's a high amount of "I like to program in C/C++ and don't plan to change, thankyouverymuch"-style comments. Was there a similar backlash against the rise of Java, Swift, Kotlin, or TypeScript? Or is this somehow a unique case? Nothing is new under the sun and all that, but I can't remember similarly animated discourse before.
I think it is a special case because the Rust people are especially annoying in their evangelism.
I remember a degree of pushback on FP, because the people that were into things such as Haskell and Scala were a little annoying with their desire to add FP everywhere too.
My takeaway is that annoying evangelists get a level of pushback similar to how annoying they are.
One could make the argument that the rust people had to become annoying evangelists because otherwise the default is still pushback from c/c++ folks that don’t want changes.
It's more than that because rust evangelists often preach replacing GC'd languages with rust in cases where it is not the best fit (mundane web dev for example).
I think Rust is a good fit for webdev, if you have programmers who already know or actively want to learn Rust, and if you don't need libraries not available for Rust. Admittedly those are some big caveats, but better latency and throughput are never bad things. The lack of GC is not actually a problem once you have internalized the rules of the borrow checker.
The only big problem I see with Rust that makes it a bad fit for webdev is the young ecosystem. The compile times are not great, but I don't think that's a significant issue when writing APIs.
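To make the "no GC, still safe" point concrete, here is a minimal sketch using only the standard library; the handler logic is illustrative and not from any comment above, and a real service would reach for a framework (axum, actix-web, etc.) rather than hand-rolled HTTP:

```rust
// A toy HTTP responder with no garbage collector and no unsafe code.
use std::io::{Read, Write};
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        let mut buf = [0u8; 1024];
        // Read (and ignore) the request; a real server would parse it.
        let _ = stream.read(&mut buf)?;
        stream.write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")?;
    }
    Ok(())
}
```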
Part of the magic of learning Rust is realizing that it's a fast compiled language that one doesn't fear to apply to services connected to the open internet. This is so unusual, it may be unique. And I think that leads to a lot of experimentation and application of the language to the sorts of problem which typically receive attention in memory safe languages, but not compiled languages. Rust, by definition, straddles both.
> Part of the magic of learning Rust is realizing that it's a fast compiled language that one doesn't fear to apply to services connected to the open internet. This is so unusual, it may be unique.
Is that sarcasm? If not, thanks for proving my point.
Off the top of my head, Go/.NET/Java are fast and safe choices. Also safe in the sense that you won't have a hard time hiring devs who are productive from the get-go.
> often preach replacing GC'd languages with rust in cases where it is not the best fit (mundane web dev for example).
> proving my point.
Was your point misunderstanding that garbage collection != memory safety? Having garbage collection doesn't make a language any better for web dev. Being memory safe does. If you do not understand the difference and needlessly associate the two, you are validating the work put in by the evangelists.
> What?! You really think memory safety is binary?
Not sure why you'd assume that. The languages you offered are only as memory safe as their runtimes, which are usually implemented in something C-flavored, along with the compilers for those runtimes. Rust instead identifies memory-unsafe areas with the `unsafe` keyword. Functionally equivalent, but I know which I like more.
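As an illustration of how `unsafe` fences off the risky part behind a safe interface (the function and names here are hypothetical, not from the thread):

```rust
// A safe wrapper around a raw-pointer read. The `unsafe` block marks the
// only place the compiler's guarantees are suspended; callers of
// `first_byte` stay entirely in safe Rust.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    // SAFETY: we just checked the slice is non-empty, so index 0 is valid.
    Some(unsafe { *bytes.as_ptr() })
}

fn main() {
    assert_eq!(first_byte(b"hi"), Some(b'h'));
    assert_eq!(first_byte(b""), None);
}
```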
> It does though. It's more ergonomic and does add safety over manual memory management.
This is the misunderstanding. Rust manages to be wonderfully ergonomic without GC. The type system is really the only thing that makes it wordier than C++. And I've written large Rust applications with no manual memory management whatsoever. Occasionally I'll have to wrap something in an Arc<Mutex> because something needs to run in another thread. One or two variables in a whole application.
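A rough sketch of that pattern, with a single shared value behind `Arc<Mutex<_>>` and everything else left as plain owned data (the counter is just a stand-in example):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The shared counter is the only value that needs Arc<Mutex<_>>.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```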
> This is the misunderstanding. Rust manages to be wonderfully ergonomic without GC. The type system is really the only thing that makes it wordier than C++
Thanks for the laugh. Maybe lifetime/borrow-checking syntax became invisible to you after a while. That's the only explanation.
I'll stop here. Don't have the time nor desire to argue against what seems like passionate opinions.
Lifetime syntax only really becomes visible if you try to write high-performance code. If you wrap everything in Rc<RefCell> or Arc<Mutex> or Box, you won't see it show up anywhere, but you'll probably still get a performance profile better than tracing GC, because time and time again it's been shown that reliable tail latencies and peak throughput are generally harder to achieve with a tracing GC. Where lifetimes start to pop up more is in more complicated situations where you are trying to avoid memory allocations and have even higher-performing code.
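For example, a shared-ownership sketch along those lines, with no lifetime annotations anywhere (the types and names are illustrative):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// With shared ownership, no lifetime annotations appear anywhere:
// the Rc keeps the data alive, the RefCell checks borrows at runtime.
struct Config {
    retries: u32,
}

fn bump_retries(cfg: &Rc<RefCell<Config>>) {
    cfg.borrow_mut().retries += 1;
}

fn main() {
    let cfg = Rc::new(RefCell::new(Config { retries: 0 }));
    let also_cfg = Rc::clone(&cfg); // second owner, same single allocation
    bump_retries(&also_cfg);
    assert_eq!(cfg.borrow().retries, 1);
}
```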
And OP is correct. In terms of memory safety, Java and C# aren't as safe as you might hope, because significant chunks of the runtime are still in C/C++ (not true for Go) and because of JIT miscompilation bugs (also not relevant for Go). In theory a JIT miscompilation should be no different from a compiler miscompilation, but in practice it turns out worse, because you have to maintain consistency between the interpreter and all the tiers of JIT at runtime, and humans have a hard time doing that; that is why V8 constantly has security vulnerabilities in this area. That's why GraalVM is interesting: it proposes a JIT design where the compiler is implemented only once.
Go is not magically exempt from constraints other languages are subject to, no matter how much blind faith is put into this mantra.
Java has different compiler implementations, not all of which necessarily go through an interpreter stage. .NET CoreCLR does not have one in the first place (there is technically interpreter code in the codebase, but it has bitrotted and is unused). JIT or not does not make a runtime inherently more prone to miscompilation bugs. One thing Go has going for its compiler is that it is a massively simpler design. The downside, of course, is much worse compiler output that cannot cope with complex code and is inadequate in high-performance scenarios, where users of Go either have to work around this with goasm or might not have a solution at all, short of rewriting the component in a different language and praying that the FFI call overhead is low enough for the workload in question.
In addition to that, a runtime implemented in Java/C# (or another managed language) rather than C/C++ is not inherently safer. It is always a lower-level flavour of the language, where it cannot depend on itself, and likely with numerous relaxations in assurances, upheld manually by the authors, to stay performant. Even when C or C++ are used, it's usually a limited, focused subset of the language. And it is a bold claim that these runtimes are not extensively tested, fuzzed and otherwise verified against defects before going to production. If you look at the bug and CVE history of .NET, pretty much all issues happen in complex "non-core" APIs, often on the unmanaged side provided by the operating system rather than in .NET itself (the cryptography bits). I can't recall any that are directly or indirectly related to compiler bugs.
I feel like this reply ignores the wider context, which is plain old web dev, where the use of Rust and the decision fatigue that it carries is likely to be misplaced. Just pick C# or Kotlin.
> If you wrap everything in Rc<RefCell> or Arc<Mutex> ... performance profile better than tracing GC
I seriously doubt it. Naively incrementing Ref counters is extremely expensive. Tracing GCs in modern languages are very advanced. The least you could do is try to elide some of the ref increments - but you can't do that in Rust. Also boxing every object is very unpleasant. If you're gonna do that, use a language with reference semantics, like Java or C#
> significant chunks of the runtime are still in C/C++
Means absolutely nothing about the safety of the languages themselves. Just because C# has a runtime written in C++ does not mean that you can now magically compile and run use after frees or whatever.
> Means absolutely nothing about the safety of the languages themselves. Just because C# has a runtime written in C++ does not mean that you can now magically compile and run use after frees or whatever.
As security experts like to repeat, it’s a numbers game. You only have to make a mistake once to result in an insecure system. And C++ encourages you to make those mistakes.
Same with JIT - because the compilation happens at runtime, a common attack mechanism is to exploit a particular bug in the JIT implementation so that the generated code violates some assumption and results in unsafe execution. This happens all the time with V8 and a large part of that problem comes from having to implement the language twice (once in the interpreter and once in the JIT and any deviation between the two implementations is an opportunity to exploit a vulnerability). This isn’t something I made up. This is from talking to people who are familiar with this and reading analyses by people who know what they’re talking about.
Compilers, btw, also have miscompilation bugs all over the place. The reason it's less of a problem is that the code being executed isn't "malicious": it's your code, and you're not going out of your way to structure code to find bugs in the compiler. This also protects most JITs, and is why most C# programs don't have this problem: they're not loading arbitrary code at runtime to execute. It is a particular problem for V8, though, where that's expected behavior.
I don't fundamentally disagree with your points, but I do want to point out that in Rust you have far less refcount traffic than in a language that uses reference counting as a GC strategy, because you only increment and decrement on new owners, and not on every access. You can take a regular old &T to the interior of an Rc<T> at any time.
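A small sketch of that distinction (the `shout` helper is made up): only cloning the `Rc` touches the count, while plain borrows into its interior do not.

```rust
use std::rc::Rc;

fn shout(name: &str) -> String {
    // Just a borrow: reading through the Rc touches no reference count.
    name.to_uppercase()
}

fn main() {
    let name = Rc::new(String::from("ferris"));

    // Only creating a second *owner* bumps the count.
    let second_owner = Rc::clone(&name);
    assert_eq!(Rc::strong_count(&name), 2);

    // Plain &str borrows into the Rc's interior are free (deref coercion).
    let loud = shout(&name);
    assert_eq!(loud, "FERRIS");

    drop(second_owner);
    assert_eq!(Rc::strong_count(&name), 1);
}
```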
I didn't know this, that's pretty cool. Definitely more manual than what Swift is trying to do with RC elision since you have to then deal with runtimes, but also, I'd imagine much more performant due to a finer-grained nature.
Yes, and beyond that, you also have the Rc<T> vs Arc<T> split, so if you're not doing multi-threaded, it's not even an atomic increment/decrement.
That said, this is just my intuition here, I've never seen a good benchmark trying to do the comparison, so I don't want to over sell it either. But a lot of times, it's like, "okay I have four threads so I need the refcount to go to four" and then it's just using normal references to actually use the value inside.
Rc is cheap to the point of free unless it frees: it's literally just an addition or subtraction with no data dependency in the path. Your CPU can do more than 6 billion of these per second on a 6 GHz CPU (if I recall correctly you could do 3 per clock cycle in a micro-benchmark thanks to out-of-order execution, but at a minimum you should be able to do 1 per cycle).
Arc is quite expensive, particularly on x86 because of its memory model. You can maybe do a few million of them per second. But as you said, you shouldn’t be incrementing ownership once you’ve sent it to a new thread and instead using references directly or using something Rc-like to track the lifetime (i.e. when the current thread’s Rc count hits 0, release your Arc reference).
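A rough micro-benchmark sketch of that comparison; results are machine- and compiler-dependent, `black_box` only roughly defeats optimization, and the specific counts above are the commenter's, so treat any numbers this prints as ballpark:

```rust
use std::hint::black_box;
use std::rc::Rc;
use std::sync::Arc;
use std::time::Instant;

const N: u32 = 10_000_000;

fn main() {
    let rc = Rc::new(0u64);
    let start = Instant::now();
    for _ in 0..N {
        // clone + drop = one plain increment, then a plain decrement
        black_box(Rc::clone(&rc));
    }
    println!("Rc : {:?} for {} clone/drop pairs", start.elapsed(), N);

    let arc = Arc::new(0u64);
    let start = Instant::now();
    for _ in 0..N {
        // clone + drop = one atomic increment, then an atomic decrement
        black_box(Arc::clone(&arc));
    }
    println!("Arc: {:?} for {} clone/drop pairs", start.elapsed(), N);
}
```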
> Means absolutely nothing about the safety of the languages themselves. Just because C# has a runtime written in C++ does not mean that you can now magically compile and run use after frees or whatever.
Runtime bugs are potentially exploitable, even from within the language implemented by the runtime. But I don't think anyone's suggested that they'd present as simply as you suggest. Many such bugs have been found in runtimes of many languages.
Compiler bugs can be similarly exploitable, with the caveat that any code eliminated at compile time won't be accessible at runtime, so compiled languages will tend to have a smaller surface area for exploitation in general.
> Maybe lifetime/borrow-checking syntax became invisible to you after a while.
I just did a quick scan, in a couple thousand lines of Rust I see two, maybe three lifetime annotations. Nah, not a huge contributor to language verbosity. Most Rust folk seem to work with the compiler's notion of what lifetimes should be, and everything's implicit.
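For example, both of the signatures below compile, and only the second needs an explicit lifetime, because with two inputs the compiler can't guess which one the output borrows from (the functions are illustrative):

```rust
// Lifetime elision in practice: no annotation needed, the compiler
// infers that the output borrows from the single input.
fn trim_comment(line: &str) -> &str {
    line.split('#').next().unwrap_or("").trim()
}

// Explicit lifetimes only show up when the relationship is ambiguous:
// two inputs, and the output must be tied to both.
fn longer<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

fn main() {
    assert_eq!(trim_comment("x = 1  # set x"), "x = 1");
    assert_eq!(longer("short", "longest"), "longest");
}
```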
Yes, Rust does depend on LLVM for the lowest level of compilation at the moment. MIR (where the borrow checker lives and most of memory safety is implemented) and HIR are both implemented in Rust.
What argument? Seems more like a misunderstanding to me.
I don't think I've made any contradictory argument. Perhaps someone else did? Or maybe you assumed a binary where I see gradients. I'm quite sure I didn't claim anywhere that any language was perfectly memory safe. With Rust and CHERI we are getting closer, but there is still progress to be made.
I can say that Rust is well architected such that the borrow checker and most of memory safety is implemented in the mid-level intermediate representation (MIR), which is itself implemented in Rust, and the bits of LLVM upon which the backend is built are well tested by virtue of being used by multiple language frontends and are limited in scope. It'd be great to see an all-Rust replacement though, as bugs in LLVM have been found. I believe there are several projects in the works, and there are existing pure-Rust bootstrap compilers sufficient to get the toolchain working.
OK, dude. If you never correct your misconception, you can stay annoyed forever when people attempt to educate you X)
No amount of complaining about Rust evangelists will change how they use the language, but you could learn to understand why they're using it the way they are.
Not the only one. Maybe the fastest, yes. But... at what cost?
Also, I think people seem to believe that memory safety is all safeties. Attacks come in many shapes and forms, from supply-chain injection, to SQL injection, to unsanitized input, overflows, leaked secrets... memory safety is one weak link in the chain, but not the only one.
All require runtimes. Which means they're not in the same class of language as C, C++, and Rust.
> Also, I think people seem to believe that memory safety is all safeties.
It's roughly 70% of CVEs in C and C++. People focus on it because it's the biggest single category. Enums, Option types, and the whole type system for that matter go a long way toward addressing logic bugs in Rust as well.
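A small illustrative sketch of that point, with a made-up `Payment` enum: exhaustive matching turns a forgotten case into a compile error rather than a runtime surprise.

```rust
enum Payment {
    Pending,
    Settled { receipt_id: u64 },
    Refunded { receipt_id: u64 },
}

fn receipt(p: &Payment) -> Option<u64> {
    // The compiler forces every variant to be handled; adding a new
    // variant later turns forgotten call sites into compile errors.
    match p {
        Payment::Pending => None,
        Payment::Settled { receipt_id } | Payment::Refunded { receipt_id } => Some(*receipt_id),
    }
}

fn main() {
    let paid = Payment::Settled { receipt_id: 42 };
    let refunded = Payment::Refunded { receipt_id: 7 };
    assert_eq!(receipt(&Payment::Pending), None);
    assert_eq!(receipt(&paid), Some(42));
    assert_eq!(receipt(&refunded), Some(7));
}
```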
> it makes no sense to say "no don't use C# for webdev!! Use Rust!"
Well, first of all, I don't see anywhere where anyone said that. Perhaps in another thread? People did argue against the use of Rust for web dev, for which it's a perfectly reasonable choice among many.
Second, I have an embedded HTTP server running on a microcontroller with 500 KB of RAM in my current project. Works great in Rust. Simply not possible in C#.
> If [any language] is good for that use case then use it.
Agreed. The best camera is the one in your pocket, and the best language is the one you know.
> Also there are genuine benefits to having a runtime.
Things which are commonly associated with runtimes - memory safety, automatic memory management, various type systems, high level language abstractions - are all genuinely useful. Folks in this thread seem to want to hold on to a categorization of languages which Rust breaks by virtue of providing features typically only found in languages with runtimes. What Rust shows is that those benefits can be a part of a well designed language, with or without a runtime. And not having a runtime is a hard requirement in areas like embedded, OS development, realtime, etc. There is typically a separation between languages you'd use for that sort of thing and languages you might use for web dev, but the reasons for that separation do not apply to Rust. Which is fun.
Some people seem to be salty about it though, lol.
One significant downside to having a runtime is that it significantly complicates program verification, making it much harder to do statically and necessitating certain things happen at runtime, like error handling and performance profiling. Rust mostly has what it needs to get that work done at compile time, which means the developer has an opportunity to fix a problem before some user runs into it.
This is the point: no, you actually DON'T get all the benefits of a runtime. C# famously, and extensively, uses runtime code generation. Not possible in Rust, kind of possible in C++.
> Necessitating certain things happen at runtime
Not how it works. The language is still compiled. You can 100% implement a borrow checker for C#, there's nothing stopping anyone. It's just not what they want to do.
For example, C++ and C are BOTH compiled languages, but C++ is able to verify the types of various operations at compile time, while C has to wait until runtime. C's qsort takes void pointers and you cast and dereference them at runtime; in C++, this is baked into the type of the std::sort function template.
They're both compiled languages with no runtime, but because C++ has a much more complete type system, it can do those checks at compile time while C cannot.
Another example: while C# has polymorphized generics, it checks them at compile time! The generics are not monomorphized like C++ or Rust, which allows smaller linked libraries that you can load and use at runtime, which is not possible in C++ templates or Rust generics. The trade-off is then that generics can't be checked until runtime - but actually no. The C# compiler will go out of its way to check generics when it can and will fail if they don't meet the type constraints, just like C++ or Rust.
Rust is very powerful, BUT:
1. It doesn't cover all the use cases of more powerful languages with rich runtimes like C# or Java.
2. Using async Rust, which is typically a requirement for webdev, requires a runtime anyway.
> Another example: while C# has polymorphized generics, it checks them at compile time! The generics are not monomorphized like C++ or Rust, which allows smaller linked libraries that you can load and use at runtime, which is not possible in C++ templates or Rust generics. The trade-off is then that generics can't be checked until runtime - but actually no. The C# compiler will go out of its way to check generics when it can and will fail if they don't meet the type constraints, just like C++ or Rust.
Generics are not monomorphized at the bytecode level, but whenever a generic argument is a struct, the methods and types that use it are monomorphized at the stage of compiling to machine code, be it with the JIT or ILC. This is identical to Rust, and is why C# has "zero-cost" abstractions.
Method bodies and types for class-type generic arguments are indeed shared, however. Dispatch on generic constraints is still quite efficient in that case, even if no longer zero-cost (unless the compiler is able to see the exact type and inline such calls, or emit a guarded devirtualized fast path when compiled by the JIT, or prove that only a few types implement a particular interface or subclass a specific parent when compiled by NativeAOT).
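For comparison, a minimal Rust sketch of the monomorphized model being described: the trait bound is checked at compile time, and a separate copy of the function is generated per concrete type (the names are illustrative):

```rust
use std::fmt::Display;

// The bound is checked at compile time, and a separate copy of `describe`
// is generated (monomorphized) for each concrete T it is used with.
fn describe<T: Display>(label: &str, value: T) -> String {
    format!("{label}: {value}")
}

fn main() {
    // Instantiated for i32 and for &str; both checked before anything runs.
    println!("{}", describe("answer", 42));
    println!("{}", describe("name", "ferris"));
    // describe("oops", vec![1, 2, 3]); // rejected: Vec<i32> doesn't implement Display
}
```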
Or they could just carry on with their preferred language and not start ringing the shame bell until you repent your ungodly C++ ways. I think we have to consider both sides and weigh the languages and costs, including the cost of writing a new code base and training new programmers.
I do want changes in C++, but believe me, the Safe C++ proposal (a copy of Rust typesystem on top of C++) that was sent to WG21 is not the push I would like for safety in C++ for reasons beyond the scope of Rust.
> I think it is a special case because the Rust people are especially annoying in their evangelism
I would say that what is bad about this is that they seem to think there is no other way. There is "the true Rust way" and, after that, everything else. At least that is the attitude I got: when claims were made that the only path to Safe C++ is to copy Rust and I questioned it, the reply was "what you say is impossible"; when I showed them there are ways, they moved the attack somewhere else, and I have even been accused of "wasting their time"...
Except all languages are screwdrivers in this analogy.
However, there are some immensely annoying people (just see other replies here) who can't fucking shut up about how their screwdriver of choice is better.
The point is that if some screwdriver is objectively better (doesn't snap and take your eye out when you make a mistake), then the decision to use it shouldn't be affected by who recommends it.
Better for what? That is a huge and broad question to begin with.
For example, let's say Rust is the best language in the world, provably safe.
1. Is it in practice? You will use unsafe somewhere: C interfaces, for example, or some abstractions (see the sketch after this list).
2. even if it is still theoretically safer: can I finish my work with it? Ecosystem, libraries, etc.
3. do I have trained people for it?
4. what is the cost of writing software in it compared to other languages?
5. is my system really, really safety-critical? What are the consequences of my program not working well?
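On point 1, a small sketch of what "you will use unsafe somewhere: C interfaces" tends to look like, wrapping libc's `strlen` behind a safe function (the wrapper name is made up; on edition 2024 the block is spelled `unsafe extern "C"`):

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// C's strlen, from the platform libc that std already links against.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

// Safe wrapper: the unsafe surface is this one call, with the invariant
// (valid, NUL-terminated pointer) guaranteed by CString.
fn c_length(text: &str) -> usize {
    let c = CString::new(text).expect("no interior NUL bytes");
    unsafe { strlen(c.as_ptr()) }
}

fn main() {
    assert_eq!(c_length("hello"), 5);
}
```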
If it was really that easy, all of us would code proofs with something like this: https://dafny.org/ and would use languages like Rust.
That is not the case at all, and there are a ton of variables, among them cost - and I mean short- vs. long-term cost, where the short or middle term often dominates - for which it just makes more sense to choose the "worse" tool. Why? Because the "worse" tool will make that piece of software exist. The "better" one will probably make it never exist, because of other costs.
Whoever ignores this when doing anything, from coding to other activities, is ignoring reality. If human lives are at stake, yes, increase the cost a lot and make it the safest you can; but even in that case, achieving 100% is probably impossible, or maybe achieving 99.999999% is 10 times cheaper than achieving 99.99999999999999999%.
By this measure, we would not have a lot of the imperfect inventions that improved over time in many areas.
I was one of the people that were really enthusiastic about Scala. But back then, even as an enthusiast, I had to admit Scala had a lot of drawbacks: every bad thing inherited from Java, to start, the leaky abstractions that this forced, and the learning curve. Maybe the worst part was how easy Scala made it to write completely unintelligible code. Scala could effortlessly tap into the Java ecosystem, though.
Rust is much easier to recommend than Scala ever was. Rust's ecosystem is a bit smaller than you'd typically like, but that's a temporary problem.
A lot of times they are the same people. The Haskell community really isn't as active as it was five or even ten years ago, and many of those Haskell evangelists became Rust evangelists. Some people are just attracted to using the most "advanced" language (for whatever definition of "advanced" that's in vogue) and some of them are extremely vocal about that.
The Haskell community is admittedly less active in promoting niche concepts like zygohistomorphic prepromorphisms, but much more active in building out a solid ecosystem. You won't hear much about the latter because it isn't shiny and weird.
How has the ecosystem become more solid in recent years? All of the important Haskell libraries I use were initially released eight years ago or before then.
While I personally don't like people promoting niche concepts as well (I'd rather have them promote building out a solid ecosystem as you say), I still feel the "brain drain" out of the Haskell community.
* We now have ghcup, a solid way to install Haskell (inspired by rustup, I believe). It used to be that you got access to one GHC package from your Linux distribution if you were lucky, otherwise you had to install manually from a bindist.
* We now have HLS, which integrates superbly with VS Code (it's also fine in Emacs and I guess in other editors, but I don't know). It can be flaky on large codebases, but it's a big step up from what we had before (which was almost nothing).
* GHC and bundled libraries have a much higher degree of API stability. In fact in the last 18 months there has been minimal breakage due to new GHC releases[1]. Community libraries are a bit more of a mixed bag, but there's much more awareness in the community that continual API breakage churn is harmful for adoption.
* We now have the Haskell Foundation[2] supporting all sorts of critical ecosystem activity behind the scenes, and helping people work together harmoniously. There used to be extremely fraught and unpleasant interpersonal battles in the community. Those are a thing of the past.
* We now have solid effect systems (Bluefin[3] and effectful) that combine the best of the transformers world (build effects from components, type safety, handle effects and remove them from the type) and the IO-based world (predictable performance, resource safety, escape hatches). I'm willing to boldly claim that effects are no longer a problem in Haskell.
I'm in the same boat as you: I don't use particularly many new libraries. Is that a problem? I'm given to understand that in the Java world that's called "Monday". Sure, I would have liked it if certain community members had stayed more active, but I don't feel I'm missing anything in particular. (That may just be to do with what I work on.) What are you missing?
I do think there's a bit of the Ignaz Semmelweis[1] issue at hand here, where developers want to believe in their inherent quality and refuse processes that improve safety if it goes against their worldview.
Also, no other language has really posed an existential threat to C/C++ developers. Other languages have narrowed the pool, but Rust is the first to really be a possible replacement. Now, there's no real chance that Rust will actually replace C/C++ in the near future, but even the threat can be a huge concern.
> I do think there's a bit of the Ignaz Semmelweis[1] issue at hand here, where developers want to believe in their inherent quality and refuse processes that improve safety if it goes against their worldview
I think the problem is that other variables (not only safety) must be assessed beyond the pure "better". Haskell is very good also. Very correct. Who uses that, and where? And why? Why not everyone uses https://dafny.org/ ?
Let's also admit that every language before Rust has failed to outright replace the vast majority of C/C++, so that becomes its own form of conservatism/bias. This post is coming from one of the few companies that actually does complete large-scale rewrites of things. I'm curious/excited to see how it goes for them.
And I have that C/C++ bias: I don't enjoy programming in rust as much as I do C, but I don't like doing lots of new things.
There is hardly that much C and C++ left on GUI frameworks and distributed computing, other than existing infrastructure from projects that are decades old.
The amount of “we have garbage collection at home” from the C and C++ communities during the rise of Java is a big part of my hatred of C++. That and watching classmates spin out in C++ classes in college.
Saying C/C++ denotes your poor understanding of C and C++, which are usually quite different in style and have very different safety characteristics with respect to code written in good style.
I was alive during the rise of Java and it absolutely got a high amount of "Look at this piece of garbage" style comments. Particularly, people were VERY critical of how slow java was and how much memory it consumed. There was also a fair bit of critique about the lack of language features. For example, java eschewed generics for a long time (not unlike go) which really irked people. It was very much seen as a more verbose C++ with few benefits.
> Particularly, people were VERY critical of how slow java was and how much memory it consumed.
To be fair, those were totally valid criticisms before HotSpot got really good. IIRC, way back in the day Corel said they were going all in on Java for their software suite, and they eventually abandoned it due to performance.
It’s possible to write fast Java and one of the cornerstones of my early career was proving it.
But I’d also spent a lot of time thinking about performance already before finding Java. A lot of people fully embraced the abstract nature of it while I kept it at arm’s length.
I fixed a lot of really dumb mistakes, and a lot of things that were subtly wrong but with large consequences.
* JIT was added in Java 1.1 for Windows, and 1.2 for everyone else. But in the early days it sucked.
* Generics, autoboxing, and enums were added in Java [1.]5 (2004). Prior to that, APIs had to use `Object` everywhere, explicitly wrap primitives when needed, and use plain ints for constants.
* It's $CURRENTYEAR and Java still doesn't support structs, and it's not clear they actually understand the problem (since e.g. they force everything to be `final`).
> It's $CURRENTYEAR and Java still doesn't support structs, and it's not clear they actually understand the problem (since e.g. they force everything to be `final`)
I think they do. The problem they are solving is removing indirection and flattening the memory representation of data. Making all the fields final opens up the JVM to some really neat optimizations you can't get with mutable fields.
By the time you get something with enough fields in it that the immutability limitation becomes problematic, you likely would benefit from just having a regular class. Structs won't always be strictly faster than regular classes.
records aren't the structs OP is referring to. Instead, they are thinking of value classes. (at least, that's effectively what a struct means in C#).
You will see `value record Point(int x, int y)` which is different from `record Point(int x, int y)`. The key difference being that `record Point` will create a reference to the x/y data with identity whereas `value record Point` will instead represent that data inline without a reference.
I suspect that `value record` will be the way pretty much everyone starts using `record` as there's basically no reason not to.
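For a rough analogue in Rust terms (not Java, and only an approximation of what Valhalla is after): a plain Rust struct is already an inline, identity-free value, which is the representation `value record` aims to make possible on the JVM.

```rust
// Roughly what a Java `value record Point(int x, int y)` is after:
// a Rust struct is stored inline, with no object header and no identity.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    // A Vec<Point> is one flat allocation of x/y pairs, not an array of pointers.
    let pts = vec![Point { x: 0, y: 0 }, Point { x: 3, y: 4 }];
    assert_eq!(std::mem::size_of::<Point>(), 8);
    assert_eq!(pts[1], Point { x: 3, y: 4 });
}
```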
Java doesn't want to do a Python 2: a JAR compiled in 2000 still has to work in a Valhalla-enabled runtime, assuming it didn't use any of the now-deprecated APIs.
Hmm sorry I meant Java interfaces, which makes this comment irrelevant and also I can't find the original supporting article - it might have just been an opinionated blog post.
A lot of younger devs wouldn't remember, but when Swift 1.0 came out it was pretty widely disliked and grumbled about. It was years before companies started using it in production and years after that before it became the preferred option. I recall an internship where the management-motivated rule of "All new classes MUST be made in Swift" being met with a lot of backlash.
I recall working with greybeard types who found ARC to be a little sugary and pined for the days of NSAutoreleasePool. I think this is just a natural cycle of aging builders and their tools; I imagine it predates us all.
These days I pretty frequently find myself frustrated with SwiftUI and wishing I'd just stuck with UIKit, but alas, time waits for no man.
God yes. The C and C++ people would not shut the fuck up about how one could do X and Y in their languages - including garbage collection. You didn’t need Java for that.
Small problems are immediately obvious in these sorts of replies. Hard to use off-the-shelf modules if they don't all use the same memory management replacements. To be fair, third-party code by volume was in languages like Python or Java. Hard to miss what you haven't experienced firsthand.
I recall early Java having an appetite for memory that hardware often could not satisfy (on an x86 desktop; it is of course still true for many microcontrollers).
Also, I read a story of a developer resisting the move from hand-written assembly to C, to which the company owner replied "you can write your assembly, but not for my money". So nothing new, though some languages get a lot more resistance than others.
> Was there a similar backlash against the rise of Java, Swift, Kotlin, or TypeScript?
Absolutely. Swift was hated because it lacked so much Objective-C functionality at launch, Kotlin got a similar "Java is fine" reaction, and Typescript got a "why add more complexity we don't need" reaction.
This is just the way of things. The most hated languages are some of the most used ones (Javascript, PHP, Python, etc).
This seems like a lot of revisionist history to say Swift was "hated". Sure, there were some naysayers, but just look at the original Swift announcement from Apple. There were whoops and hollers of joy, because everyone knew how problematic Objective-C was.
It's trite at this point but very apropos: "There are only two kinds of languages: the ones people complain about and the ones nobody uses." — Bjarne Stroustrup
I faced really strong criticism lately for criticizing (in the context of C++) a Safe C++ proposal that basically copies Rust into C++.
With all good and bad things this entails.
It seems there is a push from some people (which I hope is not successful in its current shape; I personally find it the wrong approach) to just bifurcate the C++ type system and standard library.
What I found was, frankly, a lot of intellectual dishonesty in some of the arguments pushing in that direction.
Not that it is not a possible direction, but the claims were things like "there is no other way", "it is the only feasible solution", etc., without exploring alternatives. Swift/Hylo-style value semantics and subscripts (subscripts are basically controlled references for mutation without a full borrow checker), plus compile-time-enforced law-of-exclusivity semantics without a new type system, are possible (and, IMHO, better) alternatives... well, you can see the thread.
There seems to be a lot of obsession that "the only right way is the Rust way", and everything else is dismissed out of hand by the authors and some of the people supporting them.
There was no backlash against Java, Swift, Kotlin and TypeScript because they are all GCed languages. You don't need to learn much new to use them, and they don't constrain how you write your code much. Rust is very different in that regard: you need to learn about memory ownership and borrowing, which takes effort and also constrains your design. I think that explains the (limited but real) backlash.
I'll admit that there are certain patterns/data structures/etc. that are awkward to implement in Rust, but to write good C++ you are essentially required to think about ownership and lifetimes to some degree anyway (despite the greater lenience afforded compared to Rust and the lack of static checking). For anyone familiar with modern C++, I don't think the shift in mental model should be too huge, so the degree of backlash surprises me somewhat.
When Swift first came out there was a subset of obj-c developers who hated Swift’s take on Optional. In obj-c you can freely send messages to nil and it just returns nil. In Swift you have to explicitly mark if a value is optional and handle it being nil if so, and even though this often means adding just a single character (foo?.bar), there were people who thought that Swift was just making things more complicated for no benefit.
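For readers more familiar with Rust than Swift, a loose analogue of `foo?.bar` using `Option` and the `?` operator; the types and names here are made up, and Swift's nil-messaging and optional-chaining semantics differ in the details:

```rust
// Optionality lives in the type, and the "just add one character"
// ergonomics come from `?`, which short-circuits on None.
struct Profile {
    nickname: Option<String>,
}

fn shout_nickname(p: Option<&Profile>) -> Option<String> {
    let nick = p?.nickname.as_ref()?;
    Some(nick.to_uppercase())
}

fn main() {
    let p = Profile { nickname: Some("kit".to_string()) };
    assert_eq!(shout_nickname(Some(&p)), Some("KIT".to_string()));
    assert_eq!(shout_nickname(None), None);
}
```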
I remember a bunch of languages discussed alongside typescript, such as coffeescript, toffeescript, purescript, etc. the consensus was that typescript was the worst of the bunch.
Java, yes, via the JVM/bytecode; the rest were all largely interpreted languages, and so the discussion focused on whether an interpreted language could ever compare to a compiled one.
But much as there's an indignity to something compiling to JS or your own preferred VM, people were convinced that memory allocation/GC is the reason they got a job, and they tended to object accordingly - so there was a fight over that. Rust is treated by the hard-core C-ists as an affront to their ability to manage memory safely. Go look up early Go vs C (or C#, D) discussions - they had their own animated themes, but this was before Google went fucky with its own open-source stuff, so there was an air of 'Google is trustworthy' in those discussions.
> > And Larry Page himself doesn't use G-mail for privacy reasons.
> I couldn't find anything to corroborate this with a quick Google search
I could see how removing that from the search results would be good for Google. Not to say it was there, but not finding it now is weak evidence for it not having happened.
Do you have a reputable source regarding the searching of everyone's phone? Apple tried something like this but couldn't make it work, maybe you're thinking of that effort?
Stop trolling. The compiler does no such thing. The build system & package manager does by default, just like `cmake` does. Of course you can prevent this by vendoring your dependencies, just like you can with other languages.
That’s something that is easily changed. You can download and individually inspect all the source code for yourself as well and proxy it locally. Not a fun task, but one that can be done with some grit and some time.
> In addition, we are exploring more seamless interoperability with C++ through Carbon, as a means to accelerate even more of our transition to MSLs.
Carbon doesn’t really care about memory safety except as a vague “in the future we’ll evolve the language somehow to add more memory safety” statement. The primary goal seems to be to parse code as quickly as possible. So I’m not sure I understand how it improves the MSL story. Or is it saying that interfacing Carbon with a MSL like Rust is easier so carbon is a better bridge to MSL than things like autocxx?
[Carbon team member here.]
Carbon is still a small team, so it is going to take time to achieve all of our goals. Carbon will need to demonstrate both C++ interop and memory safety to be considered a successful experiment, worth pushing to a 1.0 version. Once those are achieved, we do expect it will be easier to get C++ code to memory safety through Carbon, since that is the purpose it is being created for. The impedance mismatch between C++ and Carbon will be much lower than with Rust.
Parsing code quickly is merely one of those goals we can demonstrate today.
Does memory safety have a flushed out idea? Is it a form of borrow checking or something else? And if I recall correctly thread safety isn’t a target at all right? Not sure how to square that with memory safety as a goal given the two are linked.
Chandler posted about this on reddit: https://www.reddit.com/r/cpp/comments/1g4j5f0/safer_with_goo...
It's weird that he excludes data races from the memory safety even though data races can be in the other classes he calls out. Not sure I buy that it's not as relevant for security since use-after-free is a common side effect of data races.
I am also a little confused by it, but we'll see how it goes. Since everything is still subject to change, it's possible that this won't be the final word on the matter, so I'm not too concerned at this stage.
I'd guess this is the Carbon people milking 1-2 more promos out of this thing before acknowledging reality and switching to Rust ;)
People keep repeating this, when they are the first to mention if you need security today then you should use Rust.
I would have thought carbon would have memory safety from the get go, but having a language where memory safety is one of the primary goals is meaningful in and of itself (as opposed to C++, where memory safety is one of many competing priorities).
Every (software) company should be doing something like this. (Like the NSA told us to)
I think we can all agree that whenever Rust comes up there's a high amount of "I like to program in C/C++ and don't plan to change, thankyouverymuch"-style comments. Was there a similar backlash against the rise of Java, Swift, Kotlin, or TypeScript? Or is this somehow a unique case? Nothing is new under the sun and all that, but I can't remember similarly animated discourse before.
I think it is a special case because the Rust people is especially annoying in their evangelism.
I remember a degree of pushback on FP, because the people that were into things such as Haskell and Scala were a little annoying with their desire to add FP everywhere too.
My takeaway is that annoying evangelists get a level pushback similar to how annoying they are.
One could make the argument that the rust people had to become annoying evangelists because otherwise the default is still pushback from c/c++ folks that don’t want changes.
It's more than that because rust evangelists often preach replacing GC'd languages with rust in cases where it is not the best fit (mundane web dev for example).
So it's not only about C/C++.
I think Rust is a good fit for webdev, if you have programmers who already know or actively want to learn Rust, and if you don't need libraries not available for Rust. Admittedly those are some big caveats, but better latency and throughput are never bad things. The lack of GC is not actually a problem once you have internalized the rules of the borrow checker.
The only big problem I see with Rust that makes it a bad fit for webdev is the young ecosystem. The compile times are not great, but I don't think that's a significant issue when writing APIs.
Part of the magic of learning Rust is realizing that it's a fast compiled language that one doesn't fear to apply to services connected to the open internet. This is so unusual, it may be unique. And I think that leads to a lot of experimentation and application of the language to the sorts of problem which typically receive attention in memory safe languages, but not compiled languages. Rust, by definition, straddles both.
> Part of the magic of learning Rust is realizing that it's a fast compiled language that one doesn't fear to apply to services connected to the open internet. This is so unusual, it may be unique.
Is that sarcasm? If not, thanks for proving my point.
From the top of my head, Go/.NET/Java are fast and safe choices. Also safe in the sense that you won't have a hard time hiring devs that are productive from the get go.
> often preach replacing GC'd languages with rust in cases where it is not the best fit (mundane web dev for example).
> proving my point.
Was your point misunderstanding that garbage collection != memory safety? Having garbage collection doesn't make a language any better for web dev. Being memory safe does. If you do not understand the difference and needlessly associate the two, you are validating the work put in by the evangelists.
> Was your point misunderstanding that garbage collection != memory safety?
What?! You really think memory safety is binary?
> Having garbage collection doesn't make a language any better for web dev.
It does though. It's more ergonomic than fighting rust compiler and does add safety over manual memory management from the likes of C/C++.
I just want to thank you for demonstrating so clearly how rust evangelists think.
> What?! You really think memory safety is binary?
Not sure why you'd assume that. The languages you offered are only as memory safe as their runtimes, which are usually implemented in something C flavored, and the compiler for same. Rust identifies memory unsafe areas instead with the unsafe tag. Functionally equivalent but I know which I like more.
> It does though. It's more ergonomic and does adds safety over manual memory management.
This is the misunderstanding. Rust manages to be wonderfully ergonomic without GC. The type system is really the only thing that makes it wordier than C++. And I've written large Rust applications with no manual memory management whatsoever. Occasionally I'll have to wrap something in an Arc Mutex because something needs to run in another thread. One or two variables in a whole application.
Your complaints apply to other languages.
> This is the misunderstanding. Rust manages to be wonderfully ergonomic without GC. The type system is really the only thing that makes it wordier than C++
Thanks for the laugh. Maybe lifetimes/borrow checking syntax got invisible after a while to you. The only explanation.
I'll stop here. Don't have the time nor desire to argue against what seems like passionate opinions.
Lifetime syntax is only more visible if you try to write high performance code. If you wrap everything in Rc<RefCell> or Arc<Mutex> or Box you’ll not see it show up anywhere but you’ll probably still get a performance profile better than tracing GC because time and time again it’s been shown that reliable tail latencies and peak throughputs are generally harder to achieve with a tracing GC. Where lifetimes start to pop up more is in more complicated situations where you are trying to avoid memory allocations and have even higher performing code.
And op is correct. In terms of memory safety, Java and C# aren’t as safe as you might hope because significant chunks of the runtime are still in C/C++ (not true for Go) and because of JIT miscompilation bugs (also not relevant for go). Of course JIT miscompilation in theory should be the same as a compiler miscompilation, but in practice it turns out to be worse because you have to maintain consistency between the interpreted and all the tiers of JIT at runtime and in practice humans have a hard time doing that which is why V8 constantly has security vulnerabilities in this area. That’s why GraalVM is interesting because it proposes a JIT design where you only have the compiler implemented once.
Go is not magically exempt from constraints other languages are subject to, no matter how much blind faith is put into this mantra.
Java has different compiler implementations not all of which necessarily go through interpreter stage. .NET CoreCLR does not have one in the first place (there is technically interpreter code in the codebase but it has bitrotted and is unused). JIT or not does not make it inherently more prone to miscompilation bugs. One thing that Go has going for its compiler is it is a massively simpler design. The downside of that, of course, is much worse compiler output that cannot cope with complex code and is inadequate at high-performance scenarios, where the users of Go either have to try to work around this with goasm or might not even have a solution at all, short of rewriting the component in a different language and praying that the FFI call overhead is low enough for the workload in question.
In addition to that, runtime implemented in C/C++ vs Java/C# or other language does not make it inherently safer. It is always a lower-level flavour of the language, where it cannot depend on itself, and with likely numerous relaxations in assurances to stay performant that the authors uphold manually. Even when C or C++ are used - it's usually a limited, focused subset of the language. And it is a bold claim that these runtimes are not extensively tested, fuzzed and otherwise verified against defects before going to production. If you look at bug and CVE history of .NET - pretty much all issues happen in complex "non-core" APIs, often at the unmanaged side provided by operating system rather than .NET itself (the cryptography bits). I can't recall any that are directly or indirectly related to a compiler bugs.
I feel like this reply ignores the wider context, which is plain old web dev, where the use of Rust and the decision fatigue that it carries is likely to be misplaced. Just pick C# or Kotlin.
> If you wrap everything in Rc<RefCell> or Arc<Mutex> ... performance profile better than tracing GC
I seriously doubt it. Naively incrementing Ref counters is extremely expensive. Tracing GCs in modern languages are very advanced. The least you could do is try to elide some of the ref increments - but you can't do that in Rust. Also boxing every object is very unpleasant. If you're gonna do that, use a language with reference semantics, like Java or C#
> significant chunks of the runtime are still in C/C++
Means absolutely nothing about the safety of the languages themselves. Just because C# has a runtime written in C++ does not mean that you can now magically compile and run use after frees or whatever.
> Means absolutely nothing about the safety of the languages themselves. Just because C# has a runtime written in C++ does not mean that you can now magically compile and run use after frees or whatever.
As security experts like to repeat, it’s a numbers game. You only have to make a mistake once to result in an insecure system. And C++ encourages you to make those mistakes.
Same with JIT - because the compilation happens at runtime, a common attack mechanism is to exploit a particular bug in the JIT implementation so that the generated code violates some assumption and results in unsafe execution. This happens all the time with V8 and a large part of that problem comes from having to implement the language twice (once in the interpreter and once in the JIT and any deviation between the two implementations is an opportunity to exploit a vulnerability). This isn’t something I made up. This is from talking to people who are familiar with this and reading analyses by people who know what they’re talking about.
Compilers btw also have miscompilation bugs all over the place. The reason it’s less of a problem is because the code being executed isn’t “malicious” - it’s your code and you’re not going out of your way to structure code to find bugs in the compiler. This also protects most JIT and why most C# programs don’t have this problem - they’re not loading arbitrary code at runtime to execute. It is a particular problem for V8 though where that’s an expected behavior.
I don’t fundamentally disagree with your points but I do want to point out that in Rust, you have far less recount traffic than in a language that uses reference counting as a GC strategy, because you only increment and decrement on new owners, and not on every access. You can take a regular old &T to the interior of an Rc<T> at any time.
I didn't know this, that's pretty cool. Definitely more manual than what Swift is trying to do with RC elision since you have to then deal with runtimes, but also, I'd imagine much more performant due to a finer-grained nature.
Yes, and beyond that, you also have the Rc<T> vs Arc<T> split, so if you're not doing multi-threaded, it's not even an atomic increment/decrement.
That said, this is just my intuition here, I've never seen a good benchmark trying to do the comparison, so I don't want to over sell it either. But a lot of times, it's like, "okay I have four threads so I need the refcount to go to four" and then it's just using normal references to actually use the value inside.
I’ve done thorough benchmarking of Rc and Arc.
Rc is cheap to the point of free unless it frees - it’s literally just an addition or subtraction with no data dependency in the path. Your CPU can do > 6 billion of these per second on a 6Ghz CPU (if I recall correctly you could do 3 per clock cycle in a micro benchmark due to out-of-order but at a minimum you should be able to do 1 per cycle).
Arc is quite expensive, particularly on x86 because of its memory model. You can maybe do a few million of them per second. But as you said, you shouldn’t be incrementing ownership once you’ve sent it to a new thread and instead using references directly or using something Rc-like to track the lifetime (i.e. when the current thread’s Rc count hits 0, release your Arc reference).
> Means absolutely nothing about the safety of the languages themselves. Just because C# has a runtime written in C++ does not mean that you can now magically compile and run use after frees or whatever.
Runtime bugs are potentially exploitable, even from within the language implemented by the runtime. But I don't think anyone's suggested that they'd present as simply as you suggest. Many such bugs have been found in runtimes of many languages.
Compiler bugs can be similarly exploitable, with the caveat that any code eliminated at compile time won't be accessible at runtime, so compiled languages will tend to have a smaller surface area for exploitation in general.
Like the Rust compilers partially written in C++.
> Maybe lifetimes/borrow checking syntax got invisible after a while to you.
I just did a quick scan, in a couple thousand lines of Rust I see two, maybe three lifetime annotations. Nah, not a huge contributor to language verbosity. Most Rust folk seem to work with the compiler's notion of what lifetimes should be, and everything's implicit.
Rust, like absolutely any language, needs an unsafe base on which to implement safe abstractions.
Rust runtime is partially implemented in a C++ toolchain, should we use the same arguement then?
Yes, Rust does depend on Clang for the lowest level of compilation at the moment. MIR (where the borrow checker lives and most of memory safety is implemented) and HLR are both implemented in Rust.
What argument? Seems more like a misunderstanding to me.
Using your argument, Rust can't be fully trusted as long as its compilation model and runtime depends on having C++ somewhere.
In fact, that has broken Rust compilation semantics quite a few times already.
I don't think I've made any contradictory argument. Perhaps someone else did? Or maybe you assumed a binary where I see gradients. I'm quite sure I didn't claim anywhere that any language was perfectly memory safe. With Rust and Cheri we are getting closer, but there is still progress to be made.
I can say that Rust is well architected such that the borrow checker and most of memory safety is implemented in the midlevel intermediate representation, which is implemented using Rust, and the bits of Clang upon which MIR and HLR are built are both well tested by virtue of being used by multiple language frontends, and limited in scope and modularity. It'd be great to see an all-rust replacement though, as bugs in Clang have been found. I believe there are several projects in the works and existing pure-Rust bootstrap compilers sufficient to get the toolchain working.
> Rust manages to be wonderfully ergonomic without GC
Hahahahaha! That's funny.
Thanks for proving my original point.
I rest my case.
OK, dude. If you never correct your misconception, you can be annoyed when people attempt to educate you forever X)
No amount of complaining about Rust evangelists will change how they use the language, but you could learn to understand why they're using it the way they are.
Java, C#, Go...
Not the only one. Maybe the fastest, yes. But... at what cost?
Also, I think people seem to believe that memory safety is all safeties. Attacks come in many shapes and forms, from supply chain injection, to SQL injection, to unsanitized input, overflows, secret leaking... memory safety is one of the weak points of a chain, but not the only one.
> Java, C#, Go...
All require runtimes. Which means they're not in the same class of language as C, C++, and Rust.
> Also, I think people seem to believe that memory safety is all safeties.
It's 70% of CVEs in C and C++. People focus on it because it's the biggest single category. Enums, Option types, and the whole type system for that matter go a long way toward addressing logic bugs in Rust as well.
> All require runtimes. Which means they're not in the same class of language as C, C++, and Rust
Right, which is why it makes no sense to say "no don't use C# for webdev!! Use Rust!"
They're different classes of languages. If C# is good for that use case then use it. Also there are genuine benefits to having a runtime.
> it makes no sense to say "no don't use C# for webdev!! Use Rust!"
Well, first of all, I don't see anywhere where anyone said that. Perhaps in another thread? People did argue against the use of Rust for web dev, for which it's a perfectly reasonable choice among many.
Second, I have an embedded HTTP server running on a microcontroller with 500 KB of RAM in my current project. Works great in Rust. Simply not possible in C#.
> If [any language] is good for that use case then use it.
Agreed. The best camera is the one in your pocket, and the best language is the one you know.
> Also there are genuine benefits to having a runtime.
Things which are commonly associated with runtimes - memory safety, automatic memory management, various type systems, high level language abstractions - are all genuinely useful. Folks in this thread seem to want to hold on to a categorization of languages which Rust breaks by virtue of providing features typically only found in languages with runtimes. What Rust shows is that those benefits can be a part of a well designed language, with or without a runtime. And not having a runtime is a hard requirement in areas like embedded, OS development, realtime, etc. There is typically a separation between languages you'd use for that sort of thing and languages you might use for web dev, but the reasons for that separation do not apply to Rust. Which is fun.
Some people seem to be salty about it though, lol.
One significant downside to having a runtime is that it significantly complicates program verification, making it much harder to do statically and forcing certain things, like error handling and performance profiling, to happen at runtime. Rust mostly has what it needs to get that work done at compile time, which means the developer has an opportunity to fix a problem before some user runs into it.
> with or without a runtime
This is the point - no, you actually DON'T get all the benefits of a runtime. C# famously, and extensively, used runtime code generation. Not possible in Rust, kind of possible in C++.
> Necessitating certain things happen at runtime
Not how it works. The language is still compiled. You can 100% implement a borrow checker for C#, there's nothing stopping anyone. It's just not what they want to do.
For example, C++ and C are BOTH compiled languages, but C++ is able to verify the types of various operations at compile time, while C has to wait until runtime. For example, C's qsort takes void pointers which you then cast and dereference at runtime, whereas in C++ the element type is physically baked into the type of the function template std::sort.
They're both compiled languages with no runtime, but because C++ has a much more complete type system, it can do those things at compile time while C cannot.
Another example: while C# has polymorphized generics, it checks them at compile time! The generics are not monomorphized like C++ or Rust, which allows smaller linked libraries that you can load and use at runtime, which is not possible in C++ templates or Rust generics. The trade-off is then that generics can't be checked until runtime - but actually no. The C# compiler will go out of its way to check generics when it can and will fail if they don't meet the type constraints, just like C++ or Rust.
Rust is very powerful, BUT:
1. It doesn't cover all the usecases of more powerful languages with rich runtimes like C# or Java
2. Using async rust, which is typically a requirement for webdev, requires a runtime anyway.
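On point 2, a tiny sketch of what "requires a runtime anyway" looks like in practice, assuming the usual tokio setup (tokio = "1" with the "full" feature in Cargo.toml):

    use tokio::time::{sleep, Duration};

    // The attribute macro starts tokio's executor, i.e. the userspace
    // runtime you opt into by going async.
    #[tokio::main]
    async fn main() {
        sleep(Duration::from_millis(10)).await;
        println!("ran on an async runtime we brought along ourselves");
    }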
> Another example: while C# has polymorphized generics, it checks them at compile time! The generics are not monomorphized like C++ or Rust, which allows smaller linked libraries that you can load and use at runtime, which is not possible in C++ templates or Rust generics. The trade-off is then that generics can't be checked until runtime - but actually no. The C# compiler will go out of its way to check generics when it can and will fail if they don't meet the type constraints, just like C++ or Rust.
Generics are not monomorphized at the bytecode level, but whenever the generic argument is a struct, the methods and types that use it are monomorphized at the stage of compiling to machine code, be it with the JIT or ILC. This is identical to Rust, and is why C# has "zero-cost" abstractions.
Method bodies and types for class-type generic arguments are indeed shared, however. Dispatch on generic constraints is still quite efficient in that case, even if no longer zero-cost (unless the compiler is able to see the exact type and inline such calls, emit a guarded devirtualized fast path when compiled by the JIT, or prove that only a few types implement a particular interface or subclass a specific parent when compiled by NativeAOT).
Yes, the infamous C/C++... what a trick...
Or they could just carry on with their preferred language and not start ringing the shame bell until you repent your ungodly C++ ways. I think we have to consider both sides and weigh the languages and costs, including the cost of writing a new code base and training new programmers.
I do want changes in C++, but believe me, the Safe C++ proposal (a copy of Rust typesystem on top of C++) that was sent to WG21 is not the push I would like for safety in C++ for reasons beyond the scope of Rust.
> I think it is a special case because the Rust people is especially annoying in their evangelism
I would say what's bad about this is that they seem to think there is no other way. There is "the true Rust way" and, after that, the others. At least that's the attitude I got: when claims were made that the only path to a Safe C++ is to copy Rust and I questioned it, they replied "what you say is impossible"; when I showed them there are ways, they moved the attack somewhere else. I have even been accused of "wasting their time"...
we gotta admit that not using, let's say screwdrivers, because some annoying people made a silly religion around them calls for some introspection.
Except all languages are screwdrivers in this analogy.
However, there are some immensely annoying people (just see other replies here) that can't fucking shut up about how their screwdriver of choice is better.
the point is that if some screwdriver is objectively better (doesn't snap and take your eye out when you make a mistake) then the decision to use it shouldn't be affected by who recommends it.
> objectively better
Except it is not.
As any other language, it has many potential upsides and downsides in picking it depending on the context.
This lack of nuanced thinking from evangelists is precisely why some people just prefer to shut it off entirely.
So what you’re saying is: I like to program in C/C++ and don't plan to change, thankyouverymuch?
No.
I am saying that depending on the context I might pick Go. Or Java. Or Python. Or Node. Or C#.
Better for what? That is a huge and broad question to begin with.
For example, let's say Rust is the best language in the world, provably safe.
If it was really that easy, all of us would write proofs with something like this: https://dafny.org/ and would use languages like Rust. That is not the case at all; there are a ton of variables, among them cost, and I mean short- versus long-term cost, where the short or middle term often dominates and it makes more sense to choose the "worse" tool. Why? Because the "worse" tool will make that piece of software exist. The "better" one will probably make it never exist, because of other costs.
Whoever ignores this, in coding or any other activity, is ignoring reality. If human lives are at stake, yes, increase the cost a lot and make it the safest you can; but even in that case, achieving 100% is probably impossible, or maybe achieving 99.999999% is 10 times cheaper than achieving 99.99999999999999999%.
By this measure, we would not have a lot of the imperfect inventions that improved over time in many areas.
In theory. In practice, there is no "objective" for most things. Only pros and cons.
And if the people advocating it are crazy, you might start to wonder if they're telling the whole truth!
This really applies to everything IMO.
I was one of the people that were really enthusiastic about Scala. But back then, even as an enthusiast, I had to admit Scala had a lot of drawbacks: every bad thing inherited from Java, to start, the leaky abstractions that inheritance forced, as well as the learning curve. Maybe the worst part was how easy Scala made it to write completely unintelligible code. Scala could effortlessly tap into the Java ecosystem, though.
Rust is much easier to recommend than Scala ever was. Rust's ecosystem is a bit smaller than you'd typically like, but that's a temporary problem.
A lot of times they are the same people. The Haskell community really isn't as active as it was five or even ten years ago, and many of those Haskell evangelists became Rust evangelists. Some people are just attracted to using the most "advanced" language (for whatever definition of "advanced" that's in vogue) and some of them are extremely vocal about that.
The Haskell community is admittedly less active in promoting niche concepts like zygohistomorphic prepromorphisms, but much more active in building out a solid ecosystem. You won't hear much about the latter because it isn't shiny and weird.
How has the ecosystem become more solid in recent years? All of the important Haskell libraries I use were initially released eight years ago or before then.
While I personally don't like people promoting niche concepts as well (I'd rather have them promote building out a solid ecosystem as you say), I still feel the "brain drain" out of the Haskell community.
* We now have ghcup, a solid way to install Haskell (inspired by rustup, I believe). It used to be that you got access to one GHC package from your Linux distribution if you were lucky, otherwise you had to install manually from a bindist.
* We now have HLS, which integrates superbly with VS Code (it's also fine in Emacs and I guess in other editors, but I don't know). It can be flaky on large codebases, but it's a big step up from what we had before (which was almost nothing).
* GHC and bundled libraries have a much higher degree of API stability. In fact in the last 18 months there has been minimal breakage due to new GHC releases[1]. Community libraries are a bit more of a mixed bag, but there's much more awareness in the community that continual API breakage churn is harmful for adoption.
* We now have the Haskell Foundation[2] supporting all sorts of critical ecosystem activity behind the scenes, and helping people work together harmoniously. There used to be extremely fraught and unpleasant interpersonal battles in the community. Those are a thing of the past.
* We now have solid effect systems (Bluefin[3] and effectful) that combine the best of the transformers world (build effects from components, type safety, handle effects and remove them from the type) and the IO-based world (predictable performance, resource safety, escape hatches). I'm willing to boldly claim that effects are no longer a problem in Haskell.
I'm in the same boat as you: I don't use particularly many new libraries. Is that a problem? I'm given to understand that in the Java world that's called "Monday". Sure, I would have liked it if certain community members had stayed more active, but I don't feel I'm missing anything in particular. (That may just be to do with what I work on.) What are you missing?
[1] See
* https://github.com/tomjaguarpaw/tilapia/blob/master/breakage...
* https://github.com/tomjaguarpaw/tilapia/blob/master/breakage...
[2] Disclaimer: I'm the vice chair
[3] Disclaimer: I'm the effect system author
I do think there's a bit of the Ignaz Semmelweis[1] issue at hand here, where developers want to believe in their inherent quality and refuse processes that improve safety if it goes against their worldview.
Also, no other language has really posed an existential threat to C/C++ developers. Other languages have narrowed the pool, but Rust is the first to really be a possible replacement. Now, there's no real chance that Rust will actually replace C/C++ in the near future, but even the threat can be a huge concern.
[1]: https://en.wikipedia.org/wiki/Ignaz_Semmelweis
> I do think there's a bit of the Ignaz Semmelweis[1] issue at hand here, where developers want to believe in their inherent quality and refuse processes that improve safety if it goes against their worldview
I think the problem is that other variables (not only safety) must be assessed beyond the pure "better". Haskell is very good also. Very correct. Who uses that, and where? And why? Why doesn't everyone use https://dafny.org/ ?
Every C/C++ coder thinks that they are good at memory management. Just look at the endless flame wars on garbage collection.
We just need to admit that we suck at it.
Let's also admit every language before Rust hasn't succeeded at outright replacing the vast majority of C/C++, so it becomes its own form of conservatism/bias. This post is coming from one of the few companies that actually does complete large-scale rewrites of things. I'm curious/excited to see how it goes for them.
And I have that C/C++ bias: I don't enjoy programming in Rust as much as I do C, but I don't like doing lots of new things.
> Let's also admit every language before Rust hasn't succeeded at outright replacing the vast majority of C/C++,
Many languages have expelled C/C++ entirely from domains they were once kings of. It used to be "the language" for many, many things.
There is hardly that much C and C++ left in GUI frameworks and distributed computing, other than existing infrastructure from projects that are decades old.
The amount of “we have garbage collection at home” from the C and C++ communities during the rise of Java is a big part of my hatred of C++. That and watching classmates spin out in C++ classes in college.
There is RAII and it works well most of the time.
Saying C/C++ denotes your poor understanding of C and C++, which are usually quite different in style and have very different safety characteristics for code written in good style.
Well most of the time programs don’t exhibit memory issues anyway. But as you said you are the only expert.
I was alive during the rise of Java and it absolutely got a high amount of "Look at this piece of garbage" style comments. Particularly, people were VERY critical of how slow java was and how much memory it consumed. There was also a fair bit of critique about the lack of language features. For example, java eschewed generics for a long time (not unlike go) which really irked people. It was very much seen as a more verbose C++ with few benefits.
> Particularly, people were VERY critical of how slow java was and how much memory it consumed.
To be fair, those were totally valid criticisms before HotSpot got really good. IIRC, way back in the day Corel said they were going all in on Java for their software suite, and they eventually abandoned it due to performance.
It’s possible to write fast Java and one of the cornerstones of my early career was proving it.
But I’d also spent a lot of time thinking about performance already before finding Java. A lot of people fully embraced the abstract nature of it while I kept it at arm’s length.
I fixed a lot of really dumb mistakes, and a lot of things that were subtly wrong but with large consequences.
For reference:
* JIT was added in Java 1.1 for Windows, and 1.2 for everyone else. But in the early days it sucked.
* Generics, autoboxing, and enums were added in Java [1.]5 (2004). Prior to that, APIs had to use `Object` everywhere, explicitly wrap primitives when needed, and use plain ints for constants.
* It's $CURRENTYEAR and Java still doesn't support structs, and it's not clear they actually understand the problem (since e.g. they force everything to be `final`).
> It's $CURRENTYEAR and Java still doesn't support structs, and it's not clear they actually understand the problem (since e.g. they force everything to be `final`)
I think they do. The problem they are solving is removing indirection and flattening the memory representation of data. Making all the fields final opens up the JVM to some really neat optimizations you can't get with mutable fields.
By the time you get something with enough fields in it that the immutability limitation becomes problematic, you likely would benefit from just having a regular class. Structs won't always be strictly faster than regular classes.
>It's $CURRENTYEAR and Java still doesn't support structs
they call it records, added recently. Depending on where you work, you may see them in a decade
records aren't the structs OP is referring to. Instead, they are thinking of value classes. (at least, that's effectively what a struct means in C#).
You will see `value record Point(int x, int y)` which is different from `record Point(int x, int y)`. The key difference being that `record Point` will create a reference to the x/y data with identity whereas `value record Point` will instead represent that data inline without a reference.
I suspect that `value record` will be the way pretty much everyone starts using `record` as there's basically no reason not to.
Java doesn't want to do a Python 2-to-3: a JAR compiled in 2000 has to still work in a Valhalla-enabled runtime, assuming it didn't use any of the now-deprecated APIs.
You can play with early releases, https://jdk.java.net/valhalla/
I always thought that Java generics inspired python ABCs and later type interfaces.
What do Java generics have to do with Python's abstract base classes?
Hmm sorry I meant Java interfaces, which makes this comment irrelevant and also I can't find the original supporting article - it might have just been an opinionated blog post.
A lot of younger devs wouldn't remember, but when Swift 1.0 came out it was pretty widely disliked and grumbled about. It was years before companies started using it in production and years after that before it became the preferred option. I recall an internship where the management-motivated rule of "All new classes MUST be made in Swift" being met with a lot of backlash.
I recall working with greybeard types who found ARC to be a little sugary and pined for the days of NSAutoreleasePool. I think this is just a natural cycle of aging builders and their tools; I imagine it predates us all.
These days I pretty frequently find myself frustrated with SwiftUI and wishing I'd just stuck with UIKit, but alas, time waits for no man.
God yes. The C and C++ people would not shut the fuck up about how one could do X and Y in their languages - including garbage collection. You didn’t need Java for that.
Small problems are immediately obvious in these sorts of replies. It's hard to use off-the-shelf modules if they don't all use the same memory management replacements. To be fair, third-party code by volume was in languages like Python or Java. Hard to miss what you haven't experienced firsthand.
I recall early Java having an appetite for memory that hardware often could not satisfy (on an x86 desktop; it is of course still true for many microcontrollers).
Also, I read a story of a developer resisting the move from hand-written assembly to C, to which the company owner replied "you can write your assembly, but not for my money". So nothing new, though some languages get a lot more resistance than others.
> Was there a similar backlash against the rise of Java, Swift, Kotlin, or TypeScript?
Absolutely. Swift was hated because it lacked so much Objective-C functionality at launch, Kotlin got a similar "Java is fine" reaction, and Typescript got a "why add more complexity we don't need" reaction.
This is just the way of things. The most hated languages are some of the most used ones (Javascript, PHP, Python, etc).
This seems like a lot of revisionist history to say Swift was "hated". Sure, there were some naysayers, but just look at the original Swift announcement from Apple. There were whoops and hollers of joy because everyone knew how problematic Objective-C was.
> The most hated languages are some of the most used ones (Javascript, PHP, Python, etc).
Mandatory quote: "There are only two kinds of languages: the ones people complain about and the ones nobody uses". -- Bjarne Stroustrup
It's trite at this point but very apropos: "There are only two kinds of languages: the ones people complain about and the ones nobody uses." — Bjarne Stroustrup
(Applies to much more than just languages!)
Reminds of a quote about Van Morrison:
There are two types of people. The people that like Van Morrison and the people that have met him
Those other languages were universally met with the "real programmers code in C or C++" defense.
The problem here is that it doesn't work with Rust.
The culture around Rust is perfectly calibrated to set off middle-aged C/C++ developers.
1) It's very assertive without having a lot of experience or a track record.
2) It's extremely online, filled with anime references and Twitter jokes, obsessed with cuteness, etc...
3) It's full of whipper-snappers with 6 months of experience who think they can explain your job to you, who have been doing it for 30 years.
IME many arguments around Rust aren't focused on its technical properties, but boil down to "I just plain don't like you people."
If Rust can successfully become "boring", a lot of the opposition will go away.
I faced really strong criticism lately for criticizing (in the context of C++) a Safe C++ proposal that basically copies Rust into C++.
With all good and bad things this entails.
It seems there is a push from some people (that I hope is not successful in its current shape; I personally find it a wrong approach) to just bifurcate the C++ type system and standard library.
More context here if you are curious: https://www.reddit.com/r/cpp/comments/1g41lhi/memory_safety_...
What I found was, frankly, a lot of intellectual dishonesty in some of the arguments pushing in that direction.
Not that it is not a possible direction, but claims like "there is no other way" or "it is the only feasible solution" are made without exploring alternatives, among them Swift/Hylo-style value semantics and subscripts (subscripts are basically controlled references for mutation without a full borrow checker) and adding compile-time enforcement of the law of exclusivity without a new type system, which are possible (and better, IMHO) alternatives... well, you can see the thread.
There seems to be a lot of obsession with "the only right way is the Rust way", and everything else is directly discardable by the authors and some people supporting them.
I think there are a lot of strong feelings there.
There was no backlash against Java, Swift, Kotlin and TypeScript because they are all GCed languages. You don't need to learn anything new to use them, and they don't constrain much how you write your code. Rust is very different in that regard: you need to learn about memory ownership and borrowing, which requires an effort and also constrains your design. I think that explains the (limited but real) backlash.
I'll admit that there are certain patterns/data structures/etc. that are awkward to implement in Rust, but to write good C++ you are essentially required to think about ownership and lifetimes to some degree anyway (despite the greater lenience awarded compared to Rust and the lack of static checking). For anyone familiar with modern C++, I don't think that the shift in mental model should be too huge so the degree of backlash surprises me somewhat.
When Swift first came out there was a subset of obj-c developers who hated Swift’s take on Optional. In obj-c you can freely send messages to nil and it just returns nil. In Swift you have to explicitly mark if a value is optional and handle it being nil if so, and even though this often means adding just a single character (foo?.bar), there were people who thought that Swift was just making things more complicated for no benefit.
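For readers who don't know Swift, Rust's Option expresses the same idea; a rough analogue (types and fields made up for illustration):

    struct Profile { nickname: Option<String> }
    struct User { profile: Option<Profile> }

    // Optionality is explicit in the types, and chaining through the
    // maybe-missing values is a small, visible operation rather than a
    // silent message-to-nil.
    fn nickname_len(user: &User) -> Option<usize> {
        let profile = user.profile.as_ref()?; // bail out with None if absent
        let nick = profile.nickname.as_ref()?;
        Some(nick.len())
    }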
They have been happily, if slowly, replacing C and C++ in GUI development and distributed computing systems for the last 20 years.
And yes there has been enough anti-GC hate and FUD from those that think all automatic resource management systems are born alike.
You’re right, I forgot about the skepticism GCed languages initially faced.
> Was there a similar backlash against the rise of Java, Swift, Kotlin, or TypeScript?
Absolutely and STILL TO THIS DAY.
I remember a bunch of languages discussed alongside typescript, such as coffeescript, toffeescript, purescript, etc. the consensus was that typescript was the worst of the bunch.
I always keep in mind that the product with the most complaints is usually the most used one.
Not because it's the worst, but because more people use it, and because it's the one people have to use even though they don't like it.
Which is funny, given the rise of typescript relative to many of the other languages you listed
I don’t remember this.
Java yes, over the JVM/bytecode; the rest were all largely interpreted languages, and so the discussion focused on whether an interpreted language could ever compare to a compiled one.
But much as there's an indignity to something compiling to JS or your own preferred VM, people were convinced that memory allocation/GC was the reason they got a job and tended to object accordingly, so there was a fight against that. Rust is treated by the hard-core C-ists as an affront to their ability to manage memory safely. Go look up early Go vs C (or C#, D); it had its own animated themes, but this was before Google went fucky with its own open source stuff, so there was an air of 'Google is trustworthy' in those discussions.
TLDR: Yes.
Well, yes, probably
I code in my free time because I enjoy doing so. I find C to be fun (even if I suck at it).
Doing boring things (or things I do not want to do) is called work.
There are 1-2 people here on HN that routinely shit on Kotlin any time it's even tangentially related to the topic at hand.
There was definitely similar backlash against Python, with a lot of people calling it not "real" programming
[flagged]
What do privacy and safety have to do with memory safety and CVEs? You are conflating multiple aspects of software engineering.
> And Larry Page himself doesn't use G-mail for privacy reasons.
I couldn't find anything to corroborate this with a quick Google search, other than some article from the early 2010s about gmail being slow.
> So many people have been arrested using evidence from Signal
Do you have any evidence that Signal messages were shared with governments or other entities?
> > And Larry Page himself doesn't use G-mail for privacy reasons.
> I couldn't find anything to corroborate this with a quick Google search
I could see how removing that from the search results would be good for Google. Not to say it was there, but not finding it now is weak evidence for it not having happened.
https://africa.businessinsider.com/news/meet-the-enigmatic-m...
I think you dropped your tin foil hat on the way out
https://africa.businessinsider.com/news/meet-the-enigmatic-m...
Do you have a reputable source regarding the searching of everyone's phone? Apple tried something like this but couldn't make it work, maybe you're thinking of that effort?
CSAM scanning is already deployed on Android and Cloudflare.
On Cloudflare they don't even hide it; it's listed right in the options.
you have a source for the larry page claim?
https://africa.businessinsider.com/news/meet-the-enigmatic-m...
Is it safe yet?
It's so fucking safe...
Ah yes, rust. A language so "safe" that its compiler downloads shit from the internet every time you call it.
Stop trolling. The compiler does no such thing. The build system & package manager does by default, just like `cmake` does. Of course you can prevent this by vendoring your dependencies, just like you can with other languages.
That’s something that is easily changed. You can download and individually inspect all the source code for yourself as well and proxy it locally. Not a fun task, but one that can be done with some grit and some time.
With Rust I absolutely hate when they make it more complicated to do something that is already unsafe.
Great examples:
- References to a mut static (interacting with it is already unsafe), which now produce additional warnings.
- the entire addr_of!() mess.
Things like these just make me wish I did this in C, where I know I am only ever executing in a single thread.
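For context, a sketch of the kind of churn being complained about (lint and macro names from memory): taking a plain reference to a `static mut` now trips the `static_mut_refs` lint, and the blessed route is a raw pointer via `addr_of_mut!` / `addr_of!`.

    static mut COUNTER: u32 = 0;

    fn bump() {
        unsafe {
            // Older style, now linted against (static_mut_refs): a reference
            // to a mutable static is easy to turn into UB via aliasing.
            // let r: &mut u32 = &mut COUNTER;
            // *r += 1;

            // Currently recommended style: go through a raw pointer instead.
            let p: *mut u32 = std::ptr::addr_of_mut!(COUNTER);
            *p += 1;
        }
    }

    fn main() {
        bump();
        // Reads also go through a raw pointer under the new guidance.
        let value = unsafe { *std::ptr::addr_of!(COUNTER) };
        println!("{value}");
    }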