Ada has some really good ideas which it's a shame never took off or got used outside of the safety-critical community that mostly adopted it. The ability to make number types that are limited in their range is really useful against certain classes of bugs. SPARK Ada was a relatively easy subset to learn and apply in order to start developing software that was SIL 4 compliant.
I can't help but feel that we just went through a huge period of growth at all costs, and now, after 30 years of anything goes, there is a desire to return to trying to make software that is safer again. It would be nice to start building languages based on all the safety learnings of the past decades; the good ideas keep getting lost in obscure languages and forgotten about.
Yes, we re-invent the wheel. The more time you spend writing software for a living, the more you will see the wheel re-invented. But Ada and Rust are safe under different definitions of safety. I view Rust as having a narrower definition of safety, but a very important one, executed with brutal focus, while Ada's definition of safety is broader, but better suited to a small subset of use cases.
I write Rust at work. I learned Ada in the early 1990s as the language of software engineering. Back then a lot of the argument against Ada was it was too big, complex, and slowed down development too much. (Not to mention the validating Ada 83 compiler I used cost about $20,000 a seat in today's money). I think the world finally caught up with Ada and we're recognizing that we need languages every bit as big and complex, like Rust, to handle issues like safe, concurrent programming.
I don't know Ada; care to explain why its definition of safety is broader than Rust's?
I agree Rust's safety is very clearly (and maybe narrowly) defined, but it doesn't mean there isn't focus on general correctness - there is. The need to define safety precisely arises because it's part of the language (`unsafe`).
Rust's built-in notion of safety is intentionally focused on memory and data-race properties at compile time; logic, timing, and determinism are left to libraries and design. Ada (with SPARK and Ravenscar) treats contracts, concurrency discipline, and timing analysis as first-class language/profile concerns, hence a broader safety envelope.
You may choose to think of it from a safety-guarantee hierarchy perspective (bottom = foundation ... top = highest assurance).
with Ada.Text_IO; use Ada.Text_IO;

procedure Restricted_Number_Demo is
   -- Define a restricted subtype of Integer
   subtype Small_Positive is Integer range 1 .. 100;
   -- Define a restricted subtype of Float
   subtype Probability is Float range 0.0 .. 1.0;
   -- Variables of these restricted types
   X : Small_Positive := 42;
   P : Probability := 0.75;
begin
   Put_Line("X = " & Integer'Image(X));
   Put_Line("P = " & Float'Image(P));
   -- Uncommenting the following line would raise a Constraint_Error at runtime
   -- X := 200;
end Restricted_Number_Demo;
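For comparison, Rust has no built-in range types, so the usual emulation is a newtype with a checked constructor. A minimal sketch (the SmallPositive name and the error type are just made up for illustration, not anything from a library):

    // Rough Rust analogue of `subtype Small_Positive is Integer range 1 .. 100`.
    #[derive(Debug, Clone, Copy)]
    struct SmallPositive(i32);

    impl SmallPositive {
        // The check Ada performs implicitly has to be written out explicitly here.
        fn new(value: i32) -> Result<Self, String> {
            if (1..=100).contains(&value) {
                Ok(SmallPositive(value))
            } else {
                Err(format!("{value} is outside 1 .. 100"))
            }
        }

        fn get(self) -> i32 {
            self.0
        }
    }

    fn main() {
        let x = SmallPositive::new(42).unwrap();
        println!("X = {}", x.get());
        // Out-of-range values come back as Err instead of raising Constraint_Error.
        assert!(SmallPositive::new(200).is_err());
    }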
Ada, or at least GNAT, also supports compile-time dimensional analysis (unit checking). I may be biased, because I mostly work with engineering applications, but I still do not understand why in other languages it is delegated to 3rd party libraries.
One of the best software design books I've read is "Domain Modelling Made Functional", from Scott Wlaschin.
It's about F#, but it remains a good read for any software programmer, whatever your language.
And it's easily understandable, you can almost read it like a novel, without focusing too much.
Though what may need some brains is applying the functional concepts of this book to your favourite language...
FWIW, physical dimensions like meters were the original apples-to-oranges type system that pre-dates all modern notions of things beyond arithmetic. I'm a little surprised it wasn't added to early FORTRAN. In a different timeline, maybe. :)
I think what is in "the" "stdlib" or not is a tricky question. For most general-purpose languages, it can be pretty hard to know even the probability distribution of use cases. So it's important to keep multiple, broad perspectives in mind, as your "I may be biased" disclaimer does. I don't like the modern trend toward dozens to hundreds of micro-dependencies either (it kind of started with CTAN, where the micro-packages seemed meant more for copy-paste, and then CPAN, where they weren't); I think Python, Node/JS, and Rust are all known for this.
I know quite a few people in the safety/aviation domain who rather dislike subranges, as they insert run-time checks that are not easily traceable to source code, thus escaping the trifecta of requirements/tests/source code (which must all be traceable to/covered by each other).
Weirdly, when going through the higher assurance levels in aviation, defensive programming becomes more costly, because it complicates the satisfaction of assurance objectives. SQLite (whose test suite reaches MC/DC coverage, the most rigorous coverage criterion asked for in aviation) has a nice paragraph on the friction between MC/DC and defensive programming:
This is basically excuses being made by C people for use of a language that wasn't designed for and isn't suitable for safety critical software. "We didn't even need that feature!"
Ada's compile time verification is very good. With SPARK it's even better.
Runtime constraints are removable via pragma, so there's no trade-off at all to having them in the language. One pragma turns them into static-analysis annotations that have no runtime consequences.
Ideally, a compiler can statically prove that values stay within the range; it's no different than proving that values of an enumeration type are valid. The only places where a check is needed are conversions from other types, which are explicit and traceable.
let a: u8 is 0..100 = 1;
let b: u8 is 0..100 = 2;
let c = a + b;
The type of c could be u8 in 0..200. If you have holes in the middle, same applies. Which means that if you want to make c u8 between 0..100 you'd have to explicitly clamp/convert/request that, which would have to be a runtime check.
In your example we have enough information to know that the addition is safe. In SPARK, if that were a function with a and b as arguments, for instance, and you don't know what's being passed in you make it a pre-condition. Then it moves the burden of proof to the caller to ensure that the call is safe.
But obviously the result of a + b is [0..200], so an explicit cast, or an assertion, or a call to clamp() is needed if we want to put it back into a [0..100].
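A minimal sketch of what that looks like in today's Rust, where the 0..=100 bound is only the example's assumed invariant and the narrowing back into range is an explicit runtime clamp, not something the type system tracks:

    fn main() {
        let a: u8 = 1; // assumed invariant: 0..=100
        let b: u8 = 2; // assumed invariant: 0..=100

        // Widen first so the intermediate result can hold up to 200 without overflow.
        let sum = a as u16 + b as u16;

        // Explicitly force the result back into 0..=100. This is a runtime operation;
        // nothing in the types records the 0..=100 invariant.
        let c = sum.clamp(0, 100) as u8;
        println!("{c}");
    }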
Comptime constant expression evaluation, as in your example, may suffice for the compiler to be able to prove that the result lies in the bounds of the type.
That's prohibitively expensive in the general case when external input is used and/or when arithmetic is used on the values (the main difference from sum types).
The branch of mathematics you need to compute the bounds of the result of an operation is called Interval Arithmetic [1]. I'm not sure of where its limits are (hah), but at the very least it provides a way to know that [0,2] / [2,4] must be within [0,1].
I see there are some hits for it on libs.rs, but I don't know how ergonomic they are.
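For illustration, a hand-rolled toy interval type in Rust (not one of the published crates, whose ergonomics I haven't checked), showing that [0,100] + [0,100] lands in [0,200] and [0,2] / [2,4] lands in [0,1]:

    // Closed interval [lo, hi] over f64; a toy illustration of interval arithmetic.
    #[derive(Debug, Clone, Copy)]
    struct Interval {
        lo: f64,
        hi: f64,
    }

    impl Interval {
        fn new(lo: f64, hi: f64) -> Self {
            Interval { lo, hi }
        }

        fn add(self, other: Interval) -> Interval {
            Interval::new(self.lo + other.lo, self.hi + other.hi)
        }

        // Assumes the divisor interval does not contain zero.
        fn div(self, other: Interval) -> Interval {
            let candidates = [
                self.lo / other.lo,
                self.lo / other.hi,
                self.hi / other.lo,
                self.hi / other.hi,
            ];
            let lo = candidates.iter().copied().fold(f64::INFINITY, f64::min);
            let hi = candidates.iter().copied().fold(f64::NEG_INFINITY, f64::max);
            Interval::new(lo, hi)
        }
    }

    fn main() {
        let a = Interval::new(0.0, 100.0);
        println!("{:?}", a.add(a)); // [0, 200]

        let c = Interval::new(0.0, 2.0);
        let d = Interval::new(2.0, 4.0);
        println!("{:?}", c.div(d)); // [0, 1]
    }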
> as it inserts run-time checks that are not easily traceable to source code
Modifying a compiler to emit a message at every point that a runtime check is auto-inserted should be pretty simple. If this was really that much of an issue it would have been addressed by now.
To elaborate on the sibling's compile-time vs run-time answer: if it fails at compile time you'll know it's a problem, and then have the choice not to enforce that check there.
If it fails at run time, it could be the reason you get paged at 1am because everything's broken.
It’s not just about safety, it’s also about speed. For many applications, having to check the values during runtime constantly is a bottleneck they do not want.
Like other sibling replies said, subranges (or more generally "refinement types") are more about compile-time guarantees. Your example shows a potential footgun well: a post-validation operation might unknowingly violate an invariant.
It's a good example for the "Parse, don't validate" article (https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...). Instead of creating a function that accepts `int` and returns `int` or throws an exception, create a new type that enforces "`int` less than or equal to 200"
class LEQ200 {
    final int value;
    private LEQ200(int value) { this.value = value; }

    static LEQ200 validate(int age) throws Exception {
        if (age <= 200) return new LEQ200(age);
        else throw new Exception("not <= 200");
    }

    LEQ200 add(int n) throws Exception { return validate(value + n); }
}

LEQ200 works = LEQ200.validate(200);
// LEQ200 fails = LEQ200.validate(201); // throws at runtime
// LEQ200 hmmm = works + 1; // Error in Java
LEQ200 hmmm = works.add(1); // Throws an exception, or use Haskell's Either-type / Rust's Result-type
Something like this is possible to simulate with Java's classes, but it's certainly not ergonomic and very much unconventional. This is beneficial if you're trying to create a lot of compile-time guarantees, reducing the risk of doing something like `hmmm = works + 1;`.
This kind of compile-time type voodoo requires a different mindset compared to cargo-cult Java OOP. Whether something like this is ergonomic or performance-friendly depends on the language's support itself.
Your example also gets evaluated at comptime. For more complex cases I wouldn't be able to tell you, I'm not the compiler :) For example, this gets checked:
let ageFails = (200 + 2).Age
Error: 202 can't be converted to Age
If it cannot statically prove it at comptime, it will crash at runtime during the type conversion operation, e.g.:
import std/strutils
stdout.write("What's your age: ")
let age = stdin.readLine().parseInt().Age
Then, when you run it:
$ nim r main.nim
What's your age: 999
Error: unhandled exception: value out of range: 999 notin 0 .. 200 [RangeDefect]
Exactly this. Fails at runtime. Consider rather a different example: say the programmer thought the age was constrained to 110 years. Now, as soon as a person is aged 111, the program crashes. A stupid mistake in a programmer's assumption turns into a program crash.
Why would you want this?
I mean, we've recently discussed on HN how most sorting algorithms have a bug for using ints to index into arrays when they should be using (at least) size_t. Yet, for most cases, it's ok, because you only hit the limit rarely. Why would you want to further constrain the field, would it not just be the source of additional bugs?
Once the program is operating outside of the bounds of the programmers assumption, it’s in an undefined state that may cause a crash to happen at a later point of time in a totally different place.
Making the crash happen at the same time and space as the error means you don’t have to trace a later crash back to the root cause.
This makes your system much easier to debug at the expense of causing some crashes that other systems might not have. A worthy trade off in the right context.
Out of bounds exception is ok to crash the program. User input error is not ok to crash the program.
I could go into many more examples but I hope I am understood. I think these hard-coded definition of ranges at compile time are causes of far more issues than they solve.
Let's take a completely different example: size of a field in a database for a surname. How much is enough? Turns out 128 varchars is not enough, so now they've set it to 2048 (not a project I work(ed) on, but am familiar with). Guess what? Not in our data set, but theoretically, even that is not enough.
> Out of bounds exception is ok to crash the program. User input error is not ok to crash the program.
So you validate user input, we've known how to do that for decades. This is a non-issue. You won't crash the program if you require temperatures to be between 0 and 1000 K and a user puts in 1001, you'll reject the user input.
If that user input crashes your program, you're not a very good programmer, or it's a very early prototype.
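A minimal sketch of that validate-and-reject flow in Rust, using the 0..=1000 K requirement from the example above (the bound is just this thread's example, not anything standard):

    use std::io::{self, BufRead, Write};

    fn main() {
        print!("Temperature in K: ");
        io::stdout().flush().unwrap();

        let mut line = String::new();
        io::stdin().lock().read_line(&mut line).unwrap();

        // Out-of-range or malformed input is rejected; the program doesn't crash.
        match line.trim().parse::<f64>() {
            Ok(t) if (0.0..=1000.0).contains(&t) => println!("accepted: {t} K"),
            Ok(t) => eprintln!("rejected: {t} K is outside the required 0..=1000 K"),
            Err(e) => eprintln!("rejected: not a number ({e})"),
        }
    }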
I think, if I am following things correctly, you will find that there's a limit to the "validate user input" argument - especially when you think of scenarios where multiple pieces of user input are gathered together and then have mathematical operations applied to them.
eg. If the constraint is 0..200, and the user inputs one value that is being multiplied by our constant, it's trivial to ensure the user input is less than the range maximum divided by our constant.
However, if we are having to multiply by a second, third... and so on.. piece of user input, we get to the position where we have to divide our currently held value by a piece of user input, check that the next piece of user input isn't higher, and then work from there (this assumes that the division hasn't caused an exception, which we will need to ensure doesn't happen.. eg if we have a divide by zero going on)
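A minimal Rust sketch of that situation, assuming a made-up 0..=200 invariant and u32 inputs: each multiplication has to be guarded (checked_mul here) and the range has to be re-checked on the combined result.

    // Multiply several already-validated inputs and re-check the assumed 0..=200
    // invariant on the combined result; checked_mul guards intermediate overflow.
    fn combine(inputs: &[u32]) -> Option<u32> {
        let mut acc: u32 = 1;
        for &x in inputs {
            acc = acc.checked_mul(x)?; // None on overflow instead of wrapping
        }
        (acc <= 200).then_some(acc)
    }

    fn main() {
        assert_eq!(combine(&[2, 3, 4]), Some(24));
        assert_eq!(combine(&[50, 50]), None); // 2500 violates the assumed invariant
    }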
I mean, yeah. If you do bad math you'll get bad results and potentially crashes. I was responding to someone who was nonsensically ignoring that we validate user input rather than blindly putting it into a variable. Your comment seems like a non sequitur in this discussion. It's not like the risk you describe is unique to range constrained integer types, which is what was being discussed. It can happen with i32 and i64, too, if you write bad code.
Hmm, I was really pointing at the fact that once you get past a couple of pieces of user input, all the validation in the world isn't going to save you from the range constraints.
Assuming you want a good faith conversation, then the idea that there's bad math involved seems a bit ludicrous
I believe that the solution here is to make crashes "safe" eg with a supervisor process that should either never crash or be resumed quickly and child processes that handle operations like user inputs.
This together with the fact that the main benefit of range types is on the consumption side (ie knowing that a PositiveInt is not 0) and it is doable to use try-catch or an equivalent operation at creation time
Happens with DB constraints all the time, user gets an error and at least his session, if not whole process, crashes. And yes that too is considered bad code that needs fixing.
> Stupid mistake by a programmer assumption turns into a program crash.
I guess you can just catch the exception in Ada? In Rust you might instead manually check the age validity and return Err if it's out of range. Then you need to handle the Err. It's the same thing in the end.
> Why would you want to further constrain the field
You would only do that if it's a hard requirement (this is the problem with contrived examples, they make no sense). And in that case you would also have to implement some checks in Rust.
Also, I would be very interested to learn the case for hard requirement for a range.
In almost all the cases I have seen, it eventually breaks out of confinement. So, it has to be handled sensibly. And, again, in my experience, if it's built into constraints, it invariably is not handled properly.
Consider the size of the time step in a numerical integrator of some chemical reaction equation, if it gets too big the prediction will be wrong and your chemical plant could explode.
So overly large time steps cannot be used, but constant-sized steps are wasteful. Seems good to know the integrator can never quietly be wrong, even if you have to pay the price that the integrator could crash.
Exactly, but how do you catch the exception? One exception catch to catch them all, or do you have to distinguish the types?
And yes... error handle on the input and you'd be fine. How would you write code that is cognizant enough to catch out-of-range for every +1 done on the field? Seriously, the production code then devolves into copying the value into something else, where operations don't cause unexpected exceptions. Which is a workaround for a silly restriction that should not reside at the runtime level.
If you're using SPARK, it'll catch it at compile time if there's ever a possibility of violating that constraint. Otherwise it'll raise an exception (Constraint_Error) at runtime for you to catch.
No, range types are at best a very limited piece of DbC. Design by Contract lets you state much more interesting things about your program. It's also available in Ada, though.
> The ability to make number types that were limited in their range is really useful for certain classes of bugs.
This is a feature I use a lot in C++. It is not part of the standard library but it is trivial to programmatically generate range-restricted numeric types in modern C++. Some safety checks can even be done at compile-time instead of runtime.
It should be a standard feature in programming languages.
I've never come across any range restricting constructions in C++ projects in the wild before. It truly is a shame, I think it's something more programmers should be aware of and use. Eliminating all bounds checking and passing that job to the compiler is pretty killer and eliminates whole classes of bugs.
This is an unfortunate reality. C++ has evolved into a language capable of a surprisingly deep compile-time verification but almost no one uses that capability. It reflects somewhat negatively on the C++ developer community that problems easily solved within the language are routinely not solved, though the obsession with backward compatibility with old versions of the language plays a role. If you fully leverage it, I would argue that recent versions of C++ are actually the safest systems language. Nonetheless, almost no one has seen code bases that leverage that verification capability to its maximum. Most people have no clue what it is capable of.
There is the wisdom that it is impossible to deliver C++ without pervasive safety issues, for which there are many examples, and on the other hand there are people delivering C++ in high-assurance environments with extremely low defect rates without heroic efforts. Many stories can be written in that gap. C++ can verify many things that are not verifiable in Rust, even though almost no one does.
It mostly isn’t worth the argument. For me, C++20 reached the threshold where it is practical to design code where large parts can be formally verified in multiple ways. That’s great, this has proven to be robust in practice. At the same time, there is an almost complete absence of such practice in the C++ literature and zeitgeist. These things aren’t that complex, the language users are in some sense failing the language.
The ability to codegen situationally specific numeric types is just scratching the surface. You can verify far weirder situational properties than numeric bounds if you want to. I’m always surprised by how few people do.
I used to be a C++ hater. Modern C++ brought me back almost purely because it allows rich compile-time verification of correctness. C++11 was limited but C++20 is like a different world.
I would guess that Ada is simply more known. Keep in mind that tech exploded in the past ~3.5 decades whereas those languages are much older and lost the popularity contest. If you ask most people about older languages, the replies other than the obvious C and (kind of wrong but well) C++ are getting thin really quickly. COBOL, Ada, Fortran, and Lisp are probably what people are aware of the most, but other than that?
The first five languages I learned back in the 70s: FORTRAN, Pascal, PL/I, SNOBOL, APL. Then I was an Ada and Icon programmer in the 80s. In the 90s, it was C/C++ and I just never had the enthusiasm for it.
Icon (which came from SNOBOL) is one of the few programming languages I consider to embody truly new ideas. (Lisp, Forth, and Prolog are others that come to mind.)
Icon is an amazing language and I wish it was better known.
You probably know this but, for anyone else who is interested, the easiest way to get a feel for Icon nowadays may be through its descendant Unicon which is available at unicon.org.
I found Pascal more readable as a budding programmer. Later on, C's ability to just get out of the way and let me program what I wanted trumped Pascal's verbosity and opinionatedness.
I admit that the terseness of the syntax of C can be off-putting. Still, it's just syntax, I am sorry you were dissuaded by it.
I dabbled in some of them during some periods when I took a break from work. And also some, during work, in my free time at home.
Pike, ElastiC (not a typo), Icon, Rebol (and later Red), Forth, Lisp, and a few others that I don't remember now.
Not all of those are from the same period, either.
Heck, I can even include Python and Ruby in the list, because I started using them (at different times, with Python being first) much before they became popular.
In 2005-2010, the most interesting language (in this direction) at my college was Haskell. I don't think there was any other language like Ada being taught.
Turbo Pascal could check ranges on assignment with the {$R+} directive, and Delphi could check arithmetic overflow with {$Q+}. Of course, nobody wanted to waste the cycles to turn those on :)
I would argue that was one of the reasons why those languages lost.
I distinctly remember arguments for functions working on array of 10. Oh, you want array of 12? Copy-paste the function to make it array of 12. What a load of BS.
It took Pascal years to drop that constraint, but by then C had already won.
I never ever wanted the compiler or runtime to check a subrange of ints. Ever. Overflow as program crash would be better, which I do find useful, but arbitrary ranges chosen by programmer? No thanks. To make matters worse, those are checked even by intermediate results.
I realize this is opinion based only on my experience, so I would appreciate a counter-example where it is a benefit (and yes, I worked on production code written in Pascal, a French variant even, and migrating it to C was hilariously more readable and maintainable).
It still results in overflow and while you are right that it's UB by the standard, it's still pretty certain what will happen on a particular platform with a particular compiler :)
No, optimizing compilers don't translate overflow to platform-specific behavior for signed integers - since it's UB they'll freely make arithmetic or logic assumptions that can result in behavior that can't really be humanly predicted without examining the generated machine code.
They are free to but not required. You can pick a different compiler, or you can configure your compiler to something else, if it provides such options.
I always found it surprising that people did not reject clang for aggressively optimizing based on UB, but instead complained about the language while still using clang with -O3.
Programmers don’t have much choice, since most compilers don’t really provide an option / optimization level that results in sane behavior for common UB footguns while providing reasonable levels of performance optimization.
The one exception I know of is CompCert but it comes with a non-free license.
I definitely do think the language committee should have constrained UB more to prevent standards-compliant compilers from generating code that completely breaks the expectations of even experienced programmers. Instead the language committees went the opposite route, removing C89/90 wording from subsequent standards that would have limited what compilers can do for UB.
The C89/C90 wording change story is a myth. And I am not sure I understand your point about CompCert. The correctness proof of CompCert covers programs that have no UB. And programmers do have some choice and also some voice. But I do not see them pushing for changes a lot.
FYI all major C compilers have flags to enforce the usual two's-complement rollover, even if you enable all optimizations. I always enable at least fwrapv, even when I know that the underlying CPU has well defined overflow behavior (gcc knows this so the flag presumably becomes a no-op, but I've never validated that thought).
gcc has -fwrapv and -f[no-]strict-overflow, clang copied both, and MSVC has had a plethora of flags over the years (UndefIntOverflow, for example) so your guess is as good as mine which one still works as expected.
> I never ever wanted the compiler or runtime to check a subrange of ints. Ever.
Requiring values to be positive, requiring an index to fall within the bounds of an array, and requiring values to be non-zero so you never divide by zero are very, very common requirements and a common source of bugs when the assumptions are violated.
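Rust covers the non-zero case specifically with std::num::NonZeroU32 and friends; a minimal sketch (the positive-only and bounded cases still need hand-rolled newtypes):

    use std::num::NonZeroU32;

    // The divisor's type guarantees it is non-zero, so this division cannot panic.
    fn per_item_cost(total: u32, count: NonZeroU32) -> u32 {
        total / count.get()
    }

    fn main() {
        // Construction is the single place where "is it zero?" must be checked.
        let count = NonZeroU32::new(4).expect("count must be non-zero");
        println!("{}", per_item_cost(100, count)); // 25
        assert!(NonZeroU32::new(0).is_none());
    }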
The thing I miss most from Ada is its clear conception of object-orientation. Every other language bundles all of OOP into the "class" idea, but Ada lets you separately opt in to message sending, dynamic dispatch, subtyping, generics, etc. In Ada, those are separate features that interact usefully, rather than one big bundle.
In my personal experience it's not just safety. Reliability of the produced software is also a big part.
IME, being able to express constraints in a type system lends itself to producing better-quality code. A simple example from my experience with Rust and Go is mutex handling: Rust just won't let you leak a guard handle, while Go happily lets you run into a deadlock.
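A minimal sketch of the Rust side of that: the data is only reachable through the guard returned by lock(), and the lock is released when the guard goes out of scope, so "forgot to unlock" or "touched the data without the mutex" isn't expressible in normal code.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        let counter = Arc::new(Mutex::new(0u32));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    // The data is only reachable through the guard returned by lock();
                    // the lock is released when `guard` is dropped at the end of scope.
                    let mut guard = counter.lock().unwrap();
                    *guard += 1;
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        println!("{}", *counter.lock().unwrap()); // 4
    }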
>Ada has some really good ideas which its a shame never took off or got used outside of the safety critical community that mostly used it. The ability to make number types that were limited in their range is really useful for certain classes of bugs.
As pjmlp says in a sibling comment, Pascal had this feature, from the beginning, IIRC, or from an early version - even before the first Turbo Pascal version.
It doesn’t really compete in the same space as Ada or Rust but C# has range attributes that are similar, the only downside is you have to manually call the validation function unless you are using something like ASP.NET that does it automatically at certain times.
30+ years ago I was programming in Ada, and I feel the same way and have been repeatedly disappointed. Maybe this time around things will be different.
The author indicates some obvious differences, including the fact that Ada has a formal spec and rust doesn't -- rustc seems to be both in flux as well as the reference implementation. This might matter if you're writing a new compiler or analyzer.
But the most obvious difference, and maybe most important to a user, was left unstated: the adoption and ecosystem such as tooling, libraries, and community.
Ada may have a storied success history in aerospace and life safety, etc, and it might have an okay standard lib which is fine for AOC problems and maybe embedded bit poking cases in which case it makes sense to compare to Rust. But if you're going to sit down for a real world project, ie distributed system or OS component, interfacing with modern data formats, protocols, IDEs, people, etc is going to influence your choice on day one.
Rust now has a donated spec, provided by Ferrocene. The spec's style was influenced by the Ada spec. It is now publicly available at https://rust-lang.github.io/fls/ .
This is part of Ferrocene's effort to provide a safety-certified compiler, which is already available.
This is only meaningful if Rust compiler devs give any guarantees about never breaking the spec and always being able to compile code that adheres to this spec.
That's not how it works for most language standards, though. Most language standards are prescriptive, while Rust is descriptive.
Usually the standard comes first, compiler vendors implement it, and between releases of the spec the language is fixed. Using Ada as an example, there was Ada 95 and Ada 2003, but between 95 and 2003 there was only Ada 95. There was no in-progress version, the compiler vendors weren't making changes to the language, and an Ada95 compiler today compiles the same language as an Ada95 compiler 30 years ago.
Looking at the changelog for the Rust spec (https://rust-lang.github.io/fls/changelog.html), it's just the changelog of the language as each compiler version is released, and there doesn't seem to be any intention of supporting previous versions. Would there be any point in an alternative compiler implementing "1.77.0" of the Rust spec?
And the alternative compiler implementation can't start implementing a compiler for version n+1 of the spec until that version of rustc is released because "the spec" is just "whatever rustc does", making the spec kind of pointless.
> Usually the standard comes first, compiler vendors implement it, and between releases of the spec the language is fixed.
This is not how C or C++ were standardized, nor most computer standards in the first place. Usually, vendors implement something, and then they come together to agree upon a standard second.
When updating standards, sometimes things are put in the standard before any implementations, but that's generally considered an antipattern for larger designs. You want real-world evaluation of the usefulness of something before it's been standardized.
Because otherwise the spec is just words on a paper, and the standard is just "whatever the compiler does is what it supposed to do". The spec codifies the intentions of the creators separately from the implementation.
In rust, there is currently only one compiler so it seems like there's no problem
Rust doesn’t have quite as strong compatibility guarantees. For example, it’s not considered a NC-breaking change to add new methods to standard library types, even though this can make method resolution ambiguous for programs that had their own definitions of methods with the same name. A C++ implementation claiming to support C++11 wouldn’t do that, they’d use ifdefs to gate off the new declarations when compiling in C++11 mode.
Thanks, that was easily the most confusing thing and I was like well... I understand everything else, if it's very important what exactly "NC-breaking" means I'm sure I will realise later.
You have to squint fairly hard to get here for any of the major C++ compilers.
I guess maybe someone like Sean Baxter will know the extent to which, in theory, you can discern the guts of C++ by reading the ISO document (or, more practically, the freely available PDF drafts, essentially nobody reads the actual document, no not even Microsoft bothers to spend $$$ to buy an essentially identical PDF)
My guess would be that it's at least helpful, but nowhere close to enough.
And that's ignoring the fact that the popular implementations do not implement any particular ISO standard, in each case their target is just C++ in some more general sense, they might offer "version" switches, but they explicitly do not promise to implement the actual versions of the ISO C++ programming language standard denoted by those versions.
Neither the Rust nor the Ada spec is formal, in the sense of being consumable by a theorem prover. AFAIK for Ada SPARK there are of course assumptions about the language semantics built into SPARK, but these are nowhere coherently written down in a truly formal (as in machine-readable) spec.
There's a formally verified C compiler (CompCert); IIRC the frontend isn't verified, but if you define the language by the data structures that are the inputs to whatever is formally verified, I guess whatever C-like dialect it implements must count as formally specified.
I'm sure the programmers of the flight control software safely transporting 1 billion people per year see your "real world project" and reply with something like "yes, if you are writing software where the outputs don't matter very much, our processes are excessive" :p
This write-up shows that while Ada may have some cultural and type-related disadvantages compared to Rust, Ada seems to generally win the readability contest.
What is missing from the comparison is compiler speed - Ada was once seen as a complex language, but that may not be the case if compared against Rust.
In any case, thanks for the post, it made me want to try using Ada for a real project.
As far as I'm aware, Ada has a much more expressive type system, and not by a hair, by miles: being able to define custom bounds-checked ordinals, being able to index arrays with any enumerable type, defining custom arithmetic operators for types, adding compile- and run-time checks to types with pre/post conditions, iteration variants, predicates, etc... Discriminant records. Record representation clauses.
References and lifetimes are where Rust wins over Ada, although I agree that Ada has many other great type features.
Access types are unable to express ownership transfer without SPARK (and a sufficiently recent release of gnatprove), and without it the rules for accessibility checks are so complex they're being revamped entirely. And access types can only express lifetimes through lexical scope, which combined with the lack of first-class packages (a la OCaml) means library code just can't define access types with reasonable lifetimes.
Also, I appreciate that Rust is memory safe by default and without toying with a dozen pragmas. Ada needs a profile that guarantees that code can't be the source of erroneous execution and constrains bounded errors further.
Only if SPARK is powerful enough to verify the code you need to write in the first place, which from my experience with SPARK is not a given as soon as access types are involved. Also Rust doesn't need to exclude high level language features just to prove memory safety, whereas SPARK can't even verify any use of controlled types.
If SPARK really were enough, I'd just write all Ada in SPARK of course.
Usually taking the IR (MIR) from rustc and translating it to a verifier engine/language, with the help of metadata in the source (attributes) when needed. E.g. Kani, Prusti, Creusot and more.
> Why can't other languages have a "formal verification library"?
I don't think there is a reason that prevents that, and perhaps some have, however it turns out that modelling shared mutability formally is really hard, and therefore the shared xor mutable rule in Rust really helps verification.
Not a library, but it kinda does: Frama-C will formally verify C code using special comments to write the contracts in, and I hear it can prove more pointer properties than SPARK, although it takes more effort to maintain a Frama-C codebase due to all the other missing language features.
Well, I'm not an expert in formal verification, but there are such libraries, I listed a few of them above, you can go and check them. So it is possible.
C doesn't have the shared xor mutable rule - with strict aliasing or without.
I checked them, but I am not convinced and I am not sure why it was brought into this thread.
SPARK has industrial-strength, integrated verification proven in avionics (DO-178C), rail (EN 50128), and automotive (ISO 26262) contexts. Rust's tools are experimental research projects with limited adoption, and they're bolted-on and fundamentally different from SPARK.
SPARK is a designed-for-verification language subset. Ada code is written in SPARK's restricted subset with contracts (Pre, Post, Contract_Cases, loop invariants) as first-class language features. The GNAT compiler understands these natively, and GNATprove performs verification as part of the build process. It's integrated at the language specification level.
Rust's tools retrofit verification onto a language designed for different goals (zero-cost abstractions, memory safety via ownership). They translate existing Rust semantics into verification languages after the fact - architecturally similar to C's Frama-C or VCC (!). The key difference from C is that Rust's type system already guarantees memory safety in safe code, so these tools can focus on functional correctness rather than proving absence of undefined behavior.
Bottom line is that these tools cannot achieve SPARK-level verification for fundamental reasons: `unsafe` blocks create unverifiable boundaries, the trait system and lifetime inference are too complex to model completely, procedural macros generate code that can't be statically verified, interior mutability (`Cell`, `RefCell`) bypasses type system guarantees, and Rust can panic in safe code. Most critically, Rust lacks a formal language specification with mathematical semantics.
SPARK has no escape hatches, so if it compiles in SPARK, the mathematical guarantees hold for the entire program. SPARK's formal semantics map directly to verification conditions. Rust's semantics are informally specified and constantly evolving (async, const generics, GATs). This isn't tooling immaturity though, it's a consequence of language design.
> although none is as integrated as SPARK I believe
And yes, they're experimental (for now). But some are also used in production. For example, AWS uses Kani for some of their code, and recently launched a program to formally verify the Rust standard library.
Whether the language was designed for it does not matter, as long as it works. And it works well.
> `unsafe` blocks create unverifiable boundaries
Few of the tools can verify unsafe code is free of UB, e.g. https://github.com/verus-lang/verus. Also, since unsafe code should be small and well-encapsulated, this is less of a problem.
> the trait system and lifetime inference are too complex to model completely
You don't need to prove anything about them: they're purely a type level thing. At the level these tools are operating, (almost) nothing remains from them.
> procedural macros generate code that can't be statically verified
The code that procedural macros generate is visible to these tools and they can verify it well.
> interior mutability (`Cell`, `RefCell`) bypasses type system guarantees
Although it's indeed harder, some of the tools do support interior mutability (with extra annotations I believe).
> Rust can panic in safe code
That is not a problem - in fact most of them prove precisely that: that code does not panic.
> Most critically, Rust lacks a formal language specification with mathematical semantics
That is a bit of a problem, but not much since you can follow what rustc does (and in fact it's easier for these tools, since they integrate with rustc).
> Rust's semantics are informally specified and constantly evolving (async, const generics, GATs)
Many of those advancements are completely erased at the levels these tools are operating. The rest does need to be handled, and the interface to rustc is unstable. But you can always pin your Rust version.
> This isn't tooling immaturity though, it's a consequence of language design.
No it's not, Rust is very well amenable to formal verification, despite, as you said, not being designed for it (due to the shared xor mutable rule, as I said), Perhaps even more amenable than Ada.
Also this whole comment seems unfair to Rust since, if I understand correctly, SPARK also does not support major parts of Ada (maybe there aren't unsupported features, but not all features are fully supported). As I said I know nothing about Ada or SPARK, but if we compare the percentage of the language the tools support, I won't be surprised if that of the Rust tools is bigger (despite Rust being a bigger language). These tools just support Rust really well.
> Whether the language was designed for it does not matter, as long as it works. And it works well.
It matters fundamentally. "Works well" for research projects or limited AWS components is not equivalent to DO-178C Level A certification where mathematical proof is required. The verification community distinguishes between "we verified some properties of some code" and "the language guarantees these properties for all conforming code."
> Few of the tools can verify unsafe code is free of UB
With heavy annotation burden, for specific patterns. SPARK has no unsafe - the entire language is verifiable. That's the difference between "can be verified with effort" and "is verified by construction."
> You don't need to prove anything about [traits/lifetimes]: they're purely a type level thing
Trait resolution determines which code executes (monomorphization). Lifetimes affect drop order and program semantics. These aren't erased - they're compiled into the code being verified. SPARK's type system is verifiable; Rust's requires the verifier to trust the compiler's type checker.
> The code that procedural macros generate is visible to these tools
The macro logic is unverified Rust code executing at compile time. A bug in the macro generates incorrect code that may pass verification. SPARK has no equivalent escape hatch.
> some of the tools do support interior mutability (with extra annotations I believe)
Exactly - manual annotation burden. SPARK's verification is automatic for all conforming code. The percentage of manual proof effort is a critical metric in formal verification.
> That is not a problem - in fact most of them prove precisely that: that code does not panic
So they're doing what SPARK does automatically - proving absence of runtime errors. But SPARK guarantees this for the language; Rust tools must verify it per codebase.
> you can follow what rustc does (and in fact it's easier for these tools, since they integrate with rustc)
"Follow the compiler's behavior" is not formal semantics. Formal verification requires mathematical definitions independent of implementation. This is why SPARK has an ISO standard with formal semantics, not "watch what GNAT does."
> Rust is very well amenable to formal verification [...] Perhaps even more amenable than Ada
Then why doesn't it have DO-178C, EN 50128, or ISO 26262 certification toolchains after a decade? SPARK achieved this because verification was the design goal. Rust's design goals were different - and valid - but they weren't formal verifiability.
> SPARK also does not support major parts of Ada
Correct - by design. SPARK excludes features incompatible with efficient verification (unrestricted pointers, exceptions in contracts, controlled types). This is intentional subsetting for verification. Claiming Rust tools "support more of Rust" ignores that some Rust features are fundamentally unverifiable without massive annotation burden.
The core issue: you're comparing research tools that can verify some properties of some Rust programs with significant manual effort, to a language designed so that conforming programs are automatically verifiable with mathematical guarantees. These are different categories of assurance.
Conversely, the Rustonomicon has this page that says that Rust just uses C++20's memory model for its atomics. Yet I do not believe that memory model is defined anywhere in FLS.
It is certainly incomplete. Virtually all specifications for programming languages are. It is good enough for safety critical work, which is a higher bar than most.
The idea is that you adopt them and improve them over time. It is more complete than the reference, which is the previous project in this area.
Rust does have a qualified compiler: Ferrocene. It supports ISO 26262 (ASIL D), IEC 61508 (SIL 4) and IEC 62304 currently, with more on the way, including plans for DO-178 I believe. It’s being driven by actual adoption, so they’ve started in automotive and will move to other industries eventually.
> I think it's important to note, the certification is only for a subset of the run-time, which means some language features will not be available. Also, the certification is only to SIL-2 level, so any projects requiring SIL-3 or 4 will still not be able to use the Ferrocine compiler!
I know that Ferrocene and AdaCore were working together, but then parted ways. I am assuming they're both doing roughly the same thing: qualifying the upstream compiler with some patches. I know that Ferrocene's patches are mostly adding a new platform, they've upstreamed all the other stuff.
> Isn't that only for a very small subset of Rust and its standard library?
It is currently for the compiler only. This ties into the next bit, though:
> Also, do you happen to be able to explain this comment?
Yeah, you can see me posting in that thread, though not that specific sub-thread. Rust has a layered standard library: core, alloc (which layers memory allocation on top of core), and std (which layers OS specific things on top of that). There's three parts to this comment:
First, because it's only core, you don't get the stuff from alloc and std. So, no dynamic memory allocation or OS specific things like filesystem access. That's usually not a problem for these kinds of projects. But that's what they mean by 'some language features will not be available', but they're mistaken: all language features are available, it's some standard library features that are not. No language features require allocation, for example.
Second, they qualified a subset of libcore for IEC 61508. A founder of Ferrous mentions that ISO 26262 is coming next; they just had a customer that needed IEC 61508 quickly, so they prioritized that. This is how it relates to the above: for ISO 26262, it's just the compiler currently.
> It matters fundamentally. "Works well" for research projects or limited AWS components is not equivalent to DO-178C Level A certification where mathematical proof is required
As said in a sibling comment, certification to Rust starts to appear. Rust is young and its usage in regulated industries is just barely beginning. Ada and SPARK are old and mature. It's not a fair comparison - but that doesn't mean Rust couldn't get there.
> > Few of the tools can verify unsafe code is free of UB
>
> With heavy annotation burden, for specific patterns
> > some of the tools do support interior mutability (with extra annotations I believe)
>
> Exactly - manual annotation burden.
SPARK does not support the equivalent (shared mutable pointers) at all. Rust verifiers do, with a heavy annotation burden. What's better?
> Trait resolution determines which code executes (monomorphization). Lifetimes affect drop order and program semantics. These aren't erased - they're compiled into the code being verified. SPARK's type system is verifiable; Rust's requires the verifier to trust the compiler's type checker.
Has the Ada compiler been formally verified? No. So you're trusting the Ada type checker just as well.
The Ada specification was, if I understand correctly, formally defined. But there are efforts to do that to Rust as well (MiniRust, a-mir-formality, and in the past RustBelt).
> The macro logic is unverified Rust code executing at compile time. A bug in the macro generates incorrect code that may pass verification. SPARK has no equivalent escape hatch.
And if you have a script that generates some boilerplate code into your Ada project, is the script logic verified? The outputted code is, and that's what important. Even with full formal verification, proving that the program fulfills its goals, you don't need to verify helpers like this - only the code they generate. If it works, then even if the script is buggy, who cares.
> So they're doing what SPARK does automatically - proving absence of runtime errors
Exactly - that's the point, to prove free of runtime errors.
I'm not sure what you mean by "SPARK guarantees this for the language; Rust tools must verify it per codebase" - does SPARK not need to verify separate codebases separately? Does it somehow work magically for all of your codebases at once?
It's clear at this point that neither of us will get convinced, and I think I said everything I had about this already.
That first one is an implementation bug that's never been discovered in the wild.
Regardless, all projects have bugs. It's not really germane to qualification, other than that qualification assumes that software has bugs and that you need to, well, qualify them and their impact.
> Certification to Rust starts to appear. Rust is young and its usage in regulated industries is just barely beginning.
Ferrocene has ISO 26262 qualification for the compiler, not verification tools. That's compiler correctness, not formal verification of programs. SPARK's GNATprove is qualified for proving program properties - fundamentally different.
> SPARK does not support the equivalent (shared mutable pointers) at all. Rust verifies do with a heavy annotation burden. What's better?
SPARK supports controlled aliasing through ownership aspects and borrower/observer annotations - but these are designed into the verification framework, not bolted on. The key difference: SPARK's aliasing controls are part of the formal semantics and verified by GNATprove automatically. Rust's unsafe shared mutability requires external verification tools with manual proof burden. SPARK deliberately restricts aliasing patterns to those that remain efficiently verifiable: it's not "can't do it", it's "only allow patterns we can verify".
> Has the Ada compiler formally verified? No. So you're trusting the Ada type checker just as well.
Qualified compilers (GNAT Pro) undergo qualification per DO-178C/ED-12C. The difference: SPARK's semantics are formally defined independent of the compiler. Rust verification tools must trust rustc's implementation because Rust has no formal specification. When rustc changes behavior (happens frequently), verification tools break. SPARK's specification is stable.
> And if you have a script that generates some boilerplate code into your Ada project, is the script logic verified?
External build scripts are different from language features. Procedural macros are part of Rust's language definition and can access compiler internals. If you use external code generation with SPARK, you're explicitly stepping outside the language's guarantees - and safety standards require justification. Rust embeds this escape hatch in the language itself.
> I'm not sure what you mean by "SPARK guarantees this for the language; Rust tools must verify it per codebase"
SPARK: If your code compiles in the SPARK subset, overflow/division-by-zero/array bounds are automatically proven impossible by language rules. You can add contracts for functional correctness.
Rust tools: You must annotate code, write invariants, and run verification per program. The language provides memory safety via the type system, but not freedom from arithmetic errors or functional correctness. These must be proven per codebase with tools.
The distinction: language-level guarantees vs. per-program verification effort.
> It's clear at this point that neither of us will get convinced
Fair enough, but the fundamental difference is whether verification is a language property or a tool property. Both approaches have merit for different use cases.
> Being able to define custom bounds checked ordinals
That Rust doesn't have (builtin, at least).
> being able to index arrays with any enumerable type
In Rust you can impl `std::ops::Index` and index types, including arrays, with whatever you want.
> Defining custom arithmetic operators for types
Again, definitely possible by implementing traits from `std::ops`.
> adding compile and runtime typechecks to types with pre/post conditions, iteration variants
If you refer to the default runtime verification, that's just a syntax sugar for assertions (doable in Rust via a macro). If you refer to compile-time verification via SPARK, Rust's formal verification libraries usually offer this tool as well.
> predicates
Doable via newtypes.
> Discriminant records
That's just generic ADTs if I understand correctly?
> Record representation clauses
Bitfields aren't available but you can create them yourself (and there are ecosystem crates that do), other than that you can perfectly control the layout of a type.
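For what it's worth, a minimal sketch of the std::ops point above: indexing an array-backed table by an enum works, but it takes an explicit Index impl (the same pattern applies to std::ops::Add and friends for custom operators), whereas in Ada it falls out of the array type declaration. The Color/ColorTable names are just for illustration.

    use std::ops::Index;

    #[derive(Clone, Copy)]
    enum Color {
        Red,
        Green,
        Blue,
    }

    // A small table indexed by the enum; the impl is hand-written rather than
    // coming from the array type itself as in Ada.
    struct ColorTable([u32; 3]);

    impl Index<Color> for ColorTable {
        type Output = u32;

        fn index(&self, c: Color) -> &u32 {
            &self.0[c as usize]
        }
    }

    fn main() {
        let table = ColorTable([10, 20, 30]);
        println!("{}", table[Color::Red]);   // 10
        println!("{}", table[Color::Green]); // 20
        println!("{}", table[Color::Blue]);  // 30
    }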
Compiler speed is quite good, but --as OP-- I've only written AoC in Ada. I think Ada's advantage is the separation between package/module specification and body. If you change a body, the compiler only has to process that file. But the rust compiler must check other files as well, since something in the interface might have changed. But that's a guess.
Ada is not necessarily complex, but it does require getting used to. It is a very large language, though.
> If you change a body, the compiler only has to process that file. But the rust compiler must check other files as well, since something in the interface might have changed.
That's correct in Rust as well (minus some small warts such as if you add an impl inside, which the Rust team wants to deprecate). In fact rust-analyzer relies on that. The compiler will realize that as well via its sophisticated incremental system, but it does take time to evaluate all the queries, even if all are cache hits.
On strings in Ada vs Rust. Ada's design predates Unicode (early 1980s vs 1991), so Ada String is basically char array whereas Rust string is a Unicode text type. This explains why you can index into Ada Strings, which are arrays of bytes, but not into Rust strings, which are UTF8 encoded buffers that should be treated as text. Likely the Rust implementation could have used a byte array here.
Worse, the built-in Unicode strings are arrays of Unicode scalars, effectively UTF-32 in the general case. There's no proper way to write UTF-8 string literals AFAIK, you need to convert them from arrays of 8, 16 or 32 bit characters depending on the literal.
How is the internal representation an issue?
Java strings are UTF-16 internally and it doesn't matter how you write your code nor what the targeted format is.
It's an issue because there's nothing internal about the representation in Ada: They're regular arrays of Character/Wide_Character/Wide_Wide_Character, and string literals have different type depending on the width required to represent it as such.
Also, string representations very much matter if you're coding with even the slightest amount of mechanical sympathy.
I mean you can index into Rust's strings, it's just that you probably don't want that:
"Clown"[2..5] // is "own"
Notice that's a range, Rust's string slice type doesn't consider itself just an array (as the Ada type is) and so we can't just provide an integer index, the index is a range of integers to specify where our sub-string should begin and end. If we specify the middle of a Unicode character then the code panics - don't do that.
Yes, since AoC always uses ASCII it will typically make sense to use &[u8] (the reference to a slice of bytes) and indeed the str::as_bytes method literally gives you that byte slice if you realise that's what you actually needed.
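A minimal sketch of both points: range-based slicing of a &str (which panics if the range splits a UTF-8 character) and dropping to bytes with as_bytes for ASCII-only input.

    fn main() {
        let s = "Clown";
        println!("{}", &s[2..5]); // "own": a byte range into the UTF-8 buffer

        // &"Ünicode"[0..1] would panic: the range must fall on char boundaries.

        // For ASCII-only input (like AoC), a byte slice allows plain integer indexing.
        let bytes = s.as_bytes();
        println!("{}", bytes[2] as char); // 'o'
    }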
I found it kind of odd that the author says Rust doesn't support concurrent programming out of the box. He links to another comment which points out you don't need Tokio for async (true enough), but even that aside async isn't the only way to achieve concurrency. Threads are built right into the language, and are easier to use than async. The only time they wouldn't be a good choice is if you anticipate needing to spawn so many threads that it causes resource issues, which very few programs will.
How does the cancellation story differ between threads and async in Rust? Or vs async in other languages?
There's no inherent reason they should be different, but in my experience (in C++, Python, C#) cancellation is much better in async than with simple threads and blocking calls. It's near impossible to have organised socket shutdown in many languages with blocking calls, assuming a standard read thread + write thread per socket. Often the only reliable way to interrupt a socket thread is to close the socket, which may not be what you want, and in principle can leave you vulnerable to file handle reuse bugs.
Async cancellation is, depending on the language, somewhere between hard but achievable (already an improvement) and fabulous. With Trio [1] you even get the guarantee that cancelled socket operations are either completed or have no effect.
Did this work any better in Rust threads / blocking calls? My uneducated understanding is that things are actually worse in async than other languages because there's no way to catch and handle cancellations (unlike e.g. Python which uses exceptions for that).
I'm also guessing things are no better in Ada but very happy to hear about that too.
Ok I could be super wrong here, but I think that's not true.
Dropping a future does not cancel a concurrently running (tokio::spawn) task. It will also not magically stop an asynchronous I/o call, it just won't block/switch from your code anymore while that continues to execute. If you have created a future but not hit .await or tokio::spawn or any of the futures:: queue handlers, then it also won't cancel it it just won't begin it.
Cancellation of a running task from outside that task actually does require explicit cancelling calls IIRC.
Spawn is kind of a special case where it's documented that the future will be moved to the background and polled without the caller needing to do anything with the future it returns. The vast majority of futures are lazy and will not do work unless explicitly polled, which means the usual way of cancelling is to just stop polling (e.g. by awaiting the future created when joining something with a timeout; either the timeout happens before the other future completes, or the other future finishes and the timeout no longer gets polled). Dropping the future isn't technically a requirement, but in practice it's usually what will happen because there's no reason to keep around a future you'll never poll again, so most of the patterns that exist for constructing a future that finishes when you don't need it anymore rather than manually cancelling will implicitly drop any future that won't get used again (like in the join example above, where the call to `join` will take ownership of both futures and not return either of them, therefore dropping whichever one hasn't finished when returning).
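A minimal sketch of that cancel-by-ceasing-to-poll pattern, assuming the tokio crate with its timer and macro features enabled:

    use std::time::Duration;
    use tokio::time::{sleep, timeout};

    // A future that would take 10 seconds if driven to completion.
    async fn slow_work() -> u32 {
        sleep(Duration::from_secs(10)).await;
        42
    }

    #[tokio::main]
    async fn main() {
        // timeout polls slow_work() for up to 1 second; if the deadline passes first,
        // the inner future is simply dropped and never polled again. That drop is the
        // cancellation; no explicit cancel call is sent to the future.
        match timeout(Duration::from_secs(1), slow_work()).await {
            Ok(v) => println!("finished: {v}"),
            Err(_) => println!("timed out, inner future dropped"),
        }
    }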
So how do you do structured concurrency [1] in Rust i.e. task groups that can be cancelled together (and recursively as a tree), always waiting for all tasks to finish their cancellation before moving on? (Normally structured concurrency also involves automatically cancelling the other tasks if one fails, which I guess Rust could achieve by taking special action for Result types.)
If you can't cancel a task and its direct dependents, and wait for them to finish as part of that, I would argue that you still don't have "real" cancellation. That's not an edge case, it's the core of async functionality.
Hmm, maybe it's possible to layer structured concurrency on top of what Rust does (or will do with async drop)? Like, if you have a TaskGroup class and demand all tasks are spawned via that, then internally it could keep track of child tasks and make sure that they're all cancelled when the parent one is (in the task group's drop). I think? So maybe not such an issue, in principle.
I think you're on the right track here to figuring this out. Tokio's JoinSet basically does what you describe for a single level of spawning (so not recursively, but it's at least part of the way to get what you describe); the `futures` library also has a type called `FuturesUnordered` that's similar but has the tradeoff that all futures it tracks need to be the same type which allows it to avoid spawning new tasks (and by extension doesn't need to wrap the values obtained by awaiting in a Result).
Under the hood, there's nothing stopping a future from polling one or more other futures, so keeping in mind that it isn't the dropping that cancels but rather the lack of polling, you could achieve what you're describing with each future in the tree polling its children in its own poll implementation, which means that once you stop polling the "root" future in the tree, all of the others in the tree will by extension no longer get polled. You don't actually need any async Drop implementation for this because there's no special logic you need when dropping; you just stop polling, which happens automatically since you can't poll something that's been dropped anyhow.
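For the single-level case, a rough sketch of the JoinSet approach mentioned above (assuming tokio's JoinSet; dropping the set aborts whatever is still in it):

```rust
use tokio::task::JoinSet;

#[tokio::main]
async fn main() {
    // A JoinSet tracks a group of spawned tasks; when the set is dropped,
    // any tasks still in it are aborted, which gives one level of the
    // "cancel the whole group together" behaviour discussed above.
    let mut set = JoinSet::new();
    for i in 0..5u32 {
        set.spawn(async move { i * 2 });
    }
    // join_next yields each task's result and returns None when the set is empty.
    while let Some(res) = set.join_next().await {
        println!("task finished with {:?}", res);
    }
}
```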
That's a rare exception, and just a design choice of this particular library function. It had to intentionally implement a workaround integrated with the async runtime to survive normal cancellation. (BTW, the anti-cancellation workaround isn't compatible with Rust's temporary references, which can be painfully restrictive. When people say Rust's async sucks, they often actually mean `tokio::spawn()` made their life miserable).
Regular futures don't behave like this. They're passive, and can't force their owner to keep polling them, and can't prevent their owner from dropping them.
When a Future is dropped, it has only one chance to immediately do something before all of its memory is obliterated, and all of its inputs are invalidated. In practice, this requires immediately aborting all the work, as doing anything else would be either impossible (risking use-after-free bugs), or require special workarounds (e.g. io_uring can't work with the bare Future API, and requires an external drop-surviving buffer pool).
Rain showed that doing this correctly may not be as simple as it seems.
In her presentation on async cancellation in Rust, she spoke pretty extensively on cancel safety and correctness, and I would recommend giving it a watch or read.
Yeah, that's what I'm talking about... Cancellation where the cancelled object can't handle the cancellation, call other async operations, and even (very rarely) suppress it, isn't "real" cancellation to me, having seen how essential this is.
> There's no inherent reason they should be different
There is... They're totally different things.
And yeah Rust thread cancellation is pretty much the same as in any other language - awkward to impossible. That's a fundamental feature of threads though; nothing to do with Rust.
There's no explicit cancel, but there are trivial one-shot cancellation messages that you can handle on the thread side. It's perfectly fine, honestly, and how I've been doing it forever.
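A minimal sketch of that pattern with std only; the worker checks a channel between units of work, so this is cooperative shutdown, not preemption:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    // One-shot "please stop" message for the worker thread.
    let (stop_tx, stop_rx) = mpsc::channel::<()>();

    let worker = thread::spawn(move || loop {
        // Do one slice of work.
        thread::sleep(Duration::from_millis(50));
        // Exit cleanly if a stop was requested or the sender went away.
        match stop_rx.try_recv() {
            Ok(()) | Err(mpsc::TryRecvError::Disconnected) => break,
            Err(mpsc::TryRecvError::Empty) => continue,
        }
    });

    thread::sleep(Duration::from_millis(200));
    let _ = stop_tx.send(()); // request cancellation
    worker.join().unwrap();
}
```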
I would call that clean shutdown more than cancellation. You can't cancel a long computation, or std::thread::sleep(). Though tbf that's sort of true of async too.
To be clear about what I meant: I was saying that, in principle, it would be possible to design a language or even a library where all interruptible operations (at least timers and networking) can be cancelled from other threads. This can be done using a cancellation token mechanism which avoids even starting the operation for an already cancelled token, in a way that avoids races (as you might imagine from a naive check of a token before starting the operation) if another thread cancels this one just as the operation is starting.
Now I've set (and possibly moved) the goalposts, I can prove my point: C# already does this! You can use async across multiple threads and cancellation happens with cancellation tokens that are thread safe. Having a version where interruptable calls are blocking rather than async (in the language sense) would actually be easier to implement (using the same async-capable APIs under the hood e.g., IOCP on Windows).
Well sure, there's nothing to stop you writing a "standard library" that exposes that interface. The default one doesn't though. I expect there are platforms that Rust supports that don't have interruptible timers and networking (whereas C# initially only supported Windows).
I wonder where the cut-off is where a work stealing scheduler like Tokio's is noticeably faster than just making a bunch of threads to do work, and then where the hard cut-off is that making threads will cause serious problems rather than just being slower because we don't steal.
It might be quite small, as I found for Maps (if we're putting 5 things in the map then we can just do the very dumbest thing which I call `VecMap` and that's fine, but if it's 25 things the VecMap is a little worse than any actual hash table, and if it's 100 things the VecMap is laughably terrible) but it might be quite large, even say 10x number of cores might be just fine without stealing.
Threads as they are conventionally considered are inadequate. Operating systems should offer something along the lines of scheduler activations[0]: a low-level mechanism that represents individual cores being scheduled/allocated to programs. Async is responsive simply because it conforms to the (asynchronous) nature of hardware events. Similarly, threads are most performant if leveraged according to the usage of hardware cores. A program that spawns 100 threads on a system with 10 physical cores is just going to have threads interrupting each other for no reason; each core can only do so much work in a time frame, whether it's running 1 thread or 10. The most performant/efficient abstraction is a state machine[1] per core. However, for some loss of performance and (arguable) ease of development, threads can be used on top of scheduler activations[2]. Async on top of threads is just the worst of both worlds. Think in terms of the hardware resources and events (memory accesses too), and the abstractions write themselves.
Yes; my understanding is that, for kernels designed for 1:1 threading, scheduler activations are an invasive change and not preferred by developers. Presumably, an operating system designed around scheduler activations would be able to better integrate them into applications, possibly even binary-compatibly with existing applications expecting 1:1 threading.
Not the OP but I'll take a stab: I see "waiting faster" as meaning roughly "check the status of" faster.
For example, you have lots of concurrent tasks, and they're waiting on slow external IO. Each task needs its IO to finish so you can make forward progress. At any given time, it's unlikely more than a couple of tasks can make forward progress, due to waiting on that IO. So most of the time, you end up checking on tasks that aren't ready to do anything, because the IO isn't done. So you're waiting on them to be ready.
Now, if you can do that "waiting" (really, checking if they're ready for work or not) on them faster, you can spend more of your machine time on whatever actual work _is_ ready to be done, rather than on checking which tasks are ready for work.
Threads make sense in the opposite scenario: when you have lots of work that _is_ ready, and you just need to chew through it as fast as possible. E.g. numbers to crunch, data to search through, etc.
I'd love if someone has a more illustrative metaphor to explain this, this is just how I think about it.
You may be correct in theory though in practice the reason to use Async over threads is often because the crate you want to use is async. Reqwest is a good example, it cannot be used without Tokio. Ureq exists and works fine. I've done a fairly high level application project where I tried to avoid all async and at some point it started to feel like swimming upstream.
Or in cases where the platform doesn't support threads easily - WASM and embedded (Embassy). Tbh I think that's a better motivation for using async than the usual "but what if 100k people visit my anime blog all at once?"
Interesting that Ada has an open source compiler. For whatever reason when I looked at it years ago I thought it was proprietary compilers only so I never really looked at it again. Maybe I’ll look again now.
GNAT has been around for 30 years. There were some limitations with (one version of?) it due to it being released without the GPL runtime exception, which meant linking against its runtime technically required your programs to also be released under GPL. That hasn't been an issue for a very long time either, though.
GNAT has been around since the 90s, based on GCC. My university did some work on the compiler and used it for some undergrad courses like real-time programming. IIRC there was an attempt to use Ada for intro to programming courses, but I think they chose Pascal and then eventually C++.
Tangentially related, one of the more interesting projects I've seen in the 3D printing space recently is Prunt. It's a printer control board and firmware, with the latter being developed in Ada.
I'd love to use Ada as my primary static language if it had broader support. It's in my opinion the best compiled language with strong static typing. Although it has gained traction with Alire, it unfortunately doesn't have enough 3rd party support for my needs yet.
If you have some time to fool around, try it for Advent of Code. I did last year, and while I'm not going to seriously consider it for my day job, I found it worthwhile.
You can generate bindings using gcc -c -fdump-ada-spec <header.h>. They typically work well enough without needing additional tweaks, but if it's more involved you can ask Claude to make a shell script that generates bindings for whatever C library you wanted to use, and it works reasonably well.
In my opinion, don't make thick bindings for your C libraries. It just makes it harder to use them.
For example I don't really like the OpenGL thick bindings for Ada because using them is so wildly different than the C examples that I can't really figure out how to do what I want to do.
It's just everything and the kitchen sink, I'm afraid, from a NATS server and client library over cross-platform GUIs including mobile, common file format reading and writing like Excel, Word, and PDF to post-quantum cryptography. I'm unfortunately in the business of Lego-brick software development using well-maintained 3rd-party libraries whenever possible. It's the same with CommonLisp, I love it but in the end I'd have to write too many things on my own to be productive in it.
I envy people who can write foundational, self-contained software. It's so elegant.
I found the inclusion of arrays indexed on arbitrary types in the feature table as a benefit of Ada surprising. That sounds like a dictionary type, which is in the standard library of nearly every popular language. Rust includes two.
I think they're focused very much specifically on the built-in array type. Presumably Ada is allowed to say eggs is an array and the index has type BirdSpecies so eggs[Robin] and eggs[Seagull] work but eggs[5] is nonsense.
Rust is OK with you having a type which implements Index<BirdSpecies> and if eggs is an instance of that type it's OK to ask for eggs[Robin] while eggs[5] won't compile, but Rust won't give you an "array" with this property, you'd have to make your own.
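A rough sketch of what that looks like on the Rust side; the types here are made up for illustration, not from any real crate:

```rust
use std::ops::Index;

#[derive(Clone, Copy)]
enum BirdSpecies { Robin, Seagull, Pigeon }

// A hand-rolled "array indexed by an enum", since the built-in array type
// only takes usize indices.
struct EggCounts([u32; 3]);

impl Index<BirdSpecies> for EggCounts {
    type Output = u32;
    fn index(&self, species: BirdSpecies) -> &u32 {
        &self.0[species as usize]
    }
}

fn main() {
    let eggs = EggCounts([3, 1, 4]);
    println!("{}", eggs[BirdSpecies::Robin]); // fine
    // println!("{}", eggs[5]);               // does not compile: no Index<i32> impl
}
```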
My guess is that this makes more sense in a language where user defined types are allowed to be a subset of say a basic integer type, which I know Ada has and Rust as yet does not. If you can do that, then array[MyCustomType] is very useful.
I call out specifically User Defined types because, Rust's NonZeroI16 - the 16-bit integers except zero - is compiler-only internal magic, if you want a MultipleOfSixU32 or even U8ButNotThreeForSomeReason that's not "allowed" and so you'd need nightly Rust and an explicit "I don't care that this isn't supported" compiler-only feature flag in your source. I want to change this so that anybody can make the IntegersFiveThroughTwelveU8 or whatever, and there is non zero interest in that happening, but I'd have said the exact same thing two years ago so...
I really don't understand this so I hope you won't mind explaining it. If I would have the type U8ButNotThreeForSomeReason wouldn't that need a check at runtime to make sure you are not assigning 3 to it?
At runtime it depends. If we're using arbitrary outside integers which might be three, we're obliged to check yes, nothing is for free. But perhaps we're mostly or entirely working with numbers we know a priori are never three.
NonZero<T> has a "constructor" named new() which returns Option<NonZero<T>>, so that None means nope, this value isn't allowed because it's zero. But unwrapping or expecting an Option is `const`, so NonZeroI8::new(9).expect("Nine is not zero") will compile and produce a constant that the type system knows isn't zero.
Three in particular does seem like a weird choice, I want Balanced<signed integer> types such as BalancedI8 which is the 8-bit integers including zero, -100 and +100 but crucially not including -128 which is annoying but often not needed. A more general system is envisioned in "Pattern Types". How much more general? Well, I think proponents who want lots of generality need to help deliver that.
Option<U8ButNotThreeForSomeReason> would have a size of 2 bytes (1 for the discriminant, 1 for the value) whereas Option<NonZeroU8> has a size of only 1 byte, thanks to some special sauce in the compiler that you can't use for your own types. This is the only "magic" around NonZero<T> that I know of, though.
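A small check of that niche optimisation, using only std:

```rust
use std::mem::size_of;
use std::num::NonZeroU8;

fn main() {
    // The compiler knows NonZeroU8 can never be 0, so Option<NonZeroU8>
    // can use 0 as the None representation...
    assert_eq!(size_of::<Option<NonZeroU8>>(), 1);
    // ...whereas Option<u8> needs a separate discriminant byte.
    assert_eq!(size_of::<Option<u8>>(), 2);

    // new() returns None only for the forbidden value.
    assert!(NonZeroU8::new(0).is_none());
    let nine = NonZeroU8::new(9).expect("nine is not zero");
    println!("{nine}");
}
```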
You can make an enum, with all 255 values spelled out, and then write lots of boilerplate, whereupon Option<U8ButNotThreeForSomeReason> is also a single byte in stable Rust today, no problem.
That's kind of silly for 255 values, and while I suspect it would work, it's clearly not a reasonable design for 16 bits, let alone 32 bits, where I suspect the compiler will reject this wholesale.
Another trick you can do, which will also work just fine for bigger types is called the "XOR trick". You store a NonZero<T> but all your adaptor code XORs with your single not-allowed value, in this case 3 and this is fairly cheap on a modern CPU because it's an ALU operation, no memory fetches except that XOR instruction, so often there's no change to bulk instruction throughput. This works because only 3 XOR 3 == 0, other values will all have bits jiggled but remain valid.
Because your type's storage is the same size, you get all the same optimisations and so once again Option<U8ButNotThreeForSomeReason> is a single byte.
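A sketch of that XOR trick; the type name and API are made up for illustration:

```rust
use std::num::NonZeroU8;

// A u8 that can hold anything except 3, stored so that Option<> of it
// still fits in one byte by piggybacking on NonZeroU8's niche.
#[derive(Clone, Copy)]
struct U8ButNotThree(NonZeroU8);

impl U8ButNotThree {
    const FORBIDDEN: u8 = 3;

    fn new(v: u8) -> Option<Self> {
        // v ^ 3 == 0 exactly when v == 3, so NonZeroU8 rejects only 3.
        NonZeroU8::new(v ^ Self::FORBIDDEN).map(U8ButNotThree)
    }

    fn get(self) -> u8 {
        // XOR again to recover the original value.
        self.0.get() ^ Self::FORBIDDEN
    }
}

fn main() {
    assert!(U8ButNotThree::new(3).is_none());
    assert_eq!(U8ButNotThree::new(0).unwrap().get(), 0);
    // The niche propagates through the single-field struct.
    assert_eq!(std::mem::size_of::<Option<U8ButNotThree>>(), 1);
}
```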
The feature of being able to use a discrete range as an array index is very helpful when you have a dense map (most keys will be used) or you also want to be able to iterate over a sequential block of memory (better performance than a dictionary will generally give you, since they don't usually play well with caches).
The Americans with Disabilities Act doesn't cover subtyping the index type into an array. Ada, the language, does though.
EDIT: Seems I'm getting downvoted, do people not know that ADA is not the name of the programming language? It's Ada, as in Ada Lovelace, whose name was also not generally SHOUTED as ADA.
There does seem to be a strain of weird language fanatics who insist all programming language names must be shouted, so they'll write RUST and ADA, and presumably JAVA and PYTHON, maybe it's an education thing, maybe they're stuck in an environment with uppercase only like a 1960s FORTRAN programmer ?
Similarly, and less explicably, are people who program in "C" and don't understand when people mention the oddity. Do people not see quotes? Do they just add them and not realize?
Very nice text. As I am very sceptical of Rust, I appreciate the comparison to a language I know better. I was not aware that there is no formal spec for Rust. Isn't that a problem if rustc makes changes?
1. There is a spec now, Ferrocene donated theirs, and the project is currently integrating it
2. The team takes backwards compatibility seriously, and uses tests to help ensure the lack of breakage. This includes tests like “compile the code being used by Linux” and “compile most open source Rust projects” to show there’s no regressions.
Ferrocene is a specification but it’s not a formal specification. [Minirust](https://github.com/minirust/minirust) is the closest thing we have to a formal spec but it’s very much a work-in-progress.
It's a good enough spec to let rustc work with safety critical software, so while something like minirust is great, it's not necessary, just something that's nice to have.
For implementers of third-party compilers, researchers of the Rust programming language, and programmers who write unsafe code, this is indeed a problem. It's bad.
For the designers of Rust, "no formal specification" allows them to make changes as long as it is not breaking. It's good.
Medical or military domains often require the software stack/tooling to be certified following certain rules. I know most of these certifications are bogus, but how is that handled with Rust?
Ferrocene provides a certified compiler (based on a spec they've written documenting how it behaves) which is usable for many uses cases, but it obviously depends what exactly your specific domain needs.
Ada has some really good ideas which its a shame never took off or got used outside of the safety critical community that mostly used it. The ability to make number types that were limited in their range is really useful for certain classes of bugs. Spark Ada was a relatively easy substandard to learn and apply to start to develop software that was SIL 4 compliant.
I can't help but feel that we just went through a huge period of growth at all costs and now there is a desire to return, after 30-years of anything goes, to trying to make software that is safer again. Would be nice to start to build languages based on all the safety learnings over the decades to build some better languages, the good ideas keep getting lost in obscure languages and forgotten about.
Yes, we re-invent the wheel. The more time you spend writing software for a living, the more you will see the wheel re-invented. But Ada and Rust are safe under different definitions of safety. I view Rust as having a more narrow definition of safety, but a very important notion of safety, and executed with brutal focus. While Ada's definition of safety being broader, but better suited to a small subset of use cases.
I write Rust at work. I learned Ada in the early 1990s as the language of software engineering. Back then a lot of the argument against Ada was it was too big, complex, and slowed down development too much. (Not to mention the validating Ada 83 compiler I used cost about $20,000 a seat in today's money). I think the world finally caught up with Ada and we're recognizing that we need languages every bit as big and complex, like Rust, to handle issues like safe, concurrent programming.
I don't know Ada; care to explain why its definition of safety is broader than Rust?
I agree Rust's safety is very clearly (and maybe narrowly) defined, but it doesn't mean there isn't focus on general correctness - there is. The need to define safety precisely arises because it's part of the language (`unsafe`).
Rust’s built-in notion of safety is intentionally focused on memory + data-race properties at compile time. logic, timing, and determinism are left to libraries and design. Ada (with SPARK & Ravenscar) treats contracts, concurrency discipline, and timing analysis as first-class language/profile concerns hence a broader safety envelope.
You may choose to think from safety guarantee hierarchy perspective like (Bottom = foundation... Top = highest assurance)
Layer 6: FORMAL PROOFS (functional correctness, no RT errors). Ada/SPARK: built-in (GNATprove). Rust: external tools (Kani, Prusti, Verus).
Layer 5: TIMING / REAL-TIME ANALYSIS (WCET, priority bounds). Ada: Ravenscar profile + scheduling analysis. Rust: frameworks (RTIC, Embassy).
Layer 4: CONCURRENCY DETERMINISM (predictable schedules). Ada: protected objects + task priorities. Rust: data-race freedom; determinism via design.
Layer 3: LOGICAL CONTRACTS & INVARIANTS (pre/post, ranges). Ada: Pre/Post aspects, type predicates (built-in). Rust: type states, assertions, external DbC tools.
Layer 2: TYPE SAFETY (prevent invalid states). Ada: range subtypes, discriminants. Rust: newtypes, enums, const generics.
Layer 1: MEMORY SAFETY & DATA-RACE FREEDOM. Ada: runtime checks; SPARK proves statically. Rust: compile-time via ownership + Send/Sync.
As the OP mentioned, restricted number ranges:
Ada, or at least GNAT, also supports compile-time dimensional analysis (unit checking). I may be biased, because I mostly work with engineering applications, but I still do not understand why in other languages it is delegated to 3rd party libraries.
https://docs.adacore.com/gnat_ugn-docs/html/gnat_ugn/gnat_ug...
F# can do this too.
https://learn.microsoft.com/en-us/dotnet/fsharp/language-ref...
A whole zoo of dimensional analysis in programming languages: https://www.gmpreussner.com/research/dimensional-analysis-in...
Nice, didn't know that. I keep seeing praise for F# so I should finally take a look.
One of the best software design books I've read is "Domain Modelling Made Functional", from Scott Wlaschin. It's about F#, but it remains a good read for any software programmer, whatever your language. And it's easily understandable, you can almost read it like a novel, without focusing too much. Though what may need some brains is applying the functional concepts of this book with your favourite language...
#f!!
Nim (https://nim-lang.org), mentioned elsethread Re: numeric ranges like Ada, only needs a library for this: https://github.com/SciNim/Unchained
FWIW, physical dimensions like meters were the original apples-to-oranges type system that pre-dates all modern notions of things beyond arithmetic. I'm a little surprised it wasn't added to early FORTRAN. In a different timeline, maybe. :)
I think what is in "the" "stdlib" or not is a tricky question. For most general/general purpose languages, it can be pretty hard to know even the probability distribution of use cases. So, it's important to keep multiple/broad perspectives in mind as your "I may be biased" disclaimer. I don't like the modern (well, it kind of started with CTAN where the macros seemed meant more for copy-paste and then CPAN where it was not meant for that) trend toward dozens to hundreds of micro-dependencies, either, though. I think Python, Node/JS, and Rust are all known for this.
Nim was inspired by Ada & Modula, and has subranges [1]:
Then at compile time:
[1] https://nim-lang.org/docs/tut1.html#advanced-types-subranges
I know quite some people in the safety/aviation domain who kind of dislike the subranges, as they insert run-time checks that are not easily traceable to source code, thus escaping the trifecta of requirements/tests/source code (which all must be traceable to/covered by each other).
Weirdly, when going through the higher assurance levels in aviation, defensive programming becomes more costly, because it complicates the satisfaction of assurance objectives. SQLite (whose test suite reaches MC/DC coverage, the most rigorous coverage criterion asked for in aviation) has a nice paragraph on the friction between MC/DC and defensive programming:
https://www.sqlite.org/testing.html#tension_between_fuzz_tes...
This is basically excuses being made by C people for use of a language that wasn't designed for and isn't suitable for safety critical software. "We didn't even need that feature!"
Ada's compile time verification is very good. With SPARK it's even better.
Runtime constraints are removable via Pragma so there's no tradeoff at all with having it in the language. One Pragma turns them into static analysis annotations that have no runtime consequences.
Ideally, a compiler can statically prove that values stay within the range; it's no different than proving that values of an enumeration type are valid. The only places where a check is needed are conversions from other types, which are explicit and traceable.
If you have, say, a and b each restricted to u8 in 0..100 and you compute c = a + b:
The type of c could be u8 in 0..200. If you have holes in the middle, the same applies. Which means that if you want to make c a u8 between 0..100 you'd have to explicitly clamp/convert/request that, which would have to be a runtime check.
In your example we have enough information to know that the addition is safe. In SPARK, if that were a function with a and b as arguments, for instance, and you don't know what's being passed in, you make it a pre-condition. Then it moves the burden of proof to the caller to ensure that the call is safe.
But obviously the result of a + b is [0..200], so an explicit cast, or an assertion, or a call to clamp() is needed if we want to put it back into a [0..100].
Comptime constant expression evaluation, as in your example, may suffice for the compiler to be able to prove that the result lies in the bounds of the type.
That's prohibitively expensive in the general case when external input is used and/or when arithmetic is used on the values (the main difference to sum types).
But the number type's value can change at runtime as long as it stays within the range, so it may not always be possible to check at compile time.
The branch of mathematics you need to compute the bounds of the result of an operation is called Interval Arithmetic [1]. I'm not sure of where its limits are (hah), but at the very least it provides a way to know that [0,2] / [2,4] must be within [0,1].
I see there are some hits for it on lib.rs, but I don't know how ergonomic they are.
[1] https://en.wikipedia.org/wiki/Interval_arithmetic
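A minimal, hand-rolled sketch of the idea (not one of the published crates); division here assumes the divisor interval does not contain zero:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Interval { lo: f64, hi: f64 }

impl Interval {
    fn add(self, other: Interval) -> Interval {
        // The sum of two intervals is just the sum of the endpoints.
        Interval { lo: self.lo + other.lo, hi: self.hi + other.hi }
    }

    fn div(self, other: Interval) -> Interval {
        // Assumes 0 is not in `other`; take the min/max over all endpoint quotients.
        let c = [self.lo / other.lo, self.lo / other.hi,
                 self.hi / other.lo, self.hi / other.hi];
        Interval {
            lo: c.iter().cloned().fold(f64::INFINITY, f64::min),
            hi: c.iter().cloned().fold(f64::NEG_INFINITY, f64::max),
        }
    }
}

fn main() {
    let a = Interval { lo: 0.0, hi: 2.0 };
    let b = Interval { lo: 2.0, hi: 4.0 };
    // [0,2] / [2,4] must lie within [0,1].
    assert_eq!(a.div(b), Interval { lo: 0.0, hi: 1.0 });
}
```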
I like how better more reliable code is more expensive to certify and the problem is the code and not the certification criteria/process being flawed.
> as it inserts run-time checks that are not easily traceable to source code
Modifying a compiler to emit a message at every point that a runtime check is auto-inserted should be pretty simple. If this was really that much of an issue it would have been addressed by now.
Can you help me understand the context in which this would be far more beneficial than having a validation function, like this in Java:
To elaborate on the sibling's compile time vs run time answer: if it fails at compile time you'll know it's a problem, and then have the choice to not enforce that check there.
If it fails at run time, it could be the reason you get paged at 1am because everything's broken.
It’s not just about safety, it’s also about speed. For many applications, having to check the values during runtime constantly is a bottleneck they do not want.
Like other sibling replies said, subranges (or more generally "Refinement types") are more about compile-time guarantees. Your example provides a good example of a potential footgun: a post-validation operation might unknowingly violate an invariant.
It's a good example for the "Parse, don't validate" article (https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...). Instead of creating a function that accepts `int` and returns `int` or throws an exception, create a new type that enforces "`int` less than equal 200"
Something like this is possible to simulate with Java's classes, but it's certainly not ergonomic and very much unconventional. This is beneficial if you're trying to create a lot of compile-time guarantees, reducing the risk of doing something like `hmmm = works + 1;`. This kind of compile-time type voodoo requires a different mindset compared to cargo-cult Java OOP. Whether something like this is ergonomic or performance-friendly depends on the language's support itself.
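A minimal "parse, don't validate" sketch in Rust, with made-up names just to mirror the discussion above: the invariant lives in the type, and arithmetic on the raw value forces the range check to happen again when you convert back.

```rust
// A value that is guaranteed to be <= 200 once constructed.
#[derive(Clone, Copy, Debug)]
struct AtMost200(i32);

impl AtMost200 {
    fn new(v: i32) -> Result<Self, String> {
        if v <= 200 {
            Ok(AtMost200(v))
        } else {
            Err(format!("{v} is greater than 200"))
        }
    }

    fn get(self) -> i32 {
        self.0
    }
}

fn main() {
    let works = AtMost200::new(150).unwrap();
    // `works.get() + 1` is a plain i32 again; re-wrapping it re-checks the range.
    let hmmm = AtMost200::new(works.get() + 1);
    println!("{hmmm:?}");
}
```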
It’s a question of compile time versus runtime.
Yeah it’s something that code would compile down to. You can skip Java and write assembly directly, too.
What happens when you add 200+1 in a situation where the compiler cannot statically prove that this is 201?
Your example also gets evaluated at comptime. For more complex cases I wouldn't be able to tell you, I'm not the compiler :) For example, this gets checked:
If it cannot statically prove it at comptime, it will crash at runtime during the type conversion operation when you run it.
Exactly this. Fails at runtime. Consider rather a different example: say the programmer thought the age was constrained to 110 years. Now, as soon as a person is aged 111, the program crashes. A stupid mistake in a programmer's assumption turns into a program crash.
Why would you want this?
I mean, we've recently discussed on HN how most sorting algorithms have a bug for using ints to index into arrays when they should be using (at least) size_t. Yet, for most cases, it's ok, because you only hit the limit rarely. Why would you want to further constrain the field, would it not just be the source of additional bugs?
Once the program is operating outside of the bounds of the programmers assumption, it’s in an undefined state that may cause a crash to happen at a later point of time in a totally different place.
Making the crash happen at the same time and space as the error means you don’t have to trace a later crash back to the root cause.
This makes your system much easier to debug at the expense of causing some crashes that other systems might not have. A worthy trade off in the right context.
Out of bounds exception is ok to crash the program. User input error is not ok to crash the program.
I could go into many more examples but I hope I am understood. I think these hard-coded definition of ranges at compile time are causes of far more issues than they solve.
Let's take a completely different example: size of a field in a database for a surname. How much is enough? Turns out 128 varchars is not enough, so now they've set it to 2048 (not a project I work(ed) on, but am familiar with). Guess what? Not in our data set, but theoretically, even that is not enough.
> Out of bounds exception is ok to crash the program. User input error is not ok to crash the program.
So you validate user input, we've known how to do that for decades. This is a non-issue. You won't crash the program if you require temperatures to be between 0 and 1000 K and a user puts in 1001, you'll reject the user input.
If that user input crashes your program, you're not a very good programmer, or it's a very early prototype.
I think, if I am following things correctly, you will find that there's a limit to the "validate user input" argument - especially when you think of scenarios where multiple pieces of user input are gathered together and then have mathematical operations applied to them.
eg. If the constraint is 0..200, and the user inputs one value that is being multiplied by our constant, it's trivial to ensure the user input is less than the range maximum divided by our constant.
However, if we are having to multiply by a second, third... and so on.. piece of user input, we get to the position where we have to divide our currently held value by a piece of user input, check that the next piece of user input isn't higher, and then work from there (this assumes that the division hasn't caused an exception, which we will need to ensure doesn't happen.. eg if we have a divide by zero going on)
I mean, yeah. If you do bad math you'll get bad results and potentially crashes. I was responding to someone who was nonsensically ignoring that we validate user input rather than blindly putting it into a variable. Your comment seems like a non sequitur in this discussion. It's not like the risk you describe is unique to range constrained integer types, which is what was being discussed. It can happen with i32 and i64, too, if you write bad code.
Hmm, I was really pointing at the fact that once you get past a couple of pieces of user input, all the validation in the world isn't going to save you from the range constraints.
Assuming you want a good faith conversation, then the idea that there's bad math involved seems a bit ludicrous
I believe that the solution here is to make crashes "safe" eg with a supervisor process that should either never crash or be resumed quickly and child processes that handle operations like user inputs.
This, together with the fact that the main benefit of range types is on the consumption side (i.e. knowing that a PositiveInt is not 0), means it is doable to use try-catch or an equivalent operation at creation time.
Happens with DB constraints all the time, user gets an error and at least his session, if not whole process, crashes. And yes that too is considered bad code that needs fixing.
> Stupid mistake by a programmer assumption turns into a program crash.
I guess you can just catch the exception in Ada? In Rust you might instead manually check the age validity and return Err if it's out of range. Then you need to handle the Err. It's the same thing in the end.
> Why would you want to further constrain the field
You would only do that if it's a hard requirement (this is the problem with contrived examples, they make no sense). And in that case you would also have to implement some checks in Rust.
Also, I would be very interested to learn the case for hard requirement for a range.
In almost all the cases I have seen it eventually breaks out of confinement. So, it has to be handled sensibly. And, again, in my experience, if it's built into constraints, it invariably is not handled properly.
Consider the size of the time step in a numerical integrator of some chemical reaction equation, if it gets too big the prediction will be wrong and your chemical plant could explode.
So too-big time steps cannot be used, but constant-sized steps are wasteful. Seems good to know the integrator can never quietly be wrong, even if you have to pay the price that the integrator could crash.
Exactly, but how do you catch the exception? One exception catch to catch them all, or do you have to distinguish the types?
And yes... error handle on the input and you'd be fine. How would you write code that is cognizant enough to catch out-of-range for every +1 done on the field? Seriously, the production code then devolves into copying the value into something else, where operations don't cause unexpected exceptions. Which is a workaround for a silly restriction that should not reside at the runtime level.
> Why would you want this?
Logic errors should be visible so they can be fixed?
How does this work for dynamic casting? Say like if an age was submitted from a form?
I assume it’s a runtime error or does the compiler force you to handle this?
If you're using SPARK, it'll catch at compile time if there's ever a possibility that it would fit within that condition. Otherwise it'll throw an exception (constraint_error) during runtime for you to catch.
Isn't this just Design By Contract from Eiffel in another form?
No, range types are at best a very limited piece of DbC. Design by Contract lets you state much more interesting things about your program. It's also available in Ada, though.
https://learn.adacore.com/courses/intro-to-ada/chapters/cont...
> The ability to make number types that were limited in their range is really useful for certain classes of bugs.
This is a feature I use a lot in C++. It is not part of the standard library but it is trivial to programmatically generate range-restricted numeric types in modern C++. Some safety checks can even be done at compile-time instead of runtime.
It should be a standard feature in programming languages.
I've never come across any range restricting constructions in C++ projects in the wild before. It truly is a shame, I think it's something more programmers should be aware of and use. Eliminating all bounds checking and passing that job to the compiler is pretty killer and eliminates whole classes of bugs.
This is an unfortunate reality. C++ has evolved into a language capable of a surprisingly deep compile-time verification but almost no one uses that capability. It reflects somewhat negatively on the C++ developer community that problems easily solved within the language are routinely not solved, though the obsession with backward compatibility with old versions of the language plays a role. If you fully leverage it, I would argue that recent versions of C++ are actually the safest systems language. Nonetheless, almost no one has seen code bases that leverage that verification capability to its maximum. Most people have no clue what it is capable of.
There is the wisdom that it is impossible to deliver C++ without pervasive safety issues, for which there are many examples, and on the other hand there are people delivering C++ in high-assurance environments with extremely low defect rates without heroic efforts. Many stories can be written in that gap. C++ can verify many things that are not verifiable in Rust, even though almost no one does.
It mostly isn’t worth the argument. For me, C++20 reached the threshold where it is practical to design code where large parts can be formally verified in multiple ways. That’s great, this has proven to be robust in practice. At the same time, there is an almost complete absence of such practice in the C++ literature and zeitgeist. These things aren’t that complex, the language users are in some sense failing the language.
The ability to codegen situationally specific numeric types is just scratching the surface. You can verify far weirder situational properties than numeric bounds if you want to. I’m always surprised by how few people do.
I used to be a C++ hater. Modern C++ brought me back almost purely because it allows rich compile-time verification of correctness. C++11 was limited but C++20 is like a different world.
> C++ can verify many things that are not verifiable in Rust, even though almost no one does.
Do you have an example of this? I'm curious where C++ exceeds Rust in this regard.
> The ability to make number types that were limited in their range is really useful for certain classes of bugs.
Yes! I would kill to get Ada's number range feature in Rust!
It is worked on under the term "pattern types" mainly by Oli oli-obk Scherer I think, who has an Ada background.
Can't tell you what the current state is but this should give you the keywords to find out.
Also, here is a talk Oli gave in the Ada track at FOSDEM this year: https://hachyderm.io/@oli/113970047617836816
AFAIK the current status is that it's internal to std (used to implement `NonNull` and friends) and not planned to be exposed.
There were some talks about general pattern types, but it's not even approved as an experiment, let alone an RFC or stabilization.
That feature is actually from Pascal, and Modula-2, before making its way into Ada.
For some strange reason people always relate to Ada for it.
I would guess that Ada is simply more known. Keep in mind that tech exploded in the past ~3.5 decades whereas those languages are much older and lost the popularity contest. If you ask most people about older languages, the replies other than the obvious C and (kind of wrong but well) C++ are getting thin really quickly. COBOL, Ada, Fortran, and Lisp are probably what people are aware of the most, but other than that?
You've forgotten about BASIC, SNOBOL, APL, Forth, and PL/1. There were many interesting programming languages back then. Good times!
The first five languages I learned back in the 70s: FORTRAN, Pascal, PL/I, SNOBOL, APL. Then I was an Ada and Icon programmer in the 80s. In the 90s, it was C/C++ and I just never had the enthusiasm for it.
Icon (which came from SNOBOL) is one of the few programming languages I consider to embody truly new ideas. (Lisp, Forth, and Prolog are others that come to mind.)
Icon is an amazing language and I wish it was better known.
You probably know this but, for anyone else who is interested, the easiest way to get a feel for Icon nowadays may be through its descendant Unicon which is available at unicon.org.
I found Pascal more readable as a budding programmer. Later on, C's ability to just get out of the way to program what I wanted trumped the Pascal's verbosity and opinionatedness.
I admit that the terseness of the syntax of C can be off-putting. Still, it's just syntax; I am sorry you were dissuaded by it.
True.
I dabbled in some of them during some periods when I took a break from work. And also some, during work, in my free time at home.
Pike, ElastiC (not a typo), Icon, Rebol (and later Red), Forth, Lisp, and a few others that I don't remember now.
Not all of those are from the same period, either.
Heck, I can even include Python and Ruby in the list, because I started using them (at different times, with Python being first) much before they became popular.
For me it's because I learned Ada in college.
18 year old me couldn't appreciate how beautiful a language it is but in my 40s I finally do.
2000-2005 College was teaching Ada?
2005-2010 my college most interesting (in this direction) language was Haskell. I don't think that there was any other language (like Ada) being taught)
Yes, I learned it in a course that surveyed a bunch of different languages like Ada, Standard ML, and Assembly
Ada is sometimes taught as part of a survey course in Programming Languages. That’s how I learned a bit about it.
Turbo Pascal could check ranges on assignment with the {$R+} directive, and Delphi could check arithmetic overflow with {$Q+}. Of course, nobody wanted to waste the cycles to turn those on :)
Most Pascal compilers could do that actually.
Yeah not wanting to waste cycles is how we ended up with the current system languages, while Electron gets used all over the place.
I would argue that was one of the reasons why those languages lost.
I distinctly remember arguments for functions working on array of 10. Oh, you want array of 12? Copy-paste the function to make it array of 12. What a load of BS.
It took Pascal years to drop that constraint, but by then C had already won.
I never ever wanted the compiler or runtime to check a subrange of ints. Ever. Overflow as program crash would be better, which I do find useful, but arbitrary ranges chosen by programmer? No thanks. To make matters worse, those are checked even by intermediate results.
I realize this opinion is based only on my experience, so I would appreciate a counter example where it is a benefit (and yes, I worked on production code written in Pascal, a French variant even, and migrating it to C made it hilariously more readable and maintainable).
Thankfully instead of overflow, C gets you the freedom of UB based optimizations.
Funny :)
It still results in overflow and while you are right that it's UB by the standard, it's still pretty certain what will happen on a particular platform with a particular compiler :)
No, optimizing compilers don't translate overflow to platform-specific behavior for signed integers - since it's UB they'll freely make arithmetic or logic assumptions that can result in behavior that can't really be humanly predicted without examining the generated machine code.
They are free to but not required. You can pick a different compiler, or you can configure your compiler to something else, if it provides such options.
I always found it surprising that people did not reject clang for aggressively optimizing based on UB, but instead complained about the language while still using clang with -O3.
Programmers don’t have much choice, since most compilers don’t really provide an option / optimization level that results in sane behavior for common UB footguns while providing reasonable levels of performance optimization.
The one exception I know of is CompCert but it comes with a non-free license.
I definitely do think the language committee should have constrained UB more to prevent standards-compliant compilers from generating code that completely breaks the expectations of even experienced programmers. Instead the language committees went the opposite route, removing C89/90 wording from subsequent standards that would have limited what compilers can do for UB.
The C89/C90 wording change story is a myth. And I am not sure I understand your point about CompCert. The correctness proof of CompCert covers programs that have no UB. And programmers do have some choice and also some voice. But I do not see them pushing for changes a lot.
The choice is going for other languages because they don't believe WG14, or WG21 will ever sort this out, as many are doing nowadays.
FYI all major C compilers have flags to enforce the usual two's-complement rollover, even if you enable all optimizations. I always enable at least -fwrapv, even when I know that the underlying CPU has well defined overflow behavior (gcc knows this so the flag presumably becomes a no-op, but I've never validated that thought).
gcc has -fwrapv and -f[no-]strict-overflow, clang copied both, and MSVC has had a plethora of flags over the years (UndefIntOverflow, for example) so your guess is as good as mine which one still works as expected.
> I never ever wanted the compiler or runtime to check a subrange of ints. Ever.
Requiring values to be positive, requiring an index to fall within the bounds of an array, and requiring values to be non-zero so you never divide by zero are very, very common requirements and a common source of bugs when the assumptions are violated.
compile time user config checking?
Sorry? That's not possible...
I've seen it plenty of times. safety critical controllers have numeric bounds of stability. why wouldn't you want to encode that into the type
There is an RFC but I guess the work stopped.
As a sibling comment[0] mentioned, pattern types are actively being worked on.
[0] https://news.ycombinator.com/item?id=45474777
Oh. I thought it stalled since there was a long time without activity.
The thing I miss most from Ada is its clear conception of object-orientation. Every other language bundles all of OOP into the "class" idea, but Ada lets you separately opt in to message sending, dynamic dispatch, subtyping, generics, etc. In Ada, those are separate features that interact usefully, rather than one big bundle.
In my personal experience it's not just safety. Reliability of the produced software is also a big part.
IME, being able to express constraints in a type system lends itself to producing better quality code. A simple example from my experience with Rust and Go is mutex handling: Rust just won't let you leak a guard handle, while Go happily lets you run into a deadlock.
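A minimal sketch of what that looks like on the Rust side, using only std:

```rust
use std::sync::Mutex;

fn main() {
    let counter = Mutex::new(0u32);

    {
        // lock() returns a guard; the lock is released when the guard is
        // dropped at the end of this scope, so unlocking cannot be
        // forgotten the way an un-paired Unlock() call can be.
        let mut guard = counter.lock().unwrap();
        *guard += 1;
    } // guard dropped here, lock released

    println!("{}", counter.lock().unwrap());
}
```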
>Ada has some really good ideas which its a shame never took off or got used outside of the safety critical community that mostly used it. The ability to make number types that were limited in their range is really useful for certain classes of bugs.
As pjmlp says in a sibling comment, Pascal had this feature, from the beginning, IIRC, or from an early version - even before the first Turbo Pascal version.
If I am not wrong, you could do a zero-cost abstraction in C++ and use user-defined literals if you wish for nice syntax.
It doesn’t really compete in the same space as Ada or Rust but C# has range attributes that are similar, the only downside is you have to manually call the validation function unless you are using something like ASP.NET that does it automatically at certain times.
30+ years ago I was programming in Ada, and I feel the same way and have been repeatedly disappointed. Maybe this time around things will be different.
The author indicates some obvious differences, including the fact that Ada has a formal spec and rust doesn't -- rustc seems to be both in flux as well as the reference implementation. This might matter if you're writing a new compiler or analyzer.
But the most obvious difference, and maybe most important to a user, was left unstated: the adoption and ecosystem such as tooling, libraries, and community.
Ada may have a storied success history in aerospace and life safety, etc, and it might have an okay standard lib which is fine for AOC problems and maybe embedded bit poking cases in which case it makes sense to compare to Rust. But if you're going to sit down for a real world project, ie distributed system or OS component, interfacing with modern data formats, protocols, IDEs, people, etc is going to influence your choice on day one.
Rust now has a donated spec that was provided by Ferrocene. This spec style was influenced by the Ada spec. It is available publicly now at https://rust-lang.github.io/fls/ .
This is part of Ferrocene's effort to provide a safety-certified compiler. And those are already available now.
This is only meaningful if Rust compiler devs give any guarantees about never breaking the spec and always being able to compile code that adheres to this spec.
Why so?
Specs for other languages are also for a specific version/snapshot.
It's also a specific version of a compiler that gets certified, not a compiler in perpetuity, no matter what language.
That's not how it works for most language standards, though. Most language standards are prescriptive, while Rust is descriptive.
Usually the standard comes first, compiler vendors implement it, and between releases of the spec the language is fixed. Using Ada as an example, there was Ada 95 and Ada 2005, but between 95 and 2005 there was only Ada 95. There was no in-progress version, the compiler vendors weren't making changes to the language, and an Ada 95 compiler today compiles the same language as an Ada 95 compiler 30 years ago.
Looking at the changelog for the Rust spec (https://rust-lang.github.io/fls/changelog.html), it's just the changelog of the language as each compiler verion is released, and there doesn't seem to be any intention of supporting previous versions. Would there be any point in an alternative compiler implementing "1.77.0" of the Rust spec?
And the alternative compiler implementation can't start implementing a compiler for version n+1 of the spec until that version of rustc is released because "the spec" is just "whatever rustc does", making the spec kind of pointless.
> Usually the standard comes first, compiler vendors implement it, and between releases of the spec the language is fixed.
This is not how C or C++ were standardized, nor most computer standards in the first place. Usually, vendors implement something, and then they come together to agree upon a standard second.
When updating standards, sometimes things are put in the standard before any implementations, but that's generally considered an antipattern for larger designs. You want real-world evaluation of the usefulness of something before it's been standardized.
Because otherwise the spec is just words on a paper, and the standard is just "whatever the compiler does is what it supposed to do". The spec codifies the intentions of the creators separately from the implementation.
In Rust, there is currently only one compiler, so it seems like there's no problem.
There being only one compiler is exactly the problem.
How is this different from the existing situation that Rust remains compatible since Rust 1.0 over a decade ago?
Rust doesn’t have quite as strong compatibility guarantees. For example, it’s not considered a NC-breaking change to add new methods to standard library types, even though this can make method resolution ambiguous for programs that had their own definitions of methods with the same name. A C++ implementation claiming to support C++11 wouldn’t do that, they’d use ifdefs to gate off the new declarations when compiling in C++11 mode.
That's a good point about the #ifdefs thanks.
Too late to edit but I meant BC not NC
Thanks, that was easily the most confusing thing and I was like well... I understand everything else, if it's very important what exactly "NC-breaking" means I'm sure I will realise later.
By that criterion there's no meaningful C++ compiler/spec.
How so? There are compiler-agnostic C++ specs, and compiler devs try to be compatible with it.
What the GP is suggesting is that the rust compiler should be written and then a spec should be codified after the fact (I guess just for fun?).
> compiler devs try to be compatible with it.
You have to squint fairly hard to get here for any of the major C++ compilers.
I guess maybe someone like Sean Baxter will know the extent to which, in theory, you can discern the guts of C++ by reading the ISO document (or, more practically, the freely available PDF drafts, essentially nobody reads the actual document, no not even Microsoft bothers to spend $$$ to buy an essentially identical PDF)
My guess would be that it's at least helpful, but nowhere close to enough.
And that's ignoring the fact that the popular implementations do not implement any particular ISO standard, in each case their target is just C++ in some more general sense, they might offer "version" switches, but they explicitly do not promise to implement the actual versions of the ISO C++ programming language standard denoted by those versions.
Neither the Rust nor the Ada spec is formal, in the sense of consumable by a theorem prover. AFAIK for Ada Spark, there is of course assumptions on the language semantics built-in to Spark, but: these are nowhere coherently written down in a truly formal (as in machine-readable) spec.
There was also Larch/Ada [0], which was a formally proved subset of Ada, developed for NASA [1].
[0] https://apps.dtic.mil/sti/tr/pdf/ADA249418.pdf
[1] https://ntrs.nasa.gov/citations/19960000030
What even is the most complex programming language with a fully machine-checkable spec? Are there actually practically useful ones? I know of none.
One candidate is ATS [1].
Another, https://cakeml.org/
[1]: https://en.wikipedia.org/wiki/ATS_(programming_language)
For the seL4 proofs a subset of C was formalized, for example.
(Already mentioned) CakeML would be another example, together maybe with its Pancake sibling.
Also: WebAssembly!
There's a formally verified C compiler; IIRC the frontend isn't verified, but if you define the language as the data structures that are the inputs to whatever is formally verified, then I guess whatever C-like dialect of a language it implements must be.
I'm sure the programmers of the flight control software safely transporting 1 billion people per year see your "real world project" and reply with something like "yes, if you are writing software where the outputs don't matter very much, our processes are excessive" :p
It is only a real-world project when it is about serializing and de-serializing JSON payloads.
This write-up shows that while Ada may have some cultural and type-related disadvantages compared to Rust, Ada seems to generally win the readability contest.
What is missing from the comparison is compiler speed - Ada was once seen as a complex language, but that may not be the case if compared against Rust.
In any case, thanks for the post, it made me want to try using Ada for a real project.
What exactly is a "type-related disadvantage"?
As far as I'm aware, Ada has a much more expressive type system, and not by a hair. By miles. Being able to define custom bounds-checked ordinals, being able to index arrays with any enumerable type, defining custom arithmetic operators for types, adding compile and runtime typechecks to types with pre/post conditions, iteration variants, predicates, etc... Discriminant records. Record representation clauses.
I'm not sure what disadvantages exist.
References and lifetimes are where Rust wins over Ada, although I agree that Ada has many other great type features.
Access types are unable to express ownership transfer without SPARK (and a sufficiently recent release of gnatprove), and without it the rules for accessibility checks are so complex they're being revamped entirely. And access types can only express lifetimes through lexical scope, which combined with the lack of first-class packages (a la OCaml) means library code just can't define access types with reasonable lifetimes.
Also, I appreciate that Rust is memory safe by default and without toying with a dozen pragmas. Ada needs a profile that guarantees that code can't be the source of erroneous execution and constrains bounded errors further.
... and formal verification is where Ada / SPARK wins over Rust.
I mean, we can go on but I think it quite ends there, as far as safety goes. :D
There is a reason for why Ada is used in industries that are mission-critical.
> Ada needs a profile that guarantees that code can't be the source of erroneous execution and constrains bounded errors further.
Not really, you can just use Ada / SPARK and it is all checked at compile-time. Look for my comments where I mention Ada.
Only if SPARK is powerful enough to verify the code you need to write in the first place, which from my experience with SPARK is not a given as soon as access types are involved. Also Rust doesn't need to exclude high level language features just to prove memory safety, whereas SPARK can't even verify any use of controlled types.
If SPARK really were enough, I'd just write all Ada in SPARK of course.
There is more than one formal verification library for Rust, although none is as integrated as SPARK I believe.
How does that work?
Why can't other languages have a "formal verification library"?
> How does that work?
Usually by taking the IR (MIR) from rustc and translating it into a verifier engine/language, with the help of metadata in the source (attributes) when needed. E.g. Kani, Prusti, Creusot and more; see the small sketch at the end of this comment.
> Why can't other languages have a "formal verification library"?
I don't think there is a reason that prevents that, and perhaps some have; however, it turns out that modelling shared mutability formally is really hard, and therefore the shared xor mutable rule in Rust really helps verification.
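For a flavour of what that looks like in practice, here is a minimal sketch in the style of a Kani proof harness; the checked function is invented for illustration, while #[kani::proof] and kani::any() are Kani's actual entry points.

// A tiny function we want to check: a saturating add that must never panic.
fn clamp_add(a: u8, b: u8) -> u8 {
    a.checked_add(b).unwrap_or(u8::MAX)
}

#[cfg(kani)]
mod proofs {
    use super::*;

    #[kani::proof]
    fn clamp_add_never_panics() {
        // kani::any() yields a symbolic value covering every possible u8,
        // so the assertion below is proved for all inputs, not just tested.
        let a: u8 = kani::any();
        let b: u8 = kani::any();
        let r = clamp_add(a, b);
        assert!(r >= a || r == u8::MAX);
    }
}

Running `cargo kani` would discharge the proof obligation (or produce a counterexample trace) without the harness ever executing at runtime.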
Wouldn't C have such a library already if it really was "possible"? You can turn off strict aliasing.
Not a library, but it kinda does: Frama-C will formally verify C code using special comments to write the contracts in, and I hear it can prove more pointer properties than SPARK, although it takes more more effort to maintain a Frama-C codebase due to all the other missing language features.
Well, I'm not an expert in formal verification, but there are such libraries; I listed a few of them above, you can go and check them. So it is possible.
C doesn't have the shared xor mutable rule - with strict aliasing or without.
I checked them, but I am not convinced and I am not sure why it was brought into this thread.
SPARK has industrial-strength, integrated verification proven in avionics (DO-178C), rail (EN 50128), and automotive (ISO 26262) contexts. Rust's tools are experimental research projects with limited adoption, and they're bolted-on and fundamentally different from SPARK.
SPARK is a designed-for-verification language subset. Ada code is written in SPARK's restricted subset with contracts (Pre, Post, Contract_Cases, loop invariants) as first-class language features. The GNAT compiler understands these natively, and GNATprove performs verification as part of the build process. It's integrated at the language specification level.
Rust's tools retrofit verification onto a language designed for different goals (zero-cost abstractions, memory safety via ownership). They translate existing Rust semantics into verification languages after the fact - architecturally similar to C's Frama-C or VCC (!). The key difference from C is that Rust's type system already guarantees memory safety in safe code, so these tools can focus on functional correctness rather than proving absence of undefined behavior.
Bottom line is that these tools cannot achieve SPARK-level verification for fundamental reasons: `unsafe` blocks create unverifiable boundaries, the trait system and lifetime inference are too complex to model completely, procedural macros generate code that can't be statically verified, interior mutability (`Cell`, `RefCell`) bypasses type system guarantees, and Rust can panic in safe code. Most critically, Rust lacks a formal language specification with mathematical semantics.
SPARK has no escape hatches, so if it compiles in SPARK, the mathematical guarantees hold for the entire program. SPARK's formal semantics map directly to verification conditions. Rust's semantics are informally specified and constantly evolving (async, const generics, GATs). This isn't tooling immaturity though, it's a consequence of language design.
> SPARK has no escape hatches
It does, you can declare a procedure in SPARK but implement it in regular Ada. Ada is SPARK's escape hatch.
This is just as visible in source code as unsafe is in Rust: a procedure body will have SPARK_Mode => Off.
I think I did say:
> although none is as integrated as SPARK I believe
And yes, they're experimental (for now). But some are also used in production. For example, AWS uses Kani for some of their code, and recently launched a program to formally verify the Rust standard library.
Whether the language was designed for it does not matter, as long as it works. And it works well.
> `unsafe` blocks create unverifiable boundaries
Few of the tools can verify unsafe code is free of UB, e.g. https://github.com/verus-lang/verus. Also, since unsafe code should be small and well-encapsulated, this is less of a problem.
> the trait system and lifetime inference are too complex to model completely
You don't need to prove anything about them: they're purely a type level thing. At the level these tools are operating, (almost) nothing remains from them.
> procedural macros generate code that can't be statically verified
The code that procedural macros generate is visible to these tools and they can verify it well.
> interior mutability (`Cell`, `RefCell`) bypasses type system guarantees
Although it's indeed harder, some of the tools do support interior mutability (with extra annotations I believe).
> Rust can panic in safe code
That is not a problem - in fact most of them prove precisely that: that code does not panic.
> Most critically, Rust lacks a formal language specification with mathematical semantics
That is a bit of a problem, but not much since you can follow what rustc does (and in fact it's easier for these tools, since they integrate with rustc).
> Rust's semantics are informally specified and constantly evolving (async, const generics, GATs)
Many of those advancements are completely erased at the levels these tools are operating. The rest does need to be handled, and the interface to rustc is unstable. But you can always pin your Rust version.
> This isn't tooling immaturity though, it's a consequence of language design.
No it's not, Rust is very well amenable to formal verification, despite, as you said, not being designed for it (due to the shared xor mutable rule, as I said), Perhaps even more amenable than Ada.
Also this whole comment seems unfair to Rust since, if I understand correctly, SPARK also does not support major parts of Ada (maybe there aren't strictly unsupported features, but not all features are fully supported). As I said I know nothing about Ada or SPARK, but if we compare the percentage of the language the tools support, I won't be surprised if that of the Rust tools is bigger (despite Rust being a bigger language). These tools just support Rust really well.
> Whether the language was designed for it does not matter, as long as it works. And it works well.
It matters fundamentally. "Works well" for research projects or limited AWS components is not equivalent to DO-178C Level A certification where mathematical proof is required. The verification community distinguishes between "we verified some properties of some code" and "the language guarantees these properties for all conforming code."
> Few of the tools can verify unsafe code is free of UB
With heavy annotation burden, for specific patterns. SPARK has no unsafe - the entire language is verifiable. That's the difference between "can be verified with effort" and "is verified by construction."
> You don't need to prove anything about [traits/lifetimes]: they're purely a type level thing
Trait resolution determines which code executes (monomorphization). Lifetimes affect drop order and program semantics. These aren't erased - they're compiled into the code being verified. SPARK's type system is verifiable; Rust's requires the verifier to trust the compiler's type checker.
> The code that procedural macros generate is visible to these tools
The macro logic is unverified Rust code executing at compile time. A bug in the macro generates incorrect code that may pass verification. SPARK has no equivalent escape hatch.
> some of the tools do support interior mutability (with extra annotations I believe)
Exactly - manual annotation burden. SPARK's verification is automatic for all conforming code. The percentage of manual proof effort is a critical metric in formal verification.
> That is not a problem - in fact most of them prove precisely that: that code does not panic
So they're doing what SPARK does automatically - proving absence of runtime errors. But SPARK guarantees this for the language; Rust tools must verify it per codebase.
> you can follow what rustc does (and in fact it's easier for these tools, since they integrate with rustc)
"Follow the compiler's behavior" is not formal semantics. Formal verification requires mathematical definitions independent of implementation. This is why SPARK has an ISO standard with formal semantics, not "watch what GNAT does."
> Rust is very well amenable to formal verification [...] Perhaps even more amenable than Ada
Then why doesn't it have DO-178C, EN 50128, or ISO 26262 certification toolchains after a decade? SPARK achieved this because verification was the design goal. Rust's design goals were different - and valid - but they weren't formal verifiability.
> SPARK also does not support major parts of Ada
Correct - by design. SPARK excludes features incompatible with efficient verification (unrestricted pointers, exceptions in contracts, controlled types). This is intentional subsetting for verification. Claiming Rust tools "support more of Rust" ignores that some Rust features are fundamentally unverifiable without massive annotation burden.
The core issue: you're comparing research tools that can verify some properties of some Rust programs with significant manual effort, to a language designed so that conforming programs are automatically verifiable with mathematical guarantees. These are different categories of assurance.
I could be wrong, but the Ferrocene Language Specification, FLS, adopted by Rust as its specification, may be severely incomplete.
For instance, it describes atomics, but I am not sure that it defines Rust's memory model or the semantics of atomics anywhere.
This Ferrocene page looks very short, and has a lot of links to Rust API documentation.
https://public-docs.ferrocene.dev/main/specification/concurr...
Conversely, the Rustonomicon has this page that says that Rust just uses C++20's memory model for its atomics. Yet I do not believe that memory model is defined anywhere in FLS.
https://doc.rust-lang.org/nomicon/atomics.html
I do not know if FLS defines unwinding of panics anywhere either.
Is FLS severely incomplete regarding rustc and Rust, despite being adopted by Rust as its specification? It almost seems fake.
It is certainly incomplete. Virtually all specifications for programming languages are. It is good enough for safety critical work, which is a higher bar than most.
The idea is that you adopt them and improve them over time. It is more complete than the reference, which is the previous project in this area.
Rust does have a qualified compiler: Ferrocene. It supports ISO 26262 (ASIL D), IEC 61508 (SIL 4) and IEC 62304 currently, with more on the way, including plans for DO-178 I believe. It’s being driven by actual adoption, so they’ve started in automotive and will move to other industries eventually.
Do you happen to know how Ferrocene relates to AdaCore's Rust compiler?
https://www.adacore.com/press/adacore-announces-the-first-qu...
> ISO 26262 (ASIL D)
Isn't that only for a very small subset of Rust and its standard library?
Also, do you happen to be able to explain this comment?
https://reddit.com/r/rust/comments/1nhk30y/ferrous_systems_j...
> I think it's important to note, the certification is only for a subset of the run-time, which means some language features will not be available. Also, the certification is only to SIL-2 level, so any projects requiring SIL-3 or 4 will still not be able to use the Ferrocine compiler!
I know that Ferrocene and AdaCore were working together, but then parted ways. I am assuming they're both doing roughly the same thing: qualifying the upstream compiler with some patches. I know that Ferrocene's patches are mostly about adding a new platform; they've upstreamed all the other stuff.
> Isn't that only for a very small subset of Rust and its standard library?
It is currently for the compiler only. This ties into the next bit, though:
> Also, do you happen to be able to explain this comment?
Yeah, you can see me posting in that thread, though not that specific sub-thread. Rust has a layered standard library: core, alloc (which layers memory allocation on top of core), and std (which layers OS specific things on top of that). There's three parts to this comment:
First, because it's only core, you don't get the stuff from alloc and std. So, no dynamic memory allocation or OS-specific things like filesystem access. That's usually not a problem for these kinds of projects. That's what they mean by 'some language features will not be available', but they're mistaken: all language features are available; it's some standard library features that are not. No language features require allocation, for example.
Second, they qualified a subset of libcore for IEC 61508. A founder of Ferrous mentions that ISO 26262 is coming next; they just had a customer that needed IEC 61508 quickly, so they prioritized that. This is how it relates to the above: for ISO 26262, it's just the compiler currently.
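To make the core/alloc/std split above concrete, here is a hedged sketch of a library crate that sticks to the core-only layer; the module contents are invented.

// A library crate built against libcore only: no allocator, no OS services.
#![no_std]

use core::num::NonZeroU32;

// Plain arithmetic and control flow; nothing here needs alloc or std,
// so no dynamic allocation or filesystem access is involved.
pub fn scale(input: u32, factor: NonZeroU32) -> u32 {
    input.saturating_mul(factor.get())
}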
I mentioned Ferrocene here: https://news.ycombinator.com/item?id=45494263.
> It matters fundamentally. "Works well" for research projects or limited AWS components is not equivalent to DO-178C Level A certification where mathematical proof is required
As said in a sibling comment, certification for Rust is starting to appear. Rust is young and its usage in regulated industries is just barely beginning. Ada and SPARK are old and mature. It's not a fair comparison - but that doesn't mean Rust couldn't get there.
> > Few of the tools can verify unsafe code is free of UB
> With heavy annotation burden, for specific patterns
> > some of the tools do support interior mutability (with extra annotations I believe)
> Exactly - manual annotation burden.
SPARK does not support the equivalent (shared mutable pointers) at all. The Rust tools do it with a heavy annotation burden. What's better?
> Trait resolution determines which code executes (monomorphization). Lifetimes affect drop order and program semantics. These aren't erased - they're compiled into the code being verified. SPARK's type system is verifiable; Rust's requires the verifier to trust the compiler's type checker.
Has the Ada compiler been formally verified? No. So you're trusting the Ada type checker just as well.
The Ada specification was, if I understand correctly, formally defined. But there are efforts to do that to Rust as well (MiniRust, a-mir-formality, and in the past RustBelt).
> The macro logic is unverified Rust code executing at compile time. A bug in the macro generates incorrect code that may pass verification. SPARK has no equivalent escape hatch.
And if you have a script that generates some boilerplate code for your Ada project, is the script logic verified? The generated code is, and that's what's important. Even with full formal verification, proving that the program fulfills its goals, you don't need to verify helpers like this - only the code they generate. If it works, then even if the script is buggy, who cares.
> So they're doing what SPARK does automatically - proving absence of runtime errors
Exactly - that's the point, to prove free of runtime errors.
I'm not sure what you mean by "SPARK guarantees this for the language; Rust tools must verify it per codebase" - does SPARK not need to verify separate codebases separately? Does it somehow work magically for all of your codebases at once?
It's clear at this point that neither of us will get convinced, and I think I said everything I had about this already.
Have fun developing with Ada!
rustc and Rust have some rather serious type system holes and other kinds of issues.
https://github.com/Speykious/cve-rs
https://github.com/lcnr/solver-woes/issues
That first one is an implementation bug that's never been discovered in the wild.
Regardless, all projects have bugs. It's not really germane to qualification, other than that qualification assumes that software has bugs and that you need to, well, qualify them and their impact.
> Certification for Rust is starting to appear. Rust is young and its usage in regulated industries is just barely beginning.
Ferrocene has ISO 26262 qualification for the compiler, not verification tools. That's compiler correctness, not formal verification of programs. SPARK's GNATprove is qualified for proving program properties - fundamentally different.
> SPARK does not support the equivalent (shared mutable pointers) at all. The Rust tools do it with a heavy annotation burden. What's better?
SPARK supports controlled aliasing through ownership aspects and borrower/observer annotations - but these are designed into the verification framework, not bolted on. The key difference: SPARK's aliasing controls are part of the formal semantics and verified by GNATprove automatically. Rust's unsafe shared mutability requires external verification tools with a manual proof burden. SPARK deliberately restricts aliasing patterns to those that remain efficiently verifiable: it's not "can't do it", it's "only allow patterns we can verify".
> Has the Ada compiler been formally verified? No. So you're trusting the Ada type checker just as well.
Qualified compilers (GNAT Pro) undergo qualification per DO-178C/ED-12C. The difference: SPARK's semantics are formally defined independent of the compiler. Rust verification tools must trust rustc's implementation because Rust has no formal specification. When rustc changes behavior (happens frequently), verification tools break. SPARK's specification is stable.
> And if you have a script that generates some boilerplate code into your Ada project, is the script logic verified?
External build scripts are different from language features. Procedural macros are part of Rust's language definition and can access compiler internals. If you use external code generation with SPARK, you're explicitly stepping outside the language's guarantees - and safety standards require justification. Rust embeds this escape hatch in the language itself.
> I'm not sure what you mean by "SPARK guarantees this for the language; Rust tools must verify it per codebase"
SPARK: If your code compiles in the SPARK subset, overflow/division-by-zero/array bounds are automatically proven impossible by language rules. You can add contracts for functional correctness.
Rust tools: You must annotate code, write invariants, and run verification per program. The language provides memory safety via the type system, but not freedom from arithmetic errors or functional correctness. These must be proven per codebase with tools.
The distinction: language-level guarantees vs. per-program verification effort.
> It's clear at this point that neither of us will get convinced
Fair enough, but the fundamental difference is whether verification is a language property or a tool property. Both approaches have merit for different use cases.
(Disclaimer: I know next to nothing about Ada).
> Being able to define custom bounds checked ordinals
That Rust doesn't have (builtin, at least).
> being able to index arrays with any enumerable type
In Rust you can impl `std::ops::Index` and index types, including arrays, with whatever you want.
> Defining custom arithmetic operators for types
Again, definitely possible by implementing traits from `std::ops`.
> adding compile and runtime typechecks to types with pre/post conditions, iteration variants
If you refer to the default runtime verification, that's just syntax sugar for assertions (doable in Rust via a macro). If you refer to compile-time verification via SPARK, Rust's formal verification libraries usually offer this as well.
> predicates
Doable via newtypes (see the sketch at the end of this comment).
> Discriminant records
That's just generic ADTs if I understand correctly?
> Record representation clauses
Bitfields aren't available but you can create them yourself (and there are ecosystem crates that do); other than that, you can perfectly control the layout of a type.
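On the newtype point above, a minimal sketch of the usual pattern; the type and its bounds are invented, and it plays roughly the role of a subtype predicate, checked once at the constructor rather than on every assignment.

// A newtype whose constructor enforces the invariant 1..=6.
#[derive(Debug, Clone, Copy)]
pub struct DieFace(u8);

impl DieFace {
    pub fn new(value: u8) -> Option<Self> {
        // The predicate is checked once, at the boundary.
        if (1..=6).contains(&value) { Some(DieFace(value)) } else { None }
    }

    pub fn get(self) -> u8 {
        self.0
    }
}

fn main() {
    assert!(DieFace::new(7).is_none());
    assert_eq!(DieFace::new(3).unwrap().get(), 3);
}

The guarantee is only as strong as the module boundary, though: code inside the defining module can still construct an invalid value, which is weaker than a language-level constraint.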
Compiler speed is quite good, but --as OP-- I've only written AoC in Ada. I think Ada's advantage is the separation between package/module specification and body. If you change a body, the compiler only has to process that file. But the rust compiler must check other files as well, since something in the interface might have changed. But that's a guess.
Ada is not necessarily complex, but it does require getting used to. It is a very large language, though.
> If you change a body, the compiler only has to process that file. But the rust compiler must check other files as well, since something in the interface might have changed.
That's correct in Rust as well (minus some small warts such as if you add an impl inside, which the Rust team wants to deprecate). In fact rust-analyzer relies on that. The compiler will realize that as well via its sophisticated incremental system, but it does take time to evaluate all the queries, even if all are cache hits.
On strings in Ada vs Rust: Ada's design predates Unicode (early 1980s vs 1991), so Ada String is basically char array whereas Rust's string is a Unicode text type. This explains why you can index into Ada Strings, which are arrays of bytes, but not into Rust strings, which are UTF-8 encoded buffers that should be treated as text. Likely the Rust implementation could have used a byte array here.
> Ada String is basically char array
Worse, the built-in Unicode strings are arrays of Unicode scalars, effectively UTF-32 in the general case. There's no proper way to write UTF-8 string literals AFAIK, you need to convert them from arrays of 8, 16 or 32 bit characters depending on the literal.
How is the internal representation an issue? Java strings are UTF-16 internally and it doesn't matter how you write your code or what the targeted format is.
It's an issue because there's nothing internal about the representation in Ada: They're regular arrays of Character/Wide_Character/Wide_Wide_Character, and string literals have different type depending on the width required to represent it as such.
Also, string representations very much matter if you're coding with even the slightest amount of mechanical sympathy.
I mean you can index into Rust's strings, it's just that you probably don't want that:
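For example (a minimal sketch; the string contents are invented):

fn main() {
    let s = String::from("The quick brown 🦊");
    // Indexing with a range of byte offsets yields a &str sub-slice.
    let word = &s[4..9];
    println!("{word}"); // prints "quick"
    // &s[16..18] would compile but panic at runtime: byte 18 falls in the
    // middle of the four-byte fox emoji, so it is not a char boundary.
}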
Notice that's a range: Rust's string slice type doesn't consider itself just an array (as the Ada type is), so we can't provide a bare integer index; the index is a range of integers specifying where our sub-string should begin and end. If we specify the middle of a Unicode character then the code panics - don't do that. Yes, since AoC always uses ASCII it will typically make sense to use &[u8] (a reference to a slice of bytes), and indeed the str::as_bytes method literally gives you that byte slice if you realise that's what you actually needed.
I found it kind of odd that the author says Rust doesn't support concurrent programming out of the box. He links to another comment which points out you don't need Tokio for async (true enough), but even that aside async isn't the only way to achieve concurrency. Threads are built right into the language, and are easier to use than async. The only time they wouldn't be a good choice is if you anticipate needing to spawn so many threads that it causes resource issues, which very few programs will.
(Honest question from non Rustacean.)
How does the cancellation story differ between threads and async in Rust? Or vs async in other languages?
There's no inherent reason they should be different, but in my experience (in C++, Python, C#) cancellation is much better with async than with simple threads and blocking calls. It's near impossible to have organised socket shutdown in many languages with blocking calls, assuming a standard read thread + write thread per socket. Often the only reliable way to interrupt a socket thread is to close the socket, which may not be what you want, and in principle can leave you vulnerable to file handle reuse bugs.
Async cancellation is, depending on the language, somewhere between hard but achievable (already an improvement) and fabulous. With Trio [1] you even get the guarantee that cancelled socket operations are either completed or have no effect.
Did this work any better in Rust threads / blocking calls? My uneducated understanding is that things are actually worse in async than other languages because there's no way to catch and handle cancellations (unlike e.g. Python which uses exceptions for that).
I'm also guessing things are no better in Ada but very happy to hear about that too.
Cancellation in rust async is almost too easy, all you need to do is drop the future.
If you need cleanup, that still needs to be handled manually. Hopefully the async Drop trait lands soon.
Ok I could be super wrong here, but I think that's not true.
Dropping a future does not cancel a concurrently running (tokio::spawn) task. It will also not magically stop an asynchronous I/O call; it just won't block/switch from your code anymore while that continues to execute. If you have created a future but not hit .await or tokio::spawn or any of the futures:: queue handlers, then it also won't cancel it; it just won't begin it.
Cancellation of a running task from outside that task actually does require explicit cancelling calls IIRC.
Edit: here, try this: https://cybernetist.com/2024/04/19/rust-tokio-task-cancellat...
Spawn is kind of a special case where it's documented that the future will be moved to the background and polled without the caller needing to do anything with the future it returns. The vast majority of futures are lazy and will not do work unless explicitly polled, which means the usual way of cancelling is to just stop polling (e.g. by awaiting the future created when joining something with a timeout; either the timeout happens before the other future completes, or the other future finishes and the timeout no longer gets polled). Dropping the future isn't technically a requirement, but in practice it's usually what will happen because there's no reason to keep around a future you'll never poll again, so most of the patterns that exist for constructing a future that finishes when you don't need it anymore rather than manually cancelling will implicitly drop any future that won't get used again (like in the join example above, where the call to `join` will take ownership of both futures and not return either of them, therefore dropping whichever one hasn't finished when returning).
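A hedged sketch of that "cancel by no longer polling" pattern, assuming a Tokio runtime; slow_work is invented for illustration.

use std::time::Duration;
use tokio::time::{sleep, timeout};

async fn slow_work() -> u32 {
    // Stand-in for some long-running async operation.
    sleep(Duration::from_secs(10)).await;
    42
}

#[tokio::main]
async fn main() {
    // `timeout` polls the inner future until the deadline; if the deadline
    // fires first, the inner future is dropped and never polled again,
    // which is what cancellation means for an ordinary lazy future.
    match timeout(Duration::from_millis(100), slow_work()).await {
        Ok(answer) => println!("finished: {answer}"),
        Err(_elapsed) => println!("cancelled by timeout"),
    }
}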
So how do you do structured concurrency [1] in Rust i.e. task groups that can be cancelled together (and recursively as a tree), always waiting for all tasks to finish their cancellation before moving on? (Normally structured concurrency also involves automatically cancelling the other tasks if one fails, which I guess Rust could achieve by taking special action for Result types.)
If you can't cancel a task and its direct dependents, and wait for them to finish as part of that, I would argue that you still don't have "real" cancellation. That's not an edge case, it's the core of async functionality.
[1] https://vorpus.org/blog/notes-on-structured-concurrency-or-g...
(Too late to edit)
Hmm, maybe it's possible to layer structured concurrency on top of what Rust does (or will do with async drop)? Like, if you have a TaskGroup class and demand all tasks are spawned via that, then internally it could keep track of child tasks and make sure that they're all cancelled when the parent one is (in the task group's drop). I think? So maybe not such an issue, in principle.
I think you're on the right track here to figuring this out. Tokio's JoinSet basically does what you describe for a single level of spawning (so not recursively, but it's at least part of the way to get what you describe); the `futures` library also has a type called `FuturesUnordered` that's similar but has the tradeoff that all futures it tracks need to be the same type which allows it to avoid spawning new tasks (and by extension doesn't need to wrap the values obtained by awaiting in a Result).
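A rough sketch of the JoinSet shape mentioned above (single level of spawning, task bodies invented; dropping the set aborts whatever has not finished yet):

use tokio::task::JoinSet;

#[tokio::main]
async fn main() {
    let mut set = JoinSet::new();
    for i in 0..5u32 {
        // Each spawned task is owned by the set.
        set.spawn(async move { i * 2 });
    }

    // Collect results as tasks complete. If `set` were dropped early
    // (say, on an error path), every task still in it would be aborted,
    // which gives one level of structured cancellation.
    while let Some(result) = set.join_next().await {
        println!("task finished: {:?}", result);
    }
}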
Under the hood, there's nothing stopping a future from polling one or more other futures, so keeping in mind that it isn't the dropping that cancels but rather the lack of polling, you could achieve what you're describing with each future in the tree polling its children in its own poll implementation, which means that once you stop polling the "root" future in the tree, all of the others in the tree will by extension no longer get polled. You don't actually need any async Drop implementation for this because there's no special logic you need when dropping; you just stop polling, which happens automatically since you can't poll something that's been dropped anyhow.
That's a rare exception, and just a design choice of this particular library function. It had to intentionally implement a workaround integrated with the async runtime to survive normal cancellation. (BTW, the anti-cancellation workaround isn't compatible with Rust's temporary references, which can be painfully restrictive. When people say Rust's async sucks, they often actually mean `tokio::spawn()` made their life miserable.)
Regular futures don't behave like this. They're passive, and can't force their owner to keep polling them, and can't prevent their owner from dropping them.
When a Future is dropped, it has only one chance to immediately do something before all of its memory is obliterated, and all of its inputs are invalidated. In practice, this requires immediately aborting all the work, as doing anything else would be either impossible (risking use-after-free bugs), or require special workarounds (e.g. io_uring can't work with the bare Future API, and requires an external drop-surviving buffer pool).
Rain showed that doing this correctly may not be as simple as it seems.
In her presentation on async cancellation in Rust, she spoke pretty extensively on cancel safety and correctness, and I would recommend giving it a watch or read.
https://sunshowers.io/posts/cancelling-async-rust/
Yeah, that's what I'm talking about... Cancellation where the cancelled object can't handle the cancellation, call other async operations, and even (very rarely) suppress it, isn't "real" cancellation to me, having seen how essential this is.
> There's no inherent reason they should be different
There is... They're totally different things.
And yeah Rust thread cancellation is pretty much the same as in any other language - awkward to impossible. That's a fundamental feature of threads though; nothing to do with Rust.
There's no explicit cancel, but there are trivial one-shot cancellation messages that you can handle on the thread side. It's perfectly fine, honestly, and how I've been doing it forever.
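A hedged sketch of that pattern with std threads and an mpsc channel carrying the one-shot cancellation message; the work loop is invented.

use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (cancel_tx, cancel_rx) = mpsc::channel::<()>();

    let worker = thread::spawn(move || loop {
        // Check for a cancellation message between units of work.
        if cancel_rx.try_recv().is_ok() {
            println!("worker: cancelled, cleaning up");
            return;
        }
        // Stand-in for one chunk of real work.
        thread::sleep(Duration::from_millis(50));
    });

    thread::sleep(Duration::from_millis(200));
    cancel_tx.send(()).unwrap();
    worker.join().unwrap();
}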
I would call that clean shutdown more than cancellation. You can't cancel a long computation, or std::thread::sleep(). Though tbf that's sort of true of async too.
To be clear about what I meant: I was saying that, in principle, it would be possible to design a language or even a library where all interruptible operations (at least timers and networking) can be cancelled from other threads. This can be done using a cancellation token mechanism which avoids even starting an operation on an already cancelled token, in a way that avoids the races you might imagine from a naive check of a token before starting the operation, if another thread cancels this one just as the operation is starting.
Now I've set (and possibly moved) the goalposts, I can prove my point: C# already does this! You can use async across multiple threads and cancellation happens with cancellation tokens that are thread safe. Having a version where interruptable calls are blocking rather than async (in the language sense) would actually be easier to implement (using the same async-capable APIs under the hood e.g., IOCP on Windows).
Well sure, there's nothing to stop you writing a "standard library" that exposes that interface. The default one doesn't though. I expect there are platforms that Rust supports that don't have interruptible timers and networking (whereas C# initially only supported Windows).
I wonder where the cut-off is where a work stealing scheduler like Tokio's is noticeably faster than just making a bunch of threads to do work, and then where the hard cut-off is that making threads will cause serious problems rather than just being slower because we don't steal.
It might be quite small, as I found for Maps (if we're putting 5 things in the map then we can just do the very dumbest thing which I call `VecMap` and that's fine, but if it's 25 things the VecMap is a little worse than any actual hash table, and if it's 100 things the VecMap is laughably terrible) but it might be quite large, even say 10x number of cores might be just fine without stealing.
The general rule is that if you need to wait faster use async, and if you need to process faster use threads.
Another way of thinking about this is whether you want to optimize your workload for throughput or latency. It's almost never a binary choice, though.
Threads as they are conventionally considered are inadequate. Operating systems should offer something along the lines of scheduler activations[0]: a low-level mechanism that represents individual cores being scheduled/allocated to programs. Async is responsive simply because it conforms to the (asynchronous) nature of hardware events. Similarly, threads are most performant if leveraged according to the usage of hardware cores. A program that spawns 100 threads on a system with 10 physical cores is just going to have threads interrupting each other for no reason; each core can only do so much work in a time frame, whether it's running 1 thread or 10. The most performant/efficient abstraction is a state machine[1] per core. However, for some loss of performance and (arguable) ease of development, threads can be used on top of scheduler activations[2]. Async on top of threads is just the worst of both worlds. Think in terms of the hardware resources and events (memory accesses too), and the abstractions write themselves.
[0] https://en.wikipedia.org/wiki/Scheduler_activations, https://dl.acm.org/doi/10.1145/121132.121151 | Akin to thread-per-core
[1] Stackless coroutines and event-driven programming
[2] User-level virtual/green threads today, plus responsiveness to blocking I/O events
Haven't scheduler activations largely been abandoned in the BSD and Linux kernels?
Yes; my understanding is that, for kernels designed for 1:1 threading, scheduler activations are an invasive change and not preferred by developers. Presumably, an operating system designed around scheduler activations would be able to better integrate them into applications, possibly even binary-compatibly with existing applications expecting 1:1 threading.
Can you say more about what you mean by wait faster? Is it as in, enqueue many things faster?
Not the OP but I'll take a stab: I see "waiting faster" as meaning roughly "check the status of" faster.
For example, you have lots of concurrent tasks, and they're waiting on slow external IO. Each task needs its IO to finish so you can make forward progress. At any given time, it's unlikely more than a couple of tasks can make forward progress, due to waiting on that IO. So most of the time, you end up checking on tasks that aren't ready to do anything, because the IO isn't done. So you're waiting on them to be ready.
Now, if you can do that "waiting" (really, checking if they're ready for work or not) on them faster, you can spend more of your machine time on whatever actual work _is_ ready to be done, rather than on checking which tasks are ready for work.
Threads make sense in the opposite scenario: when you have lots of work that _is_ ready, and you just need to chew through it as fast as possible. E.g. numbers to crunch, data to search through, etc.
I'd love if someone has a more illustrative metaphor to explain this, this is just how I think about it.
You may be correct in theory, though in practice the reason to use async over threads is often that the crate you want to use is async. Reqwest is a good example; it cannot be used without Tokio. Ureq exists and works fine. I've done a fairly high-level application project where I tried to avoid all async, and at some point it started to feel like swimming upstream.
I think reqwest has a client that doesn't require async now
Or in cases where the platform doesn't support threads easily - WASM and embedded (Embassy). Tbh I think that's a better motivation for using async than the usual "but what if 100k people visit my anime blog all at once?"
Interesting that Ada has an open source compiler. For whatever reason when I looked at it years ago I thought it was proprietary compilers only so I never really looked at it again. Maybe I’ll look again now.
GNAT has been around for 30 years. There were some limitations with (one version of?) it due to it being released without the GPL runtime exception, which meant linking against its runtime technically required your programs to also be released under GPL. That hasn't been an issue for a very long time either, though.
GNAT has been around since the 90s, based on GCC. My university did some work on the compiler and used it for some undergrad courses like real-time programming. IIRC there was an attempt to use Ada for intro to programming courses, but I think they chose Pascal and then eventually C++.
In Case Study 2, near the end it says "if the client may need to know SIDE_LENGTH, then you can add a function to return the value"
Which yeah, you can do that, but it's a constant, so you can also more literally just expose the constant itself in the spec, declared right alongside where that function would go.
Tangentially related, one of the more interesting projects I've seen in the 3D printing space recently is Prunt. It's a printer control board and firmware, with the latter being developed in Ada.
https://prunt3d.com/
https://github.com/Prunt3D/prunt
It's kind of an esoteric choice, but struck me as "ya know, that's really not a bad fit in concept."
I wrote about some of the reason for choosing it here: https://news.ycombinator.com/item?id=42319962
I'd love to use Ada as my primary static language if it had broader support. It's in my opinion the best compiled language with strong static typing. Although it has gained traction with Alire, it unfortunately doesn't have enough 3rd party support for my needs yet.
If you have some time to fool around, try it for Advent of Code. I did last year, and while I'm not going to seriously consider it for my day job, I found it worthwhile.
You can generate bindings using gcc -c -fdump-ada-spec <header.h>. They typically work well enough without needing additional tweaks, but if it's more involved you can ask Claude to make a shell script that generates bindings for whatever C library you want to use, and it works reasonably well.
In my opinion, don't make thick bindings for your C libraries. It just makes it harder to use them.
For example I don't really like the OpenGL thick bindings for Ada because using them is so wildly different than the C examples that I can't really figure out how to do what I want to do.
What 3rd party things would you like to see?
It's just everything and the kitchen sink, I'm afraid: from a NATS server and client library, to cross-platform GUIs including mobile, to reading and writing common file formats like Excel, Word, and PDF, to post-quantum cryptography. I'm unfortunately in the business of Lego-brick software development using well-maintained 3rd-party libraries whenever possible. It's the same with Common Lisp: I love it, but in the end I'd have to write too many things on my own to be productive in it.
I envy people who can write foundational, self-contained software. It's so elegant.
You have Alire crates for generating Excel and PDF streams/files. Of course you want everything else ;-).
I’d disagree that both languages encourage stack-centric programming idioms. Ada encourages static allocation instead.
I found the inclusion of arrays indexed on arbitrary types in the feature table as a benefit of Ada surprising. That sounds like a dictionary type, which is in the standard library of nearly every popular language. Rust includes two.
I think they're focused very much specifically on the built-in array type. Presumably Ada is allowed to say eggs is an array and the index has type BirdSpecies so eggs[Robin] and eggs[Seagull] work but eggs[5] is nonsense.
Rust is OK with you having a type which implements Index<BirdSpecies> and if eggs is an instance of that type it's OK to ask for eggs[Robin] while eggs[5] won't compile, but Rust won't give you an "array" with this property, you'd have to make your own.
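A minimal sketch of that on the Rust side; BirdSpecies and the wrapper type are invented for illustration.

use std::ops::Index;

#[derive(Clone, Copy)]
enum BirdSpecies { Robin, Seagull, Pigeon }

// A wrapper around a fixed-size array that can only be indexed by BirdSpecies.
struct Eggs([u32; 3]);

impl Index<BirdSpecies> for Eggs {
    type Output = u32;
    fn index(&self, species: BirdSpecies) -> &u32 {
        &self.0[species as usize]
    }
}

fn main() {
    let eggs = Eggs([2, 5, 1]);
    println!("{}", eggs[BirdSpecies::Robin]);  // fine
    println!("{}", eggs[BirdSpecies::Pigeon]); // fine
    // println!("{}", eggs[5]);                // does not compile: no Index<usize> impl
}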
My guess is that this makes more sense in a language where user defined types are allowed to be a subset of say a basic integer type, which I know Ada has and Rust as yet does not. If you can do that, then array[MyCustomType] is very useful.
I call out User Defined types specifically because Rust's NonZeroI16 - the 16-bit integers except zero - is compiler-only internal magic; if you want a MultipleOfSixU32 or even U8ButNotThreeForSomeReason that's not "allowed", so you'd need nightly Rust and an explicit "I don't care that this isn't supported" compiler-only feature flag in your source. I want to change this so that anybody can make the IntegersFiveThroughTwelveU8 or whatever, and there is non-zero interest in that happening, but I'd have said the exact same thing two years ago so...
I really don't understand this, so I hope you won't mind explaining it. If I had the type U8ButNotThreeForSomeReason, wouldn't that need a check at runtime to make sure you are not assigning 3 to it?
At runtime it depends. If we're using arbitrary outside integers which might be three, we're obliged to check yes, nothing is for free. But perhaps we're mostly or entirely working with numbers we know a priori are never three.
NonZero<T> has a "constructor" named new() which returns Option<NonZero<T>>, so that None means nope, this value isn't allowed because it's zero. But unwrapping or expecting an Option works in const contexts, so NonZeroI8::new(9).expect("Nine is not zero") will compile and produce a constant that the type system knows isn't zero.
Three in particular does seem like a weird choice, I want Balanced<signed integer> types such as BalancedI8 which is the 8-bit integers including zero, -100 and +100 but crucially not including -128 which is annoying but often not needed. A more general system is envisioned in "Pattern Types". How much more general? Well, I think proponents who want lots of generality need to help deliver that.
Option<U8ButNotThreeForSomeReason> would have a size of 2 bytes (1 for the discriminant, 1 for the value) whereas Option<NonZeroU8> has a size of only 1 byte, thanks to some special sauce in the compiler that you can't use for your own types. This is the only "magic" around NonZero<T> that I know of, though.
You can make an enum, with all 255 values spelled out, and then write lots of boilerplate, whereupon Option<U8ButNotThreeForSomeReason> is also a single byte in stable Rust today, no problem.
That's kind of silly for 255 values, and while I suspect it would work, it's clearly not a reasonable design for 16 bits, let alone 32 bits, where I suspect the compiler would reject this wholesale.
Another trick you can do, which will also work just fine for bigger types, is called the "XOR trick". You store a NonZero<T>, but all your adaptor code XORs with your single not-allowed value, in this case 3. This is fairly cheap on a modern CPU because it's an ALU operation with no memory fetches, just that one XOR instruction, so often there's no change to bulk instruction throughput. This works because only 3 XOR 3 == 0; other values will all have bits jiggled but remain valid.
Because your type's storage is the same size, you get all the same optimisations and so once again Option<U8ButNotThreeForSomeReason> is a single byte.
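A hedged sketch of that XOR trick; the type name is invented, and the single-byte Option relies on the NonZeroU8 niche propagating through the wrapper, which holds with current rustc but is not a documented guarantee.

use std::num::NonZeroU8;

// Stores the value XORed with 3, so the forbidden value 3 maps onto
// NonZeroU8's forbidden value 0, and every other u8 round-trips.
#[derive(Clone, Copy)]
struct U8ButNotThree(NonZeroU8);

impl U8ButNotThree {
    fn new(v: u8) -> Option<Self> {
        NonZeroU8::new(v ^ 3).map(U8ButNotThree)
    }
    fn get(self) -> u8 {
        self.0.get() ^ 3
    }
}

fn main() {
    assert!(U8ButNotThree::new(3).is_none());
    assert_eq!(U8ButNotThree::new(7).unwrap().get(), 7);
    // Prints 1 with current rustc: the niche means Option costs no extra byte.
    println!("{}", std::mem::size_of::<Option<U8ButNotThree>>());
}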
Ada also has hash maps and sets.
http://www.ada-auth.org/standards/22rm/html/RM-TOC.html - See section A.18 on Containers.
The feature of being able to use a discrete range as an array index is very helpful when you have a dense map (most keys will be used) or you also want to be able to iterate over a sequential block of memory (better performance than a dictionary will generally give you, since they don't usually play well with caches).
Thanks for the clarification. I can imagine that being a useful optimization on occasion.
It's not a dictionary, that's a totally different data structure.
In ADA you can subtype the index type into an array, i.e. constraining the size of the allowed values.
Maybe you can also do that in JAVA.
The Americans with Disabilities Act doesn't cover subtyping the index type into an array. Ada, the language, does though.
EDIT: Seems I'm getting downvoted, do people not know that ADA is not the name of the programming language? It's Ada, as in Ada Lovelace, whose name was also not generally SHOUTED as ADA.
There does seem to be a strain of weird language fanatics who insist all programming language names must be shouted, so they'll write RUST and ADA, and presumably JAVA and PYTHON, maybe it's an education thing, maybe they're stuck in an environment with uppercase only like a 1960s FORTRAN programmer ?
It's funny because Fortran's official name now is Fortran, not FORTRAN.
Maybe who cares?
You, apparently.
I have found a strong correlation between people who say JAVA and country of origin. And thus have assumed it's an education thing.
Similarly, and less explicably, are people who program in "C" and don't understand when people mention the oddity. Do people not see quotes? Do they just add them and not realize?
Very nice text. As I am very sceptical of Rust, I appreciate the comparison to a language I know better. I was not aware that there is no formal spec for Rust. Isn't that a problem if rustc makes changes?
A few things:
1. There is a spec now, Ferrocene donated theirs, and the project is currently integrating it
2. The team takes backwards compatibility seriously, and uses tests to help ensure the lack of breakage. This includes tests like “compile the code being used by Linux” and “compile most open source Rust projects” to show there’s no regressions.
Ferrocene is a specification but it's not a formal specification. MiniRust (https://github.com/minirust/minirust) is the closest thing we have to a formal spec but it's very much a work-in-progress.
It's a good enough spec to let rustc work with safety critical software, so while something like minirust is great, it's not necessary, just something that's nice to have.
Isn’t the Ada spec also not a formal spec?
The most popular language with a formal specification is Standard ML.
I guess this is terminology confusion on behalf of Surac, who probably just wants a specification that is independent of the rustc implementation.
It depends on who you are.
For implementers of third-party compilers, researchers of the Rust programming language, and programmers who write unsafe code, this is indeed a problem. It's bad.
For the designers of Rust, "no formal specification" allows them to make changes as long as it is not breaking. It's good.
Medical or military projects often require the software stack/tooling to be certified following certain rules. I know most of these certifications are bogus, but how is that handled with Rust?
Ferrocene provides a certified compiler (based on a spec they've written documenting how it behaves) which is usable for many uses cases, but it obviously depends what exactly your specific domain needs.
The word for types depending on a value is dependent typing. E.g. lists of size N, or numbers in a range, are what you call dependent types.
Idris (which cosmetically looks like Haskell), Lean, and a bunch of other languages have this feature.
https://en.wikipedia.org/wiki/Dependent_type
Wow, so why do people pimp Ada?