Algol 60 is also ... not exactly the easiest language to compile, even today. Here's the spec:
https://www.masswerk.at/algol60/report.htm
Call by name in particular can be very tricky, especially where it intersects with higher-order functions.
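For a sense of why, here is a minimal C++ sketch of Jensen's device, with a lambda standing in for the thunk an Algol 60 compiler would have to generate for a name parameter (the names sum_by_name and term are illustrative, not from any real compiler):

    #include <cstdio>

    // Call-by-name emulated with a thunk: "term" is re-evaluated on every
    // use, and assignments to the loop variable go to the caller's i.
    template <typename Thunk>
    double sum_by_name(int& i, int lo, int hi, Thunk term) {
        double s = 0.0;
        for (i = lo; i <= hi; ++i)  // writes the caller's i
            s += term();            // re-evaluates 1.0/i with the current i
        return s;
    }

    int main() {
        int i = 0;
        // The Algol 60 call Sum(i, 1, 100, 1/i), i.e. Jensen's device:
        double h = sum_by_name(i, 1, 100, [&] { return 1.0 / i; });
        std::printf("%f\n", h);  // the 100th harmonic number, ~5.187378
    }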
It can also be interesting to read contemporary discussions of problematic spots in the language, e.g.:
https://dl.acm.org/doi/10.1145/366193.366209
https://dl.acm.org/doi/10.1145/363717.363743
And the original ALGOL bulletin which has committee reports and mailing lists for the design process:
https://archive.computerhistory.org/resources/text/algol/alg...
Yet it has relatively fast compilers compared to a number of modern languages, even on aging hardware versus the gaming rigs those modern languages seem to require.
I wonder if there's some kind of generational amnesia where the subjective definition of what constitutes fast gets slower and slower over time, so that communicating one's frustration with slow software is met with comments about how the software is not actually slow.
There is definitely one; see the amazement at Go's compile times, versus what Turbo Pascal was already doing on MS-DOS computers running at 20 MHz.
Turbo Pascal is just an example from the days when C and C++ were yet to rule the Zeitgeist, and we had scripting all over the place.
Turbo Pascal back in the day was super fast, one of the fastest compiled languages I had used before we had incremental compilers and background incremental compilation. C/C++ compilers have mostly been a lot slower than those old Turbo Pascal implementations. MS QuickC 1.1 was a notable exception in the C compiler space: it could do one-pass, in-memory compilation of most C code. Modern frameworks feel way slower, even with scripting languages that require no compile step.
Come to think of it, my 486 from the early 90s running DOS was one of the fastest computers I have used when it comes to boot-up and application load times. You typed the name of a program, pressed enter, and the program was up and ready to use. The systems I had before that took a lot longer to launch programs, and anything with Windows has always felt an order of magnitude slower. Modern tiny Linux distros can match that speed, but only as a very slimmed-down system without any GUI. The MacBook Pro with the M1 Pro did feel a lot faster than previous Macs or Windows machines in application launch times and general GUI responsiveness, but still no match for those DOS systems.
My experience as well. Apparently the Turbo Pascal developer implemented it as a one-pass compiler generating machine code while parsing. That dramatically reduces memory usage and increases speed, while reducing the opportunity to optimize the machine code at a higher level.
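A minimal sketch of that one-pass structure, assuming nothing about the actual Turbo Pascal source: a recursive-descent parser for +/- expressions that emits stack-machine code the moment each construct is parsed, with no AST. Turbo Pascal emitted x86 machine code directly, but the shape is the same:

    #include <cstdio>

    const char* p;  // cursor into the source text
    void expr();

    void factor() {
        if (*p == '(') { ++p; expr(); ++p; }       // '(' expr ')'
        else { std::printf("PUSH %c\n", *p++); }   // single-digit literal
    }

    void expr() {                                  // factor (('+'|'-') factor)*
        factor();
        while (*p == '+' || *p == '-') {
            char op = *p++;
            factor();
            std::puts(op == '+' ? "ADD" : "SUB");  // emit as soon as parsed
        }
    }

    int main() {
        p = "1+(2-3)+4";
        expr();  // code for the whole expression falls out in one forward pass
    }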
Even the original Turbo Pascal for CP/M was fast (e.g. running on the 2MHz Z80 SoftCard for the Apple II).
Free Pascal is no powerhouse when building large FP+SDL projects such as GearHead 1 and 2. A comparable 486 program for DOS would be almost as slow to build if it were backported to DOS+FPC.
Maybe, but that isn't Borland's Turbo Pascal compiler, nor Borland/Inprise/Embarcadero's Delphi.
Most likely a consequence of how many compiler writers care about optimizations in the FPC codebase.
This is accompanied by surprised reactions when maintained and polished old implementations come across as fast. I'm thinking of you, Common Lisp.
I think there's also the effect that old, polished software tends to become increasingly optimized over time.
Modern languages generally have much fancier type systems (Algol 60 didn't even have records!). Perhaps more importantly, modern compilers do optimizations that I doubt anyone back in 1960s even dreamed of.
Yet the specific problem of compilation times only affects a subset of modern languages; e.g. it takes a fraction of the time to compile an Ada/SPARK application versus C++ or Rust, although the type system is comparably complex.
Or: the development workflow in Haskell and OCaml is much better than in Rust, despite their rather complex type systems, because those ecosystems have invested in interpreted toolchains that can also load compiled code.
The way Ada handles generics removes a lot of complexity wrt things like inference and overload resolution, but at the expense of making things less convenient for the programmer because of the need to explicitly instantiate.
Rust and C++ are at this weird intersection of features where, on one hand, they need to have very fancy optimizers to remove all the overhead that idiomatic high-level library code (including stdlib) has, and on the other, they have generics and some degree of type inference that results in lots of generated code that consequently needs to go through the optimizer.
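A tiny illustration of the second half of that (the names here are made up for the example): the source defines one function, but every distinct instantiation is a separate body in the IR, and each one must be optimized back down to hand-written quality.

    #include <cstdio>
    #include <vector>

    // One definition in the source...
    template <typename T>
    T sum(const std::vector<T>& xs) {
        T total{};
        for (const T& x : xs) total += x;
        return total;
    }

    int main() {
        std::vector<int>    a{1, 2, 3};
        std::vector<double> b{1.5, 2.5};
        // ...but two functions reach the optimizer (sum<int>, sum<double>),
        // each dragging the matching std::vector code along with it.
        std::printf("%d %f\n", sum(a), sum(b));
    }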
You can argue that it is that intersection itself that is problematic. But then again, C++ is arguably so popular precisely because it offers it, and Rust became popular because it was the first thing other than C++ to target the same broad niche.
Ada 2022/SPARK has plenty of complex language features; not sure why the callout to generics in particular.
Using C++23 modules (import std) is quite fast, even more so with binary libraries or the proprietary C++ Builder packages. And it has been shown that the reason Rust compiles slowly is the amount of unoptimized IR the frontend shoves down into the LLVM backend, not the type system.
What I argue is that there is a lack of investment in compiler tooling; once upon a time, IBM and Lucid showed the world how to have a Smalltalk/Lisp Machine-like experience with C++, for example.
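Going back to the modules point: opting in is just this, assuming a toolchain that already ships the std module (e.g. recent MSVC, or Clang with libc++; the required build flags vary by compiler):

    // C++23: the whole standard library arrives as one prebuilt module,
    // instead of being re-parsed from headers in every translation unit.
    import std;

    int main() {
        std::println("hello from import std");
    }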
> not exactly the easiest language to compile
Besides "call by name", also using gotos for dynamic non-local exits, even by passing labels as an argument to a procedure, is pretty tricky.
How many C or C++ compilers are in wider use today?
I'm not really working in the area and didn't research this now, but I can come up with:
* gcc
* clang
* Microsoft probably has their own implementation
* Intel probably still has their own implementation
* ?
Edit: OpenVMS maybe, but not sure whether that qualifies for in wider use
Edit2: ARM of course
> Intel probably still has their own implementation
Back in 2021, Intel moved the back end of their compiler toolchain to LLVM. Intel still has their own proprietary front end (icpx).
https://www.google.com/search?q=intel+compiler+llvm+adoption
https://www.intel.com/content/www/us/en/developer/articles/g...
Is icpx still an EDG derivative?
EDG used to be the gold standard of ISO conformance.
EDG: 6 employees according to Wikipedia. That's what I call travelling light, not much room for corporate BS there, I bet.
They used to be three, all of them members of the committee.
> Edit: OpenVMS maybe, but not sure whether that qualifies for in wider use
In the x86-64 port, they've been moving OpenVMS to use LLVM too. With the VAX->Alpha port, DEC introduced a common backend for all their OpenVMS compilers, called "GEM". For the Alpha->Itanium port, Compaq/HP kept the GEM backend but modified it to generate Itanium code instead of Alpha.
For the x86-64 port, instead of investing in modifying "GEM" to support x86-64, VSI decided to use LLVM as a backend, and then write a translator from GEM's intermediate representation to LLVM's. So they keep the same frontends as before – except for C++, where they replaced the EDG/Intel frontend with LLVM – but now use LLVM as a backend. https://llvm.org/devmtg/2017-10/slides/Reagan-Porting%20Open...
OpenVMS has a somewhat unusual ABI. For x86-64 they've tried to make it more consistent with the standard Linux x86-64 ABI, but there remain some distinctive features. One feature (which I actually really like, I think it is a shame that other ABIs don't do this), is that variadic function calls pass the argument count in a register (%rax on x86-64). Another is that all functions must have a 32-bit address; the code of a function can be allocated above 4GB, but they allocate a trampoline under 4GB which jumps to its actual code – this was done to simplify interoperability between 32-bit and 64-bit code in the same address space. https://docs.vmssoftware.com/vsi-openvms-calling-standard
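To make the first point concrete: on the standard SysV ABI a variadic callee cannot discover its own argument count, so C interfaces smuggle it in by hand, as in this sketch; under the OpenVMS convention the count shows up in %rax for free, and the extra parameter (or a format string, or a sentinel) becomes unnecessary.

    #include <cstdarg>
    #include <cstdio>

    // The count parameter exists only because the ABI won't tell us.
    int sum_n(int count, ...) {
        va_list ap;
        va_start(ap, count);
        int total = 0;
        for (int i = 0; i < count; ++i) total += va_arg(ap, int);
        va_end(ap);
        return total;
    }

    int main() {
        std::printf("%d\n", sum_n(3, 10, 20, 30));  // 60
    }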
IBM still has their classic XL C/C++ compilers for AIX, IBM i and mainframe, but they've been converging on LLVM. I believe the pre-LLVM versions share a common backend with some of their other language compilers (COBOL, PL/I, PL/X, PL.8, etc.), ultimately going back to work at their Toronto and Yorktown labs starting in the 1980s. More recently they've started replacing their C/C++ frontends with LLVM's while keeping their own backend; and even more recently, swapping out their backend for LLVM's too.
Good point. I completely forgot IBM. And I even was a trainee there, when mainframes were a big thing, PCs an exception, and nobody including myself had heard of AIX (although technically it was a year old).
Probably doesn't qualify as "wide use", but embedded compilers like SDCC still matter. Yes, 8-bit parts are giving way to ARM, but there will be a few more millennia before they are gone.
Clang, GCC and MSVC are the big ones, the rest are more or less niche compilers - with Clang and GCC only differing in small details.
(one could argue that MSVC is slowly becoming a niche compiler too - it feels like it's not as important anymore as it was 10..15 years ago, maybe because a lot of new C/C++ programmers seem to start on Linux, not Windows - that's at least the 'general vibe' I'm getting)
Depends on how much one considers the game industry niche, where Linux hardly has a role other than game servers.
True of course, but then MSVC only matters for building Windows and Xbox games (and at least on Windows I would also give Clang a try since it typically optimizes better than MSVC).
And as the IDE for the Sony PlayStation toolchains.
While Microsoft has embraced Clang as well, including on Xbox, I am certain it does not consume all Windows SDKs, not yet.
Per [1], Zig seems to have its own C compiler (AFAICT it's not just a frontend to an existing one)
[1] https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...
Currently, zig cc is the Clang frontend which is linked into the Zig compiler (most things in that blog post are about differences between the Zig toolchain and a typical Clang toolchain for cross-compilation, and how Zig invokes the Clang frontend).
I'm not sure what the current plan is for C compilation after the LLVM divorce, once Clang is moved into a Zig package (e.g. I remember talk that arocc [1] was going to be integrated into the Zig compiler, but I'm not sure if that's still planned, or if C compilation will also be delegated to the new Clang package along with C++ and ObjC compilation).
[1] https://github.com/Vexu/arocc
For C++, there are exactly four independent frontends in wide use (gcc, clang, msvc, edg) and three independent stdlibs (libstdc++, libc++, STL).
I’m not sure how many independent compiler backends are widely used with those frontends and stdlibs.
tcc is popular too, as a fast non-optimizing compiler.
Similarly, cproc with the QBE backend, focused on simplicity and getting to 80% of gcc/clang with 20% of the complexity/size.
The maintainer of oksh has an interesting comparison[1] between tcc and cproc when it comes to the performance of the generated code on OpenBSD. To be clear, cproc delivers. But tcc beats it, usually significantly, for reasons I can't even begin to see how to investigate. It actually beats cparser/libfirm in most cases, which just makes no sense. (I ran the benchmarks myself on Linux some months ago; with the versions given in the original post, they reproduce.)
[1] https://briancallahan.net/blog/20211010.html
Those results are truly strange, tcc on the same level as gcc/clang for all tests? Something's going on
Right?.. There’s a reason I decided to spend the couple of hours it took to get libfirm to compile. Then again, it’s a shell benchmark, most of it is bound to be syscalls and (nonfancy) parsing, so the space for an optimizing compiler to make a difference is bound to be fairly limited. All the compilers in the test are within 2× of the best one, and CompCert isn’t exactly a speed demon either. I’d love to dig into this, but there are so many layers of stuff I don’t really know where to start.
I tested with tcc, cparser and my own slimcc; the benchmark showed multiple processes in htop, so there is some dynamic CPU scaling randomness going on. I saw larger run-to-run differences with the same binary than between cherry-picked best runs of each compiler, so it's probably not the best benchmark.
Fabrice Bellard, unbelievable what a single person can get done.
One cool thing about Algol is that, while the language itself is over 60 years old, most modern programmers can have a decent understanding of source code written in it. It's a bit like reading 17th century English. You don't quite understand everything, but you can grasp most of it.
I wonder why the retrocomputing crowd hasn't done much in ALGOL. Perhaps because it's just easier to write in BASIC, which was influenced by it.
Assuming we're talking about Algol 60, it really depends on what kind of source code. When you look at the more creative applications of call-by-name, for example, it looks utterly alien. Or stuff like label arguments and switches.
OTOH if you disregard those, the rest is simple to understand because there isn't much of it. E.g. there's a grand total of three types: integer, real, and Boolean. There are no structs, pointers, or really any derived types (it has arrays, but they aren't technically types but rather an orthogonal property of variables).
But, for the same reason, it's not particularly useful. The lack of pointers especially limits what you can do with it, even if you use it for its original primary purpose (reference implementations of algorithms).
Probably because of the mix with nostalgia: the folks who would be nostalgic about Algol 60 hardly care about computers nowadays.
When I was a kid, the systems we cared about were 8-bit home systems, starting with anything CP/M; and then there were the big machines used at universities and our parents' jobs, which we only knew from computer magazines.
Spaniard here; Spanish is like Lisp. Reading 17th-century Spanish is almost a no-brainer, minus some medieval knights' jargon (put in by Cervantes on purpose to make fun of old-farts) and some 17th-century jargon too, especially popular sayings from the La Mancha region which are still in use there because they relate to daily farm/field/food life, but which the rest of Spain has no clear reference for.
Also, the RAE institution for the Spanish language does something similar to the SRFIs + RnRS for Scheme.
(Eleventh-century English is ... surprisingly approachable with the right textbook: https://ancientlanguage.com/vergil-press/osweald-bera/ https://youtu.be/YwECgGWCwis?t=508 .)
IIRC Algol was already dead and buried in the 80s; it certainly wasn't relevant or even available on most computer systems an 80s teenager had access to. The most popular high-level language on 8-bit home computers was certainly BASIC; for compiled languages, probably some dialect of PASCAL, and maybe FORTH for something entirely different.
BASIC didn't have to be a bad choice either, Acorn/BBC BASIC even had proper inline assembly.
It was pretty much dead by the mid-70s. PASCAL, C, FORTRAN, and COBOL killed it on mainframes + minis and it was too complex for micros. (Thus - BASIC.)
The twitching corpse lived on in the form of JOVIAL for DOD work (until ADA happened), and CORAL persisted in the UK because of bureaucratic momentum. Simula was another derivative that lasted for a while.
But C and PASCAL were better, simpler, equally productive languages. As soon as they appeared ALGOL 60 didn't really need to exist any more. And ALGOL 68 was an ALGOL too far.
Lest we forget, Pascal itself was an evolution of Algol-W, which was basically Algol with pointers and records. So in that sense, Pascal and the entire family of languages that it spawned are all direct descendants.
Algol 68 doesn't really have anything in common with Algol 60 apart from name. The syntax is completely different, as is the overall feel of the language.
W being Wirth.
Interesting - I remember working with someone that told me that they'd seen an ALGOL codebase finally get decommissioned in 2005 (the replacement system had run in parallel with it for a few years first) after first going live in 1979.
It surprised me because people talk about COBOL and Fortran being dead, but ALGOL always seemed really dead, and I couldn't believe that there was still ALGOL running this century.
I think ALGOL nowadays would work best as sort of a lingua franca pseudocode; I don't think it would be too difficult to make that work. It kind of already was one for a long time, as much of the Collected Algorithms of the ACM were originally written in it, IIRC.
With a few exceptions (like APL), I've always found it pretty easy to read code in most languages, as long as I understood the paradigm, and the original author made at least a tiny effort to use sensible naming.
Imperative languages only have a handful of concepts, like variables, type declarations, looping, branching, function calls, etc., and the language and the context generally make those pretty easy to identify.
The other language types (functional, forth-like, etc.) have similar (but often different) concepts, and once you understand the concept and can see it through the syntax, it's pretty easy to follow along.
Writing new code in a new language is the difficult part.
How many of them could pass Knuth's man or boy test [1]?
[1] https://en.wikipedia.org/wiki/Man_or_boy_test
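For reference, a C++ rendering of that test, with std::function thunks standing in for Algol 60's call-by-name parameters; any conforming compiler should print -67 for k = 10:

    #include <cstdio>
    #include <functional>

    int A(int k, std::function<int()> x1, std::function<int()> x2,
          std::function<int()> x3, std::function<int()> x4,
          std::function<int()> x5) {
        // B captures k (and itself) by reference, so its decrement of k is
        // seen by this call frame of A -- the crux of the original test.
        std::function<int()> B = [&] {
            --k;
            return A(k, B, x1, x2, x3, x4);
        };
        return k <= 0 ? x4() + x5() : B();
    }

    int main() {
        std::printf("%d\n", A(10, [] { return 1; }, [] { return -1; },
                              [] { return -1; }, [] { return 1; },
                              [] { return 0; }));  // prints -67
    }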
Key Innovations Introduced by ALGOL 60:
Block Structure: It introduced the concept of structuring code into blocks, which paved the way for modular programming.
Formal Syntax: ALGOL 60 used Backus-Naur Form (BNF) to formally define its syntax, setting a standard for future language specifications.
Recursion: It supported recursive function calls, which was revolutionary at the time.
Lexical Scoping: It allowed nested functions and controlled variable scope.
Platform Independence: It aimed to be machine-independent, making it suitable for documenting algorithms.
My friend Colin Broughton completed the first full implementation of the Algol 68 spec. Apparently there was some concern that it was not going to be implementable!
https://en.wikipedia.org/wiki/FLACC
Reading the ALGOL 68 spec[1] today, it feels like the complexity is mainly in the unusual verbiage used (which, as I understand it, was equally unusual at the time, having been introduced specifically for it). The language itself is pretty mundane by modern standards.
[1] http://www.math.bas.bg/bantchev/place/algol68/a68rr.html
Algol 68 ftw!