If you like Forth but find it challenging to build real stuff with, Factor (https://factorcode.org/) keeps most or all of the good parts of Forth in a design that's much easier to get things done with. It was created by Slava Pestov (who I think later had a big hand in Swift), and honestly it's a lot of fun to build webapps and other programs with, and much less brutal to read than Forth can be.
I have very fond memories of programming in PostScript within NeWS/HyperNeWS - it did quite a few things that I've never seen in any other environment.
Edit: To be fair relying on PostScript probably did limit the appeal, but I actually really liked it.
>Arthur van Hoff wrote PdB, and we used it to develop HyperLook (née HyperNeWS, née GoodNeWS). You could actually subclass PostScript classes in C, and vice versa!
>That's interesting! I love Forth, but I love PostScript even more, because it's so much like Lisp. What is it about PostScript that you dislike, that doesn't bother you about Forth?
>Arthur van Hoff wrote "PdB" for people who prefer object oriented C syntax to PostScript. I wrote some PdB code for HyperLook, although I preferred writing directly in PostScript.
Leigh Klotz used PdB at Xerox PARC, and wrote this about it here:
>OK, I think I’ve written more PostScript by hand than Jamie, so I assume he thinks I’m not reading this. Back in the old days, I designed a system that used incredible amounts of PostScript. One thing that made it easier for us was a C-like syntax to PS compiler, done by a fellow at the Turing Institute. We licensed it and used it heavily, and I extended it a bit to be able to handle uneven stack-armed IF, and added varieties of inheritance. The project was called PdB and eventually it folded, and the author left and went to First Person Software, where he wrote a very similar language syntax for something called Oak, and it compiled to bytecodes instead of PostScript. Oak got renamed Java.
Syntactic Extensions to PdB to Support TNT Classing Mechanisms:
Most of the built-in HyperLook components were written in C with PdB.
I wrote HyperLook wrapper components around TNT 2.0 (The NeWS Toolkit) objects like pie menus, Open Look menus, sliders, scrolling lists, buttons, etc. I used them in the HyperLook edition of SimCity, which you can see in this screen snapshot:
Arthur later went on to join Sun (James Gosling's "First Person" group), wrote the Java compiler in Java and AWT, then left Sun to form Marimba, where they developed "Castanet" (push code and content distribution) and "Bongo" (a HyperCard/HyperLook for Java, with a WYSIWYG UI editor and a script editor that dynamically ran the Java compiler to compile and hot-patch scripts attached to objects on the fly, which was groundbreaking at the time, though IDEs do it all the time now).
>Marimba was formed in early 1996 by four members of the team that created Java. Kim Polese, Jonathan Payne, Sami Shaio, and I left Sun Microsystems and founded Marimba with the goal to build commercial consumer applications written entirely in Java.
>While at Sun we concentrated on creating a great multi-platform, portable, efficient, object-oriented, multi-threaded, and buzzword-compliant language. However, we paid too little attention to developing tools. In early 1996 Java was largely still a language for skilled programmers who are happy with emacs, a Java compiler, and lots of coffee. Luckily these so-called "Rambo" programmers loved Java and made it very successful.
>Creating large applications in Java turned out to be much harder than we had anticipated, so we decided that we needed better tools before we could build better applications. That is why we created Bongo. Bongo is a tool that allows you to quickly create a user interface using a variety of widgets, images, audio, and animation. After you have created a user interface you can script it in Java, or you can easily hook it up to a Java program.
>Bongo is a high-level tool that provides a clean separation of semantics and design elements.
>It allows multi-disciplinary teams to work simultaneously on a large application without getting in each other's hair. You will find that it is a very powerful tool that is great for creating good-looking, functional, but still very flexible user interfaces. In addition to the standard widgets, Bongo enables you to extend the widget set by creating new widget classes in Java.
>This means that you can develop your own set of widgets which are easily integrated into user interfaces developed with Bongo.
>One of the great features of Bongo is its capability to incorporate applets into user interfaces.
>This enables you to use applet creation tools from third-party software vendors to create components of your user interface and combine these components into a single consistent application using Bongo. This is the way of the future: In future releases, Bongo will naturally support Sun's JavaBeans which will further simplify the process of integrating components created by different tools. This way, you can choose the tools that are appropriate for the job, rather than being stuck with the tools provided by the environment.
>A lot of the ideas behind Bongo are based on a tool called HyperNeWS which I developed for the NeWS window system during the late '80s (NeWS was another brain-child of Sun's James Gosling). HyperNeWS used the stack, background, and card model which was popularized by Apple's HyperCard. Bongo goes a lot further than HyperNeWS by allowing arbitrary container hierarchies and scripting.
>I am really excited that Danny has written this excellent book on Bongo. It clearly explains the concepts behind Bongo, and it takes you through many examples step by step. This book is an essential tool for all serious Bongo users.
>Have fun,
Arthur van Hoff,
Chief Technology Officer,
Marimba, Inc.
> The Factor UI is a GUI toolkit together with a set of developer tools, written entirely in Factor, implemented on top of a combination of OpenGL and native platform APIs: X11, Win32 and Cocoa.
> UI gadgets are rendered using the cross-platform OpenGL API, while native platform APIs are used to create windows and receive events. The platform bindings can also be used independently; the X11 binding has also been used in a Factor window manager, Factory, which is no longer maintained. The Cocoa binding is used directly by the webkit-demo vocabulary in Factor.
Fascinating. Probably dead and no mention of Wayland, but fascinating.
The cross-platform UI that Factor has works on macOS, Windows, and Linux. On Linux it unfortunately still uses GTK2 with the GtkGLExt OpenGL widget that we render into, but modern GTK3/4 has Gtk.GLArea, which we need to switch to and which will improve compatibility on Wayland. However, it works fine even on the latest Ubuntu 25.10 release.
And of course, you could use other libraries easily, such as Raylib:
In my first proper job as a software engineer I wrote a bunch of Forth for "fruit machines". I don't know what the US equivalent would be but they are low stakes gambling machines which are quite common in UK pubs. The core processor was a 6809 and Forth was chosen because the interpreter was super small and easy to implement. I really appreciated the quick interactive way you could update and tweak code as you tested it. I did get slightly weary of having to keep the state of the stack in your head as you DUP and SWAP stuff around but that was probably due to my inexperience and not decomposing things enough.
They continued to use Forth as the basis for their 68000-based video gaming machines, although when it came to the hand classifier for video poker we ended up using C - mostly because we wanted to run a lot of simulations on one of these newfangled "Pentium" processors to make sure we got the prize distribution right to meet the target repayment rate of ~98%.
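To make the stack-juggling point concrete, here's a tiny made-up example in the same spirit (not actual fruit-machine code, just core Forth words):

    : TOTAL-JUGGLED ( payout scale bonus -- total )  ROT ROT * + ;
    : TOTAL-STASHED ( payout scale bonus -- total )  >R * R> + ;  \ park bonus on the return stack
    100 2 5 TOTAL-JUGGLED .   \ prints 205
    100 2 5 TOTAL-STASHED .   \ prints 205

Either way you are tracking three items in your head while reading the definition; what really reduces the load is factoring into smaller words with simple stack effects.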
Stepping away from Forth in particular, one of the benefits of a stack-based / concatenative language is that it's easy to implement on constrained hardware. uxn [1] is a great example of that.
And shameless self-promotion: if you're interested in how these kinds of languages compare with more traditional name-based languages, with more theoretical constructs like the lambda calculus and combinatory logic, and with gadgets like a PyBadge - well, you're in luck! I gave a talk about exactly that at the final Strange Loop [2].
This is long-winded, but maybe you have some thoughts here.
I've been building a DOM-builder API recently in Rust. The existing options (there are many) tend to use textual templating, which can't reason well about anything, or basic macros, which never support indentation. I wanted something that was just code, where I'd be in full control on the indentation (or lack thereof) programmatically. The closest equivalent is Python's Dominate [1], though its indentation mechanisms are buggy.
So I built a system using the traditional tree where Nodes own other Nodes at random addresses, and I built a renderer that renders those nodes and concatenates their strings recursively. It ended up working, but it was hacky and very slow for large inputs. In release mode, it was taking almost a minute to render 70 files, and I wanted it about two orders of magnitude faster.
I ran it through profilers and optimized it a bit, but wanted to see if I could simplify the architecture and reduce the amount of work the computer had to do. I read about flattening ASTs [2] and how, by optimizing that format, you can end up with a sort of bytecode [3]. I also looked into Data-Oriented Design, watching Mike Acton's famous talk [4], Andrew Kelley's talk about DoD in Zig [5], and reading through the DoD book by Richard Fabian [6].
I ended up with something that works quite well for traversing and rendering, which is a stack that can be traversed and rendered in O(n), but I lost my nice Dominate-like API. As in, I can build these beautiful, flat trees, but to embed those trees in my code, I need to either materialize a tree in the traditional style first and then push it onto these stacks, or do some sort of macro magic to make these stack pushes.
I wonder if this is a common issue with stack-based programming. It is, in my case, quite simple for the computer, but hard to fit into an API without building up the stack manually!
Many people glorify the simplicity of Lisp as an interpreter, but Forth is similar and underappreciated. Sadly, the only code I've written in Forth is... PostScript. Yeah, PostScript is a dialect of Forth. As a child, I really was amused by the demo of GraFORTH on Apple ][, which included 3D wireframe animations, which at the time were magical.
> As a child, I really was amused by the demo of GraFORTH on Apple ][, which included 3D wireframe animations, which at the time were magical.
I originally wrote GraFORTH (https://archive.org/details/a2_GraFORTH_1981_Lutus_Paul) to escape the slow world of integer BASIC on my first computer (an Apple II). Because it relied on large blocks of assembly code to produce nice graphics, it perhaps misled people about what Forth could do on its own.
Someone mentioning childhood tech and the creator showing up is peak HN, in the best possible way. I love little threads like this... I never used a Forth as a child, but I recall reading about it and marvelling over it at a time when getting hold of huge amounts of pirated games was easy, but finding anywhere to even buy more serious tools could be a challenge... I think it was probably 20+ years before I actually ended up trying a Forth.
Given you were around at about the correct time period, could you hazard a guess at what dialect this very old Forth game from Byte magazine was written in?
It has some graphics commands in it that I couldn't find in any other version of Forth on the Apple II. I'm a little outside the Apple II demographic, since they didn't really take off in the UK - although the very first home computer I ever used was an Apple II owned by the father of the guy that founded Rockstar Games :-)
> could you hazard a guess at what dialect this very old Forth game from Byte magazine was written in?
The writeup identifies the original Forth source/version as most likely FIGForth '78, so I assume that's correct. This doesn't mean it has no code borrowed from elsewhere, and we might never sort that out.
I should add that Forth has the property that you go from nothing to writing programs pretty quickly, because it's all based on RPN (like HP calculators) and there's very little infrastructure required to create a usable environment -- unlike virtually every other language I've created/used.
My having been a fan of HP calculators beforehand played a part in getting me started with Forth -- RPN was an aspect of Forth I didn't have to learn before getting started.
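A tiny illustration of the RPN point, using nothing beyond core Forth words: arguments go on the stack first, then the word that consumes them, so there is no expression grammar to implement at all.

    2 3 + .                        \ prints 5
    : SQUARE ( n -- n*n )  DUP * ;
    5 SQUARE .                     \ prints 25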
Remember also that the 6502 (the Apple II processor) had a rather easily understood assembly instruction set, which meant any adept 6502 programmer could basically decode and grab other people's work without needing a source listing. No longer true for modern processors.
Guess how we updated each other during program development and updates? Ready? 5 1/4 inch floppy disks stuffed into big manila envelopes, then snail-mailed. No, not making this up.
Yup. I am old enough to have posted cassettes rather than 5.25" floppies, although I was using a Z80-based machine that ran Forth and took 5.25" floppies as late as the early 90s. Wildly specialised bit of kit, and I wish I'd stolen it when I had the chance because it's probably been thrown over the side of an oil rig by now.
The first Forth machine I used was the Jupiter Ace, which was a home computer sold in the UK by a couple of guys who spun their company off from Sinclair. It was a bit too underpowered and a bit too late and a bit too weird to really "land" - everyone either had a ZX Spectrum or a Commodore 64 by then, and schools and rich kids had BBC Micros (I got right into 6502 machine code on the one we had in school when I was about 9, and then got my hands on its predecessor, the Acorn Atom). I also had a couple of Epson HX20s that the company my dad worked for had used as data loggers, which had Forth ROMs fitted. That's how I got right into 6809 programming, and that chip is ludicrously suitable for Forth!
I got Cosmic Conquest working on an Apple II emulator but the support code around getting it working is frankly terrifying. I used a fig-Forth disk, and wrote my own implementations of the graphics words used there, which I guess is what the original author did.
Tracking them down has so far proved impossible, and it's quite likely they are no longer around.
I used GraFORTH, that was so cool! I owe you a beer for pirating it. I also (like most Forth enthusiasts) developed my own Apple ][ Forth, based on FIG-FORTH, with its own graphics libraries and ProDOS integration, and used it to write terminal emulators.
Then I discovered Mitch Bradley's Sun Forth (aka ForthMacs, Open Firmware, IEEE 1275-1994), which was originally based on Laxen and Perry's Forth-83, but has a metacompiler and can target many platforms, word sizes, and CPUs.
More thoughts and links on Mitch Bradley, Open Firmware and Forth programming:
Has anybody else ever had the dubious experience of using "Cap'n Software Forth"? That's what John Draper wrote [Sl]EasyWriter with (which he wrote on work furlough from the Alameda County Jail). During the '90s SF Raves scene I would always carry some emergency tobacco around as repellent, just in case I ran into him.
>The first Forth system I used was Cap'n Software Forth, on the Apple ][, by John Draper. The first time I met John Draper was when Mike Grant brought him over to my house, because Mike's mother was fed up with Draper, and didn't want him staying over any longer. So Mike brought him over to stay at my house, instead. He had been attending some science fiction convention, was about to go to the Galapagos Islands, always insisted on doing back exercises with everyone, got very rude in an elevator when someone lit up a cigarette, and bragged he could smoke Mike's brother Greg under the table. In case you're ever at a party, and you have some pot that he wants to smoke and you just can't get rid of him, try filling up a bowl with some tobacco and offering it to him. It's a good idea to keep some "emergency tobacco" on your person at all times whenever attending raves in the bay area. My mom got fed up too, and ended up driving him all the way to the airport to get rid of him. On the way, he offered to sell us his extra can of peanuts, but my mom suggested that he might get hungry later, and that he had better hold onto them. What tact!
As annoying and creepy as he is, he does have a lot of great stories to tell...
My favorite John Draper story -- not sure if it's true, heard it from several sources.
One day IBM called me up and asked if I would write them something like Apple Writer, for their new PC. I instantly asked, "Under what terms?" I think that surprised them -- I was wrongly rumored to be all programmer and no business sense.
They replied, "We give you $100,000 in royalties, after which we own the program." I thought a bit and said, "Hmm ... $100,000 ... that's about 15 days of Apple Writer royalties." A long silence on the phone line.
So they realized I wasn't going to write anything for them. Then, according to rumor, they asked John Draper and he agreed -- he wrote them a word processor. A really terrible one.
After IBM voluntarily withdrew his program from the market, Draper is rumored to have said, "They asked for a $100,000 program and I gave them one."
Yep, we were pretty snotty in those days. But then, we didn't have to compete with AI.
The difference between Forth and Lisp could not be more pronounced. Forth source code has entirely implicit structure, you can't even tell which function is called on which arguments. Lisp has entirely explicit structure which makes it much easier to read and edit. Lisp needs only a single primitive (lambda) to create the entire programming language, whereas Forth needs many primitives which break the core idea of the language in order to be usable. All of what is elegant about Lisp is ultimately lacking in Forth.
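To make the contrast concrete with a minimal example:

    \ Lisp: (* (+ 2 3) 4) spells out which function gets which arguments.
    \ The equivalent Forth leaves that structure entirely implicit:
    2 3 + 4 * .   \ prints 20; nothing in the source marks what + and * consume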
I think I'd agree from a mathematical perspective that lisp is more elegant, but implementation-wise, I really do like Forth's simplicity. They're both really cool.
I had a copy of that as well - I forget whether it was a Christmas gift or if I bought it. The demos were neat, but I was lacking in ideas when I had time to play with it, and the Apple didn't go to college with me.
But if I were going to do some "from the ground up, using first principles, with nobody else's libraries" embedded work, Forth would certainly be something I'd consider.
Like many here, the annual "language" issue of Byte Magazine in 1980 introduced me to Forth. Although I was enrolled as an engineer in college, I was a frustrated (mediocre) programmer, and my upstate NY institute did not offer a Comp Sci degree at that time. Forth was a gateway drug that demystified the concept of compiling for me. Prior to Forth, when a thousand freshman engineers were writing their Fortran IV projects for an 8am class the next morning on mainframe 3270 terminals, we imagined that the operators must have had to continuously pour water on the compiler to keep it cool. Yeah, it was 1980 and computers were still a little magic.
But Forth and threaded code were a life changer; they explained so much! In my third year I partially implemented a 32-bit Forth in IBM S/360 Assembler, but getting I/O to work was my downfall (mostly due to my poor skills and lack of experience). The threaded interpreter and the basic stack ops all worked, though. But then I was introduced to Lisp...
But my love of Forth never left me. I made my living with C in the early days, but it's predominantly SQL and Bash these days. When I had my second (third?) midlife crisis, I got two tattoos on my arm: one is the Y combinator in Lisp (I 'lost' a trailing parenthesis due to a cut-and-paste error in the template given to the tattoo artist, so I had to go back and get another tattoo with an error message pointing out the missing parenthesis); the second tattoo is the implementation of an ANSI Forth word:
: ? @ . ;
The fact that I could write an entire function with only punctuation characters was mind-blowing and reminds me to approach problems in unique ways. The tattoos are also great ways to start up conversations in bars...
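For anyone who doesn't read Forth, a quick gloss of that word (the VARIABLE below is just an illustration):

    : ? ( addr -- )  @ . ;   \ @ fetches the cell stored at addr, . prints it
    VARIABLE COUNTER
    42 COUNTER !             \ store 42 at COUNTER's address
    COUNTER ?                \ prints 42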
I've had a soft spot for Forth and am toying with a silly Forth-like interpreter for web programming ... if not for actual use, at least for some fun time. One concept it adds is the notion of a "current selection" which can define the available vocabulary to use and is used to select and work with DOM elements. Just experimenting.
Edit: As a kid, I disliked BASIC as a language though it let me do fun stuff. So I made an interpreter in BASIC for a language I'd like and it turned out Forth-like (and I didn't know about Forth at that time). I guess I'm still that kid some 35 years later.
I used to be a fan of these languages like Lisp and Forth and Joy (and Factor and Haskell), but then I found that what I really long for is just (untyped) lambda calculus (as a universal language). (Combinatory logic is just a similar representation of lambda calculus, but the differences go away quickly once you start abstracting stuff.)
I think expressing semantics of all (common) programming languages in lambda calculus would give us a solid foundation for automated program translation. And we should aim for that, the babel tower of languages doesn't really help anyone.
The current issue I have is with type theory. So I am trying to embed the notion of types directly into the lambda terms, so they would sort of "automatically typecheck" when composed. Crucial to this, in my opinion, are lambda terms that do not use lambda abstraction in their body, because you can think of these terms as mini-DSLs that have yet to be interpreted.
Anyway, once we can translate the calculus of constructions (and other common formal logics) into untyped lambda calculus, it will also help us do automated theorem proving. It must be theoretically possible, but to my knowledge nobody has really done this sort of hardcore formalization.
I implemented the Calculus of Constructions in untyped lambda calculus in order to shorten the 643 byte C program computing Loader's Number to 1850 bits (under 232 bytes) [1], as one of the milestones reached by my functional busy beaver function [2].
I think this is great. I think you should write a paper on it.
I suspect it might need some kind of commutative diagram proof, i.e. if you express things in CoC formalized within BLC you will get the same result as when you express them in BLC formalized within CoC; I am not sure off the top of my head.
(Kind of similar to showing that self-interpreting quoted interpreter on quoted program is the same as quoting the result of running the interpreter on the program.)
And of course, this proof of equivalence should have formalization both in CoC (Lean?) and BLC.
My hope is eventually someone writing a book on logic where the metalogic will be just untyped lambda calculus. Proofs will be just beta-reduction and judgemental equality. And everything will be in the form, let's study properties of these lambda terms that I came up with (the terms will of course represent some other logic such as CoC, simply typed LC, or even LC itself etc.).
Grounding programming languages in mathematics like this is essentially the goal of Strachey and Scott's denotational semantics, which has been very influential in programming language theory:
Not really a big fan, because the formalization of DS always left something to be desired, but I think a big difference with formalization in ULC is that ULC is (mostly) materialist while DS is structuralist.
So the initial formalization into ULC can be automated - if you have semantics of your language already implemented as an interpreter in another language, you can use this as a starting point.
With DS - I am not sure. I feel most people who build new languages don't provide DS specification.
It's a real cool idea to compile everything down to lambda calculus and then you solve all semantics issues. (If something fits) you can convert 1:1, use general libraries in one language in others without loss etc. Ah, what a beautiful world it could be!
Conversion between spaghetti stacks and pure stack programming (in which the stack contains numbers and no GC) has a massive translation cost if you go from LC to Forth and back.
Forth is an imperative language and as such you will have to model memory state (at least) somehow, if you want to use purely functional language as a representation. But that's the cost of doing business.
The thing is though, you don't translate to LC for performance, but for understanding. At any point, the LC evaluator might recognize a known subterm and replace it with a known equivalent. Depending on the goal, that might help to improve evaluation performance (skip known steps of beta reduction), reduce memory consumption (reduce size of the term), or improve readability (refactor code to use better known abstractions).
It's difficult to understand what they were actually doing, but reading between the lines, it sounds like an advantage of 'machine Forth' is writing some Forth words in assembly? I can see why that would run much faster for a jpeg decoder.
Well, I feel that a lot of code is written again and again, just in different languages. If we could automatically compare and translate different implementations, I think it would be beneficial for finding bugs.
Every time somebody comes up with a new programming language, I am like: yeah, so you added these abstractions, just in a different syntax. I think people who come up with new languages should implement their primitives on top of lambda calculus (which really is the simplest logical system we know); then we could potentially have automated translators between languages, and that way we could cater to everyone's preferences and expand a standard library across different languages.
So in short, yes, I think proliferation of programming languages without formal understanding of their differences is detrimental to interoperability of our computer systems.
It would also allow a wider notion of metaprogramming: automated manipulation of programs. For example, let's say I need to upgrade my source code from one interpreter version to another. If both interpreters are represented as sets of terms in lambda calculus, I can see how to express one in the other and formalize that as some kind of transformation. No more manual updates when something changes.
It would also allow to build a library of universal optimizations, etc. So I think programmers would benefit from having a single universal language.
> Every time somebody comes up with a new programming language, I am like: yeah, so you added these abstractions, just in a different syntax.
I got this feeling too, until I started to explore languages outside of the C/Algol-like syntaxes. There is a wide range of languages out there, from array languages to lisps, and they don't give me the feeling of "just a different syntax" but actually changed the way I think.
So yeah, I love lisp now and spend most of my days writing it, but it also comes with the downside that now Java, C# and Golang look more similar than different to each other.
> It would also allow to build a library of universal optimizations, etc. So I think programmers would benefit from having a single universal language.
I think assuming everyone would use the same hardware, same environment and same workflows to solve the same problems, this would make a lot of sense and would be hugely beneficial!
But in reality lots of problems need different solutions, which has to be made in different ways, by different people who think differently. So because the world is plural, we need many programming languages too. Overall I feel like that's a benefit.
> they don't give me the feeling of "just a different syntax" but actually changed the way I think.
Not only that, but languages that enable more concise expression of an idea (without losing clarity for readers) reduce the error rates of programs written in them.
It's been proven that when not accounting for constraints imposed by a compiler/interpreter (like the Rust borrow checker), the average error rate per unit size of programs across widely varying languages is constant. So reducing the number of lines/expressions required to express ideas reduces the number of errors by the same factor. Brevity pays when readability isn't sacrificed.
> assuming everyone would use the same hardware, same environment and same workflows
They don't have to. I would ideally represent HW, OS and compiler in the LC as well. Then primitives of the language that could be proven to be HW/OS/compiler-independent could be abstracted and readily translated. For the ones that could not, well, you have an exact description of what the differences are.
Racket is at least related to the kind of metalanguage system you're talking about. I've never actually done it, but to implement a new "#lang" in Racket, your job is essentially to write a "reader" for it that transliterates it to the classic Schemey Racket language. Libraries written in one #lang can then be called from one another (or not, if that's what you want -- a lot of the point of this in Racket is building little restricted teaching languages that have limited capabilities).
That's a rather cheap retort. I am not saying everybody should use raw untyped lambda calculus for their programming, just that we would all benefit if we could translate languages we use to and from it, because then we could interoperate with any other code, refactor it, etc.
A CIL to LC compiler is effectively an emulator of CIL in LC. That is, every primitive of CIL has a corresponding LC term, which operates on the CIL execution state. So then you can express functions from .NET standard library as terms in LC.
Now let's say you do the same for JVM. Now you can start looking at all these lambda terms and search for some similarities. For example, you might notice that some list functions are equivalent if you transform the representation in a certain invertible way. This gives you a way to express functions from one standard library in the other.
In general, I think we should try to translate a wide variety of human programs into lambda calculus and then try to refactor/transform the lambda terms to spot the common patterns.
That sounds very thankless, but on the other hand we have very fast computers and maybe "the bitter lesson" of just throwing more compute at finding patterns can be applied here, as well.
It divides effort, spreads it too thinly among too many disparate projects with essentially the same goals, and as a result, they all advance much more slowly.
Examples: how many successors to C are there now? Hare, Odin, Joy, Zig, Nim, Crystal, Jai, Rust, D... And probably as many again that are lower-profile or one-person efforts.
For a parallel example, consider desktop environments on FOSS xNix OSes.
I have tried to count and I found about 20.
A "desktop" here meaning that it provides a homogenous environment, including things like a file manager and tools to switching between apps, plus accessories such as text editors, media viewers, and maybe even an email client, calendar, and/or address book. I am trying to explicitly exclude simple window managers here.
The vast majority are simply re-implementations of the Windows 9x desktop. Taskbar along 1 edge of the screen, with buttons for open apps, start menu, system tray, hierarchical file explorer, a Control Panel app with icons for individual pages, etc.
This includes:
* KDE Plasma (and Trinity)
* GNOME Flashback (AKA GNOME Classic, including the Consort fork)
* Cinnamon
* Xfce
* Budgie
* MATE
* LXDE (including Raspberry Pi PIXEL)
* LXQt
* UKUI (from Ubuntu Kylin, openKylin, etc.)
* DDE (from Deepin but also UOS, Ubuntu DDE and others)
* Enlightenment (and Moksha etc.)
* ChromeOS Aura
And more that are now obsolete:
* EDE
* XPde
* Lumina
That's about 15, more if you count variants and forks. There are more.
The main differences are whether they use Gtk 2, 3 or 4, or Qt. That's it.
It's easier to count the ones that aren't visibly inspired by Windows >= 95:
Arguably: GNUstep (whose project lead angrily maintains it is not a desktop after all), and the long-dormant ROX Desktop...
So, arguably, 3 you can run on a modern distro today.
CDE is older than Linux or Free/NetBSD so doesn't count. I only know 1 distro that offers it, anyway: Sparky Linux.
MAXX Interactive Desktop looks interesting but it's not (yet?) FOSS.
All that effort that's gone into creating and maintaining 8-10 different Win9x desktops in C using Gtk. It's tragic.
And yet there is still no modern FOSS classic-MacOS desktop, or Mac OS X desktop, or GEM desktop, or Amiga desktop, or OS/2 Workplace Shell... it's not like inspiration is lacking. There are at least 3 rewrites of AmigaOS (AROS, MorphOS, AmigaOS 4.x) but despite so much passion nobody bothered to bring the desktop to Linux?
Defenders of each will vigorously argue that theirs is the best and there are good reasons why it's the best, I'm sure, but at the end of the day, a superset of all of the features of all of them would not be visibly different from any single one.
> There are at least 3 rewrites of AmigaOS (AROS, MorphOS, AmigaOS 4.x) but despite so much passion nobody bothered to bring the desktop to Linux?
The passion is there for the whole AmigaOS, of which the desktop metaphor, Workbench, is just a part. What fun is AmigaOS without Exec, Intuition and AmigaDOS? The passion is to see AmigaOS run, not to see Linux wearing its skin.
GUIs for manipulating files à la Workbench are readily available; nobody seems to have built an Amiga-skinned one when a Win95 one will do. DOpus is already a clone of Midnight Commander, and there are clones of that aplenty; the most DOpus-like one I've seen is Worker (http://www.boomerangsworld.de/cms/worker/)
Given this visible interest in running Amiga stuff on Linux and integrating AmigaOS (and AROS) I am very surprised that in ~30 years, nothing has progressed beyond a simple window manager.
Intuition isn't that big or complicated. It's already been recreated several times over, in MorphOS and in AROS.
I am so tired of seeing Linux desktops that are just another inferior recreation of Win95.
I want to see something different and this seems such an obvious candidate to me.
I think you can categorise Amiga enthusiasts in various ways, this is my taxonomy:
1. Hardware enthusiasts who specifically love the Amiga's original hardware, its peripherals, and the early post-Commodore direction (PowerPC accelerators), and/or modding all of the above. These sort of people used WarpOS back in the day and probably use MorphOS or AmigaOS 4 today. The question is whether, for these people, modern single-board computers "count" as Amigas or not.
2. Nostalgic enthusiasts of the system that the Amiga was, who are happy with a real Amiga, or with an emulated one, or an emulated one running on some board in a box shaped like an Amiga. Possibly with a non-Amiga UI to boot some classic games. These enthusiasts may enjoy fake floppy drive sounds that remind them of booting disks in their youth.
3. Software enthusiasts of the Amiga's OS, and the directions it took that were different from its contemporaries, and the software ecosystem that came from it. These people have a longer user-startup than startup-sequence. They probably have most of Aminet downloaded. These people might be interested in other alternative OSes, e.g. QNX or BeOS. If they're still using Amiga hardware, or emulators, they'd be interested in AmigaOS 3.5/3.9 and 3.1.4/3.2. This can also include AROS and the work to get it running on native hardware, not just m68k but also x86 and arm... but it's unlikely that it will ever support as broad a range of hardware that Linux does, which limits how many people would want to use it, because it's unlikely to be able to drive a random modern laptop.
4. The reverse of 3, Amiga users that were big UNIX fans, e.g. Fred Fish, the sort of people who ran GeekGadgets and replaced their AmigaShell with pdksh. They probably just moved wholesale to Linux and didn't look back.
There are probably other categories, but I think the one you're looking for is 5: enthusiasts of the Amiga's look and feel, but not its broader OS or its software. If they did care about that, they'd be in groups 2 and 3, and emulators or alternative AmigaOSes would satisfy them most.
I can't say why there aren't many alternative desktops for Linux. Probably because it takes a lot of resources to build a full desktop environment for Linux: a window manager, or even just an existing window manager theme, is not enough. A file browser is not enough. Ultimately it takes the applications themselves to play along, which only works when you have the clout to make people write software in your style (e.g. KDE, GNOME, Windows, macOS, Android, etc.).
The only alternative UI taken from retro machines to Linux, that I can think of, is ROX Desktop (https://en.wikipedia.org/wiki/ROX_Desktop) with its ROX-Filer... and even that doesn't look entirely like RISC OS, which you could be running instead of Linux.
You say a WM isn't enough, and I agree, but in this case, amiwm is still right there.
I own an Amiga and I'm interested and I try to cover Amiga news, but back in the 1980s, I was an Archimedes owner. I loved RISC OS and I never mastered AmigaOS. This is not something I personally want, although I'd love to write about it.
I feel the same, too many people doing similar stuff in slightly different syntax, too few people looking at how things are similar and could be unified.
I think it's time to look beyond syntax in programming and untyped lambda calculus is the simplest choice (that is universal and can easily express new abstractions).
Mathematics suffers to some extent from a similar problem, but recent formalization efforts are really tackling it.
The thing is, the most fundamental obstacle to unification, is that unification is a very hard feature to obtain. Even in an LC formalization, do you expect that two "text editor" programs would be interchangeable? Would only a canonical text editor be allowed? Does the LC facilitate arbitrary extensions, that perhaps follow some rules at the interface/boundary? While I also lament what is, on the whole, wasted work and interest, I think the alternative is not some much simpler solution, but rather to offer something that is better. https://xkcd.com/927 is a warning, as true as it is annoying. We gravitate towards variety because it is simpler and natural to how we act; how will your proposal fundamentally improve on this? You call out misguided "defenders", and again I note the same problem, but you seem to be calling for One True Way with no significant realizable improvement.
Sometimes you can prove mathematically that two different approaches are equivalent, but differ only in name or some parameter, and that's a unification of sorts, without proliferation of standards.
In case of UIs, this is still an open problem and will be for many years, but I suspect we could do it by defining, in some kind of fuzzy logic for example, what a good UI does look like. Then we could transfer elements from one UI into another.
Or we could just start at the edge. For example, we can unify what a "user input" is, and what a "widget on the screen" is. Formalizing everything would allow us to do such transformations on programs, for example, universal input/output remapping. And then we could continue to unify into more specific logic of what input is being read and how things are drawn on the screen.
Untyped lambda calculus (and its sibling combinatory logic) is just a language for expressing logic, nothing more. And it already exists (arguably it's one of the first programming languages, predating Forth and Lisp by at least two decades) and is among the simplest ones we know. I actually became interested in LC so much recently because, believe it or not, expressing things in classical logic is often more complicated than expressing things in LC.
> Sometimes you can prove mathematically that two different approaches are equivalent, but differ only in name or some parameter, and that's a unification of sorts, without proliferation of standards.
Certainly, but that would already be supposing most of the work to unify at the source level is done, unless an extremely strong normalization is possible.
> Untyped lambda calculus (and its sibling combinatory logic) is just a language for expressing logic, nothing more. And it already exists (arguably it's one of the first programming languages, predating Forth and Lisp by at least two decades) and is among the simplest ones we know. I actually became interested in LC so much recently because, believe it or not, expressing things in classical logic is often more complicated than expressing things in LC.
(Personally, I prefer combinators slightly more)
I don't deny the power of LC for some uses, but taking one program, reducing it down to an LC equivalent, and then resurfacing as another program (in a different language, but otherwise equivalent), or some other program transformations you may desire, would certainly be elegant in some sense, but very complex. It's like programming in Brainfuck; the language itself is very simple, and making mechanistic tooling for it is very simple, but I don't think the tooling we could invent in 50 years would be sufficient to make Brainfuck simple to read or write. Moreover, formalizations of, say, "button" are not a problem, but scaling to different screens, devices, use cases, and so on will greatly increase the scope. This OS represents input events this way, this hardware provides that sort of data. I think this is the same problem as to why people, everyday, don't bother to make formal arguments almost all of the time. It's not that a formal argument along the lines of "you didn't take out the trash today, so I have reason to be frustrated with you" can't be formulated or proven, but rather that the level of rigor is generally considered both fatiguing and unnecessary.
Any time someone suggests something that should make things much simpler, I'm skeptical. There are things that have essential complexity too great to be made simple, and then humans maybe have an inherent overhead of accidental complexity, above and beyond the accidental complexity we accidentally add. I'm still interested to see where your efforts lead, but I'm not expecting to see cheap nuclear fusion for another 10 years at least, so.
Because it is the simplest thing we have, and has a pretty straightforward self-interpreter.
It feels like you need a lot more metamathematics to deal with typed lambda calculus than with untyped one, and types are something that comes without a justification.
Anyway, the idea is, if you have a language, you can think of source code written in the language as a giant lambda term, where you have all lambdas upfront and only composition in the body. A tree of symbols to be composed, essentially. And then to interpret this source code in a language, you supply definitions of the language's primitives as arguments to the source code term.
Now if your language is typed, the primitives need to be chosen in such a way, so that the interpreted term (the source code applied to language primitives) fails to normalize if the program is not typed correctly.
You can then have a correspondence between the primitives of the typed language that typecheck and a simpler set of primitives of the same language used purely for computation, under the assumption that the program typechecks. This correspondence "defines" the typing mechanism of your language (in untyped lambda terms).
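A toy instance of the idea: take the source term S = λadd. λone. add one (add one one), which has all lambdas up front and only composition in the body. Interpreting it in a particular "language" means supplying the primitives, e.g. the Church encodings add = λm. λn. λf. λx. m f (n f x) and one = λf. λx. f x, after which S add one beta-reduces to the Church numeral for 3.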
Yes, that's deliberate though. (I will call these "concrete" terms, because they lack abstraction in the body, but I am looking for a good name. In fact, no universal one-point basis can be expressed as a single term like that, you need a composition of two; the concrete property is not preserved in composition, although it might be preserved in weaker bases than S,K, such as B,C,K - kind of affine logic.)
Anyway, the reason I am interested in concrete terms is I want to define typing judgements in ULC somehow. (In CoC, all mathematical questions can be rephrased as is there an object of given type T, and you need to be able to formalize questions somehow to formalize metamathematics.)
An obvious definition of a typing judgement in ULC would be: term x is of type B in a typed language A (both A and B are given terms) iff it satisfies Ax = B (the equality is judgemental, after beta-normalization).
However, this definition doesn't work, because a general term x can just disregard A and return B directly. But if we restrict x to be a concrete term in the above definition (possibly with a given number of upfront lambdas; not sure if that is required), then I think we can make the definition work.
I also suspect concrete terms can be sort of "deconstructed from the outside". In general, we cannot resolve equality of terms, but I suspect that concrete terms can be analyzed (and quoted) within ULC.
One thing I realized about your CoC implementation - I think you embed CoC into ULC in an indirect way, working with CoC terms in a quoted way. And that's fine, but what I want is to have CoC in ULC directly, i.e. have a base set of ULC terms for the primitives of CoC language. But that also can be thought of as the CoC interpreter being restricted to apply on concrete terms, giving them meaning by substituting the base inside.
In other words, concrete terms are kind of an "inside view" of quoting. (Maybe we should call them "data terms", because they effectively carry only data and no executable payload.) Having the concept of concrete terms in the ULC metalanguage can also help to define interaction with the external world: that you only accept data, not executables, something that you kinda do through the "monadic" interface. (You need to "remember" that a term only accepts quoted data; it cannot be made explicit in the ULC language. The advantage of concrete terms is that they are independent of your choice of quoting operator.)
Anyway, I am not sure if I am making sense, I am trying to grapple with this myself, so it's OK if you don't think concrete terms are a useful concept.
No, that equation can be created as a special case from equation TAx = B (where Txy = yx is a transposition combinator).
> Not if you require x to be strict, i.e. it must use its argument.
I am not sure it would generally help but maybe you mean it only in case of equations xA=B (not Ax=B as I suggested). I also think this strictness condition would be too limiting.
In equation Ax=B, we can take A of the form R_n A_1 .. A_n, where R_n reduces Ax to x A_1 .. A_n (R_n rotates x in front of n arguments).
So if x is concrete and takes n arguments, we can think of it as "text" (parenthesised expression) in symbols that are to be interpreted as A_1, .. A_n. Requiring that every possible symbol is used in a language text fragment is a weird condition, which doesn't make much sense to me.
In any case, my definition of typing judgment then allows putting a chosen condition on a text composed from chosen symbols, which is quite powerful and can be used to e.g. determine whether a quoted expression is valid. And arguably it is a very natural definition in the context of ULC. (I read To Mock a Mockingbird 30 years ago, in a partial Slovak translation, and most problems there could be formulated as the Ax=B equation for x being a concrete term.)
I was captivated by the August 1980 issue of Byte magazine, which had a cover dedicated to Forth. It was supposed to be easy to implement, and I imagined I might do that with my new KIM-1 6502 board. Alas, the KIM-1 was lost when I went to college, and life forced me down different pathways for the next 45 years.
About a year ago I finally began to work on my dream of a Forth implementation by building a Forth-based flight management computer into a spaceflight simulation game that I am working on. Now, instead of writing mostly C# or GDscript code in Godot, I am trying to figure out ways to create a useful device using this awkwardly elegant language. I'm having fun with it.
One of the interesting bits is that I have been able to make the Forth code an entirely separate project on Github (https://github.com/Eccentric-Anomalies/Sky-Dart-FMS), with a permissive open-source license. If anyone actually built a real spacecraft like the one in my game, they could use the FMS code in a real computer to run it.
There is one part of the linked article that really speaks to me: "Implement a Forth to understand how it works" and "But be aware of what this will not teach you". Figuring out the implementation just from reading books was a fascinating puzzle. Once I got it running, I realized I had zero experience actually writing Forth code. I am enjoying it, but it is a lot like writing in some weird, abstract assembly language.
Circa 1980, BASIC was the dominant language for micros because you could fit BASIC in a machine with 4K of RAM. Although you got 64K to play with pretty quickly (1983 or so), it was still a pain in the ass to implement compilers on many chips, especially the 6502, which had so few registers and addressing modes that you were likely to use virtual machine techniques, like Wozniak's SWEET 16 or the atrocious p-code machine that turned a generation of programmers away from Pascal.
FORTH was an alternative language for small systems. From the viewpoint of a BASIC programmer in 1981, the obvious difference between BASIC and all the other languages was that you could write your own functions to add "words" to the language. FORTH, like Lisp, lets you not only write functions but create new control structures, based on "words" having both a compile-time and a run-time meaning.
FORTH's answer to line numbers in BASIC was that it provided direct access to blocks (usually 1024 bytes) on the disk, with a screen editor (a block is just about a screenful on a 40x25 display). You could type your code into blocks and later load them into the interpreter. Circa 1986 I wrote a FORTH for the TRS-80 Color Computer running the OS-9 operating system, and instead of using blocks it had POSIX-style I/O functions.
FORTH was faster than BASIC and better for systems work, but BASIC was dominant. Probably the best way to use FORTH was to take advantage of its flexibility to create a DSL that you write your applications in.
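As a small sketch of that compile-time/run-time split, here's the classic CREATE ... DOES> defining-word pattern (the names here are just examples): the compile-time part builds a new word, and DOES> specifies what that word does when it later runs.

    : VALUE-OF ( n "name" -- )  CREATE ,  DOES> @ ;   \ , compiles n; DOES> @ fetches it at run time
    60 VALUE-OF SECONDS/MIN
    SECONDS/MIN .   \ prints 60

Layering words like this, so the application reads in the vocabulary of the problem, is the usual route to the DSL style described above.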
Why is it that languages like this don't scale? It's not the first time I've seen a powerful language that got forgotten. Other examples include Smalltalk and Common Lisp (tiny community).
Is it because some languages are "too powerful"? What does that say about our industry? That we're still not that advanced a species, not able to handle the full power of such languages?
I say that because it seems languages that are "dumbed down" seem to absolutely dominate our world (Python, Ruby, JS, etc.)
One simpler explanation: in Forth you are forced to keep the stack, and modifications to the stack, in your short-term memory, albeit only really three numbers in most cases. Whereas with C et al. you simply look down the page at the variables, which is far less taxing on your short-term memory.
Well-written and well-designed high-level Forth words often transcend that and tend to be, quite literally, readable, in a way that is incredibly rare to see in C et al. Of course the argument is that other programmers shouldn't be expected to see the problem in the way the original problem solver did.
This is probably why you see things like locals get used a lot as modern Forth programs grow. It doesn't have to be brutal early days Chuck Moore genius programs, but I guess you start getting away from the original ethos.
I think even with locals you're still mentally dealing with a few items on the stack in each word, usually. But, yes, locals do save you from passing items around from word to word: you see the usage of a local far more easily than you see the location of the stack elements.
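A small illustration of the trade-off (note: the {: ... :} locals syntax is the Forth-2012 extension, so not every Forth has it):

    \ Clamp n into [lo, hi], stack-juggling version:
    : CLAMP  ( n lo hi -- n' )  ROT MIN MAX ;
    \ Same word with locals: more typing, but the dataflow is written out.
    : CLAMP2 ( n lo hi -- n' )  {: n lo hi :}  n hi MIN lo MAX ;
    7 0 5 CLAMP .     \ prints 5
    -3 0 5 CLAMP2 .   \ prints 0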
Forth was an excellent way to write a powerful and expressive programming language that could self-host with a bare minimum of assembly language "bare metal" programming.
The fridge-sized computer that Forth was originally developed on had double-digit kilobytes of memory (maybe 8192 words, 16 bits each) and clocked instructions through at a whopping 300 kHz or so. The microcontroller that drives the Caps Lock LED on your keyboard is a hundred times faster, with a hundred times the memory.
These days we do not need to squeeze editor, compiler, and target binary into such a tiny machine. If you're developing for a microcontroller you just use C on your "big" computer, which is unimaginably more powerful.
In the olden days of the 1990s I used a development system for embedded stuff that was written in, and targeted, Forth on a Z80 with a whopping 64 kB of RAM and 5.25" floppies, but that was at least ten years old and five years out of date at the time.
You're probably reading my words on a slice of glass the size of half a sandwich that contains more computing power than existed in the whole world when Forth was first written.
It's a shame because writing something like Forth from the ground up (and I mean, assembly code to load the registers to start the ACIA to begin transmitting text to the terminal) perhaps in an emulated early 80s home computer is a great way to get a sense of what the chip behind it all is doing, and I feel that makes you a better programmer in "real" languages like Go or Python or C.
Find an existing implementation that runs on some computer you already have, or have an emulator for.
Then find a computer you're really into, and port fig-Forth to it, just for fun. Don't copy the source across, type it in with your own changes as you go.
Edit: Don't forget to have fun. That's the most important thing. You're doing this because you *can*, and just to see what will happen.
I was lucky, early in my career, to work at a place which used a lot of Perl and to read Damian Conway’s book, Object Oriented Perl. It was an amazing, mind-expanding book for me. It was filled with examples of different approaches to object-oriented programming, more than I ever dreamt existed, and it showed how to implement them all in Perl.
So much power! And right in line with Perl’s mantra, “there’s more than one way to do it.”
Unfortunately, our codebase contained more than one way of doing it. Different parts of the code used different, incompatible object systems. It was a lot of extra work to learn them all and make them work with each other.
It was a relief to later move to a language which only supported a single flavor of object-oriented programming.
What I've heard is that with Forth, basically no two environments are alike; they're highly customized, meaning every Forth programmer ends up creating his own language for his own needs.
So collaborating is a bit hard like this. The only serious Forth programmer I know lives alone in the woods, doing his thing.
So from an aesthetic point of view, I really like the language, but for getting things done, especially in a collaborative way?
But who knows, maybe someone will write the right tools for that to change?
This is not a real issue, because the same thing can be said about C. No two C projects are the same, each has its own set of libraries, macros, types, etc.
I think the main problem is that Forth systems don't have a standard way of creating interfaces like C and other languages have. So the diversity of environments becomes a big issue because it's difficult to combine libraries from different sources.
Have you tried collaborating with Forth? There's a lot of documented history of people doing so in industry when it was actually used, and more recently I've usually found Forth codebases approachable and easy to follow.
Personally I think this is the pay-off for writing the code in the first place: Forth is very difficult to write in a clear way, so if you actually manage to do it, you've probably made it very clear to follow, because otherwise it's hard to finish your project and make it work at all.
I don't think "power" is really that helpful a metric in determining how useful a programming language is. If you think of programming from the standpoint of trying to specify the program you want out of all of the possibly programs you could write, one of the most helpful things a programming language can do is eliminate programs that you don't want by making them impossible to write. From that standpoint, constraints are a feature, not a drawback.
And at the extremes, too much power makes a tool less useful. I don’t drive an F1 car to work, I don’t plant tulips with an excavator, I don’t use a sledgehammer when hanging a picture. Those tools are all too powerful for the job.
once you specify "the job", the best tool is "the solution" to that job only. anything else is excess complexity
however if "the job" is unspecified, power is inverse to the length of "the solution"
so is constraint of power bad?
--
a fascinating question
just like music can be created by both additive and subtractive synthesis; every line of code creates both a feature and a constraint on the final program
in which case power can be thought of as the ability to constrain...
it implies expressivity is the ability to constrain
it implies drawing on a page, or more broadly, every choice we make, is in equal parts a creative and destructive act
so maybe life, or human flourishing is choosing the restrictions that increase freedom of choice? it's so meta it's almost oxymoronic; concretely: we imprison people to maximize freedom; or, we punish children with the aim of setting them free from punishment
this is the same as the walk from law into grace found in Christian ethics
maybe the ultimate programming language then, provides the maximal step down that path, and this is also the most useful definition of "power"
i.e. place on people those restrictions that increase their ability to choose
I worked at a place that had a big Forth codebase that was doing something mission critical. It was really neat and cool once you finally got it, and probably hundreds or maybe thousands of people had touched it, worked on it and learned it, but the ramp was pretty brutal for your average developer and thus someone decided it would be better to build the same thing over with a shitty almost-C-but-not-quite interpreted language. It certainly made it easier for more people to understand and build, even if the solution was less elegant.
Honestly, when I write forth now, which is usually for embedded targets, I've got a customized version of zforth that I've grafted some stuff like locals into. If it's a small program, it's better to not be afraid of things like globals, and just spend at least twice as much time factoring, writing comments and thinking than writing. It's important to read other people's Forth code and try to understand, as there's a zen and style that looks very different than how you'd write something like Java. It's freeing and enlightening once it clicks, but you have to fight a ton of the way you think about "normal" code.
As far as the codebase, I probably shouldn't say too much (maybe it's been long enough now, but I don't know), but all I'll say is that it was an important part of things at a certain disk drive manufacturer.
Powerful languages invite people to do needlessly complex things. Needlessly complex things are harder to understand. Harder to understand is worse.
Code that matters is usually read and extended many more times than it is written, over time by different people, so being straightforward beats most other things in practice
It kinda happened with markup languages. HTML, SVG, and some other domain specific markup languages are all XML, which is a subset of SGML.
The thing there is those DSLs have their own specs.
Coding is a social activity. Reading code is hard. When there are multiple ways of doing things, it's extra hard. People want to have relatively standardized ways of doing things so they can share code and reason about it easier.
If there's a lisp or racket or a forth that's defined as a DSL, it might take off if it's standardized and it's the best solution for the domain.
HTML uses a ton of SGML features not part of XML (sometimes erroneously thought to be non-standard ‘tag soup’, not to mention self-closing tags). You need either a specialized parser or an SGML processor + DTD.
Sadly our industry cares mostly about bricklayers and usually tries to go for technologies that make it easier to treat employees like replaceable servants at low wages.
Large SV-style salaries aren't something you will find all over the globe; in many countries the pay is similar across all office workers, regardless of whether they are working with Git or Office.
That argument implies that you would actually see these languages in communities with large SV style salaries which isn’t the case.
It turns out that “brick layer” languages are also easier to understand, not just for the next person taking over but also for yourself after a few months. That’s valuable even to yourself unless you value your time at 0.
Why? The less the VCs have to spend on employees the better.
See the famous quote about Go's target audience, or 2000s Java being a blue collar job language.
Not only do languages like Lisp, Forth, and Smalltalk require people to actually get them (a bit like the meme about burritos in Haskell), they also suffered from bad decisions by the companies pushing them.
Lisp suffered with Xerox PARC, Symbolics and TI losing against UNIX workstations, followed by the first AI Winter, which also took Japan's 5th Generation project with Prolog along with it.
Smalltalk was doing alright outside Xerox PARC, with big-name backers like IBM, where it had a major role on OS/2 (similar to .NET on Windows), until Java came out and IBM decided to pivot all their Smalltalk efforts into Java; Eclipse has its roots in VisualAge for Smalltalk.
Your entire post makes the claim that it’s because the vast majority of programmers get paid the same as other roles and that’s why there’s the language selection pressure there is.
High salary jobs would be the exception yet they also make pragmatic choices about languages. It’s a two sided market problem - employers want popular languages to be used so they have a talent pool to hire from and don’t end up having a hard time finding talent (which then also implies something about the salary of course but it’s a secondary effect). Employees look to learn languages that are popular and are easy to find employment in.
Not sure if you’ve spent any time with them but VCs and investors more broadly generally could give two fucks about the language a business is built in. There are exceptions but generally they just want to see the business opportunity and that you’re the team to go do it.
There’s a reason it’s difficult to find employment with Haskell or Lisp or other niche languages, and it’s because they’re niche languages that “you have to get” - not easy to learn and generally not as easy to work with as “popular” languages that see significantly more man hours dedicated to building out tooling and libraries. There are also secondary things like runtime performance, which is quite poor for Haskell or Lisp if you’re a beginner, and even people familiar with the language can struggle to write equivalent programs that don’t use significantly more memory or CPU. And finally the languages can just be inherently more difficult and alien (Haskell), which attracts a niche and guarantees it remains a niche language that attracts a particular kind of person.
I'm not entirely sure this is different from other languages, but I believe a common complaint about Lisp is that every solution ends up defining a DSL for that solution, making it hard to understand for anyone else. So it's a super power if you're a small team, and especially if you're a team of 1. But if you're a large team it doesn't scale.
I think it's a simple abstraction situation, and the move toward programming environments that include everything.
Geordi La Forge doesn't code much on the Enterprise. He simply asks the computer to build him a model of the anomaly so he can test out ideas. In a way, modern languages like Python (even before LLMs) let you get a lot closer to that reality. Sure you had to know some language basics, but this was pretty minimal and you'd use those basic building blocks to glue together libraries to make an application. Python has a good library for practically anything I do and since this is standard, it's expected that a task doesn't take too long. I can't tell my boss I'll need 3 years to code my own solution that uses my own libraries for numpy and scipy. You're expected to glue libraries together. This is why MIT moved SICP from Scheme to Python. It's a different world.
With Forth, every program is a work of art that encapsulates an entire solution to a problem from scratch. Its creator Chuck Moore takes this to such a level that he also fabs his own chips to work with his Forth software optimally. These languages had libraries, but they weren't easy to share and didn't have any kind of repository before Perl's CPAN. Perl really took off for a while, but Python won out by having a simpler language with builtin OO (Perl's approach was a really hacky builtin OO or you download a library...).
To be honest though, I spent a decade trying many languages (dozens, including Common Lisp, Prolog, APL, C, Ada, Smalltalk, Perl, C#, C++, Tcl, Lua, Rust...etc) looking for the best, and although I never became an expert in those languages, I kept coming to the conclusion that for my particular set of needs, Python was the best I could find. I wasted a lot of time reading Common Lisp books and just found it much easier to get the same thing done in Python. Your mileage will vary if you're doing something like building a game engine. A lot of people are just doing process automation and stuff like that, and languages like Python are just better than Common Lisp due to the environment and tooling benefits. Also, although Python isn't as conceptually beautiful as Lisp, I found it much easier to learn. The syntax just really clicked for me and some people do prefer it.
It is too risky for companies to rely on a language that has a small pool of programmers. The bigger the company, the bigger the language must be. AI multiplies this availability, not productivity.
Flipside: it looks like the most productive programmers are those who work alone and not in a large pool. The core point of the article is that team development is slower and less efficient.
Which means management must make a choice: getting good code relatively fast from a small pool of high-value individuals that it must therefore cherish and treat well...
Or get poor-quality code, slowly, but from a large and redundant group of less skilled developers, who are cheaper and easier to replace.
It is a truth universally acknowledged that from the three characteristics of "good, fast, and cheap", you can pick which two you want.
In this case, maybe the choice is as simple as "good and fast" or "cheap."
If the structure of the business or the market requires management to pick "cheap" (with concomitant but unspoken "bad and slow") then the structure, I submit, is bad.
I've concluded that Forth isn't as powerful as Lisp because it can't do lists or heaps. STOIC addresses these and other limitations. Unfortunately it's got the least search friendly language name ever.
It just tells you the top N words by frequency in its input (default N=100) with words of the same frequency ordered alphabetically and all words converted to lowercase. Knuth's version was about 7 pages of Pascal, maybe 3 pages without comments. It took akkartik 50 lines of idiomatic, simple Lua. I tried doing it in Perl; it was 6 lines, or 13 without relying on any of the questionable Perl shorthands. Idiomatic and readable Perl would be somewhere in between.
    #!/usr/bin/perl -w
    use strict;
    my $n = @ARGV > 1 ? pop @ARGV : 100;
    my %freq;
    while (my $line = <>) {
        for my $w ($line =~ /(\w+)/g) {
            $freq{(lc $w)}++;
        }
    }
    for my $w (sort { $freq{$b} <=> $freq{$a} || $a cmp $b } keys %freq) {
        print "$w\t$freq{$w}\n";
        last unless --$n;
    }
I think Python, Ruby, or JS would be about the same.
Then I tried writing a Common Lisp version. Opening a file, iterating over lines, hashing words and getting 0 as default, and sorting are all reasonably easy in CL, but splitting a line into words is a whole project on its own. And getting a command-line argument requires implementation-specific facilities that aren't standardized by CL! At least string-downcase exists. It was a lark, so I didn't finish.
(In Forth you'd almost have to write something equivalent to Knuth's Pascal, because it doesn't come with even hash tables and case conversion.)
My experience with Smalltalk is more limited but similar. You can do anything you want in it, it's super flexible, the tooling is great, but almost everything requires you to just write quite a bit more code than you would in Perl, Python, Ruby, JS, etc. And that means you have more bugs, so it takes you longer. And it doesn't really want to talk to the rest of the world—you can forget about calling a Squeak method from the Unix command line.
Smalltalk and CL have native code compilers available, which ought to be a performance advantage over things like Perl. Often enough, though, it's not. Part of the problem is that their compilers don't produce highly performant code, but they certainly ought to beat a dumb bytecode interpreter, right? Well, maybe not if the program's hot loop is inside a regular expression match or Numpy array operation.
And a decent native code compiler (GCC, HotSpot, LuaJIT, the Golang compilers, even ocamlopt) will beat any CL or Smalltalk compiler I have tried by a large margin. This is a shame because a lot of the extra hassle in Smalltalk and CL seems to be aimed at efficiency.
(Scheme might actually deliver the hoped-for efficiency in the form of Chez, but not Chicken. But Chicken can build executables and easily call C. Still, you'd need more code to solve this problem in Scheme than in Lua, much less Ruby.)
—·—
One of the key design principles of the WWW was the "principle of least power", which says that you should do each job with the least expressive language that you can. So the URL is a very stupid language, just some literal character strings glued together with delimiters. HTML is slightly less stupid, but you still can't program in it; you can only mark up documents. HTTP messages are similarly unexpressive. As much as possible of the Web is built out of these very limited languages, with only small parts being written in programming languages, where these limited DSLs can't do the job.
Lisp, Smalltalk, and Forth people tend to think this is a bad thing, because it makes some things—important things—unnecessarily hard to write. Alan Kay has frequently deplored the WWW being built this way. He would have made it out of mobile code, not dead text files with markup.
But the limited expressivity of these formats makes them easier to read and to edit.
I have two speech synthesis programs, eSpeak and Festival. Festival is written in Scheme, a wonderful, liberating, highly expressive language. eSpeak is in C++, which is a terrible language, so as much as possible of its functionality is in dumb data files that list pronunciations for particular letter sequences or entire words and whatnot. Festival does all of this configuration in Scheme files, and consequently I have no idea where to start. Fixing problems in eSpeak is easy, as long as they aren't in the C++ core; fixing problems in Festival is, so far, beyond my abilities.
(I'm not an expert in Scheme, but I don't think that's the problem—I mean, my Scheme is good enough that I wrote a compiler in it that implements enough of Scheme to compile itself.)
—·—
SQL is, or until recently was, non-Turing-complete, but expressive enough that 6 lines of SQL can often replace a page or three of straightforward procedural code—much like Perl in the example above, but more readable rather than less.
Similarly, HTML (or JSX) is often many times smaller than the code to produce the same layout with, say, GTK. And when it goes wrong, you can inspect the CSS rules applying to your DOM elements in a way that relies on them being sort of dumb, passive data. It makes them much more tractable in practice than Turing-complete layout systems like LaTeX and Qt3.
—·—
Perl and Forth both have some readability problems, but I think their main difficulty is that they are too error-prone. Forth, aside from being as typeless as conventional assembly, is one of the few languages where you can accidentally pass a parameter to the wrong call.
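To make that failure mode concrete, here is a tiny made-up sketch in standard Forth (the word names are hypothetical): nothing checks arity, so a missing DUP just makes the next operator consume whatever happens to be underneath.

    \ SQUARE was meant to be ( n -- n*n ), but the author forgot DUP:
    : square  * ;            \ silently becomes ( n1 n2 -- n1*n2 )
    3 4 square .             \ you wanted 16; it prints 12, quietly eating the 3
    \ The intended definition:  : square  dup * ;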
This sort of rhymes with what I was saying in 02001 in https://paulgraham.com/redund.html, that often we intentionally include redundancy in our expressions of programs to make them less error-prone, or to make the errors easily detectable.
> splitting a line into words is a whole project on its own
Is it[1]? My version below accumulates alphabetical characters until it encounters a non-alphabetical one, then increments the count for the accumulated word and resets the accumulator.
It does look a lot like what I was thinking would be necessary. About 9 of the 19 lines are concerned with splitting the input into words. Also, I think you have omitted the secondary key sort (alphabetical ascending), although that's only about one more line of code, something like
    #'(lambda (a b)
        (or (< (car a) (car b))
            (and (= (car a) (car b))
                 (string> (cadr a) (cadr b)))))
Because the lines of code are longer, it's about 3× as much code as the verbose Perl version.
In SBCL on my phone it's consistently slower than Perl on my test file (the King James Bible), but only slightly: 2.11 seconds to Perl's 2.05–2.07. It's pretty surprising that they are so close.
Were I trying to optimise this, I would test to see if a hash table of alphabetical characters is better, or just checking (or (and (char>= c #\A) (char<= c #\Z)) (and (char>= c #\a) (char<= c #\z))). The accumulator would probably be better as an adjustable array with a fill pointer allocated once, filled with VECTOR-PUSH-EXTEND and reset each time. It might be better to use DO, initializing C and declaring its type.
Also worth giving it a shot with (optimize (speed 3) (safety 0)) just to see if it makes a difference.
Yes, definitely more verbose. Perl is good at this sort of task!
The article in CACM that presents Knuth's solution [1] also includes some criticism of Knuth's approach, and provides an alternate that uses a shell pipeline:
With great respect to Doug McIlroy (in the CACM article), the shell pipeline has a serious problem that Knuth's Pascal program doesn't have. (I'm assuming Knuth's program is written in standard Pascal.) You could have compiled and run Knuth's program on an IBM PC XT running MS-DOS; indeed on any computer having a standard Pascal compiler. Not so the shell pipeline, where you must be running under an operating system with pipes and 4 additional programs: tr, sort, uniq, and sed.
McIlroy also discusses how a program "built for the ages" should have "a large factor of safety". McIlroy was worried about how Knuth's program would scale up to larger bodies of text. Also, Bentley's/McIlroy's critique was published in 1986, which I think was well before there was a major look into Unix tools and their susceptibility to buffer overruns, etc. In 1986, could people have determined the limits of tr, sort, uniq, sed, and pipes--both individually and collectively--when handling large bodies of text? With a lot of effort, yes, but if there was a problem, Knuth at least only had one program to look at. With the shell pipeline, one would have to examine the 4 programs plus the shell's implementation of pipes.
(I'm not defending Pascal, and Knuth, Bentley, and McIlroy are always worth reading on any topic -- thanks for posting the link!)
Bringing this back to Forth, Bernd Paysan, who needs no introduction to the people in the Forth community, wrote "A Web-Server in Forth", https://bernd-paysan.de/httpd-en.html . It only took him a few hours, but in fairness to us mortals, it's an HTTP request processor that reads a single HTTP request from stdin, processes it, and writes its output to stdout. In other words, it's not really a full web server because it depends on an operating system with an inetd daemon for all the networking. As with McIlroy's shell pipeline, there is a lot of heavy lifting done by operating system tools. (Paysan's article is highly recommended for people learning Forth, like me when I read it back in the 2000s.)
> You can do anything you want in it, it's super flexible, the tooling is great, but almost everything requires you to just write quite a bit more code than you would in Perl, Python, Ruby, JS, etc.
Given that Smalltalk precedes JS by many years: if it is true, then it was not always true.
Given that Smalltalk was early to the GUI WIMP party: if it is true, then it was not always true for GUI WIMP use.
I don't think there's a unifying reason why programming languages languish in obscurity; it's certainly not because they're "too powerful." What does "powerful" even mean? I used to care more about comparing programming languages, but I mostly don't these days. Actually used/useful languages mostly just got lucky: C was how you wrote code for Unix; Python was Perl but less funny-looking; Ruby was Rails; JavaScript is your only choice in a web browser; Lisp had its heyday in the age of symbolic AI.
Forth and (R4RS) Scheme are simple to implement, so they're fun toys. Some other languages like Haskell have interesting ideas but don't excel at solving any particular problems. Both toy and general-purpose programming languages are plentiful.
As with big fortunes, no one wants to hear the truth about a lot of them existing due to simple luck. There is a significant amount of post-hoc rationalization to explain the success by some almost magic virtues. Or even to explain the success by the lack of such virtues - "worse is better" and so on.
One thing I note is that all of the languages you name are very far from the machine. Forth is also not close to the modern machine. Note that it only has two integer types, and the larger one can be aligned either way unless you make sure it is not.
> One thing I note is that all of the languages you name are very far from the machine
Common Lisp is one step away from assembly - you can disassemble any function, and it is, in fact, a valid strategy if one wants to check the compiler's optimizations.
I googled a bit on how common lisp is compiled. Apparently it is possible to add some sort of type hints and ensure that parameters/variables have a certain type. If one uses that for most code, it would potentially be enough to qualify as being close to the machine.
To me it means that one attempts to use the machine well. I.e., avoid introducing overheads that have nothing to do with the problem one is trying to solve. As an example of something that is very far from the machine, imagine wanting to add some integers together. One can do this in untyped lambda calculus by employing Church numerals. If one looks at the memory representation, your numerals are now a linked list of a size equal, or proportional, to the number. However, the machine actually has machine language instructions to add numbers in a much more efficient way.
For this discussion maybe the most relevant example is that using dynamic typing for algorithms that don't need it is distant from the machine, because every value now has a runtime type label that is actually not needed: if your program could actually be statically typed, one would know in advance what the type labels are, so they are redundant.
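(To make the Church-numeral point above concrete, this is the standard encoding, sketched in plain lambda notation: a numeral is n-fold application of f, so its representation grows with its value, and addition is built out of composition rather than a single machine instruction.)

    3   ≡  λf. λx. f (f (f x))
    ADD ≡  λm. λn. λf. λx. m f (n f x)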
They scale extremely effectively to large problems solved by a team size of one, maybe two.
The story goes that changing the language to fit how you're thinking about the problem is obstructive to the rest of the people thinking about the same problem.
I'm pretty sure this story is nonsense. Popular though.
frankly it's a miracle any of them scaled at all, such popularity mostly comes down to an arbitrary choice made decades ago by a lucky vendor instead of some grand overarching design
I spent a few months playing with forth after seeing a talk on it at Boston Code Camp. I struggled to find a practical application (I do web dev), but it had a lasting effect on my style of programming. Something about the way you factor a forth program changed me. Now I mainly do functional-flavored typescript, and while forth is NOT an FP language, there is a lot that carries over.
In Forth, the language rewards you for keeping your words focused and applying the single responsibility principle. It’s very easy to write a lot of small words that do one thing and then compose your program out of them. It’s painful to not do this.
There is no state outside the stack. If you call a word it pulls values off the stack and deposits values back on the stack. Having no other mechanism for transferring data requires you to basically create data pipelines that start to look like spoken language.
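A minimal sketch of that style in standard Forth (the words here are made up for illustration); each word only consumes and produces stack items, so the top-level definition reads like a pipeline:

    : f>c     ( fahrenheit -- celsius )  32 - 5 9 */ ;
    : .temp   ( celsius -- )  . ." degrees C" cr ;
    : report  ( fahrenheit -- )  f>c .temp ;
    72 report    \ prints: 22 degrees C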
Forth has been a peripheral fascination of mine for about a decade, just because it seems to do well at nearly every level of the software stack. Like a part of me wants to build a kernel with it, or make a web server, or anything in between.
I've never actually done any Forth, though, just because it's a bit arcane compared to the C-inspired stuff that took over.
FORTH has some elegance and it's so simple that it is tempting to implement it.
However, no language should permit defining the value of 4 by 12, as there is no situation in which this can bring more good than harm in the long term.
Another issue that affects FORTH but also Perl and other languages is that they deal with a lot of things implicitly (e.g. the stack, or arguments to functions). Most people agree that explicit is easier to read than implicit.
> However, no language should permit defining the value of 4 by 12, as there is no situation in which this can bring more good than harm in the long term.
A Skil saw should not permit you to stick your fingers in the spinning blade, yet most people know that this is a stupid and dangerous thing to do.
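For anyone who hasn't seen it, the "defining the value of 4 by 12" complaint is literal: a standard Forth text interpreter looks a token up in the dictionary before trying to parse it as a number, so a definition can shadow a numeral. A sketch:

    : 4  12 ;     \ from here on, the word 4 shadows the number 4
    4 .           \ prints 12
    4 4 + .       \ prints 24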
I wish "Simple Made Easy," by Rich Hickey, could be applied here. Forth is simple but not easy. If there is something as simple as Forth but also accessible to mere mortals (aka easy) then I'd like to know what it is (I don't consider Clojure itself as a language to be simple in this sense).
"Working without names (also known as implicit or tacit or point-free programming) is sometimes a more natural and less irritating way to compute. Getting rid of names can also lead to much more concise code. And less code is good code."
Does Forth really reduce the burden of naming things? You don't name results, but don't you have to pay for it with the burden of naming words? (My impression is that there's more words in a Forth program than functions in an equivalent program in a language that has named variables).
> Does Forth really reduce the burden of naming things?
I would say that you have fewer names, but they are more important. Plus, it is more difficult to name things because you prefer short names; in all languages, when you have a good naming "discipline" and follow a naming convention, you end up with an informal "grammar" inside your names. In Forth this is even more important.
> My impression is that there's more words in a Forth program than functions in an equivalent program in a language that has named variables
Yes, some people have called that "ravioli code" or "confetti code", IIRC. But most of them are support words. In Forth, you also eventually end up with "module APIs". This also exists in C or Java or ..., except the ratio useful:support is lower.
The quote makes more sense IMO for array languages like J that support a tacit style. J's "trains" just make things flow without a lot of variables. Aaron Hsu's Co Dfns compiler (spoken about on here and YouTube) also uses this style with Dyalog APL.
Forth is concatenative, so you can build the words on top of each other without worrying about a ton of variables. So I think it's partially true for Forth.
RPN interpreters require very little core memory. So they were popular with computers where core memory was under ten kilobytes.
But it's horrible for software engineering with multiple programmers and large codebases. It lacks the structures, interfaces, modules, and data abstraction that you expect in a modern language. We called it the "Chinese food" of coding - ten minutes later you had no idea what you just coded.
Coco Conn and Paul Rother wrote this up about what they did with FORTH at HOMER & Assoc, who made some really classic music videos including Atomic Dog, and hired Charles Moore himself! Here's what Coco Conn posted about it, and some discussion and links about it that I'm including with her permission:
Mitch Bradley came up with a nice way to refactor the Forth compiler/interpreter and control structures, so that you could use them immediately at top level! Traditional FORTHs only let you use IF, DO, WHILE, etc in : definitions, but they work fine at top level in Mitch's Forths (including CForth and Open Firmware):
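(The example that presumably followed isn't preserved here; as a rough illustration of the difference, not taken from Mitch's code:)

    \ A traditional Forth rejects this outside a colon definition,
    \ because DO and LOOP are compile-only words, so you must wrap it:
    : count-up  10 0 do i . loop ;
    count-up                 \ prints 0 1 2 3 4 5 6 7 8 9
    \ In Mitch's Forths you can type  10 0 do i . loop  directly at the prompt.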
Back in 2004 or so - ancient days now - I remember an elderly programmer on #gobolinux (freenode IRC back in the days) who kept on praising Forth. I never understood why, but he liked Forth a lot.
Now - that in itself doesn't mean a whole lot, as it is just anecdotal, but people who are very passionate about programming languages are quite rare. I've not seen something like that happen with any other language (excluding also another guy on #gobolinux who liked Haskell). I did not see anyone praise, say, PHP, perl, JavaScript etc....
Some languages people don't like to talk about much. Forth though was different in that regard. I never got into it; I feel it has outlived the modern era like many other languages, but that guy who kept on talking about it I still remember. His website also was built in Forth and it was oddly enough kind of an "interactive" website (perhaps he also used JavaScript, I forgot, but I seem to remember he said most or all of it was implemented in Forth - turtles all the way down).
The Forth super power is that you have full control over how a symbol is evaluated, both at compile and runtime. I don't know of anything else that offers that. Lisp doesn't.
That gives the developer pretty much free rein to do whatever they want, which can be both good and bad.
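A small sketch of the compile-time half of that, using the standard IMMEDIATE mechanism (the word names are made up):

    \ An IMMEDIATE word executes while other definitions are being compiled,
    \ so it gets to decide what, if anything, ends up in the compiled word.
    : note  ." <seen at compile time> " ; immediate
    : demo  note ." hello" cr ;    \ prints <seen at compile time> while DEMO compiles
    demo                           \ prints only: hello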
I've always loved the elegance of Frank Sergeant's 3 Instruction Forth paper [1], it's very cool once you wrap your head around it.
Also, studying the F83 Metacompiler is valuable as well. F83 is a very capable 8/16-bit Forth system.
I honestly marvel at how much work must have gone into F83, given the tools of the time. I wish I knew more about its development journey. How it got bootstrapped.
There's a certain mesmerizing effect that creeps in once you start digging into programming language fundamentals.
Any kind of notation, really, can do that to a person. It's kind of hypnotic.
I avoid it like the plague (getting too much into it). Not because I dislike it, but because I like it so much.
I believe the ideal programming language must be full of problems, and then obvious ways to get around those problems. It's better than a near-perfect language with one or two problems that are very hard to get around.
The "Stop Writing Dead Programs" video mentioned is quite nice. It's surprising how the web is a platform for many of the languages the presenter offer as inspiration.
I first encountered Forth on a TI-99/4A, complete with that magnificent expansion box that looked like industrial HVAC equipment. Hearing me complain about TI Extended BASIC's glacial pace, my parents saw in one of my magazines that Forth was faster and bought it hoping I would find it helpful.
It was mind-bending but fascinating. I managed a few text adventures, some vaguely Pac-Man-esque clones, and a lingering sense that I was speaking a language from another dimension.
I've since forgiven my parents. Forth resurfaces now and then, usually when I reread Leo Brodie's thought-provoking Forth books, and I feel like I'm decoding the sacred texts of a minimalist cult. I came away thinking better, even if I've never completely caught up with the language.
it mentions sometimes not naming things as great, but... what does naming intermediate values in forth look like? Is there even a naming scope that would allow for me to give values names in case I don't want to get entirely lost in the sauce?
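(For what it's worth, many Forths do give you an escape hatch here: the Forth-2012 locals syntax lets you name stack values inside a definition. A sketch, assuming a Forth that supports it:)

    : dist2  ( x y -- x*x+y*y )
        {: x y :}              \ the rightmost name takes the top of the stack
        x x *  y y *  + ;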
In early 80s when I was a wee nerd in college a gentleman named Ray who owned Laboratory Microsystems was nice enough to give a poor college kid a copy of his excellent Forth implementation for the then-nascent IBM PC.
I breadboarded a little EPROM programmer (driven by a parallel printer port, with the programming code done in Forth, because I couldn't afford a real one). Then I breadboarded a little Z80 system with a bunch of general purpose I/O and a Forth "OS" in EPROM.
Used that little setup as the basis for a number of projects, including a home alarm system with phone-based control plus voice synthesis phone-based alert calling (which a couple silicon valley VCs were gracious enough to take a meeting about).
Forth gave me wings. Despite its reputation as a "write-only language". Good times.
"There is absolutely no reason we have to use increasingly inefficient and poorly-constructed software with steeper and steeper hardware requirements in the decades to come."
The term "we" as used here hopefully means individual, free-thinking computer users, not so-called "tech" companies
If Silicon Valley companies want to use increasingly-inefficient, poorly-constructed, resource-insatiable software, then nothing stops them from doing so
"Forth is not easy. It may not always even be pleasant. But it is certainly simple."
Complex isn't easy, either
That is why (a) "insecurity", unreliability, expense, etc. and (b) complexity generally go hand-in-hand
If you like Forth, but find it challenging to build real stuff with, Factor (https://factorcode.org/) is most or all of the good stuff about Forth designed in a way that's much easier to do things with. It was designed by Slava Pestov (who I think had a big hand in Swift), and honestly it's a lot of fun to build webapps and other programs with, and much less brutal to read than Forth can be.
I have very fond memories of programming in PostScript within NeWS/HyperNeWS - it did quite a few things that I've never seen in any other environment.
Edit: To be fair relying on PostScript probably did limit the appeal, but I actually really liked it.
Thank you fellow HyperLooker! For me, PostScript WAS the appeal!
For the rest of the civilized world, Arthur van Hoff wrote "PdB", an object oriented C => PostScript compiler.
https://news.ycombinator.com/item?id=10088193
>Arthur van Hoff wrote PdB, and we used it for to develop HyperLook (nee HyperNeWS nee GoodNeWS). You could actually subclass PostScript classes in C, and vice-verse!
https://news.ycombinator.com/item?id=29964271
>That's interesting! I love Forth, but I love PostScript even more, because it's so much like Lisp. What is it about PostScript that you dislike, that doesn't bother you about Forth?
>Arthur van Hoff wrote "PdB" for people who prefer object oriented C syntax to PostScript. I wrote some PdB code for HyperLook, although I preferred writing directly in PostScript.
Leigh Klotz used PdB at Xerox PARC, and wrote this about it here:
https://regex.info/blog/2006-09-15/247#comment-18269
>OK, I think I’ve written more PostScript by hand than Jamie, so I assume he thinks I’m not reading this. Back in the old days, I designed a system that used incredible amounts of PostScript. One thing that made it easier for us was a C-like syntax to PS compiler, done by a fellow at the Turning Institute. We licensed it and used it heavily, and I extended it a bit to be able to handle uneven stack-armed IF, and added varieties of inheritance. The project was called PdB and eventually it folded, and the author left and went to First Person Software, where he wrote a very similar language syntax for something called Oak, and it compiled to bytecodes instead of PostScript. Oak got renamed Java.
Syntactic Extensions to PdB to Support TNT Classing Mechanisms:
https://www.donhopkins.com/home/archive/NeWS/PdB.txt
Most of the built-in HyperLook components were written in C with PdB.
I wrote HyperLook wrapper components around TNT 2.0 (The NeWS Toolkit) objects like pie menus, Open Look menus, sliders, scrolling lists, buttons, etc. I used them in the HyperLook edition of SimCity, which you can see in this screen snapshot:
https://www.donhopkins.com/home/catalog/hyperlook/HyperLook-...
Arthur later went on to join Sun (James Gosling's "First Person" group), wrote the Java compiler in Java, and AWT, then left Sun to form Marimba, where they developed "Castanet" (push code and content distribution), and Bongo (HyperCard/HyperLook for Java, with a WYSIWYG UI editor and script editor, that dynamically ran the Java compiler to compile and hot patch scripts attached to objects on the fly. Which was groundbreaking at the time, though IDEs do it all the time now).
https://news.ycombinator.com/item?id=25434613
>Bongo is to Java+HyperCard as HyperLook is to PostScript+HyperCard.
Danny Goodman himself (the HyperCard book author) wrote a book about Bongo! Arthur's Foreword explains it well.
https://www.amazon.com/Official-Marimba-Guide-Bongo-Goodman/...
https://archive.org/details/officialmarmba00good
>Foreword
>Marimba was formed in early 1996 by four members of the team that created Java. Kim Polese, Jonathan Payne, Sami Shaio, and I left Sun Microsystems and founded Marimba with the goal to build commercial consumer applications written entirely in Java.
>While at Sun we concentrated on creating a great multi-platform, portable, efficient, object-oriented, multi-threaded, and buzzword-compliant language. However, we paid too little attention to developing tools. In early 1996 Java was largely still a language for skilled programmers who are happy with emacs, a Java compiler, and lots of coffee. Luckily these so-called "Rambo" programmers loved Java and made it very successful.
>Creating large applications in Java turned out to be much harder than we had anticipated, so we decided that we needed better tools before we could build better applications. That is why we created Bongo. Bongo is a tool that allows you to quickly create a user interface using a variety of widgets, images, audio, and animation. After you have created a user interface you can script it in Java, or you can easily hook it up to a Java program.
>Bongo is a high-level tool that provides a clean separation of semantics and design elements.
>It allows multi-disciplinary teams to work simultaneously on a large application without getting in each other's hair. You will find that it is a very powerful tool that is great for creating good-looking, functional, but still very flexible user interfaces. In addition to the standard widgets, Bongo enables you to extend the widget set by creating new widget classes in Java.
>This means that you can develop your own set of widgets which are easily integrated into user interfaces developed with Bongo.
>One of the great features of Bongo is its capability to incorporate applets into user interfaces.
>This enables you to use applet creation tools from third-party software vendors to create components of your user interface and combine these components into a single consistent application using Bongo. This is the way of the future: In future releases, Bongo will naturally support Sun's JavaBeans which will further simplify the process of integrating components created by different tools. This way, you can choose the tools that are appropriate for the job, rather than being stuck with the tools provided by the environment.
>A lot of the ideas behind Bongo are based on a tool called Hyper NeWS which I developed for the NeWS windows system during the late '80s (NeWS was another brain-child of Sun's James Gosling). HyperNeWS used the stack, background, and card model which was popularized by Apple's HyperCard. Bongo goes a lot further than HyperNeWS by allowing arbitrary container hierarchies and scripting.
>I am really excited that Danny has written this excellent book on Bongo. It clearly explains the concepts behind Bongo, and it takes you through many examples step by step. This book is an essential tool for all serious Bongo users.
>Have fun, Arthur van Hoff, Chief Technology Officer, Marimba, Inc.
I had to have a peek if it's all just web. Apparently, no.
https://concatenative.org/wiki/view/Factor/UI
> The Factor UI is a GUI toolkit together with a set of developer tools, written entirely in Factor, implemented on top of a combination of OpenGL and native platform APIs: X11, Win32 and Cocoa.
> UI gadgets are rendered using the cross-platform OpenGL API, while native platform APIs are used to create windows and receive events. The platform bindings can be also used independently; X11 binding has also been used in a Factor window manager, Factory, which is no longer maintained. The Cocoa binding is used directly by the webkit-demo vocabulary in Factor.
Fascinating. Probably dead and no mention of Wayland, but fascinating.
Factor is not dead, but continues to make development progress. If you're curious you can find more information on the main page:
https://factorcode.org
The latest release, 0.100, was in September 2024, and we are getting close to a new release which we hope to do around the end of the year.
https://github.com/factor/factor
The cross-platform UI that Factor has works on macOS, Windows, and Linux. On Linux, it unfortunately still uses a GTK2-GLext project for the OpenGL widget that we render into, but modern GTK3/4 has a Gtk.GlArea that we need to switch to using which will improve the compatibility on Wayland. However, it works fine with even the latest Ubuntu 25.10 release.
And of course, you could use other libraries easily, such as Raylib:
https://re.factorcode.org/2025/05/raylib.html
Factor is super cool! And the amount of packages ("vocabularies") it comes bundled with is just astonishing.
note to those interested: no apple silicon support.
No, but works fine in Rosetta emulation. And can use native libraries installed via for example the Intel Homebrew.
We do hope to get native aarch64 support in the near future. Let's see.
In my first proper job as a software engineer I wrote a bunch of Forth for "fruit machines". I don't know what the US equivalent would be but they are low stakes gambling machines which are quite common in UK pubs. The core processor was a 6809 and Forth was chosen because the interpreter was super small and easy to implement. I really appreciated the quick interactive way you could update and tweak code as you tested it. I did get slightly weary of having to keep the state of the stack in your head as you DUP and SWAP stuff around but that was probably due to my inexperience and not decomposing things enough.
They continued to use Forth as the basis for their 68000 based video gaming machines although when it came to the hand classifier for video poker we ended up using C - mostly because we wanted to run a lot of simulations on one of these new fangled "Pentium" processors to make sure we got the prize distribution right to meet the target repayment rate of ~98%.
We just refer to them as “slot machines” in the US
Stepping away from Forth in particular, one of the benefits of a stack-based / concatenative language is that it's easy to implement on constrained hardware. uxn [1] is a great example of that.
And shameless self-promotion, if you're interested in how these kinds of languages compare with more traditional named-based languages, with more theoretical constructs like the lambda calculus and combinatory logic, and with gadgets like a PyBadge — well you're in luck! I gave a talk about exactly that at the final Strange Loop [2].
[1] https://100r.co/site/uxn.html
[2] https://dcreager.net/talks/concatenative-languages/
TL;DR: I'm trying to Forth a Lisp.
This is long winded, but maybe you have some thoughts here.
I've been building a DOM-builder API recently in Rust. The existing options (there are many) tend to use textual templating, which can't reason well about anything, or basic macros, which never support indentation. I wanted something that was just code, where I'd be in full control on the indentation (or lack thereof) programmatically. The closest equivalent is Python's Dominate [1], though its indentation mechanisms are buggy.
So I built a system using the traditional tree where Nodes own other Nodes at random addresses, and I built a renderer that renders those nodes and concatenates their strings recursively. It ended up working, but it was hacky and very slow for large inputs. In release mode, it was taking almost a minute to render 70 files, and I wanted about two orders of magnitude less.
I ran it through profilers and optimized it a bit, but wanted to see if I could simplify the architecture and reduce the amount of work the computer had to do. I read about flattening ASTs [2] and how, through optimizing that format, you can end up with a sort of bytecode [3]. I also looked into Data-Oriented Design, watching Mike Acton's famous talk [4], Andrew Kelley's talk about DoD in Zig [5], and reading through the DoD book by Richard Fabian [6].
I ended up with something that works quite well for traversing and rendering, which is a stack that can be traversed and rendered in O(n), but I lost my nice Dominate-like API. As in, I can build these beautiful, flat trees, but to embed those trees in my code, I need to either materialize a tree in the traditional style first and then push it onto these stacks, or do some sort of macro magic to make these stack pushes.
I wonder if this is a common issue with stack-based programming. It is, in my case, quite simple for the computer, but hard to fit into an API without building up the stack manually!
---
1. https://pypi.org/project/dominate/
2. https://www.cs.cornell.edu/~asampson/blog/flattening.html
3. https://old.reddit.com/r/ProgrammingLanguages/comments/mrifd...
4. [Mike Acton] https://www.youtube.com/watch?v=rX0ItVEVjHc
5. [Zig] https://www.youtube.com/watch?v=IroPQ150F6c
6. https://www.dataorienteddesign.com/dodbook.pdf
Many people glorify the simplicity of Lisp as an interpreter, but Forth is similar and underappreciated. Sadly, the only code I've written in Forth is... PostScript. Yeah, PostScript is a dialect of Forth. As a child, I really was amused by the demo of GraFORTH on Apple ][, which included 3D wireframe animations, which at the time were magical.
> As a child, I really was amused by the demo of GraFORTH on Apple ][, which included 3D wireframe animations, which at the time were magical.
I originally wrote GraFORTH (https://archive.org/details/a2_GraFORTH_1981_Lutus_Paul) to escape the slow world of integer BASIC on my first computer (an Apple II). Because it relied on large blocks of assembly code to produce nice graphics, it perhaps misled people about what Forth could do on its own.
Later I wrote a variation I called TransFORTH (https://mirrors.apple2.org.za/ftp.apple.asimov.net/documenta...) that supported floating-point. I intended to combine GraFORTH and TransFORTH, but my computer didn't have enough RAM.
Innocent times, different world, before the personal computing tail began wagging the dog.
Someone mentioning childhood tech and the creator showing up is peak HN, in the best possible way. I love little threads like this... I never used a Forth as a child, but I recall reading about it and marvelling over it at a time when getting hold of huge amounts of pirated games was easy, but finding anywhere to even buy more serious tools could be a challenge... I think it was probably 20+ years before I actually ended up trying a Forth.
Thank you for one of the coolest things I'd ever seen back then... The space shuttle animation was pure magic to me.
> I originally wrote GraFORTH
Oh really?
Given you were around at about the correct time period, could you hazard a guess at what dialect this very old Forth game from Byte magazine was written in?
https://github.com/RickCarlino/Cosmic-Conquest-1982
It has some graphics commands in that I couldn't find in any other version of Forth on the Apple II. I'm a little outside the Apple II demographic, since they didn't really take off in the UK - although the very first home computer I ever used was an Apple II owned by the father of the guy that founded Rockstar Games :-)
Yes, Paul Lutus wrote GraForth.
As for vhtab, I don't know.
https://groups.google.com/g/comp.lang.forth/c/WqrpoPtxwoM/m/...
A customized figForth, 79Forth, ProForth, or some other Forth lost in a basement.
> could you hazard a guess at what dialect this very old Forth game from Byte magazine was written in?
The writeup identifies the original Forth source/version as most likely FIGForth '78, so I assume that's correct. This doesn't mean it has no code borrowed from elsewhere, and we might never sort that out.
I should add that Forth has the property that you go from nothing to writing programs pretty quickly, because it's all based on RPN (like HP calculators) and there's very little infrastructure required to create a usable environment -- unlike virtually every other language I've created/used.
My having been a fan of HP calculators beforehand played a part in getting me started with Forth -- RPN was an aspect of Forth I didn't have to learn before getting started.
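(The calculator analogy really is that direct; a one-line sketch:)

    \ Operands go on the stack, operators consume them; there's barely a parser.
    2 3 + 4 * .      \ (2 + 3) * 4, prints 20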
Remember also that the 6502 (the Apple II processor) had a rather easily understood assembly instruction set, which meant any adept 6502 programmer could basically decode and grab other people's work without needing a source listing. No longer true for modern processors.
Guess how we updated each other during program development and updates? Ready? 5 1/4 inch floppy disks stuffed into big manila envelopes, then snail-mailed. No, not making this up.
Yup. I am old enough to have posted cassettes rather than 5.25" floppies, although I was using a Z80-based machine that ran Forth and took 5.25" floppies as late as the early 90s. Wildly specialised bit of kit, and I wish I'd stolen it when I had the chance because it's probably been thrown over the side of an oil rig by now.
The first Forth machine I used was the Jupiter Ace, which was a home computer sold in the UK by a couple of guys who spun their company off from Sinclair. It was a bit too underpowered and a bit too late and a bit too weird to really "land" - everyone either had a ZX Spectrum or a Commodore 64 by then, and schools and rich kids had BBC Micros (I got right into 6502 machine code on the one we had in school when I was about 9, and then got my hands on its predecessor, the Acorn Atom). I also had a couple of Epson HX20s that the company my dad worked for had used as data loggers, which had Forth ROMs fitted. That's how I got right into 6809 programming, and that chip is ludicrously suitable for Forth!
I got Cosmic Conquest working on an Apple II emulator but the support code around getting it working is frankly terrifying. I used a fig-Forth disk, and wrote my own implementations of the graphics words used there, which I guess is what the original author did.
Tracking them down has so far proved impossible, and it's quite likely they are no longer around.
As a user, I'm just here to offer ::applause:: for GraFORTH.
I used GraFORTH, that was so cool! I owe you a beer for pirating it. I also (like most Forth enthusiasts) developed my own Apple ][ Forth, based on FIG-FORTH, with its own graphics libraries and ProDOS integration, and used it to write terminal emulators.
Then I discovered Mitch Bradley's Sun Forth (aka ForthMacs, Open Firmware, IEEE 1275-1994), which was originally based on Laxen and Perry's Forth-83, but has a metacompiler and can target many platforms and word sizes and CPUs.
More thoughts and links on Mitch Bradley, Open Firmware and Forth programming:
https://news.ycombinator.com/item?id=21822840
https://github.com/MitchBradley/openfirmware
Has anybody else ever had the dubious experience of using "Cap'n Software Forth"? That's what John Draper wrote [Sl]EasyWriter with (which he wrote on work furlough from the Alameda County Jail). During the 90's SF Raves scene I would always carry some emergency tobacco around as repellent, just in case I ran into him.
https://en.wikipedia.org/wiki/EasyWriter
http://www.art.net/~hopkins/Don/lang/forth.html
>The first Forth system I used was Cap'n Software Forth, on the Apple ][, by John Draper. The first time I met John Draper was when Mike Grant brought him over to my house, because Mike's mother was fed up with Draper, and didn't want him staying over any longer. So Mike brought him over to stay at my house, instead. He had been attending some science fiction convention, was about to go to the Galopagos Islands, always insisted on doing back exercises with everyone, got very rude in an elevator when someone lit up a cigarette, and bragged he could smoke Mike's brother Greg under the table. In case you're ever at a party, and you have some pot that he wants to smoke and you just can't get rid of him, try filling up a bowl with some tobacco and offering it to him. It's a good idea to keep some "emergency tobacco" on your person at all times whenever attending raves in the bay area. My mom got fed up too, and ended up driving him all the way to the airport to get rid of him. On the way, he offered to sell us his extra can of peanuts, but my mom suggested that he might get hungry later, and that he had better hold onto them. What tact!
As annoying and creepy as he is, he does have a lot of great stories to tell...
Calling Richard Nixon:
https://news.ycombinator.com/item?id=22671636
Execute Some "Get High" Instructions:
https://news.ycombinator.com/item?id=39575987
Forging BART Cards:
https://news.ycombinator.com/item?id=34568618
My favorite John Draper story -- not sure if it's true, heard it from several sources.
One day IBM called me up and asked if I would write them something like Apple Writer, for their new PC. I instantly asked, "Under what terms?" I think that surprised them -- I was wrongly rumored to be all programmer and no business sense.
They replied, "We give you $100,000 in royalties, after which we own the program." I thought a bit and said, "Hmm ... $100,000 ... that's about 15 days of Apple Writer royalties." A long silence on the phone line.
So they realized I wasn't going to write anything for them. Then, according to rumor, they asked John Draper and he agreed -- he wrote them a word processor. A really terrible one.
After IBM voluntarily withdrew his program from the market, Draper is rumored to have said, "They asked for a $100,000 program and I gave them one."
Yep, we were pretty snotty in those days. But then, we didn't have to compete with AI.
The difference between Forth and Lisp could not be more pronounced. Forth source code has entirely implicit structure, you can't even tell which function is called on which arguments. Lisp has entirely explicit structure which makes it much easier to read and edit. Lisp needs only a single primitive (lambda) to create the entire programming language, whereas Forth needs many primitives which break the core idea of the language in order to be usable. All of what is elegant about Lisp is ultimately lacking in Forth.
I think I'd agree from a mathematical perspective that lisp is more elegant, but implementation-wise, I really do like Forth's simplicity. They're both really cool.
I had a copy of that as well - I forget whether it was a Christmas gift or if I bought it. The demos were neat, but I was lacking in ideas when I had time to play with it, and the Apple didn't go to college with me.
But if I were going to do some "from the ground up, using first principles, with nobody else's libraries" embedded work, Forth would certainly be something I'd consider.
>Yeah, PostScript is a dialect of Forth.
My understanding is they were developed independently.
Like many here, the annual "language" issue of Byte Magazine in 1980 introduced me to Forth. Although I was enrolled as an engineer in college, I was a frustrated (mediocre) programmer but my upstate NY institute did not offer a Comp Sci degree at that time. Forth was a gateway drug for me and demystified the concept of compiling for me. Prior to Forth, when a thousand Freshman engineers were writing their Fortran IV projects for an 8am class the next morning on mainframe 3270 terminals, we imagined that the operators must have to continuously pour water on the compiler to keep it cool. Yeah, it was 1980 and computers were still a little magic.
But Forth and threaded code was a life changer; it explained so much! My 3rd year, I partially implemented a 32-bit Forth in IBM S360 Assembler but getting I/O to work was my downfall (mostly due to my poor skills and lack of experience.) But the threaded interpreter and the basic stack ops all worked. But then I was introduced to Lisp...
But my love of Forth never left me. I made my living with C in the early days; these days it's predominantly SQL and Bash. When I had my second (third?) midlife crisis, I got two tattoos on my arm: one is the Y Combinator in Lisp (I 'lost' a trailing parenthesis due to a cut-n-paste error in the template given to the tattoo artist, so I had to go back and get another tattoo with an error message pointing out the missing parenthesis); the second tattoo is the implementation of an ANSI Forth word:
The fact that I could write an entire function with only punctuation characters was mind-blowing and reminds me to approach problems in unique ways. The tattoos are also great ways to start up conversations in bars...
I've had a soft spot for Forth and am toying with a silly Forth-like interpreter for web programming ... if not for actual use, at least for some fun time. One concept it adds is the notion of a "current selection" which can define the available vocabulary to use and is used to select and work with DOM elements. Just experimenting.
https://github.com/srikumarks/pjs
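To give a flavour of the "current selection" idea (a made-up toy sketch in Python, not the actual pjs code): the selected object decides which vocabulary of words is currently in effect.

    # Made-up toy, not the actual pjs code: the current selection decides
    # which vocabulary of words is in effect.
    DOC_WORDS  = {"hide": lambda el: el.update(hidden=True)}
    LIST_WORDS = {"sort": lambda el: el["items"].sort()}

    def run(tokens, root):
        selection = root                        # "current selection" starts at the root
        for tok in tokens:
            vocab = selection["vocab"]          # available words depend on the selection
            if tok in vocab:
                vocab[tok](selection)           # execute a word against the selection
            else:
                selection = selection["children"][tok]   # otherwise, select a child

    page = {"vocab": DOC_WORDS, "hidden": False,
            "children": {"todo": {"vocab": LIST_WORDS, "items": [3, 1, 2], "children": {}}}}
    run(["todo", "sort"], page)
    print(page["children"]["todo"]["items"])    # [1, 2, 3]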
Edit: As a kid, I disliked BASIC as a language though it let me do fun stuff. So I made an interpreter in BASIC for a language I'd like and it turned out Forth-like (and I didn't know about Forth at that time). I guess I'm still that kid some 35 years later.
I used to be a fan of these languages like Lisp and Forth and Joy (and Factor and Haskell), but then I found that what I really long for is just (untyped) lambda calculus (as a universal language). (Combinatory logic is just a similar representation of lambda calculus, but the differences go away quickly once you start abstracting stuff.)
I think expressing semantics of all (common) programming languages in lambda calculus would give us a solid foundation for automated program translation. And we should aim for that, the babel tower of languages doesn't really help anyone.
The current issue I have is with type theory. So I am trying to embed the notion of types directly into the lambda terms, so they would sort of "automatically typecheck" when composed. Crucial in this, in my opinion, are lambda terms that do not use lambda abstraction in their body, because you can think of these terms as a kind of mini-DSLs that have yet to be interpreted.
Anyway, once we can translate the calculus of constructions (and other common formal logics) into untyped lambda calculus, it will also help us do automated theorem proving. It must be theoretically possible, but to my knowledge nobody has really done this sort of hardcore formalization.
I implemented the Calculus of Constructions in untyped lambda calculus in order to shorten the 643 byte C program computing Loader's Number to 1850 bits (under 232 bytes) [1], as one of the milestones reached by my functional busy beaver function [2].
[1] https://codegolf.stackexchange.com/questions/176966/golf-a-n...
[2] https://oeis.org/A333479
I think this is great. I think you should write a paper on it.
I suspect it might need some kind of commutative diagram proof, i.e. if you express things in CoC formalized within BLC you will get the same result as when you express them in BLC formalized within CoC - I am not sure off the top of my head.
(Kind of similar to showing that self-interpreting quoted interpreter on quoted program is the same as quoting the result of running the interpreter on the program.)
And of course, this proof of equivalence should have formalization both in CoC (Lean?) and BLC.
My hope is that eventually someone will write a book on logic where the metalogic is just untyped lambda calculus. Proofs will be just beta-reduction and judgemental equality. And everything will be in the form: let's study the properties of these lambda terms that I came up with (the terms will of course represent some other logic, such as CoC, simply typed LC, or even LC itself, etc.).
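To give a feel for how little metalanguage machinery that needs, here is a toy normal-order reducer over de Bruijn-indexed terms (my own throwaway Python sketch, not any existing formalization), with Church booleans standing in for a logic being studied:

    # Terms in de Bruijn notation: ("var", i) | ("lam", body) | ("app", fun, arg)

    def shift(t, d, cutoff=0):
        if t[0] == "var":
            return ("var", t[1] + d) if t[1] >= cutoff else t
        if t[0] == "lam":
            return ("lam", shift(t[1], d, cutoff + 1))
        return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

    def subst(t, s, j=0):                 # t with variable j replaced by s
        if t[0] == "var":
            return s if t[1] == j else t
        if t[0] == "lam":
            return ("lam", subst(t[1], shift(s, 1), j + 1))
        return ("app", subst(t[1], s, j), subst(t[2], s, j))

    def whnf(t):                          # reduce to weak head normal form
        while t[0] == "app":
            f = whnf(t[1])
            if f[0] == "lam":             # beta step: (\ body) arg
                t = shift(subst(f[1], shift(t[2], 1)), -1)
            else:
                return ("app", f, t[2])
        return t

    def normalize(t):                     # full normal-order normalization
        t = whnf(t)
        if t[0] == "lam":
            return ("lam", normalize(t[1]))
        if t[0] == "app":
            return ("app", normalize(t[1]), normalize(t[2]))
        return t

    TRUE  = ("lam", ("lam", ("var", 1)))                      # \t.\f. t
    FALSE = ("lam", ("lam", ("var", 0)))                      # \t.\f. f
    NOT   = ("lam", ("app", ("app", ("var", 0), FALSE), TRUE))
    print(normalize(("app", NOT, TRUE)) == FALSE)             # True: "proof" by beta-reduction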
Grounding programming languages in mathematics like this is essentially the goal of Strachey and Scott's denotational semantics, which has been very influential in programming language theory:
https://en.wikipedia.org/wiki/Denotational_semantics
All approaches to semantics of programming languages are mathematical, the denotational one is not "more mathematical" than the rest.
Not really a big fan, because the formalization of DS always left something to be desired, but I think a big difference with formalization in ULC is that ULC is (mostly) materialist while DS is structuralist.
So the initial formalization into ULC can be automated - if you have semantics of your language already implemented as an interpreter in another language, you can use this as a starting point.
With DS - I am not sure. I feel most people who build new languages don't provide DS specification.
It's a really cool idea to compile everything down to lambda calculus and then you solve all semantics issues. (If something fits) you can convert 1:1, use general libraries from one language in others without loss, etc. Ah, what a beautiful world it could be!
A lot of languages (including Forth) map really poorly to LC. Read some of the Forth writing on portability at all costs:
http://www.ultratechnology.com/antiansi.htm
Simple pure concatenative languages map quite well though [1].
[1] https://github.com/tromp/AIT/blob/master/ait/mlatu.lam
Joy, right? Not Forth.
Conversion between spaghetti stacks and pure stack programming (in which the stack contains numbers and no GC) has a massive translation cost if you go from LC to Forth and back.
Forth is an imperative language and as such you will have to model memory state (at least) somehow, if you want to use purely functional language as a representation. But that's the cost of doing business.
The thing is though, you don't translate to LC for performance, but for understanding. At any point, the LC evaluator might recognize a known subterm and replace it with a known equivalent. Depending on the goal, that might help to improve evaluation performance (skip known steps of beta reduction), reduce memory consumption (reduce size of the term), or improve readability (refactor code to use better known abstractions).
It's difficult to understand what they were actually doing, but reading between the lines, it sounds like an advantage of 'machine Forth' is writing some Forth words in assembly? I can see why that would run much faster for a jpeg decoder.
> And we should aim for that, the babel tower of languages doesn't really help anyone.
What exactly do you mean with this? That the amount of programming languages available isn't actually helpful, it's detrimental?
Well, I feel that a lot of code is written again and again, just in different languages. If we could automatically compare and translate different implementations, I think it would be beneficial for finding bugs.
Every time somebody comes up with a new programming language, I am like: yeah, so you added these abstractions, just in a different syntax. I think people who come up with new languages should implement the primitives on top of lambda calculus (which really is the simplest logical system we know), then we could potentially have automated translators between languages, and that way we could cater to everyone's preferences, and expand a standard library across different languages.
So in short, yes, I think proliferation of programming languages without formal understanding of their differences is detrimental to interoperability of our computer systems.
It would also allow a wider notion of metaprogramming - automated manipulation of programs. For example, let's say I need to upgrade my source code from one interpreter version to another. If both interpreters are represented as a set of terms in lambda calculus, I can see how to express one in the other, and formalize it as some kind of transformation. No more manual updates when something changes.
It would also allow us to build a library of universal optimizations, etc. So I think programmers would benefit from having a single universal language.
> Every time somebody comes up with a new programming language, I am like: yeah, so you added these abstractions, just in a different syntax.
I got this feeling too, until I started to explore languages outside of the C/Algol-like syntaxes. There is a wide range of languages out there, from array languages to lisps, and they don't give me the feeling of "just a different syntax" but have actually changed the way I think.
So yeah, I love lisp now and spend most of my days writing it, but it also comes with the downside that now Java, C# and Golang look more similar than different to each other.
> It would also allow us to build a library of universal optimizations, etc. So I think programmers would benefit from having a single universal language.
I think assuming everyone would use the same hardware, same environment and same workflows to solve the same problems, this would make a lot of sense and would be hugely beneficial!
But in reality lots of problems need different solutions, which have to be made in different ways, by different people who think differently. So because the world is plural, we need many programming languages too. Overall I feel like that's a benefit.
> they don't give me the feeling of "just a different syntax" but actually changed the way I think.
Not only that, but languages that enable more concise expression of an idea (without losing clarity for readers) reduce the error rates of programs written in them.
Studies have long suggested that, when not accounting for constraints imposed by a compiler/interpreter (like the Rust borrow checker), the average error rate per unit size of program is roughly constant across widely varying languages. So reducing the number of lines/expressions required to express ideas reduces the number of errors by the same factor. Brevity pays when readability isn't sacrificed.
> assuming everyone would use the same hardware, same environment and same workflows
They don't have to. I would ideally represent the HW, OS and compiler in LC as well. Then primitives of the language that could be proven to be HW/OS/compiler independent could be abstracted and readily translated. For the ones that could not - well, you have an exact description of what the differences are.
Racket is at least related to the kind of metalanguage system you're talking about. I've never actually done it, but to implement a new "#lang" in Racket, your job is essentially to write a "reader" for it that transliterates it to the classic Schemey Racket language. Libraries written in one #lang can then be called from one another (or not, if that's what you want -- a lot of the point of this in Racket is building little restricted teaching languages that have limited capabilities).
But why stop at Racket? Why not reduce the needed primitives even further down to LC?
Everyone should be free to unite behind my choices. It's obviously what is best for everyone.
That's a rather cheap retort. I am not saying everybody should use raw untyped lambda calculus for their programming, just that we would all benefit if we could translate languages we use to and from it, because then we could interoperate with any other code, refactor it, etc.
Has even a single programming language made the complete documentation and implementation effort you're describing? It'd be interesting to read about.
Isn’t structure lost in the compilation process?
I mean, we already have bits and pieces of what you want, like an assembler to C decompiler, but the output isn’t very nice without the types.
And how many languages can run on the CIL in dotnet. Say we create a CIL to lambda calculus compiler. Where do we go from there?
A CIL to LC compiler is effectively an emulator of CIL in LC. That is, every primitive of CIL has a corresponding LC term, which operates on the CIL execution state. So then you can express functions from .NET standard library as terms in LC.
Now let's say you do the same for JVM. Now you can start looking at all these lambda terms and search for some similarities. For example, you might notice that some list functions are equivalent if you transform the representation in a certain invertible way. This gives you a way to express functions from one standard library in the other.
In general, I think we should try to translate a wide variety of human programs into lambda calculus and then try to refactor/transform the lambda terms to spot the common patterns.
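Sketched in Python functions rather than raw lambda terms (the names and semantics here are invented for illustration, not real CIL), the "primitives as terms operating on the execution state" idea looks roughly like this:

    # Illustrative only (invented names, not real CIL semantics): every VM
    # primitive becomes a function from execution state to execution state,
    # so a compiled method body is just a composition of such functions.
    from functools import reduce

    def push(n):                      # an ldc-like primitive
        return lambda state: (state[0] + [n], state[1])

    def add(state):                   # an add-like primitive
        stack, heap = state
        return (stack[:-2] + [stack[-2] + stack[-1]], heap)

    def compile_method(instructions):
        return lambda state: reduce(lambda s, op: op(s), instructions, state)

    program = compile_method([push(2), push(3), add])
    print(program(([], {})))          # ([5], {}) -- "2 + 3" left on the stack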
That sounds very thankless, but on the other hand we have very fast computers and maybe "the bitter lesson" of just throwing more compute at finding patterns can be applied here, as well.
Certainly that is how I read it.
It divides effort, spreads it too thinly among too many disparate projects with essentially the same goals, and as a result, they all advance much more slowly.
Examples: how many successors to C are there now? Hare, Odin, Joy, Zig, Nim, Crystal, Jai, Rust, D... And probably as many again that are lower-profile or one-person efforts.
For a parallel example, consider desktop environments on FOSS xNix OSes.
I have tried to count and I found about 20.
A "desktop" here meaning that it provides a homogenous environment, including things like a file manager and tools to switching between apps, plus accessories such as text editors, media viewers, and maybe even an email client, calendar, and/or address book. I am trying to explicitly exclude simple window managers here.
The vast majority are simply re-implementations of the Windows 9x desktop. Taskbar along 1 edge of the screen, with buttons for open apps, start menu, system tray, hierarchical file explorer, a Control Panel app with icons for individual pages, etc.
This includes:
* KDE Plasma (and Trinity)
* GNOME Flashback (AKA GNOME Classic, including the Consort fork)
* Cinnamon
* Xfce
* Budgie
* MATE
* LXDE (including Raspberry Pi PIXEL)
* LXQt
* UKUI (from Ubuntu Kylin, openKylin, etc.)
* DDE (from Deepin but also UOS, Ubuntu DDE and others)
* Enlightenment (and Moksha etc.)
* ChromeOS Aura
And more that are now obsolete:
* EDE
* XPde
* Lumina
That's about 15, more if you count variants and forks. There are more.
The main differences are whether they use Gtk 2, 3 or 4, or Qt. That's it.
It's easier to count the ones that aren't visibly inspired by Windows >= 95:
* GNOME Shell, ElementaryOS's Pantheon, Ubuntu's Unity.
Arguably: GNUstep (whose project lead angrily maintains it is not a desktop after all), and the long-dormant ROX Desktop...
So, arguably, 3 you can run on a modern distro today.
CDE is older than Linux or Free/NetBSD so doesn't count. I only know 1 distro that offers it, anyway: Sparky Linux.
MAXX Interactive Desktop looks interesting but it's not (yet?) FOSS.
All that effort that's gone into creating and maintaining 8-10 different Win9x desktops in C using Gtk. It's tragic.
And yet there is still no modern FOSS classic-MacOS desktop, or Mac OS X desktop, or GEM desktop, or Amiga desktop, or OS/2 Workplace Shell... it's not like inspiration is lacking. There are at least 3 rewrites of AmigaOS (AROS, MorphOS, AmigaOS 4.x) but despite so much passion nobody bothered to bring the desktop to Linux?
Defenders of each will vigorously argue that theirs is the best and there are good reasons why it's the best, I'm sure, but at the end of the day, a superset of all of the features of all of them would not be visibly different from any single one.
That's rather sad, IMHO.
> There are at least 3 rewrites of AmigaOS (AROS, MorphOS, AmigaOS 4.x) but despite so much passion nobody bothered to bring the desktop to Linux?
The passion is there for the whole AmigaOS, of which the desktop metaphor, Workbench, is just a part. What fun is AmigaOS without Exec, Intuition and AmigaDOS? The passion is to see AmigaOS run, not to see Linux wearing its skin.
GUIs for manipulating files a-la Workbench are readily available; nobody seems to have built an Amiga-skinned one when a Win95 one will do. DOpus is already a clone of Midnight Commander, and there are clones of that aplenty; the most DOpus-like one I've seen is Worker (http://www.boomerangsworld.de/cms/worker/)
The rest of the Workbench metaphor is available via AmiWM (https://www.lysator.liu.se/~marcus/amiwm.html), or requires apps to play along (e.g. Gadtools, MUI, Commodities, ARexx)
Well, you do you, and indeed, the entire community is free to do as it wishes.
What I find surprising is that there are multiple entire Amiga-themed Linux distros – for example:
https://www.commodoreos.net/CommodoreOS.aspx
I reviewed it. I was not very impressed.
https://www.theregister.com/2025/05/06/commodore_os_3/
And ones which put an Amiga emulator front and centre:
https://cubiclenate.com/pimiga/
https://wilkiecat.wordpress.com/2025/05/31/pimiga4-by-chris-...
(Which I looked at, but decided that there wasn't enough here to review.)
And new hardware like the A1200NG:
https://www.a1200.com/index.php/the-a1200-ng/
Which is an Arm board running Linux running a full-screen Amiga emulator.
And AROS Portable:
https://arosnews.github.io/aros-portable/
Which I also reviewed:
https://www.theregister.com/2025/05/22/aros_live/
Given this visible interest in running Amiga stuff on Linux and integrating AmigaOS (and AROS) I am very surprised that in ~30 years, nothing has progressed beyond a simple window manager.
Intuition isn't that big or complicated. It's already been recreated several times over, in MorphOS and in AROS.
I am so tired of seeing Linux desktops that are just another inferior recreation of Win95.
I want to see something different and this seems such an obvious candidate to me.
I think you can categorise Amiga enthusiasts in various ways, this is my taxonomy:
1. Hardware enthusiasts who specifically love the Amiga's original hardware, its peripherals, and the early post-Commodore direction (PowerPC accelerators), and/or modding all of the above. These sort of people used WarpOS back in the day and probably use MorphOS or AmigaOS 4 today. The question is whether, for these people, modern single-board computers "count" as Amigas or not.
2. Nostalgic enthusiasts of the system that the Amiga was, who are happy with a real Amiga, or with an emulated one, or an emulated one running on some board in a box shaped like an Amiga. Possibly with a non-Amiga UI to boot some classic games. These enthusiasts may enjoy fake floppy drive sounds that remind them of booting disks in their youth.
3. Software enthusiasts of the Amiga's OS, and the directions it took that were different from its contemporaries, and the software ecosystem that came from it. These people have a longer user-startup than startup-sequence. They probably have most of Aminet downloaded. These people might be interested in other alternative OSes, e.g. QNX or BeOS. If they're still using Amiga hardware, or emulators, they'd be interested in AmigaOS 3.5/3.9 and 3.1.4/3.2. This can also include AROS and the work to get it running on native hardware, not just m68k but also x86 and arm... but it's unlikely that it will ever support as broad a range of hardware as Linux does, which limits how many people would want to use it, because it's unlikely to be able to drive a random modern laptop.
4. The reverse of 3, Amiga users that were big UNIX fans, e.g. Fred Fish, the sort of people who ran GeekGadgets and replaced their AmigaShell with pdksh. They probably just moved wholesale to Linux and didn't look back.
There are probably other categories, but I think the one you're looking for is 5: enthusiasts of the Amiga's look and feel, but not its broader OS or its software. If they did care about that, they'd be in groups 2 and 3, and emulators or alternative AmigaOSes would satisfy them most.
I can't say why there aren't many alternative desktops for Linux. Probably because it takes a lot of resources to build a full desktop environment for Linux - a window manager, or even just an existing window manager theme, is not enough. A file browser is not enough. Ultimately it takes the applications themselves to play along, which only works when you have the clout to make people write software in your style (e.g. KDE, GNOME, Windows, macOS, Android, etc.).
The only alternative UI taken from retro machines to Linux, that I can think of, is ROX Desktop (https://en.wikipedia.org/wiki/ROX_Desktop) with its ROX-Filer... and even that doesn't look entirely like RISC OS, which you could be running instead of Linux.
Interesting analysis. Thanks for that.
I'm well aware of ROX Desktop and have used it in the past. I've written about it:
https://www.theregister.com/2025/01/14/the_end_of_absolute_l...
ROX's AppDirs provided the app format used in AppImage.
I am sure I remember a Mac-like one, and I've asked -- https://news.ycombinator.com/item?id=29937562 -- but to no avail.
You say a WM isn't enough, and I agree, but in this case, amiwm is still right there.
I own an Amiga and I'm interested and I try to cover Amiga news, but back in the 1980s, I was an Archimedes owner. I loved RISC OS and I never mastered AmigaOS. This is not something I personally want, although I'd love to write about it.
I feel the same, too many people doing similar stuff in slightly different syntax, too few people looking at how things are similar and could be unified.
I think it's time to look beyond syntax in programming and untyped lambda calculus is the simplest choice (that is universal and can easily express new abstractions).
Mathematics suffers to some extent from a similar problem, but recent formalization efforts are really tackling it.
The thing is, the most fundamental obstacle to unification, is that unification is a very hard feature to obtain. Even in an LC formalization, do you expect that two "text editor" programs would be interchangeable? Would only a canonical text editor be allowed? Does the LC facilitate arbitrary extensions, that perhaps follow some rules at the interface/boundary? While I also lament what is, on the whole, wasted work and interest, I think the alternative is not some much simpler solution, but rather to offer something that is better. https://xkcd.com/927 is a warning, as true as it is annoying. We gravitate towards variety because it is simpler and natural to how we act; how will your proposal fundamentally improve on this? You call out misguided "defenders", and again I note the same problem, but you seem to be calling for One True Way with no significant realizable improvement.
Sometimes you can prove mathematically that two different approaches are equivalent, but differ only in name or some parameter, and that's a unification of sorts, without proliferation of standards.
In the case of UIs, this is still an open problem and will be for many years, but I suspect we could do it by defining, in some kind of fuzzy logic for example, what a good UI looks like. Then we could transfer elements from one UI into another.
Or we could just start at the edge. For example, we can unify what a "user input" is, and what a "widget on the screen" is. Formalizing everything would allow us to do such transformations on programs, for example, universal input/output remapping. And then we could continue to unify into more specific logic of what input is being read and how things are drawn on the screen.
Untyped lambda calculus (and its sibling combinatory logic) is just a language for expressing logic, nothing more. And it already exists (arguably it's one of the first programming languages; it predates Forth and Lisp by at least two decades) and is among the simplest ones we know. I actually became so interested in LC recently because, believe it or not, expressing things in classical logic is often more complicated than expressing things in LC.
> Sometimes you can prove mathematically that two different approaches are equivalent, but differ only in name or some parameter, and that's a unification of sorts, without proliferation of standards.
Certainly, but that would already be supposing most of the work to unify at the source level is done, unless an extremely strong normalization is possible.
> Untyped lambda calculus (and its sibling combinatory logic) is just a language for expressing logic, nothing more. And it already exists (arguably it's one of the first programming languages; it predates Forth and Lisp by at least two decades) and is among the simplest ones we know. I actually became so interested in LC recently because, believe it or not, expressing things in classical logic is often more complicated than expressing things in LC.
(Personally, I prefer combinators slightly more)
I don't deny the power of LC for some uses, but taking one program, reducing it down to an LC equivalent, and then resurfacing as another program (in a different language, but otherwise equivalent), or some other program transformations you may desire, would certainly be elegant in some sense, but very complex. It's like programming in Brainfuck; the language itself is very simple, and making mechanistic tooling for it is very simple, but I don't think the tooling we could invent in 50 years would be sufficient to make Brainfuck simple to read or write. Moreover, formalizations of, say, "button" are not a problem, but scaling to different screens, devices, use cases, and so on will greatly increase the scope. This OS represents input events this way, this hardware provides that sort of data. I think this is the same problem as to why people, everyday, don't bother to make formal arguments almost all of the time. It's not that a formal argument along the lines of "you didn't take out the trash today, so I have reason to be frustrated with you" can't be formulated or proven, but rather that the level of rigor is generally considered both fatiguing and unnecessary.
Any time someone suggests something that should make things much simpler, I'm skeptical. There are things that have essential complexity too great to be made simple, and then humans maybe have an inherent overhead of accidental complexity, above and beyond the accidental complexity we accidentally add. I'm still interested to see where your efforts lead, but I'm not expecting to see cheap nuclear fusion for another 10 years at least, so.
Why specifically untyped?
Because it is the simplest thing we have, and has a pretty straightforward self-interpreter.
It feels like you need a lot more metamathematics to deal with typed lambda calculus than with untyped one, and types are something that comes without a justification.
Anyway, the idea is, if you have a language, you can think of source code written in the language as a giant lambda term, where you have all lambdas upfront and only composition in the body. A tree of symbols to be composed, essentially. And then to interpret this source code in a language, you supply definitions of the language's primitives as arguments to the source code term.
Now if your language is typed, the primitives need to be chosen in such a way that the interpreted term (the source code applied to the language primitives) fails to normalize if the program is not typed correctly.
You can then have a correspondence between the primitives of the typed language that typecheck and a simpler set of primitives of the same language used purely for computation, under the assumption that the program typechecks. This correspondence "defines" the typing mechanism of your language (in untyped lambda terms).
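A rough illustration in Python lambdas (my own toy, not a formal encoding): the same "all lambdas upfront, only composition in the body" term takes on different meanings depending on which primitives you feed it - one set that computes, another that merely builds a syntax tree. A type-checking set of primitives would be a third interpretation, one that fails on ill-typed compositions.

    # My own toy illustration, not a formal encoding: a "program" abstracts
    # over its primitives up front and only composes them in its body.
    program = lambda lit, add, mul: mul(lit(2), add(lit(3), lit(4)))

    # Interpretation 1: primitives that compute.
    print(program(lambda n: n, lambda a, b: a + b, lambda a, b: a * b))   # 14

    # Interpretation 2: the same term, with primitives that build a syntax tree.
    print(program(lambda n: ("lit", n),
                  lambda a, b: ("add", a, b),
                  lambda a, b: ("mul", a, b)))
    # ('mul', ('lit', 2), ('add', ('lit', 3), ('lit', 4)))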
> have all lambdas upfront and only composition in the body
That is only possible for a very limited subset of lambda terms. For example, it's not possible for the one-point basis from which any closed lambda term can be constructed by composition.
Yes, that's deliberate though. (I will call these "concrete" terms, because they lack abstraction in the body, but I am looking for a good name. In fact, no universal one-point basis can be expressed as a single term like that; you need a composition of two. The concrete property is not preserved under composition, although it might be preserved in weaker bases than S,K, such as B,C,K - a kind of affine logic.)
Anyway, the reason I am interested in concrete terms is that I want to define typing judgements in ULC somehow. (In CoC, all mathematical questions can be rephrased as: is there an object of a given type T? And you need to be able to formalize questions somehow to formalize metamathematics.)
An obvious definition of a typing judgement in ULC would be: a term x is of type B in a typed language A (both A and B are given terms) iff it satisfies Ax = B (the equality is judgemental, after beta-normalization).
However, this definition doesn't work, because a general term x can just disregard A and return B directly. But I think if we restrict x to be a concrete term in the above definition (possibly with a given number of upfront lambdas - not sure if that is required), then we can make the definition work.
I also suspect concrete terms can be sort of "deconstructed from the outside". In general, we cannot resolve equality of terms, but I suspect that concrete terms can be analyzed (and quoted) within ULC.
One thing I realized about your CoC implementation - I think you embed CoC into ULC in an indirect way, working with CoC terms in a quoted way. And that's fine, but what I want is to have CoC in ULC directly, i.e. have a base set of ULC terms for the primitives of CoC language. But that also can be thought of as the CoC interpreter being restricted to apply on concrete terms, giving them meaning by substituting the base inside.
In other words, concrete terms are kind of an "inside view" of quoting. (Maybe we should call them "data terms", because they effectively carry only data and no executable payload.) Having the concept of concrete terms in the ULC metalanguage can also help to define interaction with the external world, i.e. that you only accept data, not executables - something that you kinda do through the "monadic" interface. (You need to "remember" that a term only accepts quoted data; it cannot be made explicit in the ULC language. The advantage of concrete terms is that they are independent of your choice of quoting operator.)
Anyway, I am not sure if I am making sense, I am trying to grapple with this myself, so it's OK if you don't think concrete terms are a useful concept.
> there exists a term x that satisfies Ax = B
Don't you mean xA = B ?
> x can just disregard A and return B
Not if you require x to be strict, i.e. it must use its argument.
> Don't you mean xA = B ?
No, that equation can be created as a special case from equation TAx = B (where Txy = yx is a transposition combinator).
> Not if you require x to be strict, i.e. it must use its argument.
I am not sure it would generally help but maybe you mean it only in case of equations xA=B (not Ax=B as I suggested). I also think this strictness condition would be too limiting.
In equation Ax=B, we can take A of the form R_n A_1 .. A_n, where R_n reduces Ax to x A_1 .. A_n (R_n rotates x in front of n arguments).
So if x is concrete and takes n arguments, we can think of it as "text" (parenthesised expression) in symbols that are to be interpreted as A_1, .. A_n. Requiring that every possible symbol is used in a language text fragment is a weird condition, which doesn't make much sense to me.
In any case, my definition of typing judgment then allows one to put a chosen condition on a text composed from chosen symbols, which is quite powerful and can be used to e.g. determine whether a quoted expression is valid. And arguably it is a very natural definition in the context of ULC. (I read To Mock a Mockingbird 30 years ago - in partial Slovak translation - and most problems there could be formulated as the Ax=B equation for x being a concrete term.)
The typed lambda calculus is not Turing complete.
Not quite true, depends on the flavor of the type system you use. Church's simply typed is not, but other type systems are.
I think one way to make it Turing complete is to add typing rule for Y combinator as an axiom to your simple type system, but I am not sure.
Yes, I had Church's simply typed lambda calculus in mind. You need to add a fix point combinator or general recursion.
I was captivated by the August 1980 issue of Byte magazine, which had a cover dedicated to Forth. It was supposed to be easy to implement, and I imagined I might do that with my new KIM-1 6502 board. Alas, the KIM-1 was lost when I went to college, and life forced me down different pathways for the next 45 years.
About a year ago I finally began to work on my dream of a Forth implementation by building a Forth-based flight management computer into a spaceflight simulation game that I am working on. Now, instead of writing mostly C# or GDScript code in Godot, I am trying to figure out ways to create a useful device using this awkwardly elegant language. I'm having fun with it.
One of the interesting bits is that I have been able to make the Forth code an entirely separate project on Github (https://github.com/Eccentric-Anomalies/Sky-Dart-FMS), with a permissive open-source license. If anyone actually built a real spacecraft like the one in my game, they could use the FMS code in a real computer to run it.
There is one part of the linked article that really speaks to me: "Implement a Forth to understand how it works" and "But be aware of what this will not teach you". Figuring out the implementation just from reading books was a fascinating puzzle. Once I got it running, I realized I had zero experience actually writing Forth code. I am enjoying it, but it is a lot like writing in some weird, abstract assembly language.
Circa 1980 BASIC was the dominant language for micros because you could fit BASIC in a machine with 4k of RAM. Although you got 64k to play with pretty quickly (1983 or so), it still was a pain in the ass to implement compilers on many chips, especially the 6502, which had so few registers and addressing modes that you were likely to use virtual machine techniques, like Wozniak's SWEET 16 or the atrocious p-code machine that turned a generation of programmers away from PASCAL.
FORTH was an alternative language for small systems. From the viewpoint of a BASIC programmer in 1981, the obvious difference between BASIC and all the other languages was that you could write your own functions to add "words" to the language. FORTH, like Lisp, lets you not only write functions but create new control structures based on "words" having both a compile-time and run-time meaning.
FORTH's answer to line numbers in BASIC was that it provided direct access to blocks (usually 1024 bytes) on the disk with a screen editor (just about a screenful on a 40x25 display). You could type your code into blocks and later load them into the interpreter. Circa 1986 I wrote a FORTH for the TRS-80 Color Computer running the OS-9 operating system and instead of using blocks it had POSIX-style I/O functions.
FORTH was faster than BASIC and better for systems work, but BASIC was dominant. Probably the best way to use FORTH was to take advantage of its flexibility to create a DSL that you write your applications in.
I had that issue, and I think I still might have it in my closet. (Weren't those Robert Tinney covers amazing?)
I always wanted to try out Forth but had no real opportunity. Maybe I should now?
https://archive.org/details/byte-magazine-1980-08
Why is it that languages like this don't scale? It's not the first time I've seen a powerful language that got forgotten. Other examples include Smalltalk and Common Lisp (tiny community).
Is it because some languages are "too powerful"? What does that say about our industry? That we're still not that advanced a species, to be able to handle the full power of such languages?
I say that because it seems languages that are "dumbed down" seem to absolutely dominate our world (Python, Ruby, JS, etc.)
One simpler explanation: in forth you are forced to keep the stack, and modifications to the stack, in your short term memory, albeit only really three numbers in most cases. Whereas with C et al you simply look down the page at the variables, far less taxing on your short term memory.
Well-written and designed high-level forth words often transcend that and tend to be, quite literally, readable however, in a way that is incredibly rare to see in C et al. Of course the argument is that other programmers shouldn't be expected to see the problem in the way the original problem solver did.
This is probably why you see things like locals get used a lot as modern Forth programs grow. It doesn't have to be brutal early days Chuck Moore genius programs, but I guess you start getting away from the original ethos.
I think even with locals you're still mentally dealing with a few items on the stack in each word usually. But, yes, locals do help you from passing around items from word to word: you see the usage of the local far easier than you see the location of the stack elements.
It's a different solution for a different time.
Forth was an excellent way to write a powerful and expressive programming language that could self-host with a bare minimum of assembly language "bare metal" programming.
The fridge-sized computer that Forth was originally developed on had double-digit kilobytes of memory (maybe 8192 words of 16 bits each) and clocked instructions through at a whopping 300kHz or so. The microcontroller that drives the Caps Lock LED on your keyboard is a hundred times faster with a hundred times the memory.
These days we do not need to squeeze editor, compiler, and target binary into such a tiny machine. If you're developing for a microcontroller you just use C on your "big" computer, which is unimaginably more powerful.
In the olden days of the 1990s I used a development system for embedded stuff that was written in and targeted Forth on a Z80 with a whopping 64kB of RAM and 5.25" floppies, but that was at least ten years old and five years out of date at the time.
You're probably reading my words on a slice of glass the size of half a sandwich that contains more computing power than existed in the whole world when Forth was first written.
It's a shame because writing something like Forth from the ground up (and I mean, assembly code to load the registers to start the ACIA to begin transmitting text to the terminal) perhaps in an emulated early 80s home computer is a great way to get a sense of what the chip behind it all is doing, and I feel that makes you a better programmer in "real" languages like Go or Python or C.
What would your top two tips for beginning Forth programmers be? Other than "don't use Forth".
Find an existing implementation that runs on some computer you already have, or have an emulator for.
Then find a computer you're really into, and port fig-Forth to it, just for fun. Don't copy the source across, type it in with your own changes as you go.
Edit: Don't forget to have fun. That's the most important thing. You're doing this because you *can*, and just to see what will happen.
Thanks!
I was lucky, early in my career, to work at a place which used a lot of Perl and to read Damian Conway’s book, Object Oriented Perl. It was an amazing, mind-expanding book for me. It was filled with examples of different approaches to object-oriented programming, more than I ever dreamt existed, and it showed how to implement them all in Perl.
So much power! And right in line with Perl’s mantra, “there’s more than one way to do it.”
Unfortunately, our codebase contained more than one way of doing it. Different parts of the code used different, incompatible object systems. It was a lot of extra work to learn them all and make them work with each other.
It was a relief to later move to a language which only supported a single flavor of object-oriented programming.
What I've heard is that with Forth, basically no two environments are alike; they're highly customized, meaning every Forth programmer ends up creating his own language for his own custom needs.
So collaborating is a bit hard like this. The only serious Forth programmer I know lives alone in the woods, doing his things.
So from an aesthetic point of view, I really like the language, but for getting things done, especially in a collaborative way?
But who knows, maybe someone will write the right tools for that to change?
This is not a real issue, because the same thing can be said about C. No two C projects are the same, each has its own set of libraries, macros, types, etc.
I think the main problem is that Forth systems don't have a standard way of creating interfaces like C and other languages have. So the diversity of environments becomes a big issue because it's difficult to combine libraries from different sources.
That's not right. C coders have no problem at all diving right into most other C codebases.
I think it goes beyond that, no? Because you can do metaprogramming with Forth.
Have you tried collaborating with Forth? There's a lot of documented history of people doing so in industry when it was actually used, and more recently I've usually found Forth codebases approachable and easy to follow.
Personally I think this is the pay-off for writing the code in the first place: Forth is very difficult to write in a clear way, so if you actually manage to do it, you've probably made it very clear to follow, because otherwise it's hard to finish your project and make it work at all.
"Have you tried collaborating with Forth?"
No, I never did more than very simple experiments with Forth (which is why I started my comment with "what I heard").
"because Forth is very difficult to write in a clear way"
But that pretty much means that the average programmer will have problems with collaborating.
Not for simple collaboration or modification, which is most of what you want to do. But yes any serious development is difficult in Forth, period.
I think like C, there's standard/ANS Forth.
But most likely lots of those Forth coders implement their own which don't necessarily conform to the standard.
I don't think "power" is really that helpful a metric in determining how useful a programming language is. If you think of programming from the standpoint of trying to specify the program you want out of all of the possible programs you could write, one of the most helpful things a programming language can do is eliminate programs that you don't want by making them impossible to write. From that standpoint, constraints are a feature, not a drawback.
And at the extremes, too much power makes a tool less useful. I don’t drive an F1 car to work, I don’t plant tulips with an excavator, I don’t use a sledgehammer when hanging a picture. Those tools are all too powerful for the job.
planting flowers? trowel
planting foundations? excavator
once you specify "the job", the best tool is "the solution" to that job only. anything else is excess complexity
however if "the job" is unspecified, power is inverse to the length of "the solution"
so is constraint of power bad?
--
a fascinating question
just like music can be created by both additive and subtractive synthesis; every line of code creates both a feature and a constraint on the final program
in which case power can be thought of as the ability to constrain...
that is quite wild...
it implies expressivity is the ability to constrain
it implies drawing on a page, or more broadly, every choice we make, is in equal parts a creative and destructive act
so maybe life, or human flourishing is choosing the restrictions that increase freedom of choice? it's so meta it's almost oxymoronic; concretely: we imprison people to maximize freedom; or, we punish children with the aim of setting them free from punishment
this is the same as the walk from law into grace found in Christian ethics
maybe the ultimate programming language then, provides the maximal step down that path, and this is also the most useful definition of "power"
i.e. place on people those restrictions that increase their ability to choose
I worked at a place that had a big Forth codebase that was doing something mission critical. It was really neat and cool once you finally got it, and probably hundreds or maybe thousands of people had touched it, worked on it and learned it, but the ramp was pretty brutal for your average developer and thus someone decided it would be better to build the same thing over with a shitty almost-C-but-not-quite interpreted language. It certainly made it easier for more people to understand and build, even if the solution was less elegant.
That sounds interesting! Do you have any tips for us on how to use Forth effectively? What was the codebase?
Honestly, when I write Forth now, which is usually for embedded targets, I've got a customized version of zforth that I've grafted some stuff like locals into. If it's a small program, it's better to not be afraid of things like globals, and just spend at least twice as much time factoring, writing comments and thinking as writing. It's important to read other people's Forth code and try to understand it, as there's a zen and style that looks very different than how you'd write something like Java. It's freeing and enlightening once it clicks, but you have to fight a ton of the way you think about "normal" code.
As far as the codebase, I probably shouldn't say too much (maybe it's been long enough now, but I don't know), but all I'll say is that it was an important part of things at a certain disk drive manufacturer.
That makes a lot of sense! Thanks!
What were the most common mistakes you saw people new to Forth making? Being afraid of global variables is one of them, I infer.
Powerful languages invite people to do needlessly complex things. Needlessly complex things are harder to understand. Harder to understand is worse.
Code that matters is usually read and extended many more times than it is written, over time by different people, so being straightforward beats most other things in practice
Stack-based programming may be simple but it doesn’t seem like it would be easy to read and understand for large-scale programs at all.
It kinda happened with markup languages. HTML, SVG, and some other domain specific markup languages are all XML, which is a subset of SGML.
The thing there is those DSLs have their own specs.
Coding is a social activity. Reading code is hard. When there are multiple ways of doing things, it's extra hard. People want to have relatively standardized ways of doing things so they can share code and reason about it easier.
If there's a lisp or racket or a forth that's defined as a DSL, it might take off if it's standardized and it's the best solution for the domain.
HTML uses a ton of SGML features not part of XML (sometimes erroneously thought to be non-standard ‘tag soup’, not to mention self-closing tags). You need either a specialized parser or an SGML processor + DTD.
Wasn't HTML4 the last one defined as a SGML DTD? 5 and on is its own beast.
(rip XHTML)
You are right. There is a third-party DTD that should be mostly compatible (https://sgmljs.sgml.net/docs/html5.html).
In reality, HTML4 was never implemented to the letter by user agents, because people do things like putting -- inside comments.
Sadly our industry cares mostly about bricklayers and usually gravitates toward technologies that make it easier to treat employees like replaceable servants at low wages.
The large scale salaries SV style isn't something that you will find all over the globe, in many countries the pay is similar across all office workers, regardless if they are working with Git, or Office.
That argument implies that you would actually see these languages in communities with large SV style salaries which isn’t the case.
It turns out that “brick layer” languages are also easier to understand, not just for the next person taking over but for yourself after a few months. That’s valuable even to yourself unless you value your time at 0.
Why? The less the VCs have to spend with employees the better.
See the famous quote about Go's target audience, or 2000's Java being a blue colour job language.
Not only do languages like Lisp, Forth, and Smalltalk require people to actually get them, a bit like the meme with burritos in Haskell, they also suffered from bad decisions by the companies pushing them.
Lisp suffered with Xerox PARC, Symbolics and TI losing against UNIX workstations, followed by the first AI Winter, which also took Japan's 5th Generation project with Prolog down alongside it.
Smalltalk was doing alright outside Xerox PARC, with big-name backers like IBM, where it had a major role on OS/2 (similar to .NET on Windows), until Java came out and IBM decided to pivot all their Smalltalk efforts into Java; Eclipse has roots in Visual Age for Smalltalk.
Your entire post makes the claim that it’s because the vast majority of programmers get paid the same as other roles and that’s why there’s the language selection pressure there is.
High salary jobs would be the exception yet they also make pragmatic choices about languages. It’s a two sided market problem - employers want popular languages to be used so they have a talent pool to hire from and don’t end up having a hard time finding talent (which then also implies something about the salary of course but it’s a secondary effect). Employees look to learn languages that are popular and are easy to find employment in.
Not sure if you’ve spent any time with them but VCs and investors more broadly generally could give two fucks about the language a business is built in. There are exceptions but generally they just want to see the business opportunity and that you’re the team to go do it.
There’s a reason it’s difficult to find employment with Haskell or Lisp or other niche languages and it’s because they’re niche languages that “you have to get” - not easy to learn and generally not as easy to work with as “popular” languages that see significantly more man hours dedicated to building out tooling and libraries. There are also secondary things like runtime performance, which is quite poor for Haskell or Lisp if you’re a beginner, and even people familiar with the language can struggle to write equivalent programs that don’t use significantly more memory or CPU. And finally the languages can just be inherently more difficult and alien (Haskell), which attracts a niche and guarantees it remains a niche language that attracts a particular kind of person.
psst blue "collar" :-)
I'm not entirely sure this is different from other languages, but I believe a common complaint about lisp is that every solution ends up defining its own DSL, making it hard to understand for anyone else. So it's a super power if you're a small team and especially if you're a team of 1. But if you're a large team it doesn't scale.
I think it's a simple abstraction situation, and the move toward programming environments that include everything.
Geordi Laforge doesn't code much on the Enterprise. He simply asks the computer to build him a model of the anomaly so he can test out ideas. In a way, modern languages like Python (even before LLMs) let you get a lot closer to that reality. Sure you had to know some language basics, but this was pretty minimal and you'd use those basic building blocks to glue together libraries to make an application. Python has a good library for practically anything I do and since this is standard, it's expected that a task doesn't take too long. I can't tell my boss I'll need 3 years to code my own solution that uses my own libraries for numpy and scipy. You're expected to glue libraries together. This is why MIT moved SICP from scheme to Python. It's a different world.
With Forth, every program is a work of art that encapsulates an entire solution to a problem from scratch. Its creator Chuck Moore takes this to such a level that he also fabs his own chips to work with his Forth software optimally. These languages had libraries, but they weren't easy to share and didn't have any kind of repository before Perl's CPAN. Perl really took off for a while, but Python won out by having a simpler language with builtin OO (Perl's approach was a really hacky builtin OO or you download a library...).
To be honest though, I spent a decade trying many languages (dozens, including Common Lisp, Prolog, APL, C, Ada, Smalltalk, Perl, C#, C++, Tcl, Lua, Rust...etc) looking for the best and although I never became an expert in those languages, I kept coming to the conclusion that for my particular set of needs, Python was the best I could find. I wasted a lot of time reading Common Lisp books and just found it much easier to get the same thing done in Python. Your mileage will vary if you're doing something like building a game engine. A lot of people are just doing process automation and stuff like that and languages like Python are just better than Common Lisp due to the environment and tooling benefits. Also, although Python isn't as conceptually beautiful as lisp, I found it much easier to learn. The syntax just really clicked for me and some people do prefer it.
> Why is it that languages like this don't scale?
Stanislav Datskovskiy addressed this rather well:
https://www.loper-os.org/?p=69
I've read that a number of times, but this is the first time since the rise of vibe engineering.
> I predict that no tool of any kind which too greatly amplifies the productivity of an individual will ever be permitted to most developers.
There's a new essay in here, somewhere, about why copilot and AI coding is succeeding at bridging this gap.
It is too risky for companies to rely on a language that has a small pool of programmers. The bigger the company, the bigger the language must be. AI multiplies this availability, not productivity.
Flipside: it looks like the most productive programmers are those who work alone and not in a large pool. The core point of the article is that team development is slower and less efficient.
Which means management must make a choice: getting good code relatively fast from a small pool of high-value individuals that it must therefore cherish and treat well...
Or get poor-quality code, slowly, but from a large and redundant group of less skilled developers, who are cheaper and easier to replace.
It is a truth universally acknowledged that from the three characteristics of "good, fast, and cheap", you can pick which two you want.
In this case, maybe the choice is as simple as "good and fast" or "cheap."
If the structure of the business or the market requires management to pick "cheap" (with concomitant but unspoken "bad and slow") then the structure, I submit, is bad.
> why copilot and AI coding is succeeding at bridging this gap.
I mean, unless it proves with time that in fact it does not help at all and actually slows people down.
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
>Why is it that languages like this don't scale?
I've concluded that Forth isn't as powerful as Lisp because it can't do lists or heaps. STOIC addresses these and other limitations. Unfortunately it's got the least search friendly language name ever.
I think those other languages have real advantages you aren't seeing.
—·—
The other day akkartik wrote an implementation of the program Knuth used to introduce literate programming to the CACM readers: https://basiclang.solarpunk.au/d/7-don-knuths-original-liter...
It just tells you the top N words by frequency in its input (default N=100) with words of the same frequency ordered alphabetically and all words converted to lowercase. Knuth's version was about 7 pages of Pascal, maybe 3 pages without comments. It took akkartik 50 lines of idiomatic, simple Lua. I tried doing it in Perl; it was 6 lines, or 13 without relying on any of the questionable Perl shorthands. Idiomatic and readable Perl would be somewhere in between.
I think Python, Ruby, or JS would be about the same.
Then I tried writing a Common Lisp version. Opening a file, iterating over lines, hashing words and getting 0 as default, and sorting are all reasonably easy in CL, but splitting a line into words is a whole project on its own. And getting a command-line argument requires implementation-specific facilities that aren't standardized by CL! At least string-downcase exists. It was a lark, so I didn't finish.
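For concreteness, the rough Python version I had in mind would be something like this (a sketch, untested against Knuth's exact spec):

    # Rough sketch (untested against Knuth's exact spec): top N words by
    # frequency, ties broken alphabetically, everything lowercased.
    import re, sys
    from collections import Counter

    n = int(sys.argv[1]) if len(sys.argv) > 1 else 100
    counts = Counter(re.findall(r"[a-z]+", sys.stdin.read().lower()))
    for word, count in sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))[:n]:
        print(count, word)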
(In Forth you'd almost have to write something equivalent to Knuth's Pascal, because it doesn't come with even hash tables and case conversion.)
My experience with Smalltalk is more limited but similar. You can do anything you want in it, it's super flexible, the tooling is great, but almost everything requires you to just write quite a bit more code than you would in Perl, Python, Ruby, JS, etc. And that means you have more bugs, so it takes you longer. And it doesn't really want to talk to the rest of the world—you can forget about calling a Squeak method from the Unix command line.
Smalltalk and CL have native code compilers available, which ought to be a performance advantage over things like Perl. Often enough, though, it's not. Part of the problem is that their compilers don't produce highly performant code, but they certainly ought to beat a dumb bytecode interpreter, right? Well, maybe not if the program's hot loop is inside a regular expression match or Numpy array operation.
And a decent native code compiler (GCC, HotSpot, LuaJIT, the Golang compilers, even ocamlopt) will beat any CL or Smalltalk compiler I have tried by a large margin. This is a shame because a lot of the extra hassle in Smalltalk and CL seems to be aimed at efficiency.
(Scheme might actually deliver the hoped-for efficiency in the form of Chez, but not Chicken. But Chicken can build executables and easily call C. Still, you'd need more code to solve this problem in Scheme than in Lua, much less Ruby.)
—·—
One of the key design principles of the WWW was the "principle of least power", which says that you should do each job with the least expressive language that you can. So the URL is a very stupid language, just some literal character strings glued together with delimiters. HTML is slightly less stupid, but you still can't program in it; you can only mark up documents. HTTP messages are similarly unexpressive. As much as possible of the Web is built out of these very limited languages, with only small parts being written in programming languages, where these limited DSLs can't do the job.
Lisp, Smalltalk, and Forth people tend to think this is a bad thing, because it makes some things—important things—unnecessarily hard to write. Alan Kay has frequently deplored the WWW being built this way. He would have made it out of mobile code, not dead text files with markup.
But the limited expressivity of these formats makes them easier to read and to edit.
I have two speech synthesis programs, eSpeak and Festival. Festival is written in Scheme, a wonderful, liberating, highly expressive language. eSpeak is in C++, which is a terrible language, so as much as possible of its functionality is in dumb data files that list pronunciations for particular letter sequences or entire words and whatnot. Festival does all of this configuration in Scheme files, and consequently I have no idea where to start. Fixing problems in eSpeak is easy, as long as they aren't in the C++ core; fixing problems in Festival is, so far, beyond my abilities.
(I'm not an expert in Scheme, but I don't think that's the problem—I mean, my Scheme is good enough that I wrote a compiler in it that implements enough of Scheme to compile itself.)
—·—
SQL is, or until recently was, non-Turing-complete, but expressive enough that 6 lines of SQL can often replace a page or three of straightforward procedural code—much like Perl in the example above, but more readable rather than less.
Similarly, HTML (or JSX) is often many times smaller than the code to produce the same layout with, say, GTK. And when it goes wrong, you can inspect the CSS rules applying to your DOM elements in a way that relies on them being sort of dumb, passive data. It makes them much more tractable in practice than Turing-complete layout systems like LaTeX and Qt3.
—·—
Perl and Forth both have some readability problems, but I think their main difficulty is that they are too error-prone. Forth, aside from being as typeless as conventional assembly, is one of the few languages where you can accidentally pass a parameter to the wrong call.
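(A minimal sketch of that failure mode -- not from the article, word names invented: a word that leaves one value too many on the stack silently feeds it to the next call as an argument:)

    : area   ( w h -- a )  * ;
    : width  ( -- w )  10 20 ;   \ oops: leaves two values instead of one

    width 5 area .
    \ prints 100 (20 * 5); AREA quietly consumed the stray 20,
    \ and the intended 10 is still sitting on the stack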
This sort of rhymes with what I was saying in 02001 in https://paulgraham.com/redund.html, that often we intentionally include redundancy in our expressions of programs to make them less error-prone, or to make the errors easily detectable.
> And it doesn't really want to talk to the rest of the world—you can forget about calling a Squeak method from the Unix command line.
You seem absolutely certain!
Here's an example of a Pharo Smalltalk program called from the Ubuntu command line, with the calculation result written to stdout --
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

Here's a corresponding Perl program -- https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Thanks! I'll take a look.
If you have questions, I'll try to answer.
> splitting a line into words is a whole project on its own
Is it[1]? My version below accumulates alphabetical characters until it encounters a non-alphabetical one, then increments the count for the accumulated word and resets the accumulator.
It’s not exactly pretty or idiomatic, but its 19 lines appear to get the job done.

1: Well, technically it is, because there is SPLIT-SEQUENCE: https://github.com/sharplispers/split-sequence
Hey, this is great! Thanks!
It does look a lot like what I was thinking would be necessary. About 9 of the 19 lines are concerned with splitting the input into words. Also, I think you have omitted the secondary sort key (alphabetical ascending), although that's only about one more line of code in the comparison.
Because the lines of code are longer, it's about 3× as much code as the verbose Perl version.

In SBCL on my phone it's consistently slower than Perl on my test file (the King James Bible), but only slightly: 2.11 seconds to Perl's 2.05–2.07. It's pretty surprising that they are so close.
Doh, I missed the secondary sort.
Were I trying to optimise this, I would test to see if a hash table of alphabetical characters is better, or just checking (or (and (char>= c #\A) (char<= c #\Z)) (and (char>= c #\a) (char<= c #\z))). The accumulator would probably be better as an adjustable array with a fill pointer allocated once, filled with VECTOR-PUSH-EXTEND and reset each time. It might be better to use DO, initializing C and declaring its type.
Also worth giving it a shot with (optimize (speed 3) (safety 0)) just to see if it makes a difference.
Yes, definitely more verbose. Perl is good at this sort of task!
The article in CACM that presents Knuth's solution [1] also includes some criticism of Knuth's approach, and provides an alternative that uses a shell pipeline built from tr, sort, uniq, and sed.

(I converted a newline to `$'\n'` for readability, but the original pipeline from the article works fine on a current macOS system.)

1: https://dl.acm.org/doi/pdf/10.1145/5948.315654
With great respect to Doug McIlroy (in the CACM article), the shell pipeline has a serious problem that Knuth's Pascal program doesn't have. (I'm assuming Knuth's program is written in standard Pascal.) You could have compiled and run Knuth's program on an IBM PC XT running MS-DOS; indeed on any computer having a standard Pascal compiler. Not so the shell pipeline, where you must be running under an operating system with pipes and 4 additional programs: tr, sort, uniq, and sed.
McIlroy also discusses how a program "built for the ages" should have "a large factor of safety". McIlroy was worried about how Knuth's program would scale up to larger bodies of text. Also, Bentley's/McIlroy's critique was published in 1986, which I think was well before there was a major look into Unix tools and their susceptibility to buffer overruns, etc. In 1986, could people have determined the limits of tr, sort, uniq, sed, and pipes--both individually and collectively--when handling large bodies of text? With a lot of effort, yes, but if there was a problem, Knuth at least only had one program to look at. With the shell pipeline, one would have to examine the 4 programs plus the shell's implementation of pipes.
(I'm not defending Pascal, and Knuth, Bentley, and McIlroy are always worth reading on any topic -- thanks for posting the link!)
Bringing this back to Forth, Bernd Paysan, who needs no introduction to the people in the Forth community, wrote "A Web-Server in Forth", https://bernd-paysan.de/httpd-en.html . It only took him a few hours, but in fairness to us mortals, it's an HTTP request processor that reads a single HTTP request from stdin, processes it, and writes its output to stdout. In other words, it's not really a full web server because it depends on an operating system with an inetd daemon for all the networking. As with McIlroy's shell pipeline, there is a lot of heavy lifting done by operating system tools. (Paysan's article is highly recommended for people learning Forth, like me when I read it back in the 2000s.)
> You can do anything you want in it, it's super flexible, the tooling is great, but almost everything requires you to just write quite a bit more code than you would in Perl, Python, Ruby, JS, etc.
Given that Smalltalk precedes JS by many years: if it is true, then it was not always true.
Given that Smalltalk was early to the GUI WIMP party: if it is true, then it was not always true for GUI WIMP use.
I don't think there's a unifying reason why programming languages languish in obscurity; it's certainly not because they're "too powerful." What does "powerful" even mean? I used to care more about comparing programming languages, but I mostly don't these days. Actually used/useful languages mostly just got lucky: C was how you wrote code for Unix; Python was Perl but less funny-looking; Ruby was Rails; JavaScript is your only choice in a web browser; Lisp had its heyday in the age of symbolic AI.
Forth and (R4RS) Scheme are simple to implement, so they're fun toys. Some other languages like Haskell have interesting ideas but don't excel at solving any particular problems. Both toy and general-purpose programming languages are plentiful.
As with big fortunes, no one wants to hear the truth that a lot of them exist due to simple luck. There is a significant amount of post-hoc rationalization to explain the success by some almost magical virtues -- or even by the lack of such virtues: "worse is better" and so on.
One thing I note is that all of the languages you name are very far from the machine. Forth is not close to the modern machine either. Note that it only has two integer types, and the larger one can end up aligned either way unless you make sure it is not.
> One thing I note is that all of the languages you name are very far from the machine
Common Lisp is one step away from assembly: you can disassemble any function (via the standard DISASSEMBLE), and doing so is, in fact, a valid strategy if one wants to check the compiler's optimizations.
I googled a bit on how common lisp is compiled. Apparently it is possible to add some sort of type hints and ensure that parameters/variables have a certain type. If one uses that for most code, it would potentially be enough to qualify as being close to the machine.
Yes, in a way common lisp code can be locally lowered to a well-typed language.
What people do is just write code the way they usually write dynamic Lisp, and then add types to functions where necessary for performance.
SBCL generates good assembly, btw.
What does "close to the machine" mean to you?
To me it means that one attempts to use the machine well, i.e., to avoid introducing overheads that have nothing to do with the problem one is trying to solve. As an example of something that is very far from the machine, imagine wanting to add some integers together. One can do this in untyped lambda calculus by employing Church numerals. If one looks at the memory representation, the numerals are now linked structures whose size is equal, or proportional, to the number -- even though the machine has instructions to add numbers far more efficiently. For this discussion, maybe the most relevant example is that using dynamic typing for algorithms that don't need it is distant from the machine: every value now carries a runtime type label that is not actually needed, because if the program could be statically typed, the type labels would be known in advance and are therefore redundant.
There are many Forths, and an implementer can and should define words that map well to the target hardware.
They scale extremely effectively to large problems solved by a team size of one, maybe two.
The story goes that changing the language to fit how you're thinking about the problem obstructs the rest of the people thinking about the same problem.
I'm pretty sure this story is nonsense. Popular though.
frankly it's a miracle any of them scaled at all, such popularity mostly comes down to an arbitrary choice made decades ago by a lucky vendor instead of some grand overarching design
I spent a few months playing with forth after seeing a talk on it at Boston Code Camp. I struggled to find a practical application (I do web dev), but it had a lasting effect on my style of programming. Something about the way you factor a forth program changed me. Now I mainly do functional-flavored typescript, and while forth is NOT an FP language, there is a lot that carries over.
In Forth, the language rewards you for keeping your words focused and applying the single responsibility principle. It’s very easy to write a lot of small words that do one thing and then compose your program out of them. It’s painful not to do this.
There is no state outside the stack. If you call a word it pulls values off the stack and deposits values back on the stack. Having no other mechanism for transferring data requires you to basically create data pipelines that start to look like spoken language.
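(A tiny sketch of that factoring style, with invented example words -- each word does one small thing, and the program is just their composition:)

    : square         ( n -- n*n )    dup * ;
    : sum-of-squares ( a b -- sum )  square swap square + ;
    : pythagorean?   ( a b c -- f )  square >r sum-of-squares r> = ;

    3 4 5 pythagorean? .   \ prints -1 (true)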
Forth has been a peripheral fascination of mine for about a decade, just because it seems to do well at nearly every level of the software stack. Like a part of me wants to build a kernel with it, or make a web server, or anything in between.
I've never actually done any Forth, though, just because it's a bit arcane compared to the C-inspired stuff that took over.
FORTH has some elegance and it's so simple that it is tempting to implement it.
However, no language should permit defining the value of 4 to be 12, as there is no situation in which this can bring more good than harm in the long term.
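(For anyone who hasn't seen that particular foot-gun, a minimal sketch -- legal in a standard Forth, because the dictionary is searched before the text is tried as a number:)

    : 4 12 ;    \ defines a word named "4" that pushes 12
    4 4 + .     \ prints 24, not 8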
Another issue that affects FORTH, but also Perl and other languages, is that they deal with a lot of things implicitly (e.g. the stack, or arguments to functions). Most people agree that explicit is easier to read than implicit.
> However, no language should permit defining the value of 4 to be 12, as there is no situation in which this can bring more good than harm in the long term.
A Skil saw should not permit you to stick your fingers into the spinning blade, yet most people know that this is a stupid and dangerous thing to do.
Lots of saws have safety features to keep fingers from being removed. It happens all the time.
This is, I think, the best overview of Forth, and computing as a whole, that I've ever seen.
Big compliment, coming from you.
(I wish you would write again. I have immensely enjoyed the stuff on your website)
Thanks! You might want to git clone http://canonical.org/~kragen/sw/pavnotes2.git/.
I wish "Simple Made Easy," by Rich Hickey, could be applied here. Forth is simple but not easy. If there is something as simple as Forth but also accessible to mere mortals (aka easy) then I'd like to know what it is (I don't consider Clojure itself as a language to be simple in this sense).
"Working without names (also known as implicit or tacit or point-free programming) is sometimes a more natural and less irritating way to compute. Getting rid of names can also lead to much more concise code. And less code is good code."
Does Forth really reduce the burden of naming things? You don't name results, but don't you have to pay for it with the burden of naming words? (My impression is that there are more words in a Forth program than functions in an equivalent program in a language that has named variables).
> Does Forth really reduce the burden of naming things?
I would say that you have fewer names, but they are more important. Plus, it is more difficult to name things because you prefer short names; in any language, when you have good naming "discipline" and follow a naming convention, you end up with an informal "grammar" inside your names. In Forth this is even more important.
> My impression is that there's more words in a Forth program than functions in an equivalent program in a language that has named variables
Yes, some people have called that "ravioli code" or "confetti code", IIRC. But most of them are support words. In Forth, you also eventually end up with "module APIs". This also exists in C or Java or ..., except the ratio useful:support is lower.
The quote makes more sense IMO for array languages like J that support a tacit style. J's "trains" just make things flow without a lot of variables. Aaron Hsu's Co Dfns compiler (spoken about on here and YouTube) also uses this style with Dyalog APL.
Forth is concatenative, so you can build the words on top of each other without worrying about a ton of variables. So I think it's partially true for Forth.
Yeah, I wouldn't have phrased it like in the article either. What I'd say is that Forth is more about naming processes than variables.
RPN interpreters require very little core memory, so they were popular on computers where core memory was under ten kilobytes.
But it's horrible for software engineering with multiple programmers and large codebases. It lacks the structures, interfaces, modules, and data abstraction that you expect in a modern language. We called it the "Chinese food" of coding -- ten minutes later you had no idea what you just coded.
Coco Conn and Paul Rother wrote this up about what they did with FORTH at HOMER & Assoc, who made some really classic music videos including Atomic Dog, and hired Charles Moore himself! Here's what Coco Conn posted about it, and some discussion and links about it that I'm including with her permission:
https://news.ycombinator.com/item?id=36751574
Mitch Bradley came up with a nice way to refactor the Forth compiler/interpreter and control structures, so that you could use them immediately at top level! Traditional FORTHs only let you use IF, DO, WHILE, etc in : definitions, but they work fine at top level in Mitch's Forths (including CForth and Open Firmware):
https://news.ycombinator.com/item?id=38689282
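(Roughly the difference, as a sketch: in a traditional Forth the loop below has to be wrapped in a definition, while Mitch's Forths also accept it typed straight at the prompt:)

    \ traditional Forth: DO ... LOOP only works inside a definition
    : squares ( n -- )  0 ?do  i dup * .  loop ;
    10 squares

    \ in CForth / Open Firmware (per the comment above), the same
    \ loop also works interpretively at top level:
    10 0 ?do  i dup * .  loop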
Back in 2004 or so - ancient days now - I remember an elderly programmer on #gobolinux (freenode IRC back in the days) who kept on praising Forth. I never understood why, but he liked Forth a lot.
Now - that in itself doesn't mean a whole lot, as it is just anecdotal, but people who are very passionate about programming languages are quite rare. I've not seen something like that happen with any other language (excluding also another guy on #gobolinux who liked Haskell). I did not see anyone praise, say, PHP, perl, JavaScript etc....
Some languages people don't like to talk about much. Forth, though, was different in that regard. I never got into it; I feel it has outlived its era, like many other languages, but I still remember that guy who kept on talking about it. His website was also built in Forth, and it was, oddly enough, kind of an "interactive" website (perhaps he also used JavaScript, I forget, but I seem to remember he said most or all of it was implemented in Forth - turtles all the way down).
The Forth super power is that you have full control over how a symbol is evaluated, both at compile and runtime. I don't know of anything else that offers that. Lisp doesn't.
That gives the developer pretty much free rein to do whatever they want, which can be both good and bad.
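(A minimal sketch of the compile-time half of that, with invented word names: a word marked IMMEDIATE runs while other code is being compiled, instead of being compiled into it:)

    : note ( -- )  ." compiling... " ; immediate
    : foo  note 1 2 + . ;   \ "compiling... " prints here, as FOO is compiled
    foo                     \ prints 3; NOTE left nothing behind inside FOO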
I've always loved the elegance of Frank Sergeant's 3 Instruction Forth paper [1], it's very cool once you wrap your head around it.
Also, studying the F83 Metacompiler is valuable as well. F83 is a very capable 8/16-bit Forth system.
I honestly marvel at how much work must have gone into F83, given the tools of the time. I wish I knew more about its development journey. How it got bootstrapped.
[1] https://pygmy.utoh.org/3ins4th.html
I remember programming in Forth on my Palm Pilot, as there was a Forth interpreter for it.
There's a certain mesmerizing effect that creeps in once you start digging into programming language fundamentals.
Any kind of notation, really, can do that to a person. It's kind of hypnotic.
I avoid it like the plague (getting too much into it). Not because I dislike it, but because I like it so much.
I believe the ideal programming language must be full of problems, and then obvious ways to get around those problems. It's better than a near-perfect language with one or two problems that are very hard to get around.
The "Stop Writing Dead Programs" video mentioned is quite nice. It's surprising how the web is a platform for many of the languages the presenter offer as inspiration.
I first encountered Forth on a TI-99/4A, complete with that magnificent expansion box that looked like industrial HVAC equipment. Hearing me complain about TI Extended BASIC's glacial pace, my parents saw in one of my magazines that Forth was faster and bought it hoping I would find it helpful.
It was mind-bending but fascinating. I managed a few text adventures, some vaguely Pac-Man-esque clones, and a lingering sense that I was speaking a language from another dimension.
I've since forgiven my parents. Forth resurfaces now and then, usually when I reread Leo Brodie's thought-provoking Forth books, and I feel like I'm decoding the sacred texts of a minimalist cult. I came away thinking better, even if I've never completely caught up with the language.
A long read but one that's quite incredible. Has definitely helped my understanding of computing get closer to the metal so to speak.
It mentions sometimes not naming things as a great thing, but... what does naming intermediate values in Forth look like? Is there even a naming scope that would let me give values names, in case I don't want to get entirely lost in the sauce?
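One partial answer, for what it's worth: standard Forth does have locals (the Forth-2012 {: ... :} syntax, supported by recent gforth and other systems), and their scope is the definition they appear in. A sketch:

    : dist-squared {: x1 y1 x2 y2 -- d2 :}
        x2 x1 - dup *
        y2 y1 - dup *  + ;

    0 0 3 4 dist-squared .   \ prints 25

Most Forth style advice still pushes you to factor your words until you don't need the locals, but the escape hatch is there.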
In early 80s when I was a wee nerd in college a gentleman named Ray who owned Laboratory Microsystems was nice enough to give a poor college kid a copy of his excellent Forth implementation for the then-nascent IBM PC.
I breadboarded a little EPROM programmer (driven by a parallel printer port with the programming code done in Forth because I couldn't afford a real one). Then breadboarded a little Z80 system with a bunch of general-purpose I/O and a Forth "OS" in EPROM.
Used that little setup as the basis for a number of projects, including a home alarm system with phone-based control plus voice synthesis phone-based alert calling (which a couple silicon valley VCs were gracious enough to take a meeting about).
Forth gave me wings, despite its reputation as a "write-only language". Good times.
"There is absolutely no reason we have to use increasingly inefficient and poorly-constructed software with steeper and steeper hardware requirements in the decades to come."
The term "we" as used here hopefully means individual, free-thinking computer users, not so-called "tech" companies
If Silicon Valley companies want to use increasingly inefficient, poorly constructed, resource-insatiable software, then nothing stops them from doing so
"Forth is not easy. It may not always even be pleasant. But it is certainly simple."
Complex isn't easy, either
That is why (a) "insecurity", unreliability, expense, etc. and (b) complexity generally go hand-in-hand
That is assuming that you, with German grammar, write.
I believe, that you that as an assumption mean.