Just wait. In a few years we'll have computer-use agents that are good enough that people will stop making APIs. Why bother duplicating that effort, when people can just direct their agent to click around inside the app? Trillions of matmuls to accomplish the same result as one HTTP request.
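For contrast, the "one HTTP request" side of that comparison is something like the sketch below. The endpoint and payload are invented for illustration; the point is just that the whole interaction is a single cheap call rather than an agent driving a GUI:

```python
# A minimal sketch of the "one HTTP request" side of the comparison.
# The endpoint and payload are hypothetical; the point is that the whole
# interaction is one cheap round trip instead of an agent clicking a UI.
import json
import urllib.request

req = urllib.request.Request(
    "https://api.example.com/v1/orders",          # hypothetical API
    data=json.dumps({"item": "widget", "qty": 1}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))                # done: one round trip
```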
Take a look at what was possible in the late 1980s with 8 MB of RAM: https://infinitemac.org/1989/NeXTStep%201.0
You can run NeXTStep in your browser by clicking the link above. A couple of weeks ago you could run FrameMaker as well. I was blown away by what FrameMaker of the late 1980s could do. Today's Microsoft Word can't hold a candle to FrameMaker of the late 1980s!
Edit: Here's how you start FrameMaker:
In Finder go to NextDeveloper > Demos > FrameMaker.app
Then open the demo document and browse its pages. Prepare to be blown away. You could do that in 1989 with like 8 MB of RAM??
In the last 37 years the industry has gone backwards. Microsoft Word has been stagnant due to no competition for the last few decades.
There were also
- TeXview.app which at least inspired the award-winning TeXshop.app
- Altsys Virtuoso which became Macromedia Freehand (having been created as a successor to Freehand 3) --- these days one can use Cenon https://cenon.info/ (but it's nowhere near as nice/featureful)
- WriteNow.app --- while this was also a Mac application, the NeXT implementation was my favourite --- WN was probably the last major commercial app written in Assembly (~100,000 lines)
Still sad my NeXT Cube stopped booting up....
Here's a screenshot of FrameMaker I just took: https://imgur.com/a/CG8kZk8
Look at the fancy page layout that was possible in the late 1980s. Can Word do this today?
I think Publisher would be the equivalent to FrameMaker from the Office suite. Publisher from Office ~2016 could definitely do that.
Unfortunately I think Publisher has fared even worse than Word in terms of stagnation, and now looks to be discontinued?
Publisher is the equivalent of InDesign. It was meant for brochures and so on. If you want to write a long technical manual today most people use Word. In that respect we are using less powerful software today than our grandparents.
Note: Adobe bought FrameMaker and continues to sell FrameMaker. But Word has captured the market not because of its technical merit but because of bundling.
I have never written any technical manuals, but I'm surprised that Word is the tool of choice. How does one embed e.g. code easily in the document? I feel there must be a better way to do it, maybe some kind of Markdown syntax? LaTeX?
> How does one embed e.g. code easily in the document?
You don't. For APIs and such, documentation is published online, and you don't need Word for that. Word is used in some industries where a printed manual is needed.
What about printed manuals? I think there were still some of those not too long ago (e.g. Intel manuals). What was the tool of choice? Very curious to know.
Or, maybe a legacy example -- how were the printed manuals of Microsoft C 6.0 written? That was in the early 90s I think.
I think back then, due to the scarcity of RAM and HDD space, developers, especially elite developers working for Apple/Microsoft/Borland/whatever, really went the last mile to squeeze out as much performance as they could -- or at least they spent way more time on this compared to modern-day developers -- even for the same applications (e.g. some native Windows programs on Win 2000 vs. the rewritten programs on Win 11).
Nowadays businesses simply don't care. They have already achieved the feudal-ish bastion they dreamed about, and there is no "business value" in spending too much time on it, unless of course it is something performance-related, like AI or supercomputing.
On the other hand, hardware today is 100x more complicated than in the NeXTStep/Intel i486 days. Greybeards starting from the 70s/80s could gradually adapt to the complexity, while newcomers simply have to swim or die -- there is no "training", because any training on a toy computer or a toy OS is useless compared to the massive architecture and complexity we face today.
I don't know. I wish the evolution of hardware were slower, but it was going to get to this point anyway. I recently completed the MIT xv6 labs and thought I was good enough to hack on the kernel a bit, so I took another Linux device driver class, and OMG the complexity is unfathomable -- even the Makefile and Kbuild stuff is way, way beyond my understanding. But hey, if I had started from Linux 0.95, or maybe even Linux 1.0, I'd have had much less trouble drilling into a subsystem and gradually adapting. That's why I think I need to give myself a year or two of training, scroll back to maybe Linux 0.95, focus on just a simpler device driver (e.g. keyboard), and read EVERY evolution. There is no other way for commoners like us.
While the author says that much of it can be attributed to the layers of software in between that make it more accessible to people, in my experience most cases come down to people being lazy in developing their applications.
For example, there was the case of how Claude Code uses React to figure out what to render in the terminal, which in itself causes latency, and its devs lament that they have "only" 16.7 ms to achieve 60 FPS. On a terminal. Which has been able to do way more than that since its inception. Primeagen shows an example [0] of how even the most change-heavy terminal applications run so much faster that there is no need to diff anything; just display the new change!
[0] https://youtu.be/LvW1HTSLPEk
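To make the 16.7 ms point concrete, here is a toy sketch (plain Python with ANSI escape codes, not how Claude Code or any real TUI is actually implemented) that redraws an entire 40x120 text frame every tick with no diffing at all; the write itself is a tiny fraction of the 60 FPS budget on any modern terminal:

```python
import sys
import time

# Toy full-redraw renderer: rebuild and rewrite the whole "frame" every
# tick instead of diffing widgets. Dimensions and content are arbitrary.
ROWS, COLS = 40, 120
FRAME_BUDGET = 1 / 60  # 16.7 ms per frame

def render_frame(tick: int) -> str:
    # Build the entire screen as one string: a counter plus filler rows.
    lines = [f"frame {tick}"] + ["." * COLS for _ in range(ROWS - 1)]
    # \x1b[H moves the cursor home; \x1b[2J clears the screen.
    return "\x1b[H\x1b[2J" + "\n".join(lines)

if __name__ == "__main__":
    for tick in range(600):  # ~10 seconds at 60 FPS
        start = time.perf_counter()
        sys.stdout.write(render_frame(tick))
        sys.stdout.flush()
        elapsed = time.perf_counter() - start
        # Writing a few kilobytes typically takes well under 16.7 ms,
        # so we sleep away the rest of the frame budget.
        time.sleep(max(0.0, FRAME_BUDGET - elapsed))
```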
It makes me wish more graphics programmers would jump over to application development - 16.7ms is a huge amount of time for them, and 60 frames per second is such a low target. 144 or bust.
I don't think graphics devs changing over would change much. They would probably not lament over 16ms, but they would quickly learn that performance does not matter much in application development, and start building their own abstraction layer cake.
It's not even that performance is unimportant in absolute terms, but rather that the general state of software is so abysmal that performance is the least of your problems as a user, so you're not going to get excited over it.
No need for graphics programmers; anyone who has been coding since the old days still remembers how to make use of data structures and algorithms, and how to do a lot with few hardware resources.
Maybe the RAM prices will help bring those skills back.
It's mostly on the business side. If the business doesn't care, then developers have no choice. Of course the customers need to care too, and it looks like we don't care either... in general.
And embedded developers too. But then again, they do what they do precisely because in that environment those skills are appreciated, and elsewhere they are not.
That wouldn't make any difference. Graphics programmers spend a lot of effort on performance because spending a lot of $$$$ (time) can make an improvement that people care about. For most applications nobody cares enough about speed to pay the $$$ needed to make it fast.
Many application programmers could make things faster - but their boss says good enough, ship it, move on to a new feature that is worth far more to me.
Yeah, I think a lot of this can be attributed to institutional and infrastructural inertia, abstraction debt, second+-order ignorance, and narrowing of specialty. The people now building these things are probably good enough at React etc. to do whatever needs to be done with it almost anywhere, but their focus needs to be on ML.
The people who could make terminal stuff super fast at a low level are retired on an island, dead, or lack the other specialties required by companies like this, and users don't care as much about 16.7 ms on a terminal when the thing is building their app 10x faster, so the trade-off is obvious.
Just a genuinely excellent essay written to a broader technical audience than simply those software engineers who live in the guts of databases optimizing hyper-specific edge-cases (and no disrespect to you amazingly talented people, but man your essays can be very chewy reads sometimes). I hope the OP’s got some caching ready, because this is going to get shared.
Lots of good thoughts in here.
> You can ask an AI what 2 * 3 is and for the low price of several seconds of waiting, a few milliliters of water and enough power to watch 5% of a TikTok video on a television, it will tell you.
This might be what many of the companies that host and sell time with an LLM want you to do, however. Go ahead, drive that monster truck one mile to pick up fast food! The more that's consumed, the more money goes into the pockets of those companies....
> The instincts are for people to get the AI to do work for them, not to learn from the AI how to do the work themselves.
Improving my own learning is one of the few things I find beneficial about LLMs!
> LLMs are still intensely computationally expensive. You can ask an AI what 2 * 3 is and for the low price of several seconds of waiting ... But the computer you have in front of you can perform this calculation a billion times per second.
This is a flip side of the bitter lesson. If all attention goes into the AI algorithm, and none goes into the specific one in front of you, the efficiency is abysmal and Wirth gets his revenge. At any scale larger than epsilon, whenever possible LLMs are better leveraged to generate not the answer but the code to generate it. The bitter lesson remains valid, but at a layer of remove.
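A minimal sketch of what "at a layer of remove" might look like in practice: `ask_model_for_code` is a stand-in for whatever LLM client you use, and the returned snippet is hard-coded here for illustration. The model is queried once for a program, and cheap local execution does the actual arithmetic:

```python
import subprocess
import sys
import textwrap

# Hypothetical helper: however you talk to your model, ask it ONCE for
# code rather than for each answer. The returned snippet is hard-coded
# here purely for illustration.
def ask_model_for_code(prompt: str) -> str:
    return textwrap.dedent("""
        def multiply(a, b):
            return a * b

        if __name__ == "__main__":
            print(multiply(2, 3))
    """)

code = ask_model_for_code("Write a Python program that prints 2 * 3.")

# Execute the generated program locally; every subsequent multiplication
# costs nanoseconds instead of another round trip to the model.
result = subprocess.run([sys.executable, "-c", code], capture_output=True, text=True)
print(result.stdout.strip())  # -> 6
```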
I'm not sure what the high-level point of the article is, but I agree with the observation that we (programmers) should generally prefer having AI agents write correct, efficient programs that do what we want, rather than having the agents do that work themselves.
Not that everything we want an agent to do is easy to express as a program, but we do know what computers are classically good at. If you had to bet on a correct outcome, would you rather an AI model sort 5000 numbers "in its head" or write a program to do the sort and execute that program?
I'd think this is obvious, but I see people professionally inserting AI models in very weird places these days, just to say they are GenAI adopters.
An interesting article and it was refreshing to read something that had absolutely no hallmarks of LLM retouching or writing.
It contains a helpful insight that there are multiple modes in which to approach LLMs, and that helps explain the massive disparity of outcomes using them.
Off topic: This article is dated "Feb 2nd" but the footer says "2025". I assume that's a legacy generated footer and it's meant to be 2026?
The actual constraint is how long people are willing to wait for results.
If the results are expected to be really good, people will wait a seriously long time.
That’s why engineers move on to the next feature as soon as the thing is working - people simply don’t care if it could be faster, as long as it’s not too slow.
It doesn’t matter what’s technically possible- in fact, a computer that works too fast might be viewed as suspicious. Taking a while to give a result is a kind of proof of work.
> people simply don't care
I don't think that's right, even for laypeople. It's just that the pain of things that take 5 seconds when they could take 50 ms is subtle and can be discounted or ignored until you are doing a hundred things in a row that take 5 seconds instead of 50 ms. And if you don't know that it should be doable in 50 ms then you don't necessarily know you should be complaining about that pain.
It's also that the people who pay the price for slowness aren't the people who can fix it. Optimizing a common function in popular code might collectively save centuries of time, but unless that converts to making more money for your employer, they probably don't want you to do it. https://www.folklore.org/Saving_Lives.html
> It doesn’t matter what’s technically possible- in fact, a computer that works too fast might be viewed as suspicious. Taking a while to give a result is a kind of proof of work.
Recently I've found myself falling for this preconception when an LLM starts spitting out text just a couple of seconds after a complex request.
https://thedailywtf.com/articles/The-Slow-Down-Loop
LLMs are a very cool and powerful tool once you've learned how to use them effectively. But most people probably haven't, and thus use them in a way that produces unsatisfying results while maximizing resource and token use.
The cause of that is that the companies with the big models are actually in the token-selling business, marketing their models as all-around problem solvers and life improvers.
He was wrong up until we found the end of Moore's law. Now that hardware cannot get exponentially faster, we are forced to finally write good code. The kind that isn't afraid to touch bare metal. We don't need another level of abstraction. Abstraction does not help anyone. I would love to point to some good examples, but it's still too early for this to be seen globally. Your 1000-container swarm for a calculator app is still state-of-the-art infrastructure.
The Reiser footnote was on point. I couldn't resist clicking it to find out if it was the same Reiser I was thinking about.
"an interactive text editor could be designed with as little as 8,000 bytes of storage" - meanwhile Microsoft adds copilot integration to Notepad
Wirth was complaining about the bloated text editors of the time which used unfathomable amounts of memory - 4 MB.
Today the same argument is rehashed - it's outrageous that VS Code uses 1 GB of RAM, when Sublime Text works perfectly in a tiny 128 MB.
But notice that today's tiny/optimized/well-behaved amount, 128 MB, is over 30 times larger than the outrageously decadent amount from Wirth's time.
If you told Wirth "hold my beer, my text editor needs 128 MB", he would just not comprehend such a concept; it would seem like you have no idea what numbers mean in programming.
I can't wait for the day when programmers 20 years from now will talk about the amazingly optimized editors of today - VS Code, which lived in a tiny 1 GB of RAM.
This will probably not happen, because of physics.
Both compute and memory are getting closer to fundamental physical limits, and it is unlikely that the next 60 years will be in any way like the last 60 years.
While the argument for compute is relatively simple, it is a bit harder to see for memory. We are not near any limit on the total size of our memory; the limiting factor is how much storage we can bring how close to our computing units.
Now, there is still headway to be made and low-hanging fruit to pick, but I think we will eventually see a renaissance of appreciation for effective programs in our lifetimes.
> I think we will eventually see a renaissance of appreciation for effective programs in our lifetimes.
In theory, yes. But I bet that the forces of enshittification will be stronger. All software will be built to show ads, and since there is no limit to greed, the ad storage and surveillance requirements will expand to include every last byte of your appliance's storage and memory. Interaction speed will be barely enough to not impact ad watching performance too severely. Linux will not be an out, since the megacorps will buy legislation to require "approved" devices and OSs to interact with indispensable services.
Hence why I'm actually happy about RAM prices going back to how they used to be; maybe new generations will rediscover how to do a lot with a little.
Hardware is cheaper than programmers
Maybe one day that will change
Thanks to AI-driven hardware scarcity, it's already coming true.
I don't know. What are these programmers going to do afterwards? Build more shoddy code? Perhaps it's a better idea to focus on what's necessary and not run from feature to feature at top speed. This might require some rethinking in the finance department, though.
I suspect that the next generation of agentically trained LLMs will have a mode where they first consider solving the problem by writing a program before doing stuff by hand. At least, it would be interesting if in a few months the LLM greets me with "Keep in mind that I run best on Ubuntu with uv already installed!"
We haven't yet lost the war against complexity. We would know if we had, because all software would grind to a halt due to errors. We're getting close though; some aspects of software feel deeply dysfunctional, like 2FA and CAPTCHA. They're perfect examples of trying to improve something (security) by adding complexity... And it fails spectacularly... It fails especially hard because the people who made the decision to force these additional hurdles on users are still convinced that they're useful, because they have a severely distorted view of the average person's reality. Their trade-off analysis is completely out of whack.
The root problem with 2FA is that the average computer is full of vulnerabilities and cannot be trusted 100%, so you need a second device just in case the computer was hacked... But it's not particularly useful, because if someone infected your computer with a virus, they can likely also infect your phone the next time you plug it into your computer to charge it... It's not quite 2-factor... So much hassle for so little security benefit... Especially for the average person who is not a Fortune 500 CEO. Company CEOs have a severely distorted view of how often the average person is targeted by scammers and hackers. The last time someone tried to scam me was 10 years ago... The pain of having to pull up my phone every single day, multiple times per day, to type in a code is NOT WORTH the tiny amount of security it adds in my case.
The case of security is particularly pernicious because complexity has an adverse impact on security, so trying to improve security by adding yet more complexity is extremely unwise... Eventually the user loses access to the software altogether. E.g. they forget their password because they were forced to use some weird characters as part of it, or they downloaded a fake password manager which turned out to be a virus, or they downloaded a legitimate password manager like LastPass which was hacked because, obviously, it'd be a popular target for hackers... Even if everything goes perfectly and the user is so deeply conditioned that they don't mind using a password manager... Their computer may crash one day and they may lose access to all their passwords... Or the company may require them to change their password after 6 months and the password manager misses the update and doesn't know the new password, and the user isn't 'approved' to use the 'forgot my password' feature... Or the user forgets their password manager's master password, and when they try to recover it via their email, they realize that the password for their email account is inside the password manager... It's INFURIATING!!!
I could probably write the world's most annoying book just listing out all the cascading layers of issues that modern software suffers from. The chapter on security alone would be longer than the entire Lord of the Rings series... And the average reader would probably rather throw themselves into the fiery pits of Mordor than finish reading that chapter... Yet for some bizarre reason, they don't seem to mind EXPERIENCING these exact same cascading failures in their real day-to-day life.
If you read the Wirth 1995 paper (A Plea for Lean Software) referenced by the OP, the following paragraphs answer your question:
“ To some, complexity equals power
A system’s ease of use always should be a primary goal, but that ease should be based on an underlying concept that makes the use almost intuitive. Increasingly, people seem to misinterpret complexity as sophistication, which is baffling — the incomprehensible should cause suspicion rather than admiration.
Possibly this trend results from a mistaken belief that using a somewhat mysterious device confers an aura of power on the user. (What it does confer is a feeling of helplessness, if not impotence.) Therefore, the lure of complexity as sale incentive is easily understood; complexity promotes customer dependence on the vendor.”
I am typing (no screenshots or copy-and-paste) this 30-year-old wisdom into this reply as an archived reminder for myself.
I know competent adults whose login flow for most websites is “forgot password.” Might be better off writing your passwords on Post-it notes at that point.
I've seen a few sites where the login flow is simply entering your email address and you get a time-limited login link sent to you. You never create any password at all. I was skeptical at first but I've found it seems to work pretty decently.
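For anyone curious, those "magic links" are usually just a signed, time-limited token in the URL. Here is a rough sketch of the idea; the secret, the 15-minute window, and the example.com URL are all made up, and real sites typically use a framework's token signer rather than rolling their own:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"   # hypothetical; kept server-side, never hardcoded in practice
TTL_SECONDS = 15 * 60            # link is valid for 15 minutes

def make_login_token(email: str, now: float | None = None) -> str:
    # Token = payload plus an HMAC over that payload.
    expires = int((now or time.time()) + TTL_SECONDS)
    payload = f"{email}:{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_login_token(token: str, now: float | None = None) -> str | None:
    email, expires, sig = token.rsplit(":", 2)
    payload = f"{email}:{expires}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None              # tampered link
    if (now or time.time()) > int(expires):
        return None              # expired link
    return email                 # log this address in

# The emailed link would look something like:
# https://example.com/login?token=<make_login_token("user@example.com")>
```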
It's inevitable even if it's unnecessary. Capitalism necessitates 6% growth year on year. Since IT services are the growth sector, of course 25% of power will go to data centers in 2040.
The EU should do a radical social restructuring betting on no growth. Perhaps even banning all American tech. A modern Tokugawa.
Dull article with no point, numbers, or anything of value. Just some quasi-philosophical mumbling. I wasted like 10 minutes and I'm still not sure what the point of the article was.
Your comment indicates you may be the subject of the following quote: "Those who cannot remember the past are condemned to repeat it."
Have you considered that the article might be fine, but it’s more a case of you not getting the point?