It shouldn’t take a hosted cloud service and an LLM to use a computer.
Or to even design a good, humane interface.
I can’t imagine why I would want a system that could happily delete my backups when I asked it to reschedule an appointment. Having to constantly review and confirm everything it is about to do is annoying; having it do the wrong thing anyway is even worse.
Idk, to me there is this weird fringe of people who want computers to be this friendly “machine turns on and smiles and says hello and you talk to it” thing. But that dream (as far as I’ve always understood it) stems from Steve Jobs quotes about selling computers and products.
I’ve always used computers as they evolved into what we have today and have never had the desire to seek a friendship where the other “being” was the computer. It’s always been a tool, and writing full sentences/paragraphs to get back correct information doesn’t feel like the next evolution of computers over dropping keywords and filtering the results myself.
The examples in the article are designed to support the points made but are not remotely accurate.
> Consider the difference:
> GUI era: “Open Photoshop → Create new file → Set dimensions to 1200x628 → Select rectangle tool → Draw rectangle from coordinates (0,0) to (1200,628) → Fill with color #3B5998…”
> AI era: “Create a Facebook cover image with our company logo and a modern blue background.”
The LLM prompt in the article looks simpler, but really there is a lot of hidden specification describing the output, which is probably the same information as in the GUI-era example. Which blue will my LLM pick, and how will it know without me telling it? How will it know how big to scale the logo, or to tilt it 3 degrees to the left? How does it even know what the “company logo” is? The sketch below spells out just how few, and how explicit, the “GUI era” decisions actually are.
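To make that concrete, here is a rough, hypothetical sketch (Python with Pillow; the file name and the commented-out logo step are placeholders, not anything from the article) of what those "GUI era" steps amount to. Every value is an explicit decision that the "AI era" prompt still has to get from somewhere:

```python
# Hypothetical sketch of the article's "GUI era" steps, written as code.
# Every parameter is explicit; a prompt must supply the same facts somehow.
from PIL import Image  # assumes Pillow is installed

WIDTH, HEIGHT = 1200, 628   # the cover dimensions the GUI example names
BACKGROUND = "#3B5998"      # *which* blue - the GUI example says exactly

cover = Image.new("RGB", (WIDTH, HEIGHT), BACKGROUND)

# Placing the "company logo" would need more decisions the prompt never states:
# which file it is, how big to scale it, where to place it, whether to tilt it.
# cover.paste(logo, (x, y))  # deliberately left unfilled

cover.save("facebook_cover.png")  # placeholder file name
```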
LLMs might also collect the wrong data. Does my LLM use Facebook header image size specs from 2015 or 2022? Most “blogs” online might be how-to blogspam with outdated answers.
IMO LLMs are an attempt to filter blogspam from search and to make knowledge gathering and scraping of walled gardens easier, not the way everyone will be using the computer.
What are your thoughts on the latest Nano Banana?
Did Google ask Microsoft for the right to use that name? Maybe they should have.
It wanted me to buy 2 credits to turn my demo prompt into an image. 0/10
I don't think computation is democratized when most of the barely usable models are in the hands of mega corporations.
Computers were way more accessible in the DOS era, when travel agents could effortlessly handle console programs. In that era more people had some idea of the basics of computing, and the hardware was more diverse and open.
Nowadays people know how to click to upload a video to YouTube, which means they are just sharecroppers. If the upload sequence is replaced by another middleman like "Open""AI" who will store and monetize your data, you are a sharecropper of "Open""AI" and YouTube.
It takes time. It took decades to go from mainframe to micro. It’s going to take a while before a model 3-4x as capable as ChatGPT 5 can run on your watch.
This touches on an important debate about user interfaces: chat/natural-language AI vs. GUI-augmented AI.
OpenAI started with chat-based AI but has since realized "text only" doesn’t scale for serious business needs. We see OpenAI pivoting towards richer interfaces like "Canvas", which offers a richer editing interface (GUI) with AI as an embedded collaborator.
There’s even news floating around that OpenAI is building a Google Docs / Microsoft Word competitor.
Now, take Microsoft. Microsoft owns some of the most serious business products ever, like the Office suite. Microsoft, however, is working back towards a chatbot experience, to the extent that they’ve even renamed their entire online office suite to Microsoft 365 Copilot - which makes little sense as a name for an office suite :)
But the bigger question is: which approach is right? Maybe there isn’t a single right approach. Maybe that’s why Google is being Google and travelling both ways.
They’re building Gemini and Gemini Canvas, while already owning an office suite and working towards integrating AI capabilities into their office editors.
We are living in interesting times!
As a user interface, sure, another … branch?
Plenty of "creatives" I think are still going to be hands-on-mouse(stylus) in the coming decades.
I'm not sure that collaborative computing follows. Like when DropBox famously debuted and perhaps some were touting "cloud computing", Steve Jobs called it a "feature", not a platform. (Or words to that extent.) He was deflating the concept a bit too much in my opinion but perhaps the truth was somewhere in between.
This is unsupported puffery. You can call any X the next Y if you cherry-pick previous evolutions to simplify the narrative and then theorise beyond your data.
I get that it's fashionable to do blue-sky AI evangelism with the breathless tone, but I also expect at least some depth.
I have AI fatigue.
It's extremely reminiscent of the way that crypto boosters talked about Web3 being the obvious and inevitable future of the internet.
Crypto had no use besides gambling (which includes speculation) and black market currency.
LLMs have massive problems with externalities, but they have concrete and undeniable usefulness. So at the very least from that angle it's decidedly not the same as with crypto.
The hypesters will hype; that will always be the case.
The only use I have for them at work is to generate verbose text to appease management, who would not be equally happy with short, concise, to-the-point answers.
Basically a total waste of time.
That says more about you than the technology though.
Not my fault it doesn't do anything more useful.
How do you think people will write code and generate images 6 months to a year from now? Claude Code, Nano Banana, Perplexity, copilots, and Comet will become the default tools.
Same way they do now, since now is 6 months to a year from 6 months to a year ago, when people were saying the same things.
You gotta be living in a hype bubble to believe that.
I've been in tech and have been coding for 25 years now. My workflow in the past 2 months uses coding agents for 95% of my work. Just yesterday I had my agent fix and rewrite nearly 2 weeks of work by a 2-member team in about 30 minutes. The agent's code was a lot better. Yes, there is a huge bubble, but once it bursts new patterns will emerge, just like how ecommerce and online banking emerged after the dot-com bust.
>The history of computing can be viewed as a steady progression toward more intuitive human-computer interaction.
You can probably make the opposite case, as Dijkstra did in his piece "On the foolishness of 'natural language programming'".
"A short look at the history of mathematics shows how justified this challenge is. Greek mathematics got stuck because it remained a verbal, pictorial activity, Moslem "algebra", after a timid attempt at symbolism, died when it returned to the rhetoric style, and the modern civilized world could only emerge —for better or for worse— when Western Europe could free itself from the fetters of medieval scholasticism —a vain attempt at verbal precision!— thanks to the carefully, or at least consciously designed formal symbolisms that we owe to people like Vieta, Descartes, Leibniz, and (later) Boole."
Computers, as machines, derive their power exactly from what they prohibit. They provide interfaces narrow enough, like modern mathematics, to make the expression of a whole lot of nonsense impossible, and that is what enables tasks to be automated correctly. Going back to some sort of alchemy where you have to beg the computer with incantations to do things that may or may not be correct is actually going backwards in history. The fact that people think of expressing themselves in a programming language as a burden, when the limitations are exactly what give it its power, says more about modern programming as a practice than anything else. As he jokes in the piece, someone being glad they don't "need to" write SQL any more is like someone saying they avoided mathematical notation for the sake of clarity. (The toy example below the link makes the contrast concrete.)
https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
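To make the SQL joke concrete, here is a toy sketch (schema, data, and thresholds are all invented for illustration): the English request "show me the big orders from last month" is ambiguous - big by what measure, which month, ordered or shipped? - while the query below can only mean one thing.

```python
# Toy illustration of Dijkstra's point: the formal notation is narrow,
# and that narrowness is exactly what removes the ambiguity.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL, ordered_on TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 4200.0, "2024-05-03"), (2, 80.0, "2024-05-19"), (3, 9100.0, "2024-04-28")],
)

# Every threshold and date bound is explicit; nothing is left to interpretation.
rows = conn.execute(
    "SELECT id, total FROM orders "
    "WHERE total >= 1000 AND ordered_on BETWEEN '2024-05-01' AND '2024-05-31' "
    "ORDER BY total DESC"
).fetchall()
print(rows)  # [(1, 4200.0)]
```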
Best comment ever!
Whenever I read stuff like this, I tend to look not at the message (because I already know what I think and have largely decided on what I find reasonable), but at the argumentation and language used. I do it for several reasons. It makes it a lot easier to spot bandwagon people, shills, and other content that deserves to be flagged, because despite now having the ability to make the exact wording very different across the media landscape, it is the exact same note everywhere. But beyond that, it is also interesting to see shifts in words.
What I find increasingly interesting is that 'democratizing' is being used in a way that is sure to turn the word into a pejorative, and I can't help but wonder whether that is intentional.
edit: added missing sentence fragment
>AI Is Just the Next Evolution of the Computer
My preferred framing is that the computer is a lot more dangerous than we thought.
My preferred framing is that people are a lot more dangerous and callous than we have long thought them to be.
Allowing AI to make life or death decisions is just the latest example of their dangerous and uncaring nature.
https://gizmodo.com/trump-medicare-advantage-plan-artificial...
I think AI is dangerous even if the creators and operators have only the best intentions. All that is needed is for them to be overconfident of their ability to stay in control as the AI increases in capability.
> All that is needed is for them to be overconfident of their ability to stay in control as the AI increases in capability.
Current AI is like a 5-year-old with a good memory.
I'm not too worried about losing control to something that has trouble counting the "r's" in "strawberry".
I'm much more worried about people proposing to allow AI to make healthcare decisions.
I agree that there is no need to worry about a lack of controllability in the current crop of AIs.
Yet it's already enough to make people kill themselves and others.
No, this *tool* can't do anything by itself.
The problem is the way *people* choose to apply it.
Yes, the people who made the tool.
A sharpened stick is a tool that can be used either to spear fish or to blind someone.
Is the person who first sharpened a stick responsible if some people choose to do the latter?
This sort of argument is as old as humanity and is anti-tool, anti-progress, anti-technology and ultimately anti-intelligence. Not to mention entirely futile at this point.
Remember Microsoft insisting, in the late 90s, that the natural evolution of computer interfaces was that _voice_ would become the primary interface? (This was after they'd largely declared defeat on their _previous_ "this will change everything" future UX, Windows for Pen Computing).
For every _actual_ revolution in human-machine interaction, there are roughly a million things that pundits and/or vendors say will definitely be the next revolution.
Absolutely true! There will always be a thousand wrong ideas for 1 true revolution. Sometimes these pundits are overambitious in their thinking or off with their timelines. What MSFT said about voice is actually coming true now with AI voice agents as we speak.
> What MSFT said about voice is actually coming true now with AI Voice Agents as we speak..
... No, those are not the primary way that virtually anyone interacts with computers.
There is already a growing number of people who use tools like superwhisper to interact with agents. At work I'm building a "chief of staff" kind of voice agent where you simply give it tasks verbally and it goes and gets stuff done. Also have a look at the latest announcements from OpenAI on their realtime voice API.
MS had great tech demos of their voice control stuff in the late 90s. Given that history, I will believe this is a thing when significant numbers of people (like, hundreds of millions, not the early-adopter oddball class who buy VR headsets and PalmPilots and other never-quite-made-it stuff) are using voice as their primary interface with computers, and not before.
> Understanding Human Intent
That won‘t work with language alone. Natural languages are ambiguous.
Just take double negatives. Some use them to create a positive, some to emphasize the negative.
Skynet, don’t kill no people!
Not only ambiguous, but also recursive.
I'd argue human intent itself often [always?] is ambiguous
On efficiency (energy), correctness, and safety (logic), this seems a long way from being a step forward (for now). [Lemma, from Murphy's Laws of Computer Programming, wisdom from a more civilized era: build a system that even a fool can use, and only a fool will want to use it.]
The SpaceX Falcon Heavy is just a next evolution of the Bicycle.
Yup, that's a valid way to look at it.
I've begun to hate the term "democratizing."
In fact, I now tend to see it as a strong shibboleth of people who don't actually value the thing being "democratized" - computing, art, music - and who think in terms of "barrier to entry" instead of in terms of understanding and appreciation.
In the end, this bizarre drive just ends up cheapening our enjoyment and interactions. We get shallow music, soulless art, and miserable computer programs, because there's no active intelligence involved in their creation that truly understands what's being created.
Funny you say this. I’ve been thinking about it a lot lately. It really does seem like “democratization” is recuperative shorthand for “commodification”.
Accessibility tends to mean you get more stuff, and that the average quality goes down. But it does tend to also increase the amount of high-quality stuff, stuff tailored to your specific tastes, interesting and out-there stuff, if you can sort through it all. You can see this in indie video games and music already.
we don't get "shallow music, soulless art, and miserable computer programs" by making those things accessible or "democratizing". We get those because some capitalists decided to invest in some low quality stuff and hype them up, and prey on younger generations and "educate" them to have certain taste, turning everything into a race to the bottom
This article was written a couple of months back; look at the example of Photoshop and connect it to the Nano Banana release last week.
Another totally arbitrary narrative you could build from history is one of shrinking size and increasing efficiency. We've gone from room-filling monsters to the Raspberry Pi Zero.
It’s hard to make giant datacenters that require their own powerplants fit into that narrative. But I don’t know why I should prefer one narrative over another.
I can’t help hearing Karl Popper raging against historicism when I see people try to create a narrative and project it into the future as we move towards some ideal state.
> AI is the natural next step in making computers more accessible and useful
Whatever you're smoking must be really strong. Can I have some? /s
I agree with the title, but for me the evolution is higher level and based on data: no AI without search engines and social networks, no search engines without the WWW, no WWW without TCP/IP, and no TCP/IP without the computer.