Totally out of fashion today, but think of TN3270. Rather than "streaming", those terminals were forms-based and heavily keyboard-driven.
This could easily be mimicked by a GUI, but keyboard shortcuts have become an afterthought.
Even today I meet users who miss those old workflows. They express it as the "old text interface", aka TUI, but if you listen to them you realize they mean blazing fast and shortcut-driven. When you work in data entry you care about speed, not animations.
Any beginner likes eye candy. The veteran has stopped caring.
>I've always been a bit mystified by the popularity of TUIs. To me, the power of the terminal is the streaming model.
Ever used Emacs? Or Vim? Or Mutt? Or Borland's old IDEs?
The power of the terminal is also in its ubiquity, trivial connection to remote systems, and lack of mountains of GUI cruft, benefits that a TUI app can have as well.
I'd agree with this assessment. Moreover, if developers were to stick with the eminently satisfactory CUA (IBM's Common User Access) interface standard and further regularize that then things would be much easier. https://en.wikipedia.org/wiki/IBM_Common_User_Access
If developers want to experiment with various UI configs then let them but keep a CUA in the background that can be called upon by machines and humans alike. (Unfortunately, ergonomics has never been a strong point for developers.)
The hacker aesthetic of TUIs, especially around coding agents, is undeniable, and the fact that you can mostly run them in any shell (even remote) is neat.
But the UX? If the goal is not to read a single line of the code churned out by these agents, and perhaps that is the point, then they are fine: type the prompt and cross your fingers.
Anything else that requires reading and changing the code needs an IDE of sorts. I am not saying you cannot make your workflow work with TUIs - plenty of people do. It is just not as good and flexible as a full desktop application.
...but without the web's accessibility options, without good text editing, with very basic customisation options, requiring trusted compute instead of working in a sandbox...
They're a long way from web apps, far worse on most axes.
Back in the 90s when most SAP systems switched from AS/400 terminals to Windows NT, people reported massive losses in productivity.
I've never worked on SAP; my mother did. Basically, she went from a fully tabular, function-key-oriented workflow to holding a mouse, moving around and clicking a lot (tabbing and F-keys were lost for many functions).
She showed me how she could go ESC ESC F4 F3 TAB TAB and be across the whole system at super speed. And this was a terminal, not the actual system!
The short of the story is this:
Windows-based applications work best for discoverability and new users.
Terminal-based applications work best for fast, memory-based navigation and power users.
That's a problem with the specific GUI, not GUI as a concept. Good GUI frameworks should be built for predictability and keyboard-driven fast paths, and have this included by default so you don't have to make these decisions for each app.
In fact, all successful applications for professionals/power users are built with fast paths in mind. Even Microsoft's ribbon, which gets a lot of hate for some reason, is an example of that: it's keyboard-driven, customizable, and discoverable at the same time.
In fact, just about everything in Windows (not apps, since those can be created by 3rd party developers who may or may not care, but the OS) can be operated by keyboard: login, start menu, settings, even ancient tools like Event Viewer.
Except, strangely, for setup: if you don't load your laptop's touchpad/trackpad driver during the early select-a-partition screen, you seem to get stuck on later screens, like when you are required to connect to WiFi.
I don't think the issue is using declarative UI frameworks, it's that the rendering engines these frameworks are outputting to are not taking accessibility into account.
Totally. I had a colleague who was a pretty awesome programmer and was completely blind. When I first met him, he was working on a braille 3270 terminal. Those IBM terminals were capable of all sorts of stuff.
Are there any recommendations for testing accessibility for TUI applications? The article mentions speakup, but as far as I understand, you need a hardware synthesizer to use that. Is it also possible to use orca?
There is no cross platform standard that accomplishes the goals that authors turn to TUIs to solve. There is no widely distributed remotely accessible interface that pops up GUI windows from a shell context that works everywhere.
I'm sure there's a proposal for this somewhere, but I've always wondered if it wouldn't be most viable to just have a separate "reader mode" that replaces all the TUI elements with some sort of templated descriptive string of text, something like "Page one. Foo entry. ' bar'"
Seems a lot more viable than trying to get new standard escape codes adopted and output alongside visual content that may be flickering erratically. It also probably gets complex fast for more intricate UIs, but IMO it's really hard to defend TUIs for anything but relatively simple programs, as an in-between for a CLI and a native application.
> I'm sure there's a proposal for this somewhere, but I've always wondered if it wouldn't be most viable to just have a separate "reader mode" that replaces all the TUI elements with some sort of templated descriptive string of text, something like "Page one. Foo entry. ' bar'"
We have a term for that, it's called a CLI. For example, ed and ex are the historical CLI counterparts to vi.
I suppose that would just be an interactive CLI. Usually I think of them as one-and-done invocations like sed. It's definitely not an ideal mode as evidenced by the current popularity of vim over ed, but it's better than a program being unusable.
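The "reader mode" proposed above could be as simple as a second renderer over the same widget state. A toy sketch (the widget model and function names here are invented for illustration, not from any real TUI library):

```python
def render_visual(items, selected):
    # Normal TUI rendering: box-drawing characters and a selection marker,
    # meaningful only when you can see the 2D layout
    rows = []
    for i, item in enumerate(items):
        marker = ">" if i == selected else " "
        rows.append(f"| {marker} {item:<10} |")
    return "\n".join(rows)

def render_reader(items, selected):
    # "Reader mode": the same state emitted as linear descriptive text,
    # consumable by a screen reader without any cursor-chasing
    lines = [f"List with {len(items)} entries."]
    for i, item in enumerate(items):
        suffix = ", selected" if i == selected else ""
        lines.append(f"Entry {i + 1}: {item}{suffix}.")
    return "\n".join(lines)
```

The point is that both renderers read from one source of truth, so the accessible view can never drift out of sync with the visual one.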
> The real problem is that pretty much the whole stack has a terrible AX story.
> First, most GPU-rendered terminal emulators don't engage in system-provided accessibility APIs AT ALL. Because text is GPU-rendered, AX tooling can't "read" it, it just shows up as an image. This applies to Kitty, Alacritty, WezTerm. My own terminal Ghostty is AX-readable (on macOS), and so are others like iTerm2 and Terminal.app (which admittedly do it better than me, we have gaps to fill).
> Second, there are no terminal sequences or initiatives at all for TUIs to communicate AX information to the emulator, so the emulator itself can't do much more than display a blob of text to AX tooling. We need the equivalent of ARIA-style annotations but for terminal cells, runs, and regions. No such initiative exists. Even if TUIs do great things with the cursor, this is going to bite a lot of use cases.
> As an example of combining the above, I've been working on something with Ghostty where we integrate semantic prompt (OSC133) and AX APIs so that we can present each shell prompt, input, and command as structurally significant to AX tooling (rather than simply a text box where the cursor is somewhere else). This shows the importance of the relationship between terminal specs (OSC133), TUIs (which must emit OSC133), and terminal emulators (which must both understand OSC133 AND communicate it to AX APIs).
> The whole stack is rotten. And no one is earnestly trying to fix it (including me, I have limited time and I do my best but this is a WHOLE TOPIC that requires a huge amount of time and politicking the ecosystem and I don't have it, sorry).
Bonus: a simultaneously awesome and horrible reality is that AI is really helping to improve AX here. A lot of AI tooling uses/abuses AX APIs to make things happen. How is OpenAI reading your list of windows, typing into them, etc.? Accessibility frameworks! So a lot more apps are taking AX integration a lot more seriously, since it's table stakes for AI to use them... Sad that it requires that, but the glass-half-full view is that more software is doing it.
>> AX originated in reference to UX (User Experience) and stands for Accessibility Experience. AX is also sometimes used as a synonym for accessibility.
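For concreteness, the OSC 133 semantic-prompt markers mentioned above are just escape sequences bracketing the parts of a shell interaction. A minimal sketch of what a shell would emit, using the conventional letters (A = prompt start, B = command start, C = output start, D;exit = finished); the helper names are mine:

```python
import sys

OSC, ST = "\x1b]", "\x1b\\"  # operating-system-command intro and string terminator

def mark(kind):
    # One OSC 133 marker, e.g. "\x1b]133;A\x1b\\" for prompt start
    return f"{OSC}133;{kind}{ST}"

def run_prompt(command, exit_code=0):
    # The order of markers around a single shell interaction: an emulator
    # that understands these can expose prompt/input/output as distinct
    # regions to accessibility tooling instead of one flat text blob.
    return (mark("A") + "$ "                   # prompt region
            + mark("B") + command              # user-entered command
            + mark("C") + "\n...output...\n"   # command output
            + mark(f"D;{exit_code}"))          # finished, with exit status

sys.stdout.write(run_prompt("ls"))
```

The markers are invisible to a human reader but give the emulator the structure it needs to make each prompt, command, and output block individually navigable.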
I have been saying that since Vim and Emacs. Vim and Emacs are good because of their keyboard-driven input systems, not because of the TUI. The Emacs GUI is proof of that; Vim doesn't properly have one.
For accessibility, edbrowse https://github.com/cmb/edbrowse and anything CLI shines, such as SIC + Bitlbee to talk with chat platforms, and maybe pjsua for SIP. These can be combined with yasr (a terminal screen reader) + speech-dispatcher as a TTS.
mc adheres to the NC (Norton Commander) standard, so you already have an extremely accessible system. The whole UI knowledge transfers easily among mc, TotalCmd, or NC from 1988 or whatever. Not the case with most TUIs. I had to use aptitude and git's text-mode merge tool recently, and both had terrible accessibility, not to mention entirely different designs. I'm sure I'd get good at them once I read their manuals, but they were extremely non-intuitive and hard to explore.
I'm slowly working on this, trying to figure out what works as I add accessibility to TUIs. Having better structured information on screen feels so so so compelling to me.
I wish the terminal-wg was more active. There's a bunch of odd OSCs folks have tried to create for enhancing the structure of the terminal, for various ways to emit more layout-coupled semantic info. Accessibility APIs are great, but in most forms a huge chunk of their capabilities feels pretty disconnected from the actual drawing on the screen; they're somewhat a parallel construct to what's on screen. Using OSC to layer in more information about what is being drawn feels more right.
But in general feels like, for all the TUI interest, not many folks are about and working together to actually figure out how to advance the terminal itself.
I am not sure it is helpful to shoehorn terminals into being a second web. The terminal's strength is its simplicity.
For example, I really dislike mouse support in TUIs. 100% of the time I have used the mouse on a TUI, I wanted to copy a piece of text. If the TUI hijacks the mouse and does something different with it (e.g. vim switching into visual mode), that is just annoying.
Of course a11y is important. But it barely works on the web and we won't get perfect semantics on the terminal without a lot of work. I say the better option is to strip down the experience to the parts that work well.
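For context on that hijacking: "mouse support" in a TUI is just the app asking the emulator to switch clicks from native text selection to event reporting, via DEC private modes. A sketch (the mode numbers are the standard xterm ones; the wrapper function is hypothetical):

```python
# DEC private-mode sequences a TUI emits to take over the mouse. While
# reporting is enabled, the emulator sends click events to the app instead
# of doing native text selection -- which is exactly why copying text from
# a TUI gets hijacked. Most emulators let you hold Shift to bypass
# reporting and select text anyway.
MOUSE_REPORT_ON  = "\x1b[?1000h" + "\x1b[?1006h"  # button events + SGR encoding
MOUSE_REPORT_OFF = "\x1b[?1000l" + "\x1b[?1006l"

def wrap_session(app_output):
    # A well-behaved TUI enables reporting on entry and restores the
    # terminal's normal selection behavior on exit
    return MOUSE_REPORT_ON + app_output + MOUSE_REPORT_OFF
```

Apps that crash without emitting the `l` (reset) sequences leave the terminal in the hijacked state, another common TUI annoyance.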
As I said before [0], the same web developers who ruined the web are now bringing their JavaScript/TypeScript React mess into terminals, where it is not needed.
React is software-development cancer, and it has just entered metastasis. It can't be cured anymore; it will spread throughout the entire stack and kill it from the inside. We already have it on the web, on mobile, on Windows 11 and, now, it's coming for the terminal emulator.
> The reality is different. Most modern Text User Interfaces (TUIs) are often more hostile to accessibility than poorly coded graphical interfaces.
The Claude Code rendering UI is the first place where I realized this kind of TUI is more like a DOS or Borland UI system than a command line interface.
I was poking around the CLAUDE_CODE_NO_FLICKER=1 setting when I realized what exactly this TUI is: layers of content drawn on top of each other with terminal escape codes.
I ended up reading Ink, the terminal implementation of React:
https://github.com/vadimdemedes/ink
Fascinating how it ends up looking like WordPerfect or WordStar from the past instead of pixel-based graphics.
The usability for a vision-impaired user is about the same, though I remember braille displays for DOS tools (80x25) which worked better than all the screen readers that came later.
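The layering described above can be sketched in a few lines: each "frame" clears the screen and overdraws panels at absolute cursor positions using ANSI escape codes (the helper names here are mine, not Ink's):

```python
import sys

ESC = "\x1b"

def move(row, col):
    # CSI cursor-position sequence; rows and columns are 1-based
    return f"{ESC}[{row};{col}H"

def draw_panel(row, col, lines):
    # Overdraw a "panel" on top of whatever is already on screen --
    # exactly the layering that full-screen TUIs do on every frame
    return "".join(move(row + i, col) + line for i, line in enumerate(lines))

frame = (
    f"{ESC}[2J"  # clear the whole screen
    + draw_panel(2, 4, ["+--------+", "| hello  |", "+--------+"])
    + move(10, 1)  # park the cursor below the panel
)
sys.stdout.write(frame)
```

Nothing here is structured text: to a screen reader the result is just a grid of characters that happened to be painted in a certain order, which is the root of the accessibility problem.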
>The mythical, it's text, so it's accessible. There is a persistent misconception among sighted developers: if an application runs in a terminal, it is inherently accessible.
Nope, nobody believes that. Devs say that about text documents, which is something else entirely -- and, with provisions, about single-command terminal apps (like grep, cut, ls, and so on). Nobody said it about TUIs.
The more you look into these trendy TUIs the worse it gets -- it's like the developers took the accumulation of all the worst practices since the dawn of programming, and wrapped it all into one unwieldy, overweight, under-performant gelatinous blob that threatens to collapse under its own weight.
They’re not terminal UIs.
They’re attempts at pretending to have Windows (etc.) GUIs in a terminal.
Same stuff people made for DOS when Windows wasn’t common or good enough yet.
I’m not surprised they’re a disaster. Or built without understanding the abilities of the terminal they’re running on.
If you don't want people calling these apps TUIs, what would you prefer people call them? And what does the term TUI refer to, if not this?
Text User Interface.
But what _is_ a "Text User Interface"? Google Images just returns what is being discussed here: "GUIs" that run in some kind of text mode. And to me, that's also what a TUI is.
A more textually oriented environment (like a normal Unix shell) is, in my experience, usually referred to as a CLI: Command Line Interface.
I did find an interesting hybrid in the Pi coding agent: it seems to leverage the normal terminal scrollback, while still enhancing it with things like transient input fields and status lines, so that it can display those without cluttering scrollback.
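The hybrid pattern described can be sketched simply: ordinary lines are printed into scrollback, while a status line is rewritten in place with carriage return and erase-line, so it never pollutes the history. (The helper names are hypothetical, not from the Pi agent itself.)

```python
import sys

ERASE = "\r\x1b[2K"  # return to column 0 and erase the current line

def log_line(text):
    # Ordinary output: wipe the transient status, then print a line that
    # lands permanently in the terminal's native scrollback
    return ERASE + text + "\n"

def status_line(text):
    # Transient status: redrawn in place with no trailing newline,
    # so it never enters scrollback and can be overwritten freely
    return ERASE + text

sys.stdout.write(log_line("cloned repo"))
sys.stdout.write(status_line("[agent] thinking..."))
```

Because the history is just normal terminal output, selection, search, and screen readers all keep working on it, unlike a fully repainted alternate-screen TUI.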
A shell is the environmental manager, the terminal is the display device, and the window is the container. Add in tabs, web panes and sticky notes + make it all agentic, you get Hyperia: https://hyperia.nuts.services
> They’re not terminal UIs.
Actually, I think that is close to a good name for them: Terminal-based GUIs.
Some are pretty useful; for instance, I like lazygit as a simple dashboard/panel for a small repo (or when making small changes to a larger one). Some make me wonder what those who made them were smoking!
The less silly ones are handy when you are tinkering with a far away machine and want something a little more interactive than CLI commands and stuff connected by pipes and scripts but don't want to deal with the latency of GUI remoting. Some, though, are so badly thought out that they are slower than using a browser over long-distance X…
There are useful ones. Something like Midnight Commander can be way better than lots of manual copy and move commands. The kernel config one is way nicer than the stream style kernel configuration tool. Some of these newer ones are starting to feel more like “text mode bling” than useful.
My objection to "TUI" is that I don't think it's clear enough about what's happening here. I think you could easily argue it applies to most readline-style stream CLI programs too.
Would you call a fully 3D UI in VR (not a planar window in the VR world, but true 3D) a GUI? It is graphical by definition. But if you talked to someone about a GUI, that's not what they'd think you were talking about without additional context.
That’s my objection. I think TUI implies way less than what these programs are doing. Yeah it can describe them but I don’t think it should be the word for them.
>They’re not terminal UIs. They’re attempts at pretending to have Windows (etc.) GUIs in a terminal.
That's what a terminal UI is, and has been since Emacs was a thing.
I've always been a bit mystified by the popularity of TUIs. To me, the power of the terminal is the streaming model. Composable utilities are something that is much less common in GUIs.
I get that maybe the constraints of terminals force the design of TUIs to be more focused on the purpose of the tool than on polish, but that's not that compelling a point to me.
For some basic stuff like vim it works fine. But for almost everything else I'd rather have a regular CLI tool or a web interface. I suspect a lot of the popularity comes from people who want to feel like a hacker using 10 terminal windows, but actually want a GUI-like experience.
This. A lot of folks picked it up for that reason when they were young and now are terminal-all-the-things out of sheer inertia.
Obviously people want GUIs. That's why TUIs should be compared to GUIs, not to CLIs. TUIs are nice since you get a lot of the benefits of a GUI, without having to leave the context of the terminal.
I feel like the better solution here (than trying to shoehorn a GUI into an interface meant for text) is to make terminal windows graphically-aware, like how things work in Plan 9.
At that point what you want is just a tiling window manager rather than a terminal that supports GUIs.
Vim is special because 99% of what we do is editing text, and it is the text editor—the importance of that task overcomes the poor discoverability of a TUI. Most other programs should be CLI, so they can fit in the conventional command line toolbox.
For me, TUIs compensate for the fact that I can't get good remote GUI rendering on Linux. Yes, X11 tunneling exists, but the experience has always been abysmal for me for anything not hosted on a machine that sits on the same LAN as the client. For Wayland I don't even know if such a thing is possible since I don't think the architecture supports it.
But the terminal is just fundamentally the wrong basic abstraction on which to build a structured GUI, it just happens to require few enough bits to be sent over the wire that it actually works reasonably well over SSH as opposed to pushing graphics.
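A back-of-envelope calculation shows why the cell grid survives over SSH where pixel pushing struggles (assuming roughly two bytes per cell for glyph plus attributes, versus uncompressed 24-bit RGB frames):

```python
# Worst-case full redraw of a classic 80x24 terminal vs one full-HD frame
cols, rows = 80, 24
text_frame_bytes = cols * rows * 2     # ~2 bytes/cell: glyph + attributes
pixel_frame_bytes = 1920 * 1080 * 3    # 24-bit RGB, uncompressed

ratio = pixel_frame_bytes / text_frame_bytes
print(f"text: {text_frame_bytes} B, pixels: {pixel_frame_bytes} B, "
      f"ratio: {ratio:.0f}x")
```

Real remoting protocols compress far better than this naive comparison, and real terminals send only damaged cells, but the three-orders-of-magnitude gap explains why a laggy link still feels fine in a TUI.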
> For Wayland I don't even know if such a thing is possible since I don't think the architecture supports it.
Not only is forwarding trivial with Wayland, it also tends to provide a better experience than X11 does.
I had never tried it until now, and hadn't looked into it. But I just tried `waypipe ssh` to a remote server I have for doing asynchronous Claude work in VMs, and it actually works pretty great! Maybe I'll switch to that for my emacs/magit setup; the lack of clipboard integration when running emacs in a terminal over ssh is enough of an argument for me.
Edit: yikes, pressing M-w caused emacs-pgtk to crash with a Wayland protocol error, so it isn't trivial and requires some configuring I guess.
Edit 2: Apparently I have to install wl-clipboard and write a bunch of Emacs Lisp to work around this. I don't think I have the patience for that, and I fear that such problems will be even harder to solve for applications which are not as flexible and programmable as Emacs. So far I'll conclude that remote Wayland is not ready and stick to TUI.
Edit 3: No, the problem is probably mismatched waypipe versions on client and server. Still not fun.
I dunno, pre-LLM TUIs at least tended to be okay, and keyboard navigation was a first-class citizen. Besides, if you were using a TUI instead of a GUI you basically always ended up saving memory/battery life, and TUI programs are generally more portable than trying to run some ancient GUI program.
I typically prefer CLI myself but having a TUI to manage torrents for instance was much more ergonomic.
For almost every TUI, a web UI works better IMO. Most torrent clients offer a web management UI, and it's always going to be easier and more feature-filled to use a platform that was actually designed for this rather than hacking a GUI into the terminal.
A lot of the complaints in this thread seem like they're aimed more at recent vibecoded UIs than the concept of a TUI.
Like, okay, they are a big step back with accessibility, but they're flickering garbage because they were vibecoded in a weekend and the TS or Python library they're built on was similarly forced upon this world.
The command line shell has the benefit of piping text between programs, and TUIs are runnable from the command line shell. So you can get many of the benefits of a GUI (e.g. discoverability) while sticking close to the terminal where you're doing things.
If you're going to "run command, edit command, run command", performing the edits from the terminal you're running the commands in seems reasonable/intuitive. (In contrast, for tools like VSCode, I think it's more common for terminals to take up a fraction of the screen space rather than switching it to full screen. And then developers will say they need a huge monitor).
It also seems that keyboard-driven programs are more commonly TUI than GUI, e.g. magit or lazygit. Or lazydocker. Or k9s.
They are very useful when working on remote servers, VMs and containers. Much much more convenient and robust than, say, X forwarding.
I like them because they’re easy to run in a container / sandbox.
For me it’s mostly
- the convenience of being in the terminal, where I live
- you can use em over ssh
- they’re typically made with keyboard usage in mind, which is often an afterthought in a typical browser based UI
- other GUI options are browser (sandboxes, obvi, not good for lil personal tools), native (not dead simple, compared to TUI/browser/electron), or something like electron (no way lmao)
I don’t seek out TUIs instead of other solutions. But it’s so dang easy to pop open a new pane and run lazygit. And it makes you look really cool when people walk behind you.
1) TUIs work over ssh without any extra steps.
2) Constraints imposed by the terminal make all the apps look and work approximately the same - in the outside world the standards developed for UX are ignored as a matter of routine just because they can be. TUIs are in an optimum of least surprise, so to speak.
For the Claude Code / OpenCode / Crush / etc new wave TUIs, it's not about composability or text streaming. It's basically a combination of a few tailwinds:
1. There's already a large-ish community of engineers who live in the terminal e.g. Vim/Neovim/tmux/zellij/etc users. Lots of engineering tasks are accomplished by running scripts in a terminal, so it makes sense for some people to just move as much of their work there as possible. This means there's a set of users you can address with dev tools that run in a terminal.
2. Cross-platform distribution among the platforms most of those people care about — macOS and Linux — is largely a solved problem via package managers. Distributing cross-platform native apps is fragmented at best.
3. Building modern TUIs has become a lot easier thanks to the demand+distribution wins above: there's a lot of appetite for building blocks, and so lots of good options have flourished like Ink for React, Bubble Tea for Go, etc.
4. General developer distaste for the most straightforward analogue to all of this for desktop GUIs: Electron. Deservedly or not it's associated with slow, bloated applications. And if you don't use Electron, doing cross-platform anything is going to be a much harder problem than just pushing out a quick TUI app.
Successful products do seem to eventually jump the gap, like Claude Code spawning Claude Cowork and OpenCode adding OpenCode Web. But it's easier and faster to test product-market fit for dev tools with a TUI. And plenty of your users will stay there, even after you launch something else.
These were using 66GB compared to what, a few KB/MB in ncurses? I can run NetHack/SLASH'EM on a 30-year-old computer. React is a joke, and there are ports of ncurses to every OS.
My experience as a developer (with a preference for simplicity):
- CLI by default
- if I need a GUI, but no access to the local system: web
- if I need a (restricted) GUI with access to the local system: TUI
- else: either start a local web server, or, if nothing else works, go for a GUI toolkit
Totally out of fashion today, but think of TN3270. Rather than "streaming," those interfaces were forms-based and heavily keyboard-driven. This could easily be mimicked by a GUI, but keyboard shortcuts have become an afterthought.
I still meet users today who miss those old workflows. But they express it as the "old text interface," aka a TUI. If you listen to them, you realize they mean blazing fast and shortcut-driven. When you work in data entry you care about speed, not animations.
Any beginner likes eye candy. The veteran has stopped caring.
>I've always been a bit mystified by the popularity of TUIs. To me, the power of the terminal is the streaming model.
Ever used Emacs? Or Vim? Or Mutt? Or Borland's old IDEs?
The power of the terminal is also in its ubiquity, trivial connection to remote systems, and the absence of the mountains of GUI cruft that a TUI app could just as well have had.
I'd agree with this assessment. Moreover, if developers were to stick with the eminently satisfactory CUA (IBM's Common User Access) interface standard and further regularize that then things would be much easier. https://en.wikipedia.org/wiki/IBM_Common_User_Access
If developers want to experiment with various UI configs then let them but keep a CUA in the background that can be called upon by machines and humans alike. (Unfortunately, ergonomics has never been a strong point for developers.)
The hacker aesthetics of TUIs, especially around coding agents, is undeniable and the fact that you can mostly run them on any shell (even remote) is neat.
But the UX? If the goal is to not read a single line of the code churned out by these agents -- and perhaps that is the point -- then they are fine: type the prompt and cross your fingers.
Anything else that requires reading and changing code needs an IDE of sorts. I am not saying you cannot make your workflow work with TUIs -- plenty of people do. It is just not as good or as flexible as a full desktop application.
TUIs were supposed to be the simple option. Now they're just web apps wearing a terminal costume.
...but without the web's accessibility options, without good text editing, with very basic customisation options, requiring trusted compute instead of working in a sandbox...
They're a long way from web apps, far worse on most axes.
This is a well documented issue, TUI vs. windows.
Back in the 90s when most SAP systems switched from AS/400 terminals to Windows NT, people reported massive losses in productivity.
I've never worked on SAP, my mother did. And basically, she went from a fully tabular, function-key based oriented workflow, to holding a mouse, moving around and clicking a lot (tabbing and F keys were lost for many functions).
She showed me how she could go ESC ESC F4 F3 TAB TAB and move across the whole system at super speed. And this was a terminal, not the actual system!
The short of the story is this:
Windows-based applications work best for discoverability and new users.
Terminal-based applications work best for fast, memory-based navigation and power users.
That's a problem with the specific GUI, not GUI as a concept. Good GUI frameworks should be built for predictability and keyboard-driven fast paths, and have this included by default so you don't have to make these decisions for each app.
In fact, all successful applications for professionals/power users are built with fast paths in mind. Even Microsoft's ribbon, which gets a lot of hate for some reason, is an example of that: it's keyboard-driven, customizable, and discoverable at the same time.
In fact, just about everything in Windows (not apps, since those can be created by 3rd party developers who may or may not care, but the OS) can be operated by keyboard: login, start menu, settings, even ancient tools like Event Viewer.
Except, strangely, for setup: if your laptop's touchpad/trackpad driver doesn't load during the early select-a-partition screen, you seem to get stuck on later screens, like when you are required to connect to Wi-Fi.
I don't think the issue is using declarative UI frameworks, it's that the rendering engines these frameworks are outputting to are not taking accessibility into account.
Does a terminal even have any accessibility support, though?
Yes. Well, for CLIs, yes.
The article mentions several TUI programs that render in an accessible way for screen readers.
Oh well, I didn't read the article, as is tradition.
Totally. I had a colleague who was a pretty awesome programmer and was completely blind. When I first met him, he was working on a braille 3270 terminal. Those IBM terminals were capable of all sorts of stuff.
I think it’s clear from the article that declarative UI could be done, with a correct implementation and some options to disable noise.
Clearly no one put thought into any of that. It was just “make terminal, but pretty”.
Are there any recommendations for testing accessibility for TUI applications? The article mentions speakup, but as far as I understand, you need a hardware synthesizer to use that. Is it also possible to use orca?
Hey Claude how do I get around signed binaries and App Store hell created by the likes of Apple and Microsoft?
Maybe that’s why we have them? Tbh I don’t mind them; beats some space-inefficient bubbly UI showing virtually no info per screen.
There is no cross platform standard that accomplishes the goals that authors turn to TUIs to solve. There is no widely distributed remotely accessible interface that pops up GUI windows from a shell context that works everywhere.
Sure there is. It's called HTML.
Yes it's gronky.
But so are VT100 escape codes.
I'm sure there's a proposal for this somewhere, but I've always wondered if it wouldn't be most viable to just have a separate "reader mode" that replaces all the TUI elements with some sort of templated descriptive string of text, something like "Page one. Foo entry. ' bar'"
Seems a lot more viable than trying to get new standard escape codes adopted and outputting those alongside visual content that may be flickering erratically. It probably gets complex fast for more intricate UIs, but IMO it's really hard to defend TUIs for anything but relatively simple programs, as an in-between between a CLI and a native application.
> I'm sure there's a proposal for this somewhere, but I've always wondered if it wouldn't be most viable to just have a separate "reader mode" that replaces all the TUI elements with some sort of templated descriptive string of text, something like "Page one. Foo entry. ' bar'"
We have a term for that, it's called a CLI. For example, ed and ex are the historical CLI counterparts to vi.
I suppose that would just be an interactive CLI. Usually I think of them as one-and-done invocations like sed. It's definitely not an ideal mode as evidenced by the current popularity of vim over ed, but it's better than a program being unusable.
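The interactive-CLI fallback described here -- the ed-to-vi relationship -- can be sketched as a tiny command loop where each keystroke-command yields one plain, screen-reader-friendly string instead of a screen redraw. Everything below (the commands `n`/`p`/`r`, the page format) is a hypothetical illustration, not any real tool's interface:

```python
def reader_mode(pages):
    """A hypothetical line-oriented 'reader mode' for a paged TUI.

    Returns a handler: feed it single-letter commands ('n' next page,
    'p' previous page, anything else re-reads), and it returns a flat
    descriptive string a screen reader can speak linearly."""
    state = {"page": 0}

    def handle(cmd):
        if cmd == "n" and state["page"] < len(pages) - 1:
            state["page"] += 1
        elif cmd == "p" and state["page"] > 0:
            state["page"] -= 1
        i = state["page"]
        # One plain line per interaction -- no cursor movement, no redraw.
        return f"Page {i + 1} of {len(pages)}. {pages[i]}"

    return handle
```

A real app would read these commands from stdin in a loop; the point is that the same application state can drive either a full-screen view or this linear one.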
I wouldn't have assumed a TUI is accessible just because it's on the terminal. I guess the author encounters people who do.
I am surprised, though, that something like "turning off the cursor" enhances the accessibility.
Mitchell Hashimoto has a great response on lobste.rs https://lobste.rs/s/ifbdw1/text_mode_lie_why_modern_tuis_are...
> It isn't fair to blame TUIs.
> The real problem is that pretty much the whole stack has a terrible AX story.
> First, most GPU-rendered terminal emulators don't engage in system-provided accessibility APIs AT ALL. Because text is GPU-rendered, AX tooling can't "read" it, it just shows up as an image. This applies to Kitty, Alacritty, WezTerm. My own terminal Ghostty is AX-readable (on macOS), and so are others like iTerm2 and Terminal.app (which admittedly do it better than me, we have gaps to fill).
> Second, there are no terminal sequences or initiatives at all for TUIs to communicate AX information to the emulator, so the emulator itself can't do much more than display a blob of text to AX tooling. We need the equivalent of ARIA-style annotations but for terminal cells, runs, and regions. No such initiative exists. Even if TUIs do great things with the cursor, this is going to bite a lot of use cases.
> As an example of combining the above, I've been working on something with Ghostty where we integrate semantic prompt (OSC133) and AX APIs so that we can present each shell prompt, input, and command as structurally significant to AX tooling (rather than simply a text box where the cursor is somewhere else). This shows the importance of the relationship between terminal specs (OSC133), TUIs (which must emit OSC133), and terminal emulators (which must both understand OSC133 AND communicate it to AX APIs).
> The whole stack is rotten. And no one is earnestly trying to fix it (including me, I have limited time and I do my best but this is a WHOLE TOPIC that requires a huge amount of time and politicking the ecosystem and I don't have it, sorry).
> Bonus: a simultaneously awesome and horrible reality is that AI is really helping to improve AX here. A lot of AI tooling uses/abuses AX APIs to make things happen. How is OpenAI reading your list of windows, typing into them, etc.? Accessibility frameworks! So a lot more apps are taking AX integration a lot more seriously, since it's table stakes for AI to use them... Sad that it requires that, but the glass-half-full view is that more software is doing it.
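The OSC 133 "semantic prompt" markers Hashimoto mentions come from FinalTerm's scheme (since adopted by iTerm2, WezTerm, Ghostty, and others): a shell integration script frames the prompt, the user's input, and the command output so the emulator can tell them apart. A rough sketch of emitting those markers (helper names are mine; the escape sequences follow my understanding of the spec):

```python
# OSC 133 marker codes: A = prompt start, B = prompt end / input start,
# C = command output start, D;<exit> = command finished.
OSC, ST = "\x1b]", "\x07"  # BEL is one valid terminator for OSC sequences

def mark(code: str) -> str:
    """Build a single OSC 133 marker sequence."""
    return f"{OSC}133;{code}{ST}"

def framed_prompt(prompt: str) -> str:
    """Wrap a prompt string so the emulator knows where it begins and
    where user input begins -- structure an AX layer could surface."""
    return mark("A") + prompt + mark("B")

def framed_output(output: str, exit_code: int) -> str:
    """Frame a command's output and report its exit status."""
    return mark("C") + output + mark(f"D;{exit_code}")
```

This is the TUI/shell side of the relationship described above; the emulator still has to parse these markers and hand the structure to the platform's accessibility APIs.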
What is AX in this context?
>> AX originated in reference to UX (User Experience) and stands for Accessibility Experience. AX is also sometimes used as a synonym for accessibility.
The language of accessibility – Staffnet | ETH Zurich https://ethz.ch/staffnet/en/service/communication/digital-ac...
accessibility. Also regularly seen as a11y.
Accessibility, if I am not mistaken
I have been saying that since Vim and Emacs. Vim and Emacs are good because of their keyboard-driven input systems, not because of the TUI. The Emacs GUI is proof of that; Vim doesn't really have a proper one.
"TUI" is for people who cant learn text commands: looks pretty, easy to use, not flexible and not powerful. just use a GUI already.
For accessibility, edbrowse https://github.com/cmb/edbrowse shines, and so does anything CLI, such as sic + BitlBee to talk with chat platforms, and maybe pjsua for SIP. These can be combined with yasr (a terminal screen reader) + Speech Dispatcher as the TTS.
NCurses can be hooked with ease; IDK about the rest.
I agree with some of the sentiment here, but why is it presented in an AI slop format? It's really a self-defeating message.
I use mc as a file manager. I have no idea what you are talking about.
admittedly mc is far from being a "modern" TUI
mc adheres to the NC standard, so you already have an extremely accessible system. The whole UI knowledge transfers easily among mc, Total Commander, or NC from 1988 or whatever. Not the case with most TUIs. I had to use aptitude and git's text-mode merge tool recently, and both had terrible accessibility, not to mention entirely different designs. I'm sure I'd get good at them once I read their manuals, but they were extremely non-intuitive and hard to explore.
The "T" in "TUI" means "Terminal" not "Text".
I'm slowly working on this, trying to figure out what works as I add accessibility to TUIs. Having better structured information on screen feels so so so compelling to me.
I wish the terminal-wg was more active. There's a bunch of weird odd OSC's folks have tried to make for enhancing structure of the terminal, for various ways to emit more layout-coupled semantic info. Accessibility APIs are great but in most forms a huge chunk of their capabilities feel pretty disconnected from the actual drawing on the screen, are somewhat a parallel construct to what's on screen. Using OSC to layer in more information about what is being drawn feels righter.
Two examples, collapsible regions, semantic prompt regions, https://gitlab.freedesktop.org/terminal-wg/specifications/-/... https://gitlab.freedesktop.org/terminal-wg/specifications/-/...
But in general feels like, for all the TUI interest, not many folks are about and working together to actually figure out how to advance the terminal itself.
I am not sure it is helpful to shoehorn terminals into being a second web. The terminal's strength is its simplicity.
For example, I really dislike mouse support in TUIs. 100% of the time I used the mouse in a TUI, I wanted to copy a piece of text. If the TUI hijacks the mouse and does something different with it (e.g. Vim switching into visual mode), that is just annoying.
Of course a11y is important. But it barely works on the web, and we won't get perfect semantics in the terminal without a lot of work. I say the better option is to strip the experience down to the parts that work well.
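The "mouse hijacking" complained about above is a TUI turning on the terminal's mouse-reporting mode: while it is active, clicks and drags go to the application instead of the emulator's native text selection. A minimal sketch of the DECSET sequences involved (the wrapper function is a hypothetical illustration):

```python
import sys

# DECSET 1000 turns on basic button-event tracking; 1006 switches to
# the SGR encoding for coordinates. The matching 'l' forms turn both
# off, returning selection to the terminal emulator.
MOUSE_ON = "\x1b[?1000h\x1b[?1006h"
MOUSE_OFF = "\x1b[?1000l\x1b[?1006l"

def run_tui(app) -> None:
    """Run a (hypothetical) TUI callable with mouse reporting enabled,
    restoring normal selection even if the app raises."""
    sys.stdout.write(MOUSE_ON)
    try:
        app()
    finally:
        sys.stdout.write(MOUSE_OFF)
```

A TUI that forgets the `finally` leaves your terminal with a hijacked mouse after it exits, which is exactly the annoyance described.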
> If the TUI hijacks the mouse and does something different with it (e.g. vim switching into visual mode) that is just annoying.
tmux is great for this. Just press the tmux prefix followed by [ to enter copy mode, and the cursor can select whatever text is in the window.
As I said before [0], the same web developers who ruined the web are now bringing their JavaScript/TypeScript React mess into terminals, where it is not needed.
[0] https://news.ycombinator.com/item?id=47364817
You’re a bit late to the game if you think that’s new. That has been a thing for the past 10y or so
Seven years later, nothing has changed [0] -- just more of the same web slop, now fully taking over its next target.
[0] https://news.ycombinator.com/item?id=20503158
React is software development cancer, and it just entered metastasis. It can't be cured anymore; it will spread throughout the entire stack and kill it from the inside. We already have it on the web, on mobile, on Windows 11, and now it's coming for the terminal emulator.
What do you think React is?
It's true though. I accidentally put a React component in my Postgres schema and all the data was dropped. It really poisoned the whole stack.
JavaScript already has a dedicated instruction on some ARM processors.
Maybe someone should come up with AI for blind people. TUAIs :-)