bluejellybean 16 days ago

I say this as someone who has been heavily using the command line for the last decade: even if you "know" how to use a CLI decently well, go read this if you haven't. From only a couple of minutes of reading I found not one but two new tidbits that I had never even considered looking up. This information will completely change my daily levels of frustration when using a CLI. Very, very high ROI link.

  • jrib 16 days ago

    What were the two tidbits for you?

    Some notable ones for me:

    * curl cheat.sh/command will give a brief "cheat sheet" with common examples of how to use a shell command.

    * `sort -h` can sort the output of `du -h`

    * https://www.gnu.org/software/datamash/

    * https://catonmat.net/ldd-arbitrary-code-execution
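
    The sort -h tidbit is easy to try on its own; -h orders human-readable size suffixes (K, M, G) the way du -h prints them (GNU coreutils assumed):

```shell
# sort -h understands human-readable size suffixes, so du -h style
# output can be ordered correctly:
printf '2G\n10K\n1M\n' | sort -h
# -> 10K
#    1M
#    2G
```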

    • dspillett 16 days ago

      > `sort -h` can sort the output of `du -h`

      I've not read the article yet, but this is something that's new to me and probably shouldn't be. Hopefully I'll remember it next time it might be useful!

      Also, scanning the sort documentation for other bits I'd not remembered/known, I notice --parallel=n. I'd just assumed sort was single-threaded, but it is not only capable of multicore sorting, it does so by default. Useful to know when deciding whether to do things concurrently by other means.
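
      A quick sketch of the flag (GNU sort assumed; --parallel caps the thread count, which by default is chosen from the available processors):

```shell
# GNU sort is multithreaded by default; --parallel=N caps the threads.
# Sorting a million shuffled numbers still yields 1000000 as the largest:
seq 1000000 | shuf | sort -n --parallel=4 | tail -1
# -> 1000000
```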

k3vinw 16 days ago

I’ve been using the command line for almost 3 decades. This is really great! I found it covers basically everything I ever had to care about. Strikes a good balance in level of detail. Well done!

d4rkp4ttern 16 days ago

Working at the Command Line is a superpower especially when combined with auto-suggestions from tools such as zsh + oh-my-zsh.

I would be completely lost if I didn’t have the auto suggestions as my “second brain” !

One problem though: to run a variant of an existing previous command, I find myself editing super long command lines, i.e. bringing up a previous command by accepting the auto-suggestion, then Ctrl-b through it to make an edit and run a new variant. This made me wonder: is there some tool that presents a TUI type of 2-dimensional interface that lets me navigate through the various options using arrow keys etc.?

  • nequo 16 days ago

    Not the TUI you are asking for, but TIL about Ctrl-X Ctrl-E from this link, which starts up your text editor to edit the current command line.

  • mejutoco 16 days ago

    I like zsh, and I want to mention the fish shell auto-completion.

    I start typing anything contained in a command in my history (not necessarily the beginning) and can flip through the matches with the up and down arrows. It is nicely highlighted, and it's the default, so I do not need to spend much time fiddling with settings.

  • evulhotdog 15 days ago

    I can’t directly answer your question, but in iTerm2 I am able to Option+click where I want the cursor, and it will put it exactly there. This lets you edit annoyingly long commands with more ease, and it's (kind of) a GUI feature in a sense.

  • MonkeyClub 15 days ago

    > is there some tool that presents a TUI type of 2-dimensional interface

    In an Emacs shell that's the default behaviour: you can edit a previous command, and by pressing Enter it gets copied to the command line and executed.

  • d4rkp4ttern 16 days ago

    A trick I should mention: when setting up a new machine, copy over your .zsh_history file, and your brain gets ported to the new machine as well :)

chrisweekly 14 days ago

Please, can anyone provide guidance for making Win10 CLI UX tolerable? After more than 2 decades on macOS, very comfortable w/ customized zsh in iTerm, I'm now unavoidably working in Windows and hating it. Sad to discover my vague perception of Windows as a 2nd-class citizen (or 3rd-world country) is all too accurate. Running git-bash in Terminal, surfacing any meaningful git status in my $PS1 incurs multisecond latency. Surely there's a better way. Right?

  • sillyapple 11 days ago

    For a good PowerShell experience I use https://starship.rs/ which includes git info. I use the new windows terminal with the font/colors/etc set just so. For a package manager I like using https://scoop.sh/ and for anything missing there chocolatey usually has it. Good luck, there's more good stuff out there but it's hard to find.

    • chrisweekly 11 days ago

      Cool, thanks. I might give starship a try.

  • jon_adler 14 days ago

    I prefer macOS; however, my work machine is Win10. To improve matters, I use Cmder for my terminal with WSL2/Ubuntu. It isn’t perfect, but it isn’t awful either.

Calzifer 15 days ago

Since the list mentions grep -o (--only-matching) and regexes, here is my preferred trick to extract a specific part of a line and work around grep's inability to output only a capture group.

Imagine you have a line containing

  prefix 1234 suffix
and want to grep only the number, but need to match the prefix and/or suffix so as not to grab some other, unwanted number.

Can be solved with

  grep --only-matching --perl-regexp 'prefix \K\d+(?= suffix)'
The magic is '\K', which in effect resets the start of the match. [1] So anything before \K must match but is not output with -o (and without -o, anything before \K is not highlighted).

And for the suffix: (?=...) is a positive lookahead [2] which checks that, in this case, ' suffix' follows the number, but does not include it in the match.

So the output for

  echo "prefix 1234 suffix" | grep --only-matching --perl-regexp 'prefix \K\d+(?= suffix)'
is only

  1234
PS: instead of \K the prefix could also be wrapped in a positive lookbehind (?<=prefix )
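
The lookbehind variant can be checked the same way (this assumes a grep built with PCRE support, i.e. one that accepts -P):

```shell
# Positive lookbehind instead of \K; both prefix and suffix are matched
# but excluded from the printed output:
echo "prefix 1234 suffix" | grep -oP '(?<=prefix )\d+(?= suffix)'
# -> 1234
```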

[1] https://perldoc.perl.org/perlrebackslash#%5CK

[2] https://www.rexegg.com/regex-lookarounds.html

  • tatref 15 days ago

    Isn't it simpler to use perl?

    perl -pe 's/.*(capture this).*/$1/'

    That way you can capture multiple groups and change the format of the output. You can also do simple calculations if you capture numbers.

    • tryauuum 15 days ago

      for me no, anything perl-related is not simple

      I also tried putting this question to GPT3:

      > Isn't it simpler to use perl?

      > It is simpler to use Perl, but it is not as efficient as using C or C++.

      • jraph 15 days ago

        Nice one, I'll try to counter this garbage GPT3 answer:

        - I don't think the regex engine in Perl is implemented in Perl. It's probably implemented in C/C++, like grep. libpcre is in C anyway.

        - Even if grep is/were more efficient, you might have consumed more time and energy thinking, typing and running "grep --only-matching --perl-regexp 'prefix \K\d+(?= suffix)'" than the suggested perl solution

        - I might have consumed even more energy typing this reply. My computer is there, waiting for me to type, not doing much.

        • tryauuum 15 days ago

          shorter version of the grep command is

              grep -oP 'prefix \K\d+(?= suffix)'
          
          but yeah, I know it's easier to me because I did read man page at some point (while carefully avoiding perl manuals)

          actually even that is not true, perldoc on regular expressions really helps even grep users

          • jraph 15 days ago

            In all honesty, I'm more likely to remember this form than the Perl one, and once it's in the shell history, any will do anyway :-)

AndyKluger 16 days ago

I look forward to reading this, but

> Bash is powerful and always available

As someone who often works with Alpine Linux, this is an annoyingly popular myth.

  • gazby 15 days ago

    As someone who often works with Bash, Alpine Linux is an annoyingly popular distribution.

henrik_w 15 days ago

In connection with history, you can use !$ for the last argument, but you can also use escape-dot. I use that quite a bit (and escape-dot is slightly easier to type than !$).
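
In scripts, where history expansion is normally disabled, bash's special parameter $_ plays a similar role to !$, holding the last argument of the previous command:

```shell
# $_ expands to the final argument of the previous command (bash assumed):
mkdir -p /tmp/histdemo
cd "$_"    # same as typing /tmp/histdemo again
pwd
# -> /tmp/histdemo
```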

Also worth pointing out that you can modify a command from the history list before running it, by typing !xxx:p (the :p prints the command instead of re-running it, which bare !xxx would do). Then press arrow-up and modify it before running.

https://henrikwarne.com/2018/08/11/my-favorite-command-line-...

  • layer8 15 days ago

    Escape-dot has the benefits that (a) you immediately see what you’re getting and (b) you can repeat it to get the last argument of earlier commands.

    • henrik_w 15 days ago

      Yes, (a) is good, and (b) I didn't know about - cool, and thanks!!

teddyh 16 days ago

> tail -f (or even better, less +F),

While “less +F” is useful, it will also eat all of your memory if left running.

  • Calzifer 16 days ago

    The less variant is good if you only want to follow for a short time or check for new output once since it can be enabled/disabled in less itself with 'F' and Ctrl+c and then you can again scroll and search in less.

    And regards tail. I think most users want tail -F instead of tail -f in most cases. Lowercase f follows the file descriptor while uppercase F follows the file name.

    So with -f you can still follow a file if it is moved or deleted (though I can't imagine many cases where you would want to keep following a deleted file that something is still writing to).

    With -F you can follow a file which does not exist yet (and will start following when it's created) or when a logfile is rotated continue following the new logfile.
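
    A sketch of the rotation behaviour (GNU tail assumed; it re-checks the file name roughly once a second, hence the sleeps):

```shell
# tail -F keeps following the *name* across a log rotation:
echo first > /tmp/rot.log
tail -F /tmp/rot.log > /tmp/rot.out 2>/dev/null &
TAIL_PID=$!
sleep 1
mv /tmp/rot.log /tmp/rot.log.1   # rotate the old log away
echo second > /tmp/rot.log       # a writer creates a fresh file
sleep 2                          # give tail time to notice the new file
kill "$TAIL_PID"
cat /tmp/rot.out                 # both lines, from both incarnations
```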

vaughan 16 days ago

It's interesting how dev tools went from text-based in the 80s to GUI in the 90s and then back to text-based. Besides the terminal, think of Markdown vs WYSIWYG, Visual Studio/Xcode vs Ruby on Rails, YAML config files vs XML with a visual editor, etc.

The terminal is good because it's unconstrained. But most of the time I would prefer a GUI interface.

We just need to blend the two approaches better and always have a terminal fallback.

  • layer8 15 days ago

    Not sure if it really “went back to”, but an important reason is that Microsoft dropped the ball on consistent and power-user-friendly Windows GUI, and the year of the Linux desktop remains perpetually in the future.

  • sublinear 16 days ago

    Strongly disagree. GUIs are just a distraction. The same CLI knowledge can be used to write scripts for common tasks.

    GUIs don't add any value beyond visually showing some structure, but CLIs can do just the same and often do.

    Do you have any examples of tools where the GUI is more useful than the CLI, and could you explain why that is?

    • dtgriscom 16 days ago

      > GUIs are just a distraction.

      That's an unnecessarily polarizing statement.

      No one paradigm serves all needs. I use GUIs if the tool needed is complex, and I'm not familiar with it; a well-done GUI is much more discoverable than a CLI.

      I also use GUIs if the output is more than one-dimensional. Image editors are an obvious case, but how about spreadsheets? And, I love SmartGit because it shows the log and diffs in a much more intuitive and interactive way than even tig.

      Note that I'm a bash-boy from way back, and spend time every day using it, interactively and in scripts. CLIs are great when they match my tasks. But they aren't the be-all and end-all.

      Beyond the differences in task, I'm sure there are people whose needs are consistently best-served with CLIs (you sound like one). Just like there are people who consistently go to GUIs for their tools.

      So: just because you view the world of tools in a certain way doesn't mean everyone else should as well.

      • vaughan 16 days ago

        Git is a good example because most tools will have a command-palette that prints out the git commands they are running to retrieve data to render.

        Otherwise they are using libgit2.

        It's interesting to think about the difference between an API as a library vs CLI / REPL. Often when I am building a library, for testing I usually would like almost every function to be runnable as a CLI.

        Anytime someone is doing some piping (joining commands) or shell-scripting, it usually could also be its own script with a CLI. Many applications could also just be shell scripts piping stuff together which I think is actually what the unix people envisaged. Starts making you ask: why are some things CLIs and not everything. Lots to think about...

      • ngcc_hk 15 days ago

        Yes. Once again, And, not Or. Even playing games you use the keyboard a lot, not just graphical input, even though you live in a graphical world. Same with editing, etc. Both.

    • yakubin 16 days ago

      Debuggers. A GUI debugger can show you a watch window with some variables and you can see it change in real time as you step through code, without you having to explicitly run the print command on each step for each variable, and without leaving a mess of historical values on your screen. Thanks to that, observing state as it changes costs less effort. Instead of manually asking all the time, you passively observe. It means greater productivity for the user of the debugger. And it goes beyond the watch window: source preview, disassembly, registers, hexdump of memory…

      And obviously: editors. Unless you’re using ed, everybody’s using GUI or TUI editors. And TUI is just a poor man’s GUI. All the benefits of CLIs are gone, while the things GUIs are good at are degraded.

      Not to mention anything related to graphics, photography, video…

      • vaughan 16 days ago

        I think what we need is unix/shell-like GUI.

        CLIs all have the same consistent user interface. Positional arguments, flags, and text output. On unix-like envs they also have pipes. This is why people like them. This is what makes people productive in them.

        Sure, modern GUIs have a few shared UX paradigms, but largely they are all different.

        I wonder how the unix philosophy applies to GUIs, and what the early developers thought about it. How would piping work in a GUI?

      • vaughan 16 days ago

        The debugger is also an interesting example. In IntelliJ, when using the debugger, I often use the command line interface too. Sometimes I have been debugging something and wished I could use a Node.js script to automate some stuff, or pipe the output through a Node script. The way the debugger is implemented in IntelliJ makes this a little difficult, certainly not as easy as piping. I think this is because it uses the gdb machine-interface API, which is different from the text command one.

        For source control, IntelliJ does actually print all the git commands it is running which is nice.

    • ogogmad 16 days ago

      Advantages:

      - Drag and drop

      - Discoverability of features

      - Higher density of information

      - Changes can be seen instantly

      - The use of a mouse or stylus when it's natural

      How are you going to use Photoshop or Illustrator from the CLI? (Or Gimp and Inkscape.)

      • mellavora 15 days ago

        cp is faster than drag-and-drop, especially when operating on multiple files (e.g. cp myphotos/2022-11-* someplace/), whereas with drag and drop you need to open both locations in windows, select the files in one, and drag them to the other. Then probably close one of the windows.

        Command-line is also discoverable, just not by the 'click on this' mentality, you have to be more curious. Which might be the better way to learn.

        You might be right about higher density of information, but do you get to choose the information? Compare ls, ls --color, ls -lht, ... How quickly can you switch between different representations? E.g. find big directories with du -h | sort -h.

        Changes can be seen instantly in the CLI, I don't understand what you meant by this.

        The use of a keyboard when it is natural.

        imagemagick

        With the added advantage that all of the above, because it is text-based, is: recordable, repeatable, searchable, scriptable.

        GUI, not so much.

    • jerpint 16 days ago

      I used to be of this opinion. Now I think more “GUIs are great once you’ve mastered the CLI and know what underlying operations you want to execute”. A GUI I discovered recently that I really like is docker-desktop. I used to do everything from the CLI. The gui gives me a much better overview of everything. If I need to dive in to the CLI, I know exactly where to go.

      • vaughan 16 days ago

        The problem is that when you need to perform a task on some data being rendered by the GUI that is not supported by the GUI. Usually bulk tasks. Like, for all your docker containers, run a certain command.

        A compromise is that GUIs should print the commands that can be run to get the output they are rendering and to perform the actions they are doing. Like a little command palette at the bottom of the window.

        Then the user can always break out into terminal.

    • massysett 16 days ago

      > Do you have any examples of tools where the GUI is more useful than the CLI, and could you explain why that is?

      Browsing photographs.

    • vaughan 16 days ago

      The other day I was doing some `make` stuff. I was passing in a bunch of env vars and flags. I wanted to tweak them between each run. I would have preferred to have check boxes to enable/disable certain flags. Rather than copying into

      Then in the output I have a bunch of make tasks and child makefiles running. I care about the output of each makefile as it runs, but would prefer it to collapse when it's finished; otherwise it's too difficult to see each step of the build process. A terminal cannot do this. Xcode does it kind of well.

      At the end of build too, when it reports errors, I'd like to jump up to the errors.

      Almost every command output I would prefer to view as a GUI table or as a webpage.

      The problem is that then, instead of just printing output, now I am dealing with a huge mess of frontend frameworks and build toolchains.

    • ovao 16 days ago

      One example that comes to mind is a GUI for interactive rebasing, which lets you re-order commits with a drag-and-drop interface. I’m thinking specifically of the one that (I think) is included with the Git ReFlow VS Code extension.

      True, a CLI tool could be made to mimic the same thing the GUI variant does in most respects, but at that point you’ve simply re-implemented a GUI, just with all the interface restrictions imposed by the shell.

      I’d agree that apps in general should promote some level of scriptability, and letting users drop down into the CLI is a great option for that. I’d just make an argument for giving CLI users an option to “rise up” to a GUI where it makes sense.

      • suprfnk 16 days ago

        > One example that comes to mind is a GUI for interactive rebasing, which lets you re-order commits with a drag-and-drop interface.

        With `git rebase -i <some_commit>` on the command line, you get a list of one commit per line which are trivial to re-order in a text editor. It's probably faster than a GUI too, if you're at least a bit efficient in a text editor.

        ---

        This is not to say that there aren't valid uses for a GUI, I think there are, but re-ordering commits is not one of them, I'd say.

    • dahart 15 days ago

      > Do you have any examples of tools where the GUI is more useful than the CLI, and could you explain why that is?

      Dunno if you’re just having some Thanksgiving fun, since the question of GUI vs CLI is mostly settled and moot, and this debate is irrelevant, but I’ll take your comment literally and respond as though you are serious.

      So are you talking about shells, or any programs at all? Are you talking about programmers or all software users? What makes a GUI, exactly? (Are text menus CLI or GUI? Is CLI defined by REPL? Is Logo a CLI or GUI? What about vi, nano, or ddd?)

      Speaking as someone with a lot of love for the shell, and few decades of experience with it, it seems to me like your question assumes a rather extreme position that doesn’t seem to be well supported by most people’s usage, even if we’re talking only about programmers. If the CLI is strictly better then why do people tend to prefer GUIs? That question needs a serious answer, ala Chesterton’s Fence, before dismissing GUIs. Make sure to thoughtfully consider learning curve, effort, discoverability, prior knowledge requirement, overall task and workflow efficiency, etc.

      Web browsing, such as visiting HN and commenting, is much much better in a GUI browser than via manually typed curl or wget commands. What if you had to send POST requests to get your comment up? What if people had to send GET requests to know your comment existed, and then another one to retrieve it? We wouldn’t even be chatting if this was a CLI, right?

      More or less all artist tools are better as GUIs, from Photoshop to video editing to 3d modeling to animation. If the output is graphical, then there’s no way around a graphical interface. Using the CLI for this isn’t just tedious and expensive, it’s far less efficient and effective.

      Text editors are not CLI REPLs, even vi and nano. Spreadsheets are all GUI. Desktop OSes are GUIs. Smartphones are GUIs. (Imagine making calls via CLI!) Programming GUI IDEs can be extremely effective, especially when refactoring and debugging.

      There’s also a gray area of text-based GUIs in the console window, like nano and GDB’s TUI mode, just for two examples. Even in CLI-land these things are easier to use and more efficient for some tasks than a pure REPL with text commands.

      Could you maybe explain why you claimed a GUI has no value? Are you aware of the history of debate on this topic, and of academic research on this topic?

    • anthk 16 days ago

      Diffs.

  • bitwize 16 days ago

    This is because as Linux started taking over the server space, it brought with it a culture of "stone knives and bearskins" tooling from old-school Unix -- and most applications these days are web applications.

    One of the major disappointments of recent years is seeing Microsoft backpedal from promoting Windows as the premier development platform and embrace (to extend or extinguish?) Linux with its command line bullshit. Windows in the late 90s was an entire environment based on composable components, linked together via discoverable interfaces. It had a software composition story far superior to Unix pipes. We should be able to build entire web and cloud infrastructures by wiring components together visually, and use visual inspection tools to debug problems as they happen in the pipeline. Not monkey about with YAML and JSON.

    • vaughan 16 days ago

      > We should be able to build entire web and cloud infrastructures by wiring components together visually

      100%. I think this is the next wave of development. Back to the future.

      > Not monkey about with YAML and JSON.

      The key difference this time needs to be two-way sync between code and visual.

      Having a serialized format (YAML/JSON) for wiring at some level is important though, but it should be easily modifiable by humans.

      In the last wave, we left this two-way syncing behind. An example of this is Xcode's NIB files and Interface Builder. NIBs weren't designed for humans to modify, so everything had to be done through IB, which made certain things a pain and created a lot of VCS churn.

      I've been thinking about whether we can achieve a two-way syncing (text <-> diagram) visual programming interface by interacting with the AST of a (subset of) an imperative language and using data flow tracing (via code instrumentation).

      I wonder what the minimal data structures needed are to represent most of configuration programming. Such as state machines, streams, graphs (dependency tracking).

    • necovek 16 days ago

      From the same mold as COM (which you are likely talking about) came CORBA, an international standard for "object" interoperability.

      Guess what came out of that?

      It's one of those systems that's perfect (as in perfectly abstracting away everything) but notoriously and impractically complex. COM is only slightly less so.

      The downfall of all the object-oriented approaches is that we don't really think in terms of objects, but in terms of the actions ("functions") we perform, which is much simpler as well. Basically, you don't see bread and say bread->eat() (or at least your doctor will tell you not to :)) but instead look for something to eat() and then stick bread in it once you find it (eat(bread)).

      • vaughan 16 days ago

            person->eat(bread)
        • necovek 15 days ago

          That's a good point.

          But as soon as you've got someone else who needs to eat (cat->eat(), dog->eat()...), it is still better to go with a functional approach:

            feed(person, bread)
          
          Basically, my abstraction was bad for that case, but it is so much easier to move from eat(bread) to eat(person, bread) and then rename that to "feed" than to introduce object inheritance/composition and think about the commonality between dog and person if all you want to do is feed them.

          Sure, there are cases where a purely OOP approach is not burdensome, and may even be easier, but the functional approach, as an explicit superset of possibilities, will usually be simpler and more understandable.

    • bradrn 16 days ago

      > Windows in the late 90s was an entire environment based on composable components, linked together via discoverable interfaces. It had a software composition story far superior to Unix pipes.

      I’ve never heard anything about this (probably because it was before my time); could you elaborate?

      • zwkrt 16 days ago

        I think the parent is talking about the COM interface. [1] PowerShell still has a legacy of object-oriented manipulation as opposed to text/line-based manipulation. I’m too much living in the Unix world to give more insight as to how well it functioned, though.

        https://en.m.wikipedia.org/wiki/Component_Object_Model

      • muststopmyths 16 days ago

        Sounds like COM. Comparing a programming framework to pipes seems a bit over the top though. I don't know anyone who'd seriously advocate building a large application solely from small programs piping their output to other small programs.

        I've already complained previously about the new Microsoft though, so I have sympathy with the underlying sentiment :)

teddyh 16 days ago

> Use grep . * to quickly examine the contents of all files in a directory

I prefer to use grep '' * since grep . * omits empty lines.
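
The difference is easy to demonstrate with grep -c, since '.' requires at least one character on the line while the empty pattern matches every line:

```shell
# '.' skips empty lines; the empty pattern matches everything:
printf 'alpha\n\nbeta\n' > /tmp/demo.txt
grep -c .  /tmp/demo.txt   # -> 2 (empty line skipped)
grep -c '' /tmp/demo.txt   # -> 3 (every line matches)
```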

yekm 16 days ago

Nice and short collection of useful commands. I am really surprised that there is only one very brief mention of GNU parallel.
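
For readers without GNU parallel installed, xargs -P gives a taste of the same idea (running jobs concurrently from a list of inputs); this is a rough stand-in, not a replacement for parallel's features:

```shell
# Run up to 4 jobs concurrently; sort the output since completion order varies:
printf '1\n2\n3\n4\n' | xargs -P 4 -I{} sh -c 'echo job {}' | sort
# -> job 1
#    job 2
#    job 3
#    job 4
```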

deafpolygon 16 days ago

Nice write-up. Useful information even if you've been on the CLI for centuries.

NikkiA 15 days ago

I find the suggestion of using `apropos` somewhat weird, since `man -k` has been functionally identical, easier to type, and easier to remember (for me at least) in the 45 years I've been using unix systems. And I think I've only come across one esoteric system (which was some unix on a PDP) that differed in any way between the two.

  • MonkeyClub 15 days ago

    I think the idea behind preferring `apropos` instead of `man -k` is that you can get to `apropos` with `apr<TAB>`, while the other invocation is longer (and for sloppy typists, the dash may prove problematic).

tryauuum 15 days ago

vimdiff can be quite cool when combined with the <() bash feature (process substitution, which presents command output as a file).

e.g. see color difference of installed packages on local and remote server

    vimdiff <( dpkg -l | grep -w ii | awk '{print $2}' | sort -V ) <( ssh REMOTE_SERVER dpkg -l | grep -w ii | awk '{print $2}' | sort -V )
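
The same <() trick works with non-interactive tools too; anything that expects file arguments can read from process substitution, e.g. plain diff (the package names here are made up for illustration):

```shell
# diff two command outputs without temp files (bash assumed for <();
# || true swallows diff's nonzero exit when the inputs differ):
diff <(printf 'pkg-a\npkg-b\n') <(printf 'pkg-a\npkg-c\n') || true
```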
jmclnx 16 days ago

> Learn basic Bash

In a way true, but if you want to work in the corporate world it should be "learn ksh". There is very little difference between them, and it will force you to write portable scripts. Using bashisms will make things a bit harder for you.

Most proprietary software companies like to buy uses ksh. I support a few of them now where I work, all uses ksh.

  • computerfriend 15 days ago

    > Most proprietary software companies like to buy uses ksh. I support a few of them now where I work, all uses ksh.

    I've never known companies to buy large software products written in a shell language, but on top of that I have rarely if ever come across ksh in a professional setting.

    I can imagine this being the case in some industry niche, but don't think "most" is appropriate.

  • nequo 16 days ago

    > Most proprietary software companies like to buy uses ksh. I support a few of them now where I work, all uses ksh.

    Why ksh over bash? Is this because they are not running Debian/Ubuntu/Fedora/RHEL?

  • layer8 15 days ago

    > Most proprietary software companies like to buy

    What does this refer to?

    • jmclnx 13 days ago

      For one, many SAP scripts I have seen use ksh on their servers. That is because they want to be compatible between Linux and commercial UNIX.