I'm curious as to how this differs from an older project that seems to solve the same problem:
https://github.com/scheme/scsh
This is a common question :)
Schemesh is intended as an interactive shell and REPL: it supports line editing, autocompletion, searchable history, aliases, builtins, a customizable prompt, and automatic loading of `~/.config/schemesh/repl_init.ss`.
Most importantly, it has job control (CTRL+Z, `fg`, `bg` etc.) and recognizes and extends Unix shell syntax for starting, redirecting and composing jobs. An example:
find (lisp-expression-returning-some-string) -type f | xargs ls 2>/dev/null >> ls.log &
Scsh has none of the above features. As stated in https://scsh.net/docu/html/man-Z-H-2.html#node_sec_1.4
> Scsh, in the current release, is primarily designed for the writing of shell scripts -- programming. It is not a very comfortable system for interactive command use: the current release lacks job control, command-line editing, a terse, convenient command syntax, and it does not read in an initialisation file analogous to .login or .profile
I really like how you don’t sacrifice the command-line-first shell feel, and escaping into a sane language with real data structures is literally one character away.
Rather than the tclsh way of saying “we’ll just make the Lisp seem really shelly”, which is a dud to anyone who is not a Lisper.
Now, it’d be really cool if schemesh had a TUI library at the maturity level of Ratatui.
So... it sacrifices sub-shell syntax, with parentheses being hijacked for Scheme. Have you also lost $(...) shell interpolation as the saner alternative to `...`?
It does not sacrifice sub-shell syntax: it is fully supported. I just had to rename it from ( ... ) to [ ... ] to avoid conflicts with ( ... ), which switches to Lisp syntax.
Also, both $(...) and `...` shell interpolation are fully supported.
The only thing I intentionally sacrificed is shell flow control: schemesh shell syntax does not have the builtins 'case', 'for', 'if', 'while', etc.
In practically all examples I tried, escaping to Scheme for loops and conditional code works better: it avoids the usual pitfalls related to shell string splitting, and it usually results in more readable code too, at least for a Lisper.
Note: there's also some additional parsing logic to distinguish between sub-shell syntax [ ... ] and wildcard patterns, which use [ ... ] as well.
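For example, a quick sketch in plain Scheme (nothing schemesh-specific) of the loop-plus-condition work that a shell `for`/`if` would otherwise do - each filename stays a single string, so embedded spaces can never split into separate words:

    (for-each
      (lambda (f)
        (when (file-exists? f)
          (display f)
          (newline)))
      '("notes.txt" "draft 2.txt"))   ; the embedded space is harmless here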
Then [ shadows the sh executable aliased from ‘test’, so that you can no longer do

    [ -f foo ]

but have to write

    test -f foo

That's true.
In shells, "test" and "[" are often used after "if", as for example

    if [ -f foo ]; then ...; fi
Schemesh does not have a shell builtin "if"; you switch to Scheme for that:

    (if (file-exists? "foo") ...)

Thus the need for "test" and its alias "[" is reduced.

Also, "test" implements a mini-language full of one-letter operators: `-f FILE`, `COND1 -a COND2`, `COND1 -o COND2`, etc.
I really don't miss it, as I find the equivalent in Scheme to be more readable - and of course more general
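For example, rough Scheme equivalents of the `-a` / `-o` combinators (only rough: `test -f` also checks the file type, while `file-exists?` checks mere existence):

    (and (file-exists? "a") (file-exists? "b"))   ; roughly [ -f a -a -f b ]
    (or  (file-exists? "a") (file-exists? "b"))   ; roughly [ -f a -o -f b ]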
etc.

> This is a common question :)
It'd be really great if you could put your answer in the readme. It was the first question that came to my mind when looking at your project.
I'm looking forward to trying out schemesh!
SCSH is a shell embedded in Scheme, i.e. it's a Scheme library that lets you easily create unix processes in Scheme. Schemesh is Scheme embedded in a shell, i.e. it's a shell that lets you easily call Scheme code.
For example, if you type:

    ls

in schemesh you will execute the "ls" command and get a directory listing, whereas in scsh you will get the value of a variable named "ls".

[UPDATE] Also, as cosmos0072 notes in a sibling comment, schemesh has shell-like features like line editing, autocompletion, searchable history, aliases, builtins, etc.
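For comparison, launching a command in scsh goes through its process notation - roughly:

    (run (ls -l))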
As an author of an alternative shell myself, I really have to commend the effort and design of this one.
It’s one of the more impressive and genuinely interesting shells I’ve seen in a while.
How does this project compare to RaSH: Racket shell?
Rash for me seemed like the perfect blend of lisp (racket) and external commands.
https://youtu.be/yXcwK3XNU3Y?si=v0LuWWkqfoHvkaHl
Rash is excellent. I use it in production for a process that copies large parquet files on the daily by shelling out to the aws command, then processes them by shelling out to another command and then pushes the processed data over HTTP to another server using a Racket HTTP client[1]. My only complaint would be that the docs could use some cleaning up.
[1]: https://docs.racket-lang.org/http-easy/index.html
The docs are terrible, and I've always meant to go back and improve them... but it's never quite been a priority and I've never actually summoned the motivation to do it...
But it's always nice to hear when someone uses it and likes it despite that!
Rash and schemesh start from similar ideas: create a shell scriptable in some dialect of Lisp.
Rash has several limitations, sometimes due to design choices, that schemesh solves:
1. no job control
2. multi-line editing is limited
3. from what I understand, shell syntax is available only at REPL top level. Once you switch to Lisp syntax with `(`, you can return to shell syntax only with `)`. This means you cannot embed shell syntax inside Lisp syntax, i.e. you cannot do `(define j {find -type f | less})`
4. shell commands are Lisp functions, not Lisp objects. Inspecting and redirecting them after they have been created is difficult
5. Rash is written in Racket, which has a larger RAM footprint than schemesh running on vanilla Chez Scheme: at startup, ~160MB vs. ~32MB
6. Racket/Rash support for multi-language at REPL is limited: once you do `#lang racket`, you cannot go back to `#lang rash`
> 3. from what I understand, shell syntax is available only at REPL top level. Once you switch to Lisp syntax with `(`, you can return to shell syntax only with `)`. This means you cannot embed shell syntax inside Lisp syntax, i.e. you cannot do `(define j {find -type f | less})`
It's possible I misunderstand what you mean because I'm not sure what piping to less is supposed to accomplish here, but this is not true. The following program works just fine:
Nice! Then my point 3 above is wrong and should be deleted.
Yes, Rash has a variety of limitations. Let me give some more context to these:
>1. no job control
Racket is missing a feature in its rktio library needed to do job control with its process API, which Rash uses. At one point I added one or two other minor features needed for job control, but I ran out of steam and never finished the final one. It's a small feature, even, though now I don't remember much of the context. I hope I wrote enough notes to go back and finish this some day.
>2. multi-line editing is limited
I always intended to write a nice line editor that would do this properly. But, again, I never got around to it. I would still like to, and I will probably take a serious look at your line editor some time.
The design was intended as something to use interactively as well as for scripting. But since I never improved the line editing situation, even I only use it for scripting. After documentation issues, this is the most pressing thing that I would fix.
>3. from what I understand, shell syntax is available only at REPL top level. Once you switch to Lisp syntax with `(`, you can return to shell syntax only with `)`. This means you cannot embed shell syntax inside Lisp syntax, i.e. you cannot do `(define j {find -type f | less})`
As mentioned, this is not correct: you can recursively switch between shell and Lisp.
>4. shell commands are Lisp functions, not Lisp objects. Inspecting and redirecting them after they have been created is difficult
This one is a design flaw. I've meant to go back and fix it (e.g. just retrofitting a new pipe operator that returns the subprocess pipeline segment as an object rather than its ports or outputs), but, of course, I haven't gotten around to it.
>5. Rash is written in Racket, which has a larger RAM footprint than schemesh running on vanilla Chez Scheme: at startup, ~160MB vs. ~32MB
Yep.
>6. Racket/Rash support for multi-language at REPL is limited: once you do `#lang racket`, you cannot go back to `#lang rash`
So actually `#lang` is not supported at all in the REPL - neither in the Racket REPL nor in the Rash REPL. In practice, what `#lang` does is (1) set the reader for a module, and (2) set the base import for the module, i.e. what symbol definitions are available. With the REPL you have to do this more manually.

The REPL in Racket is sort of second class in various ways, in part due to the “the top level is hopeless” problems for macros. (Search for that phrase and you can find many issues with REPLs and macros discussed over the years on the Racket mailing list.) Defining a new `#lang` in Racket includes various pieces about setting up modules specifically, and since the top-level REPL is not a module, it would need some different support that is currently not there, and would need to be retrofitted for various `#lang`s. But you can start a REPL with an arbitrary reader, and use `eval` with arbitrary modules loaded or symbols defined.

My intention with a Rash line editor would also have been to make some infrastructure for better language-specific REPLs in Racket generally. But, well, obviously I never actually did it. If I do make time for Rash REPL improvements in the near future, it will just as likely be ways of using it more nicely with Emacs rather than actually writing a new line editor... we'll see.
I'm always sad when I think about how I've left Rash to languish. In grad school I was always stressed about publication (which I ultimately did poorly at), which sapped a lot of my desire and energy to actually get into the code and make improvements. Since graduating and going into industry, and with kids, I've rarely felt like I have the time or energy after all of my other responsibilities to spend time on hobby projects. Some day I would like to get back into it, fix its issues, polish it up, document it properly, etc. Alas, maybe some day.
LISPy REPLs are awesome.
Babashka is another amazing tool for interacting with a shell with clojure (or a very close dialect thereof).
https://babashka.org
Speaking of shell pipelines, what is the "right" way of implementing pipes?
- Elixir: data |> process(12) puts data as the FIRST arg of process (before 12).
- Gleam: data |> process(12, _) puts data as the "hole" arg ("_") of process.
So far so good, but these approaches are mainly just more convenient function calls - i.e., they don't have fancy error checking in them. Then you have Haskell:
- Haskell: >>= "binds" actions to guarantee execution order (even for actions that don't depend on the previous action's output!). This is fancier because it uses monads to encapsulate the computation at each step, and can short-circuit on errors.
I’m not sure that |> operators are the right analogy, but fwiw:
Clojure does either first or last position depending on the operator, and it offers lightweight lambdas similar to your second option
The natural choice for a language like Haskell is final position: the rhs of the |> will be partially applied, and |> has type a -> (a -> b) -> b
In R, things go in the last slot, I think, but most arguments on the right-hand side would be passed as keywords, so the ‘last’ slot would often be the first argument.
The whole point of Unix pipes is that execution is parallel so I’m not totally sure I get your point about guaranteeing execution order.
I think you're conflating "chaining function calls together" (aka "threading function calls") with unix pipelines, which are all about running separate programs in parallel & connecting their io streams together (with the kernel regulating the flow of data between them).
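A concrete way to see it: the stages run simultaneously, so a pipeline like

    yes | head -n 3

finishes immediately - head exits after three lines and yes is killed by SIGPIPE - which couldn't happen if each stage had to run to completion before the next one started.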
Threading functions together is basically about being able to write

    x |> f |> g

rather than:

    g(f(x))
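A tiny sketch of that threading idea in Scheme - plain function application, no processes involved (`pipe` here is a made-up helper, not part of any shell):

    ; thread a value through one-argument functions, left to right
    (define (pipe x . fns)
      (fold-left (lambda (acc f) (f acc)) x fns))

    (pipe 5
          (lambda (n) (* n 2))
          number->string)   ; => "10"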
So many shells, so little time.
For Janet fans, there's janetsh (https://github.com/andrewchambers/janetsh). Seems very elegant indeed.
Author here - thanks for linking! Unfortunately, I haven't had time to continue on this for a long while. I am certainly happy if people get inspiration from any ideas and continue on with it.
This is the first time I'm curious to potentially actually learn Lisp. Using Lisp as a shell language is where it's at.
Where's my book The Little Schemer? :D
By the way, that's a regionally cool name. I read it at first as "shemesh", which means "Sun" in Hebrew.
Lisp makes total sense as a shell scripting language to me.
Being able to switch back/forth without leaving the prompt is nice.
Excellent project! Thanks for sharing it here!
I'd just like to share my joy in using the Emacs shell (Eshell), which I find to be a wonderful fusion of the Unix shell and a Lisp REPL. You can enter commands with or without parentheses. For example:
    ~ $ echo hello
    hello

Or

    ~ $ (eshell/echo "hello")
    hello

Eshell comes with several built-in commands implemented as Elisp functions. For example:

    ~ $ which cd echo ls which
    eshell/cd is a byte-compiled Lisp function in ‘em-dirs.el’.
    eshell/echo is a byte-compiled Lisp function in ‘em-basic.el’.
    eshell/ls is a byte-compiled Lisp function in ‘em-ls.el’.
    eshell/which is a byte-compiled Lisp function in ‘esh-cmd.el’.
    ~ $ ls -l /etc/h*
    -rw-r--r-- 1 root wheel 446 2025-02-15 20:43 /etc/hosts
    -rw-r--r-- 1 root wheel 0 2024-10-01 2024 /etc/hosts.equiv
    ~ $ (eshell/ls "-l" (eshell-extended-glob "/etc/h*"))
    -rw-r--r-- 1 root wheel 446 2025-02-15 20:43 /etc/hosts
    -rw-r--r-- 1 root wheel 0 2024-10-01 2024 /etc/hosts.equiv

Of course, you can still run external commands like usual:

    ~ $ /bin/echo hello
    hello

Since TRAMP is an integral part of Emacs, you can switch between the local shell and remote shells transparently with simple 'cd' commands. For example:

    ~ $ echo local > /tmp/foo.txt
    ~ $ echo remote > /ssh:susam@susam.net:/tmp/foo.txt
    ~ $ cd /tmp/
    /tmp $ cat foo.txt
    local
    /tmp $ hostname
    mac.local
    ~ $ cd /ssh:susam@susam.net:/tmp/
    /ssh:susam@susam.net:/tmp $ hostname
    susam.net
    /ssh:susam@susam.net:/tmp $ cat foo.txt
    remote

In the second command, I redirected a file to a remote file system with the usual '>' redirection operator.

Notice how, in the sixth command, I switched from my local shell to a remote shell with a simple 'cd' command. With Eshell and TRAMP, working across multiple remote systems becomes transparent, seamless, and effortless! Best of all, I still have the full power of Emacs at my fingertips, making Eshell an incredibly smooth and powerful experience!
When eshell runs a pipeline of external programs, does it fork+exec them all in parallel and connect them with the requisite file descriptors? Or does it run each program sequentially, grabbing its output in its entirety before passing it onto the next program in the pipeline?
I thought it was the latter (but it's been a while since I looked at it).
Tcl's exec gets it right. R Keene's pipethread extension for tcl gets it even more right.
Just perusing the schemesh docs (haven't tried it yet), it looks like he got it right, as well
Eshell looks really powerful :)
Does it also have job control, and jobs as first-class objects?
In schemesh, you can do things like

    find / -type f | less &
    fg 1

and also create jobs as first-class Scheme objects:

    (define j {find / -type f | less})

Unfortunately, Eshell does not have job control. Quoting from <https://www.gnu.org/software/emacs/manual/html_mono/eshell.h...>:
> A command invocation followed by an ampersand (&) will be run in the background. Eshell has no job control, so you can not suspend or background the current process, or bring a background process into the foreground. That said, background processes invoked from Eshell can be controlled the same way as any other background process in Emacs.
For things like this, we would have to switch to something like M-x shell or even M-x ansi-term. In Emacs, we have an assortment of shells and terminal implementations. As a long time Emacs user, I know when to use which, so it does not bother me. However, I can imagine how this might feel cumbersome for newer Emacs users.
In fact, this is one of the reasons I think your project is fantastic. It offers some of the Eshell-like experience, and more, to non-Emacs users, which is very compelling!
> Unfortunately, Eshell does not have job control.
I wonder why, after all these years, nobody has added it?
If someone (not me) made a patch that did, would the GNU Emacs maintainers accept it?
I think it comes down to the fact that Emacs itself has facilities for managing background processes, and Eshell in general tends to defer to the surrounding editor for a lot of functionality.
So if there's an "emacs" way of doing things, generally eshell delegates to that, instead of rolling its own.
Note this part:
> That said, background processes invoked from Eshell can be controlled the same way as any other background process in Emacs
I haven't used Eshell much, but this makes a simple "command &" arguably much saner than in a traditional Unix shell.
I imagine that a new feature would be accepted only if someone can make it play nice with existing features. And in case of job control, I have a bad feeling about the complexity involved.
I think the TRAMP feature is particularly useful. Bash is sticky because it will be the default on any box. Learning a second syntax, or a set of subtle differences from bash, can become necessary when you are using shells on many boxes. TRAMP means you get Eshell on remote boxes (sort of, I suppose).
eshell is great. I sometimes wish eshell also existed outside of emacs.
I don't know, I kind of like having a boundary that separates my programming language from the shell, like Picolisp in/out/call, Elixir System.shell, PHP shell_exec, racket/port and so on.
And I do an awful lot of shelling out, usually as a poor person's FFI or concurrency, or to just interact with a chain of shell pipes.
I tried doing one of these too! With Chicken Scheme.
I got pretty far, but I abandoned the project when the computer I was working on was destroyed, and I hadn't committed and pushed the majority of the work.
I'm glad that someone is actually doing it though; a Scheme shell always seemed like it could have a lot of potential for scripting.
This is amazing! Did you think about packaging this for Guix?
https://guix-hpc.gitlabpages.inria.fr/guix-packager/ - try copy-pasting it in there yourself and see what happens :)
Can this shell do this (in some form):
(lisp expression) | <unix command> | (lisp expression)
[REWRITTEN FOR CLARITY]
Yes, although the current syntax is cumbersome - I am thinking about how to improve it.
The first part is easy. If you want to run something like

    (lisp-expr1 ...) | <unix command>

the current solution is to embed the Lisp expression in shell syntax, for example:

    echo (lisp-expr1 ...) | <unix command>

The second part, i.e. feeding a command's output into a Scheme function, is more cumbersome. If you want to run

    <unix command> | (lisp-expr2 ...)

the current solution requires (sh-run/string job), namely:

    (lisp-expr2 ... (sh-run/string {<unix command>}))

If instead you have a (lisp-expr2 ...) that reads from an integer file descriptor passed as argument - not a Scheme I/O port - you can write the redirection yourself.

[UPDATE] There is also a function (sh-redirect job redirection-args ...) - it can add arbitrary redirections to a job, including pipes, but it's quite low-level and verbose to use.

I found another, possibly simpler solution.
The functions (sh-fd-stdin), (sh-fd-stdout) and (sh-fd-stderr) return the integer file descriptors that a schemesh builtin should use to perform I/O.
With them, a Lisp expression in the middle of a pipeline can read from and write to the correct file descriptors directly. It should work :)

Could this be abstracted enough with the right macros to make a subset of useful lisp commands play well with the shell? It could be a powerful way to extend the shell for interactive use.
Yes, that's definitely feasible.
I am currently working on it; the macro will be named (shell-expr) and will replace the current experimental (shell-test).
[UPDATE] (shell-expr) is ready and kicking :)
Now you can write a (shell-expr ...) form directly inside shell syntax. An example of such expressions is one that writes to (sh-fd-stdout), or one that reads from (sh-fd-stdin).

I was thinking of a Lisp/Scheme-like frankenshell for a while. A REPL language (and especially a shell) should focus on ergonomics first - we're commanding the computer to do stuff here and now, not writing elaborate programs (usually).
In my opinion, the outermost parens (even when invoking Lisp functions), as well as all the elaborate glue function names, kinda kill it for interactive use. If you think about it, it's leaking the implementation details into the syntax, and makes for poor idioms. Not very Lispy.
My idea is something like:

    >>> + 1 2 3
    6

(And you would never know if it's /bin/+ or (define (+ ...)))

    >>> seq 1 10 | sum

Let's assume seq is an executable, and sum is a Scheme function. Each "token" seq produces (by default delimited by all whitespace; maybe you could override the rules for a local context, parameterize?) is buffered by the shell, and at the end the whole thing is turned into a list of strings. The result is passed to sum as a parameter. (Of course this would break if sum expects a list of integers, but it could also parse the strings as it goes.)

The other way around would also work. If seq produces a list of integers, it's turned into a list of strings and fed into sum as input lines.
The shell could scan $PATH and create a simple function wrapper for each executable.
Now to avoid unnecessary buffering or type conversion, a typed variant of Scheme could be used, possibly with multiple dispatch (per argument/return type). E.g. if the next function in the pipeline accepts an input port or a lazy string iterator, the preceding shell command wrapper could return an output port.
The tricky case with syntax is what to do with tokens like "-9", "3.14", etc. The lexer could store both the parsed value (if it is valid), and the original string. Depending on the context, it could be resolved to either, but retain strong (dynamic) typing when interacting with a Scheme function, so "3.14.15" wouldn't work if a typed function only accepts numbers.
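As a sketch of that dual representation in Scheme (all names made up):

    ; a token keeps the raw text plus the parsed value, when there is one
    (define-record-type token
      (fields text value))                  ; value is #f if text isn't a number

    (define (lex-token s)
      (make-token s (string->number s)))    ; string->number returns #f on failure

    (token-value (lex-token "3.14"))        ; => 3.14
    (token-value (lex-token "3.14.15"))     ; => #f, so fall back to (token-text ...)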
Reminds me of Tcl a bit.
Does it support other Scheme implementations? Or are there at least plans to do so?
Porting to a different Scheme implementation requires some effort: schemesh needs a good, bidirectional C FFI and an (eval) that allows any Scheme form, including definitions.
For creating a single `schemesh` executable with the usual shell-compatible options and arguments, the Scheme implementation also needs to be linkable as a library from C:
Chez Scheme provides a `kernel.o` or `libkernel.a` library that you can link into C code, then call the C functions Sscheme_init(), Sregister_boot_file(), and finally Scall0(some_scheme_repl_procedure) or Sscheme_start().
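A minimal sketch of that sequence in C, assuming Chez's scheme.h and a placeholder boot-file path (Sbuild_heap is the one extra call needed between registering boot files and starting):

    #include "scheme.h"   /* Chez Scheme embedding API */

    int main(int argc, const char *argv[]) {
      Sscheme_init(0);                              /* initialize the Scheme kernel */
      Sregister_boot_file("/path/to/scheme.boot");  /* placeholder boot file */
      Sbuild_heap(0, 0);                            /* build the initial heap */
      int status = Sscheme_start(argc, argv);       /* run the standard REPL */
      Sscheme_deinit();
      return status;
    }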
I am curious if this could also be implemented as a command named (
Similar to [ and [[
You mean a command or builtin `(` inside a traditional shell such as bash or zsh?
It would be quite limited:
its internal state would not persist between invocations,
and it would only be able to exchange unstructured data (a stream of bytes) with the shell.