In case you want to run Justfiles in places where you can't install the Just binary (for whatever reason), I wrote a compiler that transforms Justfiles into portable shell scripts that have byte-for-byte identical output in most cases.
Fantastic! This solves my big fear around getting used to such a tool.
My work primarily involves 'nix boxes that have to be very locked down and will be left in place, basically untouched, for 20 years after I finish setting them up. Getting a reliable new binary of any sort onto them is quite difficult, not least because we have to plan around what far-future people will be able to discover when troubleshooting down the line.
We love just and are using it in all projects now. So great. Our typical justfile has ~20 rules. Here is an example rule (and helper) to illustrate how we use it in CI:
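Roughly this shape (recipe names and bodies here are placeholders, not our real ones):

    # run everything CI runs
    ci: _check-env lint test

    # helper recipe; the leading underscore keeps it out of `just --list`
    _check-env:
        @echo "checking environment..."

    lint:
        cargo clippy -- -D warnings

    test:
        cargo test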
This example is a bit contrived; more typically we would have a rule like "just lint" that you might call from "just ci".
One of the best features is that just always runs from the project root directory. Little things like that add up after you've spent years wrestling with bash scripts.
> Little things like that add up after you've spent years wrestling with bash scripts.
Can you please explain what you mean here? I looked at the GitHub examples and wondered why this would be preferable to Bash aliases and functions. I must be missing something.
Bash has a thousand pitfalls, and as you accumulate layers of scripting they start compounding. Little things like “what the hell directory is this command actually running from”, parsing input parameters, quoting rules, exit statuses, pipelining, etc.
Tools like just provide a very consistent and simple base to start with, and you can always still call a separate script, or drop directly into inline shell scripting.
Of course. Is that news to you? Not snark, I am genuinely surprised, assuming you asked seriously.
I moved to ZSH some years ago but even that is not good enough. I thought of using Fish at one point but just said "frak this" and started writing Golang for anything that's more than 20-30 lines of bash/zsh scripting. Or requires their weird list / array syntaxes for iterating over stuff. Can't ever remember that with a gun to my head.
Shell scripts can be used safely if you know how: have solid error handling, exit on error (set -e), write tests (BATS), and do a few other things to make sure they don't break. You are not gonna get the same performance with just or whatever new tooling there is just to run commands on your system.
> Shell scripts can be used safely if you know how to
That's the contention point though -- I learned and relearned shell scripting no less than 7 separate times and it always slips away because it's not something I practice every day. Ultimately I concluded it's not worth it because you mostly have to memorize super weird syntax and strange exceptions to rules. At one point I was just like "screw this" and went for Golang.
> You are not gonna get the same performance with just or whatever new tooling there is just to run commands on your system
That's very debatable, I'd bet my Go programs process various things either faster or with the same speed. But even if they are slower that's often not important because most scripts I ever wrote were throwaway. Those that stuck around I have polished and re-polished, including with the measures you enumerated.
That's a big if. I worked on a shell-based tool for a couple of years and eventually accumulated the know-how and toolset to write reliable code; but nobody else could contribute as the learning curve was too great.
I switched to Ruby for all new tools and never looked back. Performance is rarely a concern in this territory, and you can always offload heavy work to another process.
Performance in your shell script is a new one. Can you cite a real world example where that would ever matter? My shell scripts just initiate build/export/deploy programs. They take milliseconds to run and then the programs they start take minutes. The perf of those milliseconds couldn't be more negligible.
For me, the niceties are in the built in functions[0]. Commands to manipulate paths(!!), get cpu counts, mess with environment variables, string processing, hashing, etc. All the gyrations a more sophisticated script is going to eventually require. Instead of having to hack on it in shell, you get cross-platform utilities which are not going to blow up because of something as wild as a space or quote mark.
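To make that concrete, a hedged sketch using a few of the built-ins as I remember them (check the manual for exact names):

    # values computed with just's built-in functions
    cores     := num_cpus()
    build_dir := join(justfile_directory(), "build")

    info:
        @echo "building in {{build_dir}} on {{os()}}/{{arch()}} with {{cores}} cores"

    # hash a file without reaching for platform-specific tools
    digest file:
        @echo "{{sha256_file(file)}}"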
This best explains what I must be missing. Saying, “shell scripts are bad,” doesn’t tell me anything. Thanks for giving me a concept to explore. I’ll have another look with this in mind.
Nah. This looks like nothing more than a wrapper for bash scripts. I can easily write helper scripts which do exactly what you described above. I don't understand the need for a whole different tool when I can run scripts natively on my machine(s).
I love the look of `just` and have been meaning to try it out, but this feels like one of those examples where Make's dependency management shines—it lets you specify that many of these commands only need to run when particular files change:
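Something roughly like this (file names and npm scripts are made up for illustration):

    # rebuild the bundle only when sources change
    dist/app.js: $(wildcard src/*.ts) package.json
    	npm run build

    # rerun tests only when the bundle or the tests change
    .tested: dist/app.js $(wildcard test/*.ts)
    	npm test
    	touch .tested

    .PHONY: ci
    ci: .tested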
And as time goes on, I always end up wanting to parallelize the commands that can be parallelized (citest, lint, eslint), so I'll turn `make ci` (or `just ci`) into its own little script.
I've been using just at work and in personal projects for almost a year, and I like it a lot. In particular, its self documentation with `just --list` makes onboarding new folks easy. It's also just a nicer syntax than make.
Agreed. Is it that different than Make with `.PHONY` targets? Yes — it is Designed To Do Exactly What It Does, And It Does It Well. That counts for something in my book.
All my Justfiles start with this prelude to enable positional arguments, and a "default" target to print all the possible commands when you run `just` with no target name:
    # this setting will allow passing arguments through to tasks, see the docs here
    # https://just.systems/man/en/chapter_24.html#positional-arguments
    set positional-arguments

    # print all available commands by default
    default:
        @just --list
In mise you wouldn't need that preamble. `set positional-arguments` is just how it behaves normally, and `mise run` doesn't just show available commands—it's also a selector UI.
I don't have my work laptop to hand to compare, but I usually run "just" to get a list of commands and what they do, rather than "just --list". Hope that saves you 7 key presses going forwards.
The same applies to make without arguments though: make what? Grammar / word meaning aside, having unknown / missing commands print the help or suggestions is a good pattern.
I think it's less grammatically ambiguous with make. It implicitly means "make <the project>". For most projects that's pretty well defined (and also grammatically correct since 'make' is a verb and 'just' is not).
But even so it would have been a better design for `make` to list top level targets or something.
This is one of the most important pieces of software in my development stack that "just" gets out of the way and does what it's supposed to do. Also has excellent Windows[1] support so I can take it everywhere!
> I get that your project is Windows-only, but many projects aren't.
Nit: At this point you're better off starting a separate comment thread since you yourself already know that what you are about to talk about is not what my comment is talking about.
> Wait, by "has excellent Windows support" you mean you have to set it to use Powershell or hope `sh` is installed on
I don't get what the problem is here? Do you protest against shebangs too? Why does a build script for a Windows-only app need to use sh instead of PowerShell? I think you're interpreting "excellent Windows support" to mean cross-platform, and that's not what it means.
> So not only do you need just installed, which is yet another dependency,
Yeah, if you want to use some software, your computer needs that software. That's not a dependency. So we're talking zero dependencies, or one if you absolutely need sh.
You can use the usual cmd (I do). You're not limited to PowerShell. Also, you do understand that if a tool has first-class support for Windows, that does mean it prioritizes Windows tools, right? Imagine I made a command runner, and said it has "excellent Linux support", and then someone comes along and complains that you have to install PowerShell on Linux to use Windows recipes.
You can have Windows only recipes and Linux only recipes.
Furthermore, if you have bash installed on Windows (e.g. via git bash), you can put a shebang in your recipes to use bash.
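A rough sketch of both (recipe bodies are placeholders):

    # enabled only on Windows
    [windows]
    build:
        dotnet build

    # enabled only on Linux
    [linux]
    build:
        make -C src

    # runs under bash wherever bash is on the PATH (e.g. git bash on Windows)
    release:
        #!/usr/bin/env bash
        set -euo pipefail
        echo "releasing..."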
We develop in Windows and deploy in Linux. Most of our recipes work in both OSes - either we use bash or Python for the recipe. The few that don't, we just mark as Windows-only or Linux-only so they're not available in the wrong OS.
> So not only do you need just installed, which is yet another dependency,
You do realize that Windows by default comes with almost no development tools, right? So yes, you do actually need to install things to get work done. The horror.
I'll also note that while you complain about just, you provide no alternative.
You can keep your commands simple enough so that they can be executed by both `sh` and `cmd.exe`. If you need anything more complex than invoking other programs, `&&`, `|` and `>`, it's time to rewrite your build script in a real programming language anyway.
I'm not a fan. It works well for what it is, but what it is is an additional language to know in a place where you probably already have one lying around.
Also, like make, it encourages an imperative mode for project tooling and I think we should distance ourselves from that a bit further. It's nice that everybody is on the same page about which verbs are available, but those verbs likely change filesystem state among your .gitignored files. And since they're starting from an unknown state you end up with each Just command prefixed by other commands which prepare to run the actual command, so now you're sort of freestyling a package manager around each command in an ad-hoc way when maybe it's automation that deserves to be handled without depending on unspecified state in the project dir.
None of this is Just's fault. This is people using Just poorly. But I do think it (and make) sort of place you on a slippery slope. Wherever possible I'd prefer to reframe whatever needs doing as a build and use something like nix which is less friendly up front, but less surprising later on because you know you're not depending on the outputs of some command that was run once and forgotten about--suddenly a problem because the new guy can't get it to work and nobody else remembers why it works on theirs.
I find declarative build systems end up pretty frustrating in practice. What I want from a build often isn't the artifacts, but the side effects of producing the artifacts, like build output or compilation time. You get this "for free" from an imperative tool, but it's a significant feature in a declarative system, usually implemented badly if it's implemented at all. The problem gets worse the smarter your tool is.
Logs emitted during the build, or test results, or metrics captured during the build (such as how long it took)... these can all themselves be build outputs.
I've got one where "deploying" means updating a few version strings and image references in a different repo. The "build" clones that repo, makes the changes in the necessary spots, and makes a commit. Yes, the side effect I want is that the commit gets pushed--which requires my SSH key, which is not a build input--but I sort of prefer doing that bit by hand.
The developer time required to learn and properly use nix makes it unattractive to most teams.
The benefits don't outweigh the costs of adoption.
Instead of debugging code, the team would have to spend significant time maintaining the build system for the build system's sake.
Don't get me wrong, I want something nix-like in my toolbox.
I want to love nix.
But I wouldn't dare to argue my team to commit to the world of pain that comes with it.
There's a good reason that nix didn't see wide adoption in the industry.
In my experience, Nix is very high leverage. My company has ~5 nix gurus, but Nix is invisibly used by hundreds of engineers. Most engineers know we use Nix and that's about it.
Similar experience for me. In my company adopting nix paid off in weeks with no prior experience. Very happy with it almost 10 years later and at much larger scale. The difference between things working reliably or not is hard to overstate.
I tried using Nix but stopped for two very practical reasons: it's very slow and it's extremely disk-heavy. Install a couple of things and suddenly your nix store weighs in at 100 GB.
Use only stable nix. Override nixpkgs for the inputs you add. After the first build, use the offline and no-substitute flags on reuse, and alias such a command. Use nix-direnv.
Read up on the store/GC settings and set up what works for you. Do not use nix-env or nix profile.
Interesting. For me it's generally much faster than other package managers. The evaluation takes some time, but copying derivations from a cache to the Nix store is so much faster than traditional package management.
I wonder if you somehow ended up eval'ing many versions of nixpkgs?
> your nix store weighs in at 100 GB
¯\_(ツ)_/¯ outside very constrained devices, who cares? I just checked my NixOS dev VM that I have used for months now and cannot remember when I last garbage collected. It's 188GiB, but I have many different versions of CUDA, Torch, etc. (the project I'm currently working on entails building some kernels for many different build configurations), and I run nixos-unstable, where a lot of stuff changes, so generations are pretty unique.
A 2TB NVMe SSD is just over 100 Euro. Caring about 100GiB seems to be optimizing for the wrong things.
I completely agree on embedded machines though. Just deploy by copying the system closure and garbage-collecting everything but the previous closure (kept for backup); it'll be pretty much the same size as any other Linux system.
> For me it's generally much faster than other package managers.
I don't know what kind of package manager you were using, but I've never seen an update take a good part of an hour before Nix.
> outside very constrained devices, who cares?
Seriously, are we going to shame people who can't afford to buy lots of storage?? My smaller laptop has only 250GB, but that's freaking plenty if I stick with apt. But I can barely run Nix on it.
> Seriously, are we going to shame people who can't afford to buy lots of storage??
It's not just storage, though - storage may be cheap but once your machine is at capacity (the physical space in laptops is an important constraint) you have to replace perfectly good hardware to accommodate absurdly space-hungry software (looking at you, Vivado).
Also, don't forget that not everyone has always-available, fast, reliable, cost-free internet. By rural standards my connection's very good, but 100gb would still tie it up for several hours, assuming I didn't need it for anything else in that time.
Digital wastefulness is a problem, and I do think we need to take it more seriously.
> but 100gb would still tie it up for several hours, assuming I didn't need it for anything else in that time
Except that Nix does not download 100 GiB unless you are installing a gazillion packages. First, Nix downloads compressed output paths. Second, it's not like Nix packages are substantially larger than Debian, Ubuntu, or Fedora packages. The extra storage space comes from (1) Nix keeping multiple generations to allow you to roll back to previous versions of the system -- if you break something, you can always roll back; and (2) people using multiple different versions of nixpkgs, which can lead to having multiple versions of system libraries.
(1) is a feature of Nix/NixOS, if you want to use less space, you can trade off the ability to roll back for space. You could always garbage collect everything except the current generation and it would be similar to other distributions. For (2), avoid using multiple nixpkgs versions.
I generally like keeping around a lot of generations, etc. so I don't mind my history of NixOS systems keeping 100-200 GiB. But if you care about space, garbage collect and it won't take up that amount of space.
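For example (the retention window here is arbitrary):

    # delete generations older than 30 days, then garbage-collect the store
    nix-collect-garbage --delete-older-than 30d

    # or keep only the current generation
    nix-collect-garbage -d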
> I don't know what kind of package manager you were using, but I've never seen an update take a good part of an hour before Nix.
Pretty much all popular package managers. APT/dpkg, DNF/rpm, pacman, etc.
I have just updated one machine to the latest unstable. It updated 333 packages, a substantial part of that system. It took 1 minute and 50 seconds, most of it downloading. So, not sure how it takes a good part of an hour for you.
> Seriously, are we going to shame people who can't afford to buy lots of storage??
I'm not shaming anyone. Just saying that 1 or 2 TB is pretty normal nowadays (outside Mac, because Apple makes you pay for it). At any rate, you can make the size pretty similar to any other distribution. It's not like glibc or GNOME takes up substantially more disk space on Nix.
If you end up using 100 GiB of storage, you are either keeping a lot of system generations around or you somehow have different nixpkgs versions in your system's closure, ending up with duplicate versions of glibc, etc. If the former is the case, set up automatic garbage collection and the space use will be far less. E.g. on one machine I have only three NixOS unstable generations and the system is 18 GiB (which includes a bunch of machine learning models, etc.). It would probably be substantially less on NixOS stable, since there are fewer differences between generations (e.g. I have qemu, webkitgtk, etc. three times).
Total size of installation is roughly comparable between NixOS and, say, Ubuntu.
My laptop's Nix closure of 1 generation is 33 GB. My desktop Ubuntu has 27 GB (20 GB /usr + 7 GB in /var, where snaps and flatpaks are stored).
Indeed the disk usage of Nix comes from multiple generations. Every time there is a new version of glibc, gcc, or anything that "the world" depends on, it's another 33 GB download. Storing the old generation is entirely optional. The maximum disk space needed is 2 generations.
Updating Ubuntu to a new LTS version almost always costs me multiple hours, caused by interleaved questions on how to merge changed config files in /etc (which unfortunately one cannot seem to batch), apt installation being rather slow, and during recent years, the update generally breaking in some way that requires a major investigation (e.g. the updater itself dies, or afterwards I have no graphics). On NixOS, these problems do not exist, and the time to update is usually < 30 minutes.
In my experience Nix is a force multiplier. But you need someone on the team who has plenty of Nix experience, because you inevitably need to write your own derivations and smooth over issues that you might encounter in nixpkgs.
We use Nix with Cachix in the team I currently work in. We use a lot of ML packages/kernels, which are nearly impossible to manage in Python venvs (long build times because we have to patch some dependencies, version incompatibilities, etc.). Now you can set up a development environment in seconds. The nicest thing is when we switch between branches we automatically have the state of the world needed for that branch (direnv yay).
It was some work to set up, but it saves so much time now.
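For the curious, the branch-switching behaviour is essentially just an `.envrc` at the repo root, assuming nix-direnv is set up:

    # .envrc
    use flake

After a one-time `direnv allow`, entering the directory activates the dev shell pinned by whatever flake.lock the current branch has.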
How do you do the initial setup? I'm concerned with anything that happens before activating the dev shell.
Right now I have a bash script that checks for nix, direnv, git, gpg, etc. But it feels a bit clumsy compared to the flake that contains the dev shell.
For my own system I set up home manager. But I don't want to make the use of home manager a requirement, as it can be quite opinionated. (e.g. setting up direnv will be done by generating a .zshrc, which can be limiting to some)
For our particular project you only need to install Nix and then run nix develop, but I'd indeed recommend using direnv. For me it's not an issue, since I run NixOS on development VMs, but a colleague who was not using Nix before (I think) also wrote a bash script that sets up an AWS VM with the NixOS AMI and then rolls out a minimal NixOS configuration.
I think for people who don't want to dive into Nix much, doing an imperative install (nix profile install) of the necessary packages is also fine. You could even make your own small meta-package that depends on everything that is needed. Then they could do a nix profile install yourflake#yourmetapackage and have all the tools they need. But I agree direnv is a bit harder, since you'll have to put something in the shell rc/profile.
The imperative install is as many lines of code as the flake itself. That’s what’s bothering me. But a meta package would be a step in the right direction.
The point is that there's often no way to express "I want side effects" in declarative tools, and the number of side effects that might be useful is vast.
For example, sometimes I profile the build times to see where I should focus effort.
Sometimes I want to see it to quickly check for issues where adding some dependency header causes build times to explode 100% in downstream dependencies during cold builds.
Another common occurrence for me is trying to debug a platform, toolchain, or standard library issue where the build system either doesn't detect changes in those components or only makes the components readily accessible in an internal cache that's subject to invalidation issues. You'll usually get the wrong artifact or test results in those cases.
Some other systems (e.g. bazel/blaze comes to mind) actively try to hide side effects like stdout.
In all of these cases, the only way to actually get these side effects is to reach into the tool's internals by blowing away caches/output folders or reading live log files. That's a failure of the build tool.
> The point is that there's often no way to express "I want side effects" in declarative tools, and the number of side effects that might be useful is vast.
Shake (https://shakebuild.com/) is pretty good about letting you specify that a specific step produces multiple artifacts.
I suspect Nix can do the same?
> Some other systems (e.g. bazel/blaze comes to mind) actively try to hide side effects like stdout.
Yes, blaze isn't all that great. You can tell, because Google folks check in generated artifacts into their repositories, instead of wrestling with getting blaze to build them.
Generally my aim with both Nix and Bazel is that, while they are the source of truth, day-to-day development and debugging occurs using language-native tools. So the only touch point for local development is when you are modifying the dependency graph in some way.
It's definitely more work (you need to maintain compatibility with two different build systems), but worth it for exactly these reasons.
I haven't used it, but it sounds like make's --assume-new flag does exactly what you want for the first part. It lets you rebuild everything that would result from a changed file, including all side effects, without needing to first update the file.
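Something like this (file and target names invented):

    # rebuild everything that depends on foo.h, side effects included,
    # without actually touching foo.h
    make --assume-new=src/foo.h all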
Really? It's the one part of the traditional c build system I actually still use. Easy to write, easy to debug, relatively small—what's the issue? I hear people complain about make incessantly but people rarely have substantial criticism to offer. Is it the syntax? Reliance on the filesystem? Inconsistencies between implementations?
As an actual builder it has limitations, such as not having (built in) the ability to know if it can still do an incremental build after changing some build option. That can result in inconsistent builds.
The main problem is that you often require more logic than makes sense to write in make, but it kind of has a language built into it so people try to use it. But as a language it's terrible (no scoping, many missing features). So people end up implementing their build logic in a bastard combination of make and shell which is very opaque and difficult to debug.
For example, I was recently trying to figure out how the OpenWRT makefiles are doing something, and it was really painful, because with make having no scoping, any part of the system could end up affecting the piece you are looking at. There is a lot of dropping into shell to get stuff done, and a lot of the targets are themselves expanded variables, which makes it really opaque. Really, a lot of it gains nothing from being written in make; they could do with rewriting large parts in a real language. But it would be a huge job. And that's where a lot of makefile systems end up.
That's why you get tools like ninja where they decided not to allow any logic at all.
That's actually not too much of a problem in practice: almost everyone just uses GNU Make.
> Easy to write, easy to debug [...]
Alas, Make becomes hard to write and really hard to debug past a certain complexity threshold. And you reach that complexity threshold very quickly.
> Is it the syntax?
Yes, the syntax of Make is awful, and I'm not even talking about ergonomics. Thanks to Make's abysmal syntax, special characters in your files make it barf completely. And by 'special' I mean something as mundane as spaces.
I agree, but `Just` as an incremental improvement is a much easier sell to teams than asking them to think about their builds completely differently and rewrite everything to fit that.
Offering a cave man a flashlight is probably more helpful than offering them a lightbulb and asking them to wire up the cave to power it :D
I did that for 10+ years and got fed up with having to remember which names I gave to my scripts that month. My views gradually evolved, and that got reflected in the names of the scripts.
`just` helped me finally move away from that. Now I have e.g. `just check` in projects in different languages that all do the same thing -- check types and/or run various linters. I can go into a directory and run `just check` and I know I have taken care to have all the checks that I want in place. Similarly I can run `just test` and I know I'll have the test suite run, again regardless of the programming language or framework.
Absolutely nothing wrong with a directory full of scripts but I lost patience for having to scan what each does and moved away from them.
> Now I have e.g. `just check` in projects in different languages that all do the same thing -- check types and/or run various linters. I can go into a directory and run `just check` and I know I have taken care to have all the checks that I want in place. Similarly I can run `just test` and I know I'll have the test suite run, again regardless of the programming language or framework.
How is that different from having a scripts dir, and a script called `check` or `test`?
Tab completion. `just -l<tab>` shows all the commands and their descriptions.
Aside from that, it has lots of built-in ergonomics like consistent argument parsing, functions to say what OS you’re on, an easy way to hide helper functions, the ability to execute a justfile in a great-grandparent directory, etc.
You can totally do any of those things with shell scripts. I prefer letting someone else invent all the bells and whistles there so I don’t have to.
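A quick sketch of a couple of those niceties (names invented):

    # deploy the site to the given environment
    deploy env="staging": _build
        ./scripts/deploy.sh {{env}}

    _build:
        npm run build

The comment line above `deploy` is what `just --list` prints next to the recipe, `env` gets a default value, and the leading underscore keeps `_build` out of the listing.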
I am a bit confused. If you have your scripts in `scripts/`, doing `scripts/TAB` will also auto-complete! The other things seem like really minor benefits to me, not trying to say you should also feel the same, just giving my opinion.
Scripts/tab won’t show you the documentation of each script explaining what it’s for.
My genuine advice is to download it and play with it for an hour. If you don’t like it, you’ve learned a little about a tool you’re bound to come across sometime. If you do like it, now you’ve added another tool to your palette. Either way you learn something useful.
In my case I prefer all these utility scripts to be in one file because 90% of them are 1-2 lines anyway. Zero point dedicating a directory with several 5-line files.
I've used make for many things over the years. I'm competent with it. Just is such a breath of fresh air for the uses that don't involve actually incrementally building software. It's sooo much less verbose. It's hard to describe the feel of a thing, but imagine learning to program with Java and then finding Python. If you're building a giant app developed by thousands of people, maybe Java's complexity starts to show a benefit. If you just want to quickly script something up, Python gets the job done with a tenth the boilerplate.
There’s room for both. Neither replaces the other. But it turns out many of my projects need tools closer to Python/Just than Java/Make.
I believe I already addressed that this is purely a matter of taste and convenience; not sure why you are not reading my comment and are asking for more.
And it was already said: if you like it more, use it. Nobody is holding a gun to your head. And I even explained that I used that in the past and moved away from it.
I also haven't seen in your previous response how Just is better than a subdir with shell scripts named according to a convention.
AFAICT, the productivity improvements you described came exclusively from using a consistent naming convention, not from Just. And since everyone's dev env supports subdirectories with shell scripts already, why not simply use that instead of requiring Just?
I got a downvote on my comment (your parent) a minute before you responded. Coincidence, or did you press it because you're unhappy that I won't act as your personal documentation agent?
Finally and additionally as a response: because it's also all in one place. I don't want 10+ scripts. For the third time: I used bespoke scripts and found them not good enough compared to Just, now for even more reasons clearly spelled out. Sigh.
I didn't downvote you, though I found your answer unhelpful. (I've now received 2 downvotes.)
10+ scripts with standard names ("clean", "test", "build", etc.) in a subdir added to $PATH seems to me to be easier to manage -- if the scripts are independent of each other. If they do have dependencies on each other, but the dependencies are "treelike" (meaning that for every target you might want to run, all of its transitive deps are reached via a unique path), it's still easier (than either make or Just) to have separate scripts, and turn each dep into a plain invocation at the top of each script. It's only when that approach starts to invoke deps multiple times (because it has become non-treelike) that either make or Just starts to offer an advantage.
I think if you look at this with clear eyes, you'll see that 100% of the value you feel you're getting from Just is actually coming from the naming convention that Just nudged you towards.
I like having individual files too as they can be independently managed by source control, linted, etc. And I've certainly been known to have a Makefile that's simply:
    all:
    	@ls -1 tasks

    % :: tasks/%
    	@./$<
And then fill my `tasks/` directory with individual executables.
And I have not touched your comment btw. I rarely downvote these days and I have to be really pissed to do so. I was not pissed earlier, more like a little frustrated, as you seemed to ask without reading, as if demanding a complete answer without being willing to piece together the info given in several other comments.
So... you were talking about global scripts. I was not. I was talking about a per-project directory of scripts, because very often projects have their little quirks that make all their scripts frustratingly 99% identical but never 100%. I danced this tango dozens of times -- not exaggerating, I am a contractor (though I hope to finally stop, currently looking for a proper long-term job with good culture fit) and worked on many projects -- and ultimately got extremely frustrated.
At one point I did attempt to make those universal scripts you speak of. The even more maddening thing is that they worked for part of the projects... and didn't work for others. It was a rough 60/40 split. So you end up maintaining even more of them. So I gave up.
Very soon before that I found `just` and very quickly recognized the benefits: project-local commands / scripts, a centralized location (just one file), the ability to delegate to a parent Justfile, and a very easy syntax. The delegation means you can have a dedicated folder for Golang projects containing a Justfile with e.g. a `just lint` task that calls `go vet` and `staticcheck` etc., without having to copy-paste that into every Golang project's Justfile (though I actually prefer copy-pasting nowadays -- better to have completely self-contained tooling after all -- but for super dev-specific stuff that does not belong in version control the parent Justfile workflow is still quite a good fit). And the easy syntax still allows for doing stuff that will make you pull your hair out if you attempt it with pure sh or bash without having memorized their specifics over the course of a lifetime (which is something I attempted but gave up on, because it was more or less memorizing exceptions to the exceptions).
Now, to address this:
> I also haven't seen in your previous response how Just is better than a subdir with shell scripts named according to a convention.
I am not impressed by conventions that are not enforced with a spiked club. Which means: we the people forget stuff easily. I suffered from that too. Conventions don't mean much when you misspell the script filename or put `-f` instead of `-e` in the `set` call at the top of the script. :)
I prefer loud failures and not silent mess-ups.
My position is informed by a lot of negative past experiences. Does not mean that my priorities are universal or unconditionally better. Not at all. It means that everything I got through in my career made me appreciate `just` and it was a near-perfect fit for my needs.
> I think if you look at this with clear eyes, you'll see that 100% of the value you feel you're getting from Just is actually coming from the naming convention that Just nudged you towards.
Sure, it encouraged me to finally settle on a naming convention but I've done this before as well. I still prefer the singular file approach + ability to delegate to parent files.
The fewer files in total, the better. I have found this rule makes me more productive.
If you have gotten this far: nowhere did I claim objective improvements. I had discussions in the past (might have been in other `just` threads even!) with curmudgeons who loudly proclaimed "skill issue!" on my non-preference towards make's bash-isms and weird rules. So for them `make` + other scripts (even Perl / Python ones) are working just fine and the rest are "kids running after shiny toys".
I don't mind them thinking that. I have my reasons and, as said above, they're well-grounded given my past, my way of working, and my mental preferences.
Thanks for going into more depth. I wasn't aware that Just could delegate like that, which does sound useful. And I certainly agree that bash and make are absolutely Byzantine at this point -- footguns on footguns. There's much value in using a tool that is powerful enough to do what you need, but not much more -- since that makes it much easier to reason about what a given instance/invocation of that tool could possibly be doing, without spending hours (years?) down in the detail.
And it sounds like Just is that tool for you! I'll probably keep using make, now that I've spent so much time wrestling with its many idiosyncrasies, but you never know.
> And I certainly agree that bash and make are absolutely Byzantine at this point -- footguns on footguns.
Yeah, that's my problem. Not like I don't have memory in my brain, not like I can't learn make and bash -- I did so several times almost from scratch, but as I am not using them every day, the memories always fade. It's better to relearn something without footguns than something with them. Hence I am using `just`. It's straightforward and very easy to catch up with even if you forget it. Not so with make and bash.
If you are very invested in them and are feeling at home with them, great for you -- I am not claiming unquestionable and countless benefits. I am claiming it works well for my brain and my workflow, and most of all -- the frequency with which I have to do scripting.
My comments are frustrated because I believe a response was already given to the question you asked. I'll be grateful if you at least don't misrepresent, even if it's difficult to find a common language. If you don't believe that I responded adequately then just ask a more detailed question.
But sure, here's one more reason for you, as said in a sibling subthread: I can have all my project's commands in one file.
Also it pays off to know what `just` does. As several other people were told (not only by me) in the bigger thread, it's an aggregating task runner, more or less. Not a dependency manager.
Honestly you're coming off a bit shit here, I don't read the other person's responses as defensive or insecure at all, so I suspect you're saying that to be rude.
(For those who haven't used it, fzf is a fuzzy-searchable menu for the command line. You pipe lines of input to it, and it shows them in a menu. You start typing and it fuzzy searches the menu and selects the best match. Then you press Enter to pipe that out, or Tab for multi-select. It's fantastic.)
I have convenience functions in my profile script that pipe different things to fzf...scripts, paths in the current directory to copy to the clipboard, etc. It's indispensable.
Bonus: progressive enhancement. If someone doesn't have fzf/those convenience functions, it's just a directory with shell scripts, so they don't have to do anything special to use them.
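As a sketch of the kind of convenience function meant here (paths are illustrative):

    # pick a script from ./scripts with fzf and run it
    run-script() {
      local pick
      pick=$(ls scripts/ | fzf) || return
      "./scripts/$pick" "$@"
    }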
That works too. I've done both and I currently use Just because it collects the entrypoints to the project into a single file. This can provide an advantage where there's a bit of interdependence across your entrypoints.
E.g.: You have a docker container; you might be `run`ning it, `exec`ing it, etc. from the same compose file. So Just gives you the ability to link those shared commands within the same file. Once the entrypoints get too numerous you can either break them into scripts (I do this partially, depending on the level of behavioral complexity in the script) or partition your justfiles and import them into a single master.
Well, for one, your recipes can be in another language (e.g. Python).
You can build complex recipes out of simpler ones. Sure, you could do that by creating a new shell script that calls other shell scripts, but then you're reinventing just.
You don't need to be in the directory to run those scripts.
I think a better question for you: what's the benefit of putting .PHONY recipes in Makefiles, when you could just have a directory full of shell scripts? If you find yourself using .PHONY recipes, then you already have a reason to use just.
> You don't need to be in the directory to run those scripts.
There's already an easy way to solve this: $PATH.
> I think a better question for you: What's the benefit of putting .PHONY recipes in Makefiles, when you could just have a directory full of shell scripts. If you find yourself using .PHONY recipes, then you already have a reason to use just.
Well, I think it's the same question, rather than a better question. And the answer is yes, if all you need from make, now and in the future, is a set of .PHONY targets, then by all means just use shell scripts. make is used because often you need slightly more than this -- or you may do so tomorrow, and don't want to change the syntax you use to accomplish tasks.
> There's already an easy way to solve this: $PATH.
I have 10 projects. Each with their own set of shell scripts. You want me (and all other developers) to pollute the $PATH with 10 directories?
And then you have a namespace problem. I usually have a "test" recipe in my justfiles. The analog would be a test.sh file. But with your solution, it will have to be projA-test.sh and projB-test.sh.
And if I dump them all into the $PATH, how do I quickly see the scripts relevant to a particular project?
I tend to work on different projects in different terminal sessions so I don't find this a problem, but OK, I can see the benefit of making the tasks a command line executes dependent on the current directory. (There are tools that can auto-adjust $PATH for you like this, but that would be a weak argument against Just (unless you're using them already) since it would mean swapping Just for that-other-tool.)
If you use git and don't need multiple "layers" of Justfiles (i.e., if you have all your scripts in a scripts folder at the top level of your repo), then in bash you can get what you want with:
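Presumably something along these lines (a small function that resolves the repo root via git; the name is arbitrary):

    # run a script from the repo's top-level scripts/ dir, from anywhere in the repo
    r() {
      "$(git rev-parse --show-toplevel)/scripts/$1" "${@:2}"
    }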
Although I'm coming off as a strong just evangelist, I do want to point out that if someone already has a workflow with plain scripts, it's totally OK to continue with that. Personally, I think using just is simpler for those who don't already have that workflow.
Likewise, if you are using make as a command runner and already know make well enough - by all means continue! In my experience, though, someone who doesn't know make will be much more likely to learn just than make.
I tend to sneak justfiles into the projects I work on. They usually don't have any good automation (no make, perhaps some scripts with a doc/md file explaining which script is for what). I sneak the justfile in the repository, and when it's mature, start showing teammates how I use it. They typically then switch to it. I don't think they would switch to it if it were a Makefile.
I ask because cmd.exe has DOSKEY, which is basically a very slightly souped up version of bash's alias. I think it wouldn't be hard to use DOSKEY to replace CD and PUSHD with macros that run some command to update %PATH% and then change directory as usual.
just isn't something magical that will make scripts meant for Linux work in Windows, you know. Some people do actual development in Windows and have Windows scripts.
It's a different approach, none is better or worse, people simply have preferences.
And all other features aside, it seems to be able to call commands from any subdirectory in a project, which is actually nice compared with a normal shell. I mean, you can replicate this with a few lines of shell scripting, but not everyone seems to maintain an elaborate $BIN of personal tools.
1. The language is extremely simple and is consistent.
2. I agree on having to move away from imperative and go for declarative (if the latter was what you had in mind) -- any ideas for a better tool that does that and is just as easy to learn?
3. RE: cobbling together stuff with and around `just` is relatively trivial to fix; f.ex. I have my own `just` recipes to bring up the entire set of dev dependencies for the project at hand, and then to tear them down. It's a very small investment and you get a lot of ROI.
4. RE: Nix, nah, if that's your sales pitch for it and against `just` then I'll just strongly disagree. Nix is a mess, has confusing cutesy naming terminology, has a big learning curve and a terrible language. All of that would be fine, mind you, and I could muscle through it easily but the moment I received several cryptic error messages that absolutely did not tell me what I did wrong and I had to go to forums and get yelled at, is the moment I gave up. `just` is simply much easier and I am not worried about not having Nix-like environments for my projects. Docker + compose work very well for this.
Finally, your point about an obscure single command that people forget about in the future applies to literally any and all task runners and dependency managers, Nix included. That's not a valid criticism towards `just` IMO.
1. It's a fine language but I have all kinds of "works on my machine" problems with it because it has no associated dependency manager. Other languages solve this with lockfiles and such, and it's likely that you're already doing that with one of those same languages in the same project. So just... Use the main language for whatever it is.
2. No, nothing's so easy, but you can get more if you're willing to work for it, and I think the juice is worth the squeeze.
3. For runtime state, I find that using just as a wrapper around Tilt or docker-compose or k3d or whatever just hides the perfectly adequate interfaces that those tools have. The wrapper discourages deeper tinkering with those tools. It's not a particularly difficult layer of abstraction to pierce, but it doesn't buy you enough to justify having an additional layer at all.
4. In the case I'm thinking of, the whole team was working happily because they had used a Just recipe to download a file from a different repo, and then somebody removed the recipe, but everyone (except the new guy) had the file from months ago, which worked. Nix wouldn't have let us accidentally get into a broken state and not know it. It would have broken as soon as we removed the derivation for the necessary file. I sent him the file through slack and then he was able to work, and only discovered later how it got there on my machine. That kind of uncertainty leads to expensive problems eventually.
1. I don't follow. I work with Elixir, Golang and Rust and I use their dependency managers just fine. F.ex. I have `just deps` that does `mix deps.get` in Elixir and `go get -u ./... && go mod tidy && go mod vendor` in Golang. Furthermore, `just` does not claim to do dependency management. So what do you mean here?
2. Sure but I am not paid for it. Nobody will look at me with admiration if I delay an important milestone with 2 weeks (or, more likely, 2 years) to invent such a tool. :/ So not sure I get you here either.
3. We're veering into bikeshedding here and I will not argue; use whatever interface works best for you. I personally love having `just up` / `just down` / `just start` / `just stop` for the development dependencies of any project. No more one big shared Postgres instance where, if I screw it up (and Homebrew did that a number of times!), I have to dig through Time Machine for DB backups. I wised up eventually and started making scheduled exhaustive backups of each DB... and then said to myself "forget it" and just started using separate containers for each project. For my work I found wrapping the tools worth it for not having to remember their bespoke full command lines. I standardized my tasks and I can enter almost any directory and run the same `just ...` commands and get what I expect as a result. To me that's valuable. But again, use whatever is convenient for you. No argument from me.
4. I don't disagree here and I am kind of 50/50, because on the one hand this is a failure of process + lack of proper dev/ops tooling (f.ex. deleting this or that should raise alarms, i.e. every such repository should have CI that makes sure everything important stays in place). On the other hand, if Nix or anything else spares you from having to install those guard rails, then sure, it's a good fit for you. For my work and hobbies Nix is a net negative; I gave it more than a fair chance and I've had enough of opinionated diva-like tools whose message is "learn everything about me to love me, baby". No thanks. But that's just a single example. Again, if there are tools that spare you from screwing up something accidentally, I usually vote strongly in favor of them.
People like Just when they're the one who is writing the recipes, because those recipes implicitly depend on whatever they have installed at the time of writing so everything is easy, but then other people come to the project and it has a culture of "IDK I just use the Just recipe," except that recipe doesn't work unless you've been around since it was written and have all of the right versions of things. For instance I've got all these errors like:
> This application uses version go1.20 of the source-processing packages but runs version go1.23 of 'go list'. It may fail to process source files that rely on newer language features. If so, rebuild the application using a newer version of Go.
They don't seem to be hurting anything but I'm not really sure how to reason about them since somebody packaged the commands together but didn't specify anything about the environment. The Justfile entry tells me that it's running some script in $FOO_DOWNLOAD_DIR but I've got some sleuthing to do to figure out where that dir actually is and how its contents were populated and what it has to do with `go list`.
This is of course bad practice, but Just is the rug under which it is hidden and made to look like good practice. It's good that Just doesn't claim to manage dependencies, since it doesn't, but this action could instead be a go program in which case go would be handling those dependencies for me.
I don't disagree. Your example is a good demonstration why Nix -- or a much more thorough Justfile -- would be needed.
In my case I also supply the `.tool-versions` file, so all it mandates is that the other dev have Just and asdf / mise (for installing exactly the right versions of tools).
I also tried having full Dockerized development environment but that proved to be too much of a hassle.
But yep, in your scenario it seems like the other guys did sloppy work. Sadly 99% of everything can be misused by people who don't practice their craft well.
(EDIT: Golang programs should really be made to work with the latest version, all being said and done. Another example of sloppy work, if you don't mind me saying.)
I never get this criticism. Nix is a pretty nice, small, functional programming language and lazy evaluation makes it really powerful (see e.g. using fixed-points for overlays). I wonder if this criticism comes from people who have never done any functional programming?
When I first started with Nix six years ago, the language was one of the things I immediately liked a lot. What I didn't like was the lack of documentation for all the functions, hooks, etc. in nixpkgs, though it certainly got better with time.
I did say I could learn it, and I have been doing FP for 8.5 years now. It's not that.
It's the obscure error messages, mostly. And as you said, documentation even to this day leaves stuff to be desired, though that might be better nowadays; no idea, and I don't plan to revisit anyway.
Maybe because it's been many years since I used C or C++ for anything serious, but I don't get that impression from using make in the first place. I haven't seen it used for setting up a build environment per se, so there aren't any "packages" for it to manage. When I've written a Makefile, I saw it as describing the structure of cache files used by the project's build process. And it felt much more declarative than the actual code. At the leaves, you don't tell it to check file timestamps; you tell it which files' timestamps need to be up to date, and let it infer which timestamps need to be compared and what the results of those comparisons need to be in order to trigger a rule. Similarly, a rule feels composed of other rules, more than it feels implemented by invoking them.
> like make, it encourages an imperative mode for project tooling and I think we should distance ourselves from that a bit further.
Um, what? `make` is arguably the most common declarative tool in existence ...
Whenever people complain about Make in detail, it's almost always either because they're violating Paul's Rules of Makefiles or because they're actually complaining about autotools (or occasionally cmake).
It's quite easy to accidentally write makefiles that build something different when you run them a second time, or when some server that used to be reliable suddenly goes down. Or when the user upgrades something that you wouldn't think is related.
It does no validation of inputs. So suppose you're bisecting your way towards the cause of a failure related to the compiler version. Ideally there would be a commit which changed the compiler version, so your bisect would find a nice neat boundary in version history where the problem began. Make, by contrast, is just picking up whatever it finds on the PATH and hoping for the best. So the best you can do is exclude the code as the source of the bug and start scratching your head about the environment.
That willingness to just pick up whatever it finds and make changes wherever it wants, with no regard to whether the dependencies created by these state changes are made explicit and transparent to the user, is what I mean by "imperative".
Make isn't, at all, declarative. It's almost entirely based on you writing out what to invoke, as opposed to what should exist and having the build system "figure that out".
That is, in make you say `$(CC) -c foo.c -o foo.o`, which spells out, ultimately, how to compile the thing, while in declarative build systems (bazel/nix/etc.) you say "this is a cc_binary" or "this is a cc_library" and you let it figure the rest out for you.
If your executable is named "foo" and there is a "foo.c" somewhere, your Makefile only needs to contain "foo:" and make will figure out how to build it using its default rules. If you have more than one file (ex: foo.c and bar.c), just write "foo: bar.c".
Modern build systems are more advanced and have better defaults, but the general idea is the same. They are all declarative. An imperative build system would be like a shell script.
If you don't want to buy into the whole Nix philosophy, you can also use something like 'shake' (https://shakebuild.com/) to build your own buildsystem-like command line tooling.
I love just. The main benefit for me at work is that it's much easier to convince others to use, unlike make.
I like make just fine, and it's useful to learn, but it's also a very opaque language to someone who may not even have very much shell experience. I've frequently found Makefiles scattered around a repo – which do still work, to be clear – with no known ownership, the knowledge of their creation lost with the person who wrote them, and subsequently left.
I'm hoping for this effect, as more and more I work with people who don't consider `make` the default (or, more often, have never heard of it).
But I think the hard part -- for any build system -- is achieving the ubiquity `make` had back in the day. You could "just" type "make" and you'd either build the project, or get fast feedback on how much that project cared about developers.
I've used Just at a workplace on a project I didn't start. It seemed slightly simpler than make when putting together task dependencies. But I couldn't figure out what justifies using it over make.
For me, it's a fit-for-purpose issue. Make is great when you're creating artifacts and want to rebuild based on changes. Just is a task runner, so while there's a notion of dependent tasks, there's no notion of dependent artifacts. If you're using a lot of .PHONY targets in a Makefile, you're mostly using it as a task runner -- it works, but it's not ergonomic.
I like that just will search upward for the nearest justfile, and run the command with its directory as the working directory (optional -- https://just.systems/man/en/attributes.html -- with fallback available -- https://just.systems/man/en/fallback-to-parent-justfiles.htm...). For example, I might use something like `just devserver` or `just testfe` to trigger commands, or `just upload` to push some assets -- these commands work from anywhere within the project.
My life wouldn't be that different if I just had to use Make (and I still use Make for some tasks), but I like having a language-agnostic, more ergonomic task runner.
Just a quick note for interested readers: you don't need to explicitly mark things as .PHONY in make, unless your Makefile lives next to files/folders with the same name as your targets. So unless you had some file called "install" in the same folder, you wouldn't need to have something like ".PHONY: install".
make is a build system and has a lot of complexity in it to make it optimal (or at least attempt to) for that use case.
just is a "command runner" and functionally the equivalent of packing up a folder full of short scripts into a single file with a little bit of sugar on top. (E.g., by default every script is executed with the CWD being the folder the justfile is in so you don't need to go search for that stackoverflow answer about getting the script's folder and paste that in the top of every script.)
If you use just as a build system, you're going to end up reimplementing half of make. If you try and use make as a command runner, you end up fighting it in many ways because you're not "building" things.
I've generally found the most value in just in situations where shell is a good way to implement whatever I'm doing but it's grown large enough that it could benefit from some greater organization.
Being able to write your recipes in another language.
Not having to be in the directory where the Makefile resides.
Being able to call a recipe after the current recipe with && syntax (sketched below).
Overall lower mental burden than make. make is very complex. just is very simple. If you know neither of the two, you'll get going much faster with just.
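A small justfile sketch touching on a few of those points (the recipe names and commands are invented):

```
# runs with the justfile's directory as the working directory, wherever you invoke it
build:
    cargo build

# dependencies after `&&` run once the recipe body has finished
test: && report
    cargo test

# a shebang recipe written in another language
report:
    #!/usr/bin/env python3
    print("tests finished")
```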
> You can disable this behavior for specific targets using make’s built-in .PHONY target name, but the syntax is verbose and can be hard to remember.
I think this is overstating things a bit. I first read `.PHONY` in a Makefile while I was a teenager and I figured out what it does just by looking at it in practice.
Makefiles do have some weirdness (e.g. tab being part of the syntax) but `.PHONY` is not one of them.
Make is installed on Windows if you install Microsoft's C/C++ dev stack (typically by installing Visual Studio); they just use nmake instead of GNU make. They also include CMake these days, as it's the common cross-platform option.
> You wouldn't use GNU Make (the thing that comes for "default" on Linux) with Python either.
But people do use Make all the time for Python projects - as a command runner. Pelican projects, for example, come with a Makefile to start the server, publish, etc.
The whole point of this submission is that many, many people use Makefiles not for incremental builds, but as a convenient place to store commonly used commands. And just is a better and simpler tool than make for that. If you're on Windows, it's a pain to install make, compared to installing just.
The manual states that "just is a command runner, not a build system," and mentions "no need for .PHONY recipes!" This seems to suggest that there's no way to prevent Just from rebuilding targets, even if they are up-to-date. For me, one of the key advantages of using Make is its support for incremental builds, which is a major distinction from using a plain shell script to run some commands.
Maybe it’s the stacks I’m using, but I’ve always had incremental happen with language-native tooling like `go` or `cargo`. So for me at least, having lazy eval features like that would be an unnecessary increase in scope and complexity. With Just, I can just throw together different commands and it just works cross platform. I love it.
I much prefer that to the other way, i.e. letting language tooling become command runners (looking at you, npm). That's the worst of both worlds.
> I’ve always had incremental happen with language-native tooling like `go` or `cargo`
That makes sense, but for me, Make is incredibly useful for incremental file processing outside of programming. I've written tiny Makefiles that use glob patterns to batch-convert thousands of SVGs into PNGs and WebPs, but only for the modified SVG files. I've used Make to batch-convert modified LaTeX files to PDFs and render modified Blender projects into WebM videos for the web. Rendering videos is very time-consuming, so only rendering modified video files is a huge win.
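A rough sketch of that kind of Makefile, assuming Inkscape 1.x as the rasterizer (the paths are placeholders):

```
SVGS := $(wildcard icons/*.svg)
PNGS := $(SVGS:.svg=.png)

all: $(PNGS)

# each .png is rebuilt only when its .svg is newer
%.png: %.svg
	inkscape --export-type=png --export-filename=$@ $<
```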
And then you go ahead and complain that it is poor at building.
If you need a build tool, don't use just. Use make or something else. The purpose of just is to stop putting non-build stuff in Makefiles. And of course, it has a nice set of features that make doesn't.
My first sentence was me quoting the Just manual and my second sentence was my observation about what that suggests. I wasn't asserting whether it's true or not, just sharing my interpretation, as I'm not familiar with Just.
> And then you go ahead and complain that it is poor at building.
I did not "complain" I stated that incremental builds, regardless of whether Just has them or not, is one feature I personally like about Make.
Going by the responses I received, Just does not appear to support incremental builds and a simple acknowledgement, minus the vitriol, would have sufficed.
For me it's not needing to chain a lot of commands with && to ensure that it fails with the first command that fails. With just, if one of the commands of the recipe fails, it stops.
I saw many projects like this a while ago, and, although they all seemed great, I kept wondering: why do I need such a complex thing just to save/run a bunch of scripts?
I ended up building my own script runner, fj.sh [1]. It's dead simple, you write your scripts using regular shell functions that accept arguments, and add your script files to your repos. Run with "fj myfunc myarg ...". Installation is basically downloading an executable shell script (fj.sh) and adding it to your PATH. Uninstall by removing it. That's all.
I'm not saying 'just' is bad—it is an awesome, very powerful tool, but you don't always need that much power, so keep an eye on your use case, as always.
'Just' too was simple at the beginning [1], but with time and usage things always become more complex than some script you write for your own specific use-case.
Can anyone with experience with just and tools like npm/yarn explain if there are any benefits to use just instead of codifying commands into the "scripts" field of the package.json? Commands can also be enumerated. How often would I benefit from just's other features?
We don't use Just, but we have a Makefile that doesn't take advantage of any of Make's dependency features just to easily be able to run several commands in sequence.
JSON is just a really bad format for script configuration—you either have to string commands together on one big line with && or you have to pair package.json with some other strategy for organizing commands. That may end up being a `scripts` directory with a file per script, it could be that you use a framework that bakes all the complexity into shorter wrapper commands (a la vite), or you could use something like Just to sequence them.
It's not perfect but it gets the job done. Sometimes it's ugly but in the end it forces me to break commands down into subcommands, which can increase clarity.
But sometimes you do have to write a collection of script files for complex multi-line scripts. I assumed I would still do that with just? Is the idea for these to all live in a single just file? I like having larger programs separated as individual files. All good points, though. I like make too, but it can definitely be needlessly verbose. My main thing would be not wanting to need users to have another binary installed locally. Can just live in my repository?
> `just-install` will install a local, platform-specific binary as part of the npm install command. This removes the need for every developer to install just independently using one of the processes mentioned above.
After digging into it more, it seems `just` requires `sh` to function, adding friction for Windows developers. I don't develop on Windows but that friction does reduce portability.
There's ambiguity in which package to use on Node. Both `just-install` and `rust-just` are recommended in the docs, with no disambiguation. `just-install` is maintained by another party and adds an attack surface I'm not sure I'm comfortable with given my current needs. The other recommended package, `rust-just` is also maintained by another party, has bad SEO and recommends being installed as a global dependency.
All of this just adds too much friction if one is already using a package.json. My monorepos frequently contain codebases in multiple languages and so far a package.json and workspaces workflow has met my needs.
I appreciate everyone for answering my questions and giving advice.
Actually, it was package.json scripts that pushed me toward just! I wanted that stuff in non-node projects (python/ruby/~), I wanted more complicated scripts, I wanted more logging output, I wanted comments... For whatever reason every project seems to have 10-20 little commands (often interdependent) and just makes that a breeze.
"yarn/npm install" has an artifact in the project directory, so here's one point for "make" instead of "just":
start: node_modules
	yarn run start

test: node_modules
	yarn run test

node_modules: package.json yarn.lock
	yarn install
	touch $@
You can clone the repo and "make test", and it'll include "yarn install" automatically - then on subsequent "make test", it'll skip it because "node_modules" is already up-to-date. And then include it again later if someone updated the packages. The "touch" is so the last-modified timestamp on "node_modules" is updated even if "yarn install" doesn't add/remove anything, so make knows it succeeded.
"yarn install" is usually pretty fast when it has nothing to do, so I can see why people may not bother and just have it run every time, but patterns like this can be used for quite a bit. This way heavier commands don't need to be run repeatedly and devs don't need to know all the individual commands to run in sequence.
package.json is specific to node projects; just can be used for anything. Why learn the quirks of something you can only use with a single programming language? I'm also a fan of the shebang recipes: https://just.systems/man/en/shebang-recipes.html
I place package.json files into non-node projects all the time just for some organizational benefits like workspaces and scripts. As a web-first engineer this doesn't particularly bother me. I'll check out shebang recipes, thanks!
Unlike Just which clearly states it is not a build system [1], Task can be told about expected files so tasks can be skipped to avoid unnecessary work [2]. So if your task is to build software, IMO make and the others like Task would be better.
If your tasks only care about the success code from a process, and/or are a Rust fan instead of Go, then Just should be fine. Otherwise, for specific use-cases like CI, you are likely already coding in a proprietary YAML/JSON/XML format.
We use Docker Compose for our dev environment and were trying to do something like (notice the extra dash dash for separating the arguments out):
task poetry -- add requests django
It was not working as we expected for some of the users due to the `--` separator (they kept forgetting it out of muscle memory), but the following does:
just poetry add requests django
under the hood it was just calling (the equivalent):
docker compose run --rm --build poetry poetry "$@"
Just arguments are more ergonomic.
This is how just does it:
poetry +command:
    docker compose run --rm --build poetry poetry {{command}}
Passing parameters kinda sucks; someone else made a comparison in another thread about named parameters and how easy it is to pass and define them in Just. Love Taskfile otherwise.
Personally I disagree, I think `--` is very intuitive.
Maybe it isn't super common knowledge, but `--` is in line with the POSIX argument parsing convention[0] and is used by many (most?) GNU/BSD tools and many other tools such as `kubectl`. This StackOverflow thread[1] also has some information about it.
I unironically like the YAML format. It's very readable, imho, and most people (at least in the web space) already know it. It's better than the way just does attributes and descriptions.
On the other hand, what irks me is how parameters are fiddly to pass along. You have to define environment variables, instead of just passing them directly in the call.
I'm surprised nobody mentioned Rake yet. Having the full capability of Ruby and whatever gem you want makes it a dream for these kind of tasks. Absolutely love it.
That's what I dropped in to say. I've used most of them, and I think Rake is my favorite.
Pretty much all of the others are shell command runners with a couple of extra bits bolted on. Well and good most of the time, but it's another language to learn, and you're mostly SOL if it doesn't support something you want to do nicely.
With Rake, you get the same basic ability to do pre-set shell commands as the others, a single one or a sequence. But you also have the full power of Ruby, a full-fledged programming language, if you want to do anything more complex.
I was looking for this comment, because rake is great. One big thing is it never felt good imposing Ruby on (say) a JS project (and I'm not sure of the current state of the macOS default ruby), so next time this comes up, I will be taking a look at just.
I've used make for years, even partially wrote my own make interpreter once, I hate it as much as anybody else.
But I don't feel confident investing in a new tool that hasn't seen widespread industry adoption.
I wish there was a 'better make' that tries to replace make the same way Zig wants to replace C, where they have great interop and make it easy to rewrite code into the new language.
"just" is not a better make, it is, well, just a command runner.
Make is designed, well, to make stuff, it is a build system. But now, it is showing its age as a build system, and other, more advanced systems have taken over, these are the "better make" [1]. But it turns out that make is flexible and can be used for other things, namely running commands, and it has been rather popular for this. Problem is, make is still a build system at its core, and it has some quirks that make it less than ideal as a simple command runner, notably the ".PHONY" target. Just is like make, but it is explicitly not a build system, which allows it to do away with most of these quirks.
So is it a "better make"? As a build system, no, it is intentionally a "worse make", but as "just a command runner", then it is indeed a "better make", and I am not aware of a similar project.
What tends to happen is that some new language comes out. Somebody decides they don't like the fact that make is actually more like Prolog than anything else. They don't like Prolog and just want to run some shell commands.
They then decide to demonstrate the productivity of their new language. They implement a build system in, and mostly for, that language.
People use it, a new language comes out and the cycle repeats.
Don't waste your time. Actually learn make and just be ok with the fact that it does look a lot like shell/bash, but it isn't.
Replacing make is like trying to replace the word "cool" in the English language. People have tried. It never succeeds
I started writing my tasks in mise (https://mise.jdx.dev/tasks/) instead of just, but I found that others didn’t want to install it. Something about mise being an all-in-one tool—combining asdf/direnv/virtualenv/global npm/task management—made installing it just for the task feature off-putting. At least that's my theory. So, I’m back to using just. I am happy that there isn't a ton of pushback on adding a justfile here and there. Maybe it’s the name—‘just’ feels lightweight and is known to be fast, so people are cool with it.
I'd be surprised if you weren't correct. Perhaps I could improve this a bit with the docs, but ultimately mise is complex and that will put people off no matter how good it is.
I think this is all fine though. I'm hard at work improving mise and will continue to do so for the foreseeable future. If someone is hesitant, I'd rather they wait a year until more kinks have been worked out, docs have been improved, feature gaps are closed, etc. I think this is especially true for tasks which only came out of experimental a few weeks ago.
Or people can just not use it. It's not like this is a business where I make more money when I have more DAU or anything. I just want to build a good tool for building's sake after all.
I'm starting to use `mise` for tooling management and task running on greenfield projects, myself. Anything you feel `just` does better with regards to running tasks?
The biggest advantage just has is that it's been around longer; mise tasks only came out of experimental like a month ago. mise tasks themselves are stable, but there are still experimental things and some portions that need to be used more—like Windows. That said, most of the stuff that needs polish are features just doesn't even have.
here are my unashamedly biased thoughts on why I like mise tasks compared to just:
* tool integration - this is the obvious benefit. If you run `mise run test` on CI or wherever it'll setup your toolchains and wire them up automatically
* parallel tasks - I saw this as table-stakes so it's been there since the very beginning
* flags+options - mise tasks are integrated with usage (https://usage.jdx.dev) which provides _very_ comprehensive CLI argument support. We're talking way more than things like flags and default options, as an example, you can even have mise tasks give you custom completion support so you can complete `mise run server --app=<tab><tab>`
* toml syntax - it's more verbose, but I think it's more obvious and easier to learn
* file sources/outputs - I suspect just doesn't want to implement this because it would make it more of a "build tool" and less of a "task runner". I chose to despite having the same position that mise tasks is also not a "build tool". Still, I think even in the world of running tasks you often want to only run things if certain files changed.
* `mise watch` - this is mostly just a simple wrapper around `watchexec -- mise run ...` for now, but it's an area of the codebase I plan to focus on sometime in the next few months. Still, even as a simple wrapper it's a nice convenience.
* "file tasks" - in mise you can define tasks just by being executable and in a directory like "./tasks". This is great for complex scripts since you don't also need to add them to mise.toml.
I have not used just very much, but I did go through the docs and there are a handful of things I like that it definitely does better:
* help customization - it looks like you can split tasks into separate sections which is nice, I don't have that
* invoking multiple recipes - I don't love how this is done in mise with `mise run task1 ::: task2` but I _also_ wanted to make it easy to pass arguments. At least for now, the ":::" won out in the design—but I don't like it. Probably too late to change it anyhow.
* [no-cd] flag - both just and mise run tasks in the directory they're defined, but I prefer how this is overridden in just vs mise.
* expression/substitutions - mise uses tera for templating, which is very flexible, but it requires a bit more verbosity. I like that in just you can just use backticks or reference vars with minimal syntax. Same thing with things like joining paths and coalescing. I have all of this, but the syntax is definitely more verbose in mise. Arguably though, mise's verbosity might be easier to read since it's more obvious what you're saying.
* confirmation - I love that in just you can just add `[confirm]` to get a confirmation dialog for the task. I'm sure we'll get around to this at some point, mise already has confirmation dialogs so it shouldn't be hard to add. The tricky part will be getting it to work right when running a bunch of stuff in parallel.
* task output - I haven't used just that much so I can't actually say it's "better", but having more control over how tasks are output is definitely a weak part of mise right now; it needs more functionality like just's ability to add/remove "@" to echo out the command that's running
I want to call out one very silly thing that, from reading the GitHub issues, sounds crazy. It sounds like both just and taskfile have the same behavior with `.env` files: variables defined in .env are ignored if they're already defined. I don't think anyone would want that—nobody has asked for mise to behave that way—and it doesn't appear either tool even allows you to change it!
Hi Jeff, thanks for creating mise! I am gearing up to migrate from asdf, very excited to check it out. Not totally sure we can adopt mise for tasks (we use just) but willing to give it a whirl. Putting run commands into toml sounds like it might be challenging, I wonder if there's syntactic sugar that would help.
most people just put simple tasks into toml (like `npm run test` or something), for anything complex, file tasks are much better: https://mise.jdx.dev/tasks/file-tasks.html
file tasks are basically just a directory of bash (or whatever shebang) scripts, but special comments give them extra functionality like dependencies or defining flags/options.
In the Python ecosystem there has been quite a bit of debate around workflow tools (Hatch, PDM, flit, Poetry etc.) I tried out Poetry starting in probably 2018 or so and eventually realized how much I hated it: it was lagging behind on standards and the install/uninstall process was a moving target. But more than that, it... was an all-in-one tool, with its own definition of "all", almost all of which was irrelevant to me and which I was simply ignoring. I never ended up trying other options because I realized I would still have that same experience - although their various definitions of "all" are not identical.
I very much see the need in the Python ecosystem for a fully integrated user-level tool - something that sets up environments and allows people to use dependencies in their own one-off scripts. Pipx is almost there, if you build some wrappers around it to deal with the fact that it artificially refuses to "install" what it considers "libraries" (i.e. packages that don't define any explicit entry points). But it still is a bit rough around the edges, and more importantly is still based on Pip which has many faults. (I don't blame the design of `venv` for very much if anything, even if it's not quite how I would do things if we could start completely fresh; but it could use some nicer wrappers.)
But for development I've always thought it makes more sense to take a "Unix way" approach. Developers need the user tool for the basic mechanics of setting up packages, and then an actual toolchain built around that, with the chance to select individual tools according to their needs and preferences.
From my perspective, Just would be more useful if it had some ability to skip steps where the input hasn't changed.
Like maybe a Justfile's recipe could produce a "<task>.complete" kind of file, and could decide whether to re-run the task based on whether the task's inputs (or its dependencies' inputs) have changed since that file was written.
Also if that sounds like a useful feature, consider using Make.
> Also if that sounds like a useful feature, consider using Make
Just not having that feature is _the_ defining difference in design between the two. If just were to ever add that it would likely kill its appeal. Not having that is what keeps the logic of a just invocation simple and what keeps Justfiles from devolving into the mess that Makefiles tend to become, with entangled build targets.
Make solves that problem. The problem that I have is that all of the tools I use day to day do their own dependency tracking and re-run tracking. Say I want to deploy a dotnet app to a k8s cluster - none of helm, docker, dotnet build, dotnet test expose their dependency tracking in a way that is straightforward to use with make. The most straightforward way to do it is to just run the commands anyway, IME.
One reward you get for allowing yourself to become brainwashed by Bazel is you get a pretty nice task runner in every project that you've brought into the fold.
I've been using babashka tasks [0] for a while. It has a nice api to run shell commands but it's all clojure based.
Am I missing out on justfiles? It seems to be quite popular among rust/nix circles but I'm afraid it's going to be yet another instance of Greenspun's tenth rule.
I love `just` and have adopted it universally in all my projects. For what it does, it gets the job done fantastically.
That being said, I found myself needing a tool that builds a DAG of dependent tasks and automatically figures out what can be run in parallel and what cannot -- obviously you have to spell out all tasks and who depends on what first.
Anybody knows such a tool?
EDIT: Apparently people did not get the hint that I believe `make` is an over-engineered pile of metric tons of legacy and I'll sooner slash my wrists than to learn it in full.
I did mean something ergonomic and easy to read and write. And no I'll never view `make` as such. I tried. Many times. I have better things to do in my life than to memorize exceptions of the exceptions.
Can you explain that one a little bit more to me, please?
I don't get the first two lines of your example well. They seem to show the dependency, but which one is the default task, and how do you ask for a task to be run?
You write the file and ALL steps are run in topological order so that a job never runs until its dependencies have run. i.e., in a tool I'll have `build.frof` as a separate frof file from `download-dependencies.frof`, perhaps. (If your preference is that those belong in the same file I'd be down to have PRs that support that! Should be very easy, I'm happy to try implementing this if there's interest.)
So for a file with those contents called `mygraph.frof`, you can (after installing) run `frof mygraph.frof` to kick off the jobs in the current shell (inheriting env vars etc).
here they'll probably be executed simultaneously, since they both have zero dependencies and the machine can run multiple jobs at the same time. (can be disabled with `--max_jobs=1` or `-p=1`).
Here's another illustrative example:
A -> B
B -> C
Z -> C
In this situation, frof will schedule `Z` to run in a parallel thread ASAP, so it will likely run alongside A... and if Z takes longer to run than A, Z will continue running when A stops and B starts. But C will wait for all other jobs to finish before it can schedule.
Nice, thanks a lot. Unfortunately I am quite swamped recently so I definitely cannot help you with feature requests and testing, but I have bookmarked frof and absolutely will be giving it a try.
Just one thing I would dislike... Python. How easy is it to run frof without having to fiddle with venvs and such?
Besides Make, I guess Bazel kind of fits the bill? It was very "Googly" last time I checked it out, but I think that was a decade ago and right when it was released, so it might be more fitting for not-Google nowadays.
Imagine that instead of a make target listing its dependencies, you had to pull them out into a separately maintained BUILD file.
That’s not quite true, but it feels like it sometimes. Bazel is nice about seeing exactly what you need to rebuild if you touch a file. It’s very, very complex though.
In code terms, think of it as a framework that you have to embed your project into, not a Makefile or such that you’d drop into a project. That doesn’t make it bad and it has its niceties. You’ve gotta be prepared to pay for them with sweat equity.
Not a bad idea, thanks. I did this a few times as well but when I analyzed the ROI I figured that just writing a simple-ish Golang program is just less confusing and more consistent in its totality when you ask yourself "do I really have to use Make and Python and, and, and...?".
So yeah, thanks for bringing visibility to this pretty decent compromising approach. It worked for me for a while but eventually I just went all-in to either use `just`, some _very_ short bash/zsh scripts, or jump all the way to Golang.
They are right, though, aren't they? I mean .. if you want something "modern", go ahead and learn Bazel. Make is quite a bit easier to learn, I'd say, and you don't need much (also no shell/bash) to express your DAG dependencies.
I'll agree on the DAG bit but I'll never use `make` again and I tried for no less than 10 years (on and off, not 24/7, otherwise I would have learned it long ago indeed).
I stay away from `make` almost religiously. Its complications _always_ find a way to creep into your file one day. Always. :(
So while they are technically correct and it's my fault for not saying I don't want `make` in the comment up-thread, I don't think my comment deserved the down arrows but oh well, I'll live through it.
I disagree, fundamentally, that using a hyperbolic metaphor "trivializes" the underlying concept used.
Regardless, I have found over the last several years that attempting to scold people for not measuring up to your standards - ones they never signed up to uphold - without a serious attempt to justify them is strongly counterproductive.
Especially when it's couched in language that will readily be interpreted as snarky. One of the reasons people dislike the phrase "it's not my job to educate you" so much is that it takes for granted the presumption that the underlying idea is a subject of education, i.e., an objective fact rather than an article of someone's worldview. Prefacing a claim with "FYI" (i.e., "for your information") has the same issue. Taste, in the metaphorical sense used here, is definitionally not objective, and thus it is not possible in principle to "inform" others of what is or is not in good taste - only of some other community's standards for taste.
Can anyone give me an example where I can actually replace my bash scripts with just? I don't see a point in using it if I can simply write a bash script (at least their examples are very easily replaceable)
I think if you're ok with "just writing a bash script" the tool is not for you.
Everything I do with Just can pretty easily be done with Bash. But, doing it with bash is yucky. Doing it with Just is comfortable. If you don't have the yuck factor then I'd say you can just stick with Bash!
Many reasons are already mentioned in other comments. I'd add the following nice-to-have. Sometimes you'd find it easier/preferable to run some scripts with some other shell.
You can set the shell for some commands, for example:
```
set shell := ["python3", "-c"]

# I can run python!
[no-cd]
foo-bar:
    @import sys; assert sys.version_info[:2] >= (3, 7), "This recipe requires at least Python 3.7. Please link \"python3\" to Python 3.7 or higher and try again."
```
And the API for your commands stays consistent for a very little effort. Of course you can achieve all this with just bash scripts but I find it faster and easier to provide a good devex this way.
I used this until AI became good enough. Now, for most purposes, I can just declare what I want done/executed and get perfect bash for it. I have a relatively complex Makefile that builds GraphQL schemas and sets them up. It would have been a no-go given how weird bash syntax is; but now I can get it generated and working on pretty much the first try.
There is lots of bash around and it's a very simple language, so AI models are pretty good at it.
My favorite entry in this space is Argc. I like it because the only “new syntax” it introduces is metadata comments, and the rest is pure bash. The maintainer is also best-in-class in terms of responsiveness.
IMHO you'd be right to be sceptical because for me, it is only a slightly more ergonomic way to organise and run shell scripts. It's difficult to make the case that it's much better, but I found it interesting how "just being a bit nicer" for a common activity can be a really valuable quality-of-life improvement.
- easier - core benefit is making it nicer to implement multiple commands with arguments without inventing something equivalent in shell
- convenient - with "fallback" just will search up the folder tree to find the just command so I don't need to be in the right folder. I have justfiles at multiple levels in a project hierarchy and my cwd works as context to pick the right command
- polyglot - can use different languages as needed
- predictable - it's so nice when I return to a project and I have recipes for setting up my env, various types of build and test. The consequence of being a little more ergonomic means I capture more useful command lines that, for whatever reason, I would not have made into shell scripts because of the added friction.
I don't know if this fixes the issues but some big problems with shell:
* Very bad UX on Windows
* Quoting is a disaster. I mean, the whole language is a disaster but quoting is an especially big wart. Make also has this issue; you literally can't use it with files containing things like spaces or colons.
* Shell scripts tend to start simple and reasonable and grow seamlessly into something that absolutely should not be a shell script.
My favourite solution is Deno. Zero faff to set up, easy to install, supports third party dependencies without metadata files or messing with environments, and you get to use a real programming language. Easily the best scripting tool for infrastructure tasks at the moment.
Unfortunately I'm forced to use Python at work which is nowhere near as good as Deno, but still beats the pants off shell scripting.
It surprises me a bit that, of all things that are a mess in shell, your comment mentions quoting. It’s one of the few things that absolutely make sense for me in shell scripting. Do you have an example for me where quoting feels messy to you?
> My favourite solution is Deno. Easily the best scripting tool for infrastructure tasks at the moment.
I don’t think there’s an objectively best technology for everyone.
For example, how long-term are your infrastructure tasks? What are the chances your scripts are still going to work in 2 years? 5 years? 10 years?
Suppose you’re in a large enterprise embedded project which needs to work for 10 years or more, and the project uses shell scripts for infra tasks. Would you recommend to migrate those to Deno or Python?
Turn on shellcheck and you'll realise that nobody could get it right without tool assistance. In programming languages with "standard" quoting (Python, JavaScript, Rust, Go, C, etc.) you don't even really need to think about it.
> What are the chances your scripts are still going to work in 2 years? 5 years? 10 years?
100% because I'll maintain them.
> Suppose you’re in a large enterprise embedded project which needs to work for 10 years or more, and the project uses shell scripts for infra tasks. Would you recommend to migrate those to Deno or Python?
Absolutely yes. In fact the longer you expect it to last the stronger my recommendation would be. A shell script with 10 years of tech debt is a scary prospect.
What happens if you leave the project? Are your teammates going to maintain the scripts? What happens when one day, the Deno package gets updated and the script blows up? What if Deno becomes proprietary and closed source?
> A shell script with 10 years of tech debt is a scary prospect.
Several well-known executables on some Linux distros are really 20-year-old shell scripts. I haven't really seen them accumulate much tech debt.
>of all things that are a mess in shell, your comment mentions quoting. It’s one of the few things that absolutely make sense for me in shell scripting. Do you have an example for me where quoting feels messy to you?
1. The distinction that shell languages make between single-quotes and double quotes is unintuitive and not seen in other languages - wherein either they are interchangeable (like Python) or denote a completely separate type (like C and several others influenced by it).
2. I can't backslash-escape a single-quote within a single-quoted string. Single-quoting disables backslash-escapes that were already working outside of strings. I've lost count of the times I tried to input a command and was surprised to get a > continuation prompt because the shell thought I was still inside quotes, and then not had any good idea of how to fix my error on the previous line.
3. I can use backslash escapes in a double-quoted string, but then I'm also stuck with variable interpolations. It's difficult to produce a string that contains a literal double quote, literal dollar sign, literal at sign and literal double quote consecutively. Yes, by itself I can wrap that sequence in single quotes, but that doesn't generalize to contexts where I need more layers of quoting.
4. Really nothing generalizes very well to when you need multiple layers of quoting.
5. Not directly an issue with quoting, but there's implicit concatenation between quoted and non-quoted tokens if there's no space between the quote and the other part. This leads to many situations where you think you've gotten it right but you haven't, and don't notice until you either try to iterate on your script or carefully examine the output.
6. But you have to rely on that confusing behaviour if you need a single-quoted string that contains a literal single quote.
It's taken me quite a bit of practice to become able to do anything moderately complex, and I still have to check my notes sometimes. But really the underlying problem is that writing these things creates a demand to have some kind of internal structure within the string, so that parts of it can be further processed. It would be far nicer, for a start, if "interpolate values into the string" were an explicit operator rather than a magical property of double-quoted strings. But the main reason I end up using Python to orchestrate command-line tasks is just so I can have actual tuples or lists of strings and manipulate them on that level instead of at a textual level.
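To make point 6 concrete, the usual workaround leans on exactly that concatenation behaviour:

```
# close the single-quoted string, add an escaped quote, reopen it: 'it' + \' + 's fine'
echo 'it'\''s fine'    # prints: it's fine

# or switch quoting styles for the whole word
echo "it's fine"       # also prints: it's fine
```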
When this tool becomes as available out of the box as POSIX sh (i.e. practically everywhere, including embedded systems and containers), then this reversed argument will make some sense. I'm willing to bet anything that POSIX sh will still be with us 50+ years from now, and 'just' will be long forgotten by then. You really should have a stronger argument for introducing another dependency into your build process (and onto your developers) than "it has a slightly simpler syntax compared to the industry standard".
It's great that POSIX sh is available everywhere except where it isn't (Windows).
In all SW teams I've been in except one, sh was available, and people preferred writing things in something else (usually Python/Perl). I have had an order of magnitude more success convincing teammates to use just than convincing them to use sh.
It may be ubiquitous, but it's useless if you can't convince non-shell gurus to use it.
It's not "slightly simpler", it's massively simpler. Shell scripts are pretty much the worst syntax in existence (barring esolangs that go out of their way to be weird).
Fewer tools to manage. It seems like this could also be replaced by some aliases in a .bashrc file.
I don't like adding extra dependencies and complicating things if they aren't adding significant benefit. What am I missing here? It seems like an alias with extra steps.
I use Invoke-Build[1] everywhere and I highly recommend it. It's cross-platform, uses PowerShell so we have serious programming language in the background and is extremely simple yet powerful: dependencies, integrated help, good defaults for error handling and starting directory, vs code support, DOT charts of task dependencies, incremental task, persistent builds, parallel stuff etc. See example usage here [2]
I use it as a somewhat more sane way of collecting my repetitive, project specific commands, without having to rely on shell history.
I'll just plop my project-specific workflows (series of shell commands) into a Justfile (that I don't commit, it's just for me). That allows me to be more rigorous and structured with how I'm iterating on a project.
It has syntax and semantics that are sufficiently saner than make, so I don't need to know a lot to be productive.
If I come back to a project after a couple weeks, I don't need to spelunk shell history. Just --list is enough to get back up to speed with how I was iterating.
Just (pun intended) a personal plug: I always liked Make's ease of use and the declarative GH Actions philosophy. I also like to have the same workflows locally and in my remote CI, so I recently wrote a task runner with (IMHO) the ease of use of Make and a GH Actions-like philosophy. It still lacks good docs, but I use it every day on my projects and it works like a charm.
I find it more powerful and, from a certain point on, easier to create the tooling using the project's programming language. Every dev should be familiar with that language and ecosystem. E.g. for a project that had several tools - Rust (server), .NET and Node (CLI tools) and Svelte (frontend) - I wrote all operational tools in TypeScript and ran them using Deno. Very clean and powerful (typesafe, composable, Deno std lib). You can add all kinds of stuff like timings, logging, checks, whatever ...
Seems that I'm the only one who opened the website and didn't know what was going on. It needs at least two sentences explaining what "just" is; otherwise it reads like "you'll get it if you already know", and that isn't an inviting page.
Justfiles are really awesome for repos where you have to use a bunch of complex, long to type CLI integrations. Especially if you’re using Deno scripts that all have different permission flags…
I'm also using a global justfile (`-g`) [1] to serve as a convenient location to aggregate any convenience functions, as well as call out to any standalone scripts as necessary.
You can also 'convert' all recipes to aliases so you get the best of both worlds, the ability to call with `just -g foo` or `foo`, from anywhere.
The docs example [2] uses a `user` justfile, but the principle is the same for global.
for recipe in `just --justfile ~/.user.justfile --summary`; do
alias $recipe="just --justfile ~/.user.justfile --working-directory . $recipe"
done
Most recently I've started using `fzf` and `bat` to allow interactive selection of recipes with syntax highlighted previews:
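Roughly like this (a reconstruction on my part, so the exact flags and the `_choose` recipe body may differ from the real setup):

```
# interactive picker: list every recipe from the global justfile,
# preview its source with bat, then run the selection
_choose:
    #!/usr/bin/env bash
    recipe=$(just -g --summary | tr ' ' '\n' | \
        fzf --preview 'just -g --show {} | bat --language=sh --color=always --plain')
    if [ -n "$recipe" ]; then just -g "$recipe"; fi
```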
Now with a global `alias ji="just -g _choose"` I can interactively choose a recipe if I need a reminder of what I've set up.
This was inspired by the native `--choose` flag which does something similar, but by using `--summary` here, all recipes, including those that take arguments *, are listed, as well as any nested modules.
And because you can use any shebang, you can also write little python scripts to run with `uv`, including those with dependencies [5] declared in the shebang:
# list Cloudflare accounts
accounts:
    #!/usr/bin/env -S uv run --script --with cloudflare --python 3.13
    from cloudflare import Cloudflare
    client = Cloudflare()
    accounts = client.accounts.list()
    print(accounts)
* interactively selected recipes that take arguments won't work by directly passing to `xargs` here, but in some cases where I do want that flexibility I just add a condition in the recipe to prompt for input with `gum input` [4]. This is a belt-and-braces approach and only used where necessary, as the `fzf` preview will have made it clear that a recipe takes arguments.
[positional-arguments]
foo $bar="":
    #!/usr/bin/env bash
    if [ -z "$bar" ]; then
      bar=$(gum input --placeholder "bar")
    fi
    echo "looking up $bar"
I think you'd like what I've done with mise. You can have tasks in your global config (~/.config/mise/config.toml) which by default are shown no matter where you are. `mise run` will show a selector by default of all tasks available, so no need to manually setup fzf. Shebangs work the same. Commonly, mise users would also put "uv" into their config so other users don't need to set that up separately from mise itself.
Interactive inputs are something I'm planning on shipping relatively soon. It would not be hard to do—I've got the ui components to do it and the data model supports it.
very neat but already hitting cases where it doesn't play nice with pwsh scripts, even using the shebang. Back to using a dir full of .ps1 files I guess lol
Uh, isn't this just Make? I'd rather people run `make this` and `make that` than install a new tool to do the same damned thing. Sometimes software is just "done" and doesn't need to be reinvented.
In case you want to run Justfiles in places where you can't install the Just binary (for whatever reason), I wrote a compiler that transforms Justfiles into portable shell scripts that have byte-for-byte identical output in most cases.
https://github.com/jstrieb/just.sh
Previous HN discussion: https://news.ycombinator.com/item?id=38772039
Fantastic! This solves my big fear around getting used to such a tool.
My work primarily involves 'nix boxes that have to be very locked down and will be left in a place basically untouched for 20 years after I finish setting them up. Getting a reliable new binary of any sort on them is quite difficult, not least because we need to plan for things other far future people might be able to discover for troubleshooting down the line.
Could you tell more about these 'nix boxes? Sounds very interesting.
Why would you care now? In 20 years somebody else would be paid to fix it.
We love just and are using it in all projects now. So great. Our typical justfile has around ~20 rules. Here is an example rule (and helper) to illustrate how we use it in ci:
This example is a bit contrived, more typically we would have a rule like "just lint" and you might call it from "just ci".
One of the best features is that just always runs from the project root directory. Little things like that add up after you've spent years wrestling with bash scripts.
The banner readability could be slightly improved using constants[1] (and prefixing with _ to hide it from list output).
[1]: https://just.systems/man/en/constants.html
> Little things like that add up after you've spent years wrestling with bash scripts.
Can you please explain what you mean here? I looked at the GitHub examples and wondered why this would be preferable to Bash aliases and functions. I must be missing something.
Bash has a thousand pitfalls, and as you accumulate layers of scripting they start compounding. Little things like “what the hell directory is this command actually running from”, parsing input parameters, quoting rules, exit statuses, pipelining, etc.
Tools like just provide a very consistent and simple base to start with, and you can always still call a separate script, or drop directly into inline shell scripting.
So it's not a fundamentally different use-case, it's just an admission that shell scripts suck at what they do?
Of course. Is that news to you? Not a snark, I am genuinely surprised, assuming that you asked seriously.
I moved to ZSH some years ago but even that is not good enough. I thought of using Fish at one point but just said "frak this" and started writing Golang for anything that's more than 20-30 lines of bash/zsh scripting. Or requires their weird list / array syntaxes for iterating over stuff. Can't ever remember that with a gun to my head.
Shell scripts can be used safely if you know how to. Have solid error handling, exit on error (set -e), write tests (BATS) and a few other things to make sure it doesn't break. You are not gonna get the same performance with just or whatever new tooling there is just to run commands on your system
> Shell scripts can be used safely if you know how to
That's the contention point though -- I learned and relearned shell scripting no less than 7 separate times and it always slips away because it's not something I practice every day. Ultimately I concluded it's not worth it because you mostly have to memorize super weird syntax and strange exceptions to rules. At one point I was just like "screw this" and went for Golang.
> You are not gonna get the same performance with just or whatever new tooling there is just to run commands on your system
That's very debatable, I'd bet my Go programs process various things either faster or with the same speed. But even if they are slower that's often not important because most scripts I ever wrote were throwaway. Those that stuck around I have polished and re-polished, including with the measures you enumerated.
> if you know how to
That’s a big if. I worked on a shell based tool for a couple years and eventually accumulated the know-how and toolset to write reliable code; but nobody else could contribute as the learning curve was too great.
I switched to Ruby for all new tools and never looked back. Performance is rarely a concern in this territory, and you can always offload heavy work to another process.
Performance in your shell script is a new one. Can you cite a real world example where that would ever matter? My shell scripts just initiate build/export/deploy programs. They take milliseconds to run and then the programs they start take minutes. The perf of those milliseconds couldn't be more negligible.
For me, the niceties are in the built in functions[0]. Commands to manipulate paths(!!), get cpu counts, mess with environment variables, string processing, hashing, etc. All the gyrations a more sophisticated script is going to eventually require. Instead of having to hack on it in shell, you get cross-platform utilities which are not going to blow up because of something as wild as a space or quote mark.
[0] https://just.systems/man/en/functions.html
My favorite feature is the ability to decorate the recipe name with the OS and then write relevant code for each recipe that does the same in each OS.
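Something along these lines (a sketch, not the poster's actual justfile):

```
set windows-shell := ["powershell.exe", "-NoLogo", "-Command"]

[windows]
clean:
    Remove-Item -Recurse -Force build

[unix]
clean:
    rm -rf build
```

`just clean` then picks whichever variant matches the current OS.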
This best explains what I must be missing. Saying, “shell scripts are bad,” doesn’t tell me anything. Thanks for giving me a concept to explore. I’ll have another look with this in mind.
Nah. This looks like nothing more than a wrapper for bash scripts. I can easily write helper scripts that do exactly what you described above. I don't understand the need for a whole different tool when I can run scripts natively on my machine(s).
Bash scripts need wrappers because they suck so hard.
You can do it all in bash, yes, but it's very conceivable this "just" thing actually provides real value.
I love the look of `just` and have been meaning to try it out, but this feels like one of those examples where Make's dependency management shines—it lets you specify that many of these commands only need to run when particular files change:
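For instance, a hedged sketch with hypothetical lint/test targets and a stamp file, so lint only re-runs when sources change (the eslint/npm commands and paths are placeholders):

```
.PHONY: ci
SOURCES := $(wildcard src/*.ts)

.lint-stamp: $(SOURCES)
	npx eslint src
	touch $@

ci: .lint-stamp
	npm test
```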
And as time goes on, I always end up wanting to parallelize the commands that can be parallelized (citest, lint, eslint), so I'll turn `make ci` (or `just ci`) into its own little script.
I've been using just at work and in personal projects for almost a year, and I like it a lot. In particular, its self documentation with `just --list` makes onboarding new folks easy. It's also just a nicer syntax than make.
Agreed. Is it that different than Make with `.PHONY` targets? Yes — it is Designed To Do Exactly What It Does, And It Does It Well. That counts for something in my book.
All my Justfiles start with this prelude to enable positional arguments, and a "default" target to print all the possible commands when you run `just` with no target name:
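Presumably something like this (my approximation of such a prelude):

```
set positional-arguments

# bare `just` runs the first recipe, so make that a listing of everything available
default:
    @just --list
```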
in mise you wouldn't need that preamble. `set positional-arguments` is just how it behaves normally and `mise run` doesn't just show available commands—it's also a selector UI
That's nice, but I don't have any interest in switching because Just does everything I want. I legitimately have zero feature requests regarding Just.
Maybe worth reminding the self-documenting Makefile [0] discussed here.
[0] https://news.ycombinator.com/item?id=30137254
I've been using this for years; love it
I don't have my work laptop to hand to compare, but I usually run "just" to get a list of commands and what they do, rather than "just --list". Hope that saves you 7 key presses going forwards.
Running `just` will invoke the first recipe, so you need to add one that invokes `just --list` for this to work — see https://just.systems/man/en/listing-available-recipes.html and my sibling comment.
That seems like the most useless pattern to take from make, especially when you name your tool ”just”.
Just what?
> Just what?
"Oh... come on! Just... <waving hands angrily>"
Pretty clear to me :).
The same applies to make without arguments though, make what? Grammar / word meaning aside, unknown / missing commands printing the help file or suggestions is a good pattern.
I think it's less grammatically ambiguous with make. It implicitly means "make <the project>". For most projects that's pretty well defined (and also grammatically correct since 'make' is a verb and 'just' is not).
But even so it would have been a better design for `make` to list top level targets or something.
Just execute.
Hmm. Maybe the dev that set it up made the first recipe run `just --list`
Yeah, I've been adding `just help` as an alias for `just --list` and making it the first recipe for this reason.
Not as much as 7, you can just type `just -l`.
Totally agree. It constrains the chaos in my projects, and it's easy to refactor bits into more sustainable CI/CD, if or when that is ever needed.
The self-documenting aspect is what puts it above a folder of shell scripts for me.
This is one of the most important pieces of software in my development stack that "just" gets out of the way and does what it's supposed to do. Also has excellent Windows[1] support so I can take it everywhere!
[1]: https://github.com/LGUG2Z/komorebi/blob/master/justfile example justfile on my biggest and most active Windows project- might not seem like a lot but this has probably cumulatively saved me months of time
[flagged]
> I get that your project is Windows-only, but many projects aren't.
Nit: At this point you're better off starting a separate comment thread since you yourself already know that what you are about to talk about is not what my comment is talking about.
> Also has excellent Windows[1] support so I can take it everywhere!
Nit: You mentioned it can be used "everywhere". That would be a useful feature! But while it's kinda true, there's some quite big limitations IMO
The shell can be configured per OS. So, Windows can be set to use PowerShell and Linuxy systems will use sh.
From the docs
Few things work seamlessly across platforms, and that does not seem like a huge burden.
> Wait, by "has excellent Windows support" you mean you have to set it to use Powershell or hope `sh` is installed on
I don't get what the problem is here? Do you protest against shebangs too? Why does a build script for a Windows only app need to use sh instead of powershell? I think you're interpreting "excellent windows support" to mean cross platform, and that's not what it means.
> So not only do you need just installed, which is yet another dependency,
Yeah, if you want to use some software, your computer needs that software. That's not a dependency. So we're talking zero dependencies, or one if you absolutely need sh.
To be fair it is another dependency for the project that you are using just with. Its probably not software that you use for its own sake.
You can use the usual cmd (I do). You're not limited to Powershell. Also, you do understand that if a tool has first class support for Windows, that does mean it prioritizes Windows tools, right? Imagine I made a command runner, and said it has "excellent Linux support", and then someone comes along and complains that you have to install Powershell on Linux to use Windows recipes.
You can have Windows only recipes and Linux only recipes.
Furthermore, if you have bash installed on Windows (e.g. via git bash), you can put a shebang in your recipes to use bash.
We develop in Windows and deploy in Linux. Most of our recipes work in both OS's - either we use bash or Python for the recipe. The few that don't - we just mark as Windows only or Linux only so they're not available in the wrong OS.
> So not only do you need just installed, which is yet another dependency,
You do realize that Windows by default comes with almost no development tools, right? So yes, you do actually need to install things to get work done. The horror.
I'll also note that while you complain about just, you provide no alternative.
Weirdest rant ever.
You can keep your commands simple enough so that they can be executed by both `sh` and `cmd.exe`. If you need anything more complex than invoking other programs, `&&`, `|` and `>`, it's time to rewrite your build script in a real programming language anyway.
I'm not a fan. It works well for what it is, but what it is is an additional language to know in a place where you probably already have one lying around.
Also, like make, it encourages an imperative mode for project tooling and I think we should distance ourselves from that a bit further. It's nice that everybody is on the same page about which verbs are available, but those verbs likely change filesystem state among your .gitignored files. And since they're starting from an unknown state you end up with each Just command prefixed by other commands which prepare to run the actual command, so now you're sort of freestyling a package manager around each command in an ad-hoc way when maybe it's automation that deserves to be handled without depending on unspecified state in the project dir.
None of this is Just's fault. This is people using Just poorly. But I do think it (and make) sort of place you on a slippery slope. Wherever possible I'd prefer to reframe whatever needs doing as a build and use something like nix which is less friendly up front, but less surprising later on because you know you're not depending on the outputs of some command that was run once and forgotten about--suddenly a problem because the new guy can't get it to work and nobody else remembers why it works on theirs.
I find declarative build systems end up pretty frustrating in practice. What I want from a build often isn't the artifacts, but the side effects of producing the artifacts like build output or compilation time. You get this "for free" from an imperative tool, but it represents a significant feature in a declarative system, one that's usually implemented badly if it's implemented at all. The problem gets worse the smarter your tool is.
Logs emitted during the build, or test results, or metrics captured during the build (such as how long it took)... these can all themselves be build outputs.
I've got one where "deploying" means updating a few version strings and image references in a different repo. The "build" clones that repo and makes the changes in the necessary spots and makes a commit. Yes, the side effect I want is that the commit gets pushed--which requires my ssh key which is not a build input--but I sort of prefer doing that bit by hand.
The developer time required to learn and properly use nix makes it unattractive to most teams. The benefits don't outweigh the costs of adoption.
Instead of debugging code, the team would have to spend significant time maintaining the build system for the build system's sake. Don't get me wrong, I want something nix-like in my toolbox. I want to love nix. But I wouldn't dare argue my team into committing to the world of pain that comes with it.
There's a good reason that nix didn't see wide adoption in the industry.
In my experience, Nix is very high leverage. My company has ~5 nix gurus, but Nix is invisibly used by hundreds of engineers. Most engineers know we use Nix and that's about it.
Similar experience for me. In my company adopting nix paid off in weeks with no prior experience. Very happy with it almost 10 years later and at much larger scale. The difference between things working reliably or not is hard to overstate.
I tried using Nix but stopped for two very practical reasons: it's very slow and it's extremely disk heavy. Install a couple of things and suddenly your nix store weighs in at 100 GB.
Use only stable Nix. Override nixpkgs for the inputs you add. After the first build, use the offline and no-substitute flags when reusing the environment, and alias that command. Use nix-direnv.
Read up on the store/GC settings and set them up to work for you. Do not use nix-env or nix profile.
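As a rough sketch of the "alias the reuse command" bit (exact flag availability may vary by Nix version):

    # --offline disables substituters and treats previously fetched paths as up to date,
    # so re-entering an already-built dev shell doesn't touch the network
    alias nixdev='nix develop --offline'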
Interesting. For me it's generally much faster than other package managers. The evaluation takes some time, but copying derivations from a cache to the Nix store is so much faster than traditional package management.
I wonder if you somehow ended up eval'ing many versions of nixpkgs?
> your nix store weighs in at 100 GB
¯\_(ツ)_/¯ outside very constrained devices, who cares? I just checked my NixOS dev VM that I have used for months now and cannot remember when I last garbage collected. It's 188GiB, but I have many different versions of CUDA, Torch, etc. (the project I'm currently working on entails building some kernels for many different build configurations), and I run nixos-unstable, where a lot of stuff changes, so generations are pretty unique.
A 2TB NVMe SSD is just over 100 Euro. Caring about 100GiB seems to be optimizing for the wrong things.
I completely agree on embedded machines though. Just deploy it by copying the system closure, garbage collecting anything but the previous closure for backup, it'll be pretty much the same size as any other Linux system.
> For me it's generally much faster than other package managers.
I don't know what kind of package manager you were using, but I've never seen an update take a good part of an hour before Nix.
> outside very constrained devices, who cares?
Seriously, are we going to shame people who can't afford to buy lots of storage?? My smaller laptop has only 250GB, but that's freaking plenty if I stick with apt. But I can barely run Nix on it.
> Seriously, are we going to shame people who can't afford to buy lots of storage??
It's not just storage, though - storage may be cheap but once your machine is at capacity (the physical space in laptops is an important constraint) you have to replace perfectly good hardware to accommodate absurdly space-hungry software (looking at you, Vivado).
Also, don't forget that not everyone has always-available, fast, reliable, cost-free internet. By rural standards my connection's very good, but 100gb would still tie it up for several hours, assuming I didn't need it for anything else in that time.
Digital wastefulness is a problem, and I do think we need to take it more seriously.
> but 100gb would still tie it up for several hours, assuming I didn't need it for anything else in that time
Except that Nix does not download 100 GiB unless you are installing a gazillion packages. First, Nix downloads compressed output paths. Second, it's not like Nix packages are substantially larger than Debian, Ubuntu, or Fedora packages. The extra storage space comes from (1) Nix keeping multiple generations to allow you to roll back to previous versions of the system -- if you break something, you can always roll back; (2) people using multiple different versions of nixpkgs, which could lead to having multiple versions of system libraries.
(1) is a feature of Nix/NixOS, if you want to use less space, you can trade off the ability to roll back for space. You could always garbage collect everything except the current generation and it would be similar to other distributions. For (2), avoid using multiple nixpkgs versions.
I generally like keeping around a lot of generations, etc. so I don't mind my history of NixOS systems keeping 100-200 GiB. But if you care about space, garbage collect and it won't take up that amount of space.
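For example, something along these lines (the retention window here is just an example):

    # drop profile generations older than two weeks, then collect garbage
    nix-collect-garbage --delete-older-than 14d

    # or, more aggressively, delete every old generation before collecting
    sudo nix-collect-garbage -d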
Thanks - I appreciate the background info.
> I don't know what kind of package manager you were using, but I've never seen an update take a good part of an hour before Nix.
Pretty much all popular package managers. APT/dpkg, DNF/rpm, pacman, etc.
I have just updated one machine to the latest unstable. It updated 333 packages, a substantial part of that system. It took 1 minute and 50 seconds, most of it downloading. So, not sure how it takes a good part of an hour for you.
> Seriously, are we going to shame people who can't afford to buy lots of storage??
I'm not shaming anyone. Just saying that 1 or 2 TB is pretty normal nowadays (outside Mac, because Apple makes you pay for it). At any rate, you can make the size pretty similar to any other distribution. It's not like glibc or GNOME takes up substantially more disk space on Nix.
If you end up using 100 GiB of storage, you are either keeping a lot of system generations around or you somehow have different nixpkgs versions in your system's closure, ending up with duplicate versions of glibc, etc. If the former is the case, set up automatic garbage collection and the space use will be far less. E.g. on one machine I have only three NixOS unstable generations and the system is 18 GiB (which includes a bunch of machine learning models, etc.). It would probably be substantially less on NixOS stable, since there are fewer differences between generations (e.g. I have qemu, webkitgtk, etc. three times).
Adding some data here:
Total size of installation is roughly comparable between NixOS and, say, Ubuntu.
My laptop's Nix closure of 1 generation is 33 GB. My desktop Ubuntu has 27 GB (20 GB /usr + 7 GB in /var, where snaps and flatpaks are stored).
Indeed the disk usage of Nix comes from multiple generations. Every time there is a new version of glibc, gcc, or anything that "the world" depends on, it's another 33 GB download. Storing the old generation is entirely optional. The maximum disk space needed is 2 generations.
Updating Ubuntu to a new LTS version almost always costs me multiple hours, caused by interleaved questions on how to merge changed config files in /etc (which unfortunately one cannot seem to batch), apt installation being rather slow, and during recent years, the update generally breaking in some way that requires a major investigation (e.g. the updater itself dies, or afterwards I have no graphics). On NixOS, these problems do not exist, and the time to update is usually < 30 minutes.
In my experience Nix is a force multiplier. But you need someone on the team who has plenty of Nix experience, because you inevitably need to write your own derivations and smooth over issues that you might encounter in nixpkgs.
We use Nix with Cachix in the team I currently work in. We use a lot of ML packages/kernels, which are nearly impossible to manage in Python venvs (long build times because we have to patch some dependencies, version incompatibilities, etc.). Now you can set up a development environment in seconds. The nicest thing is when we switch between branches we automatically have the state of the world needed for that branch (direnv yay).
It was some work to set up, but it saves so much time now.
How do you do the initial setup? I'm concerned with anything that happens before activating the dev shell.
Right now I wrote a bash script to check for nix, direnv, git, gpg, etc. But it feels a bit clumsy, compared to the flake that contains the dev shell.
For my own system I set up home manager. But I don't want to make the use of home manager a requirement, as it can be quite opinionated. (e.g. setting up direnv will be done by generating a .zshrc, which can be limiting to some)
For our particular project you only need to install Nix and then run nix develop, but I'd indeed recommend using direnv. For me it's not an issue, since I run NixOS on development VMs, but a colleague who was not using Nix before (I think) also wrote a bash script that sets up an AWS VM with the NixOS AMI and then rolls out a minimal NixOS configuration.
I think for people who don't want to dive into Nix much, doing an imperative install (nix profile install) of the necessary packages is also fine. You could even make your own small meta-package that depends on everything that is needed. Then they could do a nix profile install yourflake#yourmetapackage and have all the tools they need. But I agree direnv is a bit harder, since you'll have to put something in the shell rc/profile.
The imperative install is as many lines of code as the flake itself. That’s what’s bothering me. But a meta package would be a step in the right direction.
Thank you!
> What I want from a build often isn't the artifacts, but the side effects of producing the artifacts like build output or compilation time
You frequently build things not to get binaries but to spend time compiling?
The point is that there's often no way to express "I want side effects" in declarative tools, and the number of side effects that might be useful is vast.
For example, sometimes I profile the build times to see where I should focus effort.
Sometimes I want to see it to quickly check for issues where adding some dependency header causes build times to explode 100% in downstream dependencies during cold builds.
Another common occurrence for me is trying to debug a platform, toolchain, or standard library issue where the build system either doesn't detect changes in those components or only makes the components readily accessible in an internal cache that's subject to invalidation issues. You'll usually get the wrong artifact or test results in those cases.
Some other systems (e.g. bazel/blaze comes to mind) actively try to hide side effects like stdout.
In all of these cases, the only way to actually get these side effects is to reach into the tool's internals by blowing away caches/output folders or reading live log files. That's a failure of the build tool.
> The point is that there's often no way way to express "I want side effects" in declarative tools, and the number of side effects that might be useful is vast.
Shake (https://shakebuild.com/) is pretty good about letting you specify that a specific step produces multiple artifacts.
I suspect Nix can do the same?
> Some other systems (e.g. bazel/blaze comes to mind) actively try to hide side effects like stdout.
Yes, blaze isn't all that great. You can tell, because Google folks check in generated artifacts into their repositories, instead of wrestling with getting blaze to build them.
Generally my aim with both Nix and Bazel is that, while they are the source of truth, day-to-day development and debugging occurs using language-native tools. So the only touch point for local development is when you are modifying the dependency graph in some way.
It's definitely more work (you need to maintain compatibility with two different build systems), but worth it for exactly these reasons.
I haven't used it, but it sounds like make's --assume-new flag does exactly what you want for the first part. It lets you rebuild everything that would result from a changed file, including all side effects, without needing to first update the file.
Alas, Make is really, really awful in most other respects.
Really? It's the one part of the traditional c build system I actually still use. Easy to write, easy to debug, relatively small—what's the issue? I hear people complain about make incessantly but people rarely have substantial criticism to offer. Is it the syntax? Reliance on the filesystem? Inconsistencies between implementations?
As an actual builder it has limitations, such as not having (built in) the ability to know if it can still do an incremental build after changing some build option. That can result in inconsistent builds.
The main problem is that you often require more logic than makes sense to write in make, but it kind of has a language built into it so people try to use it. But as a language it's terrible (no scoping, many missing features). So people end up implementing their build logic in a bastard combination of make and shell which is very opaque and difficult to debug.
For example, I was recently trying to figure out how the OpenWRT makefiles are doing something, and it was really painful, because with make having no scoping any part of the system could end up affecting the piece you are looking at. There is a lot of dropping into shell to get stuff done, and a lot of the targets are themselves expanded variables, which makes it really opaque. Really a lot of it is not gaining from being written in make, they could do with rewriting large parts in a real language. But it would be a huge job. And that's where a lot of makefile systems end up
That's why you get tools like ninja where they decided not to allow any logic at all.
> That's why you get tools like ninja where they decided not to allow any logic at all.
Or you get shake, where your logic is the logic of a real programming language.
> Inconsistencies between implementations?
That's actually not too much of a problem in practice: almost everyone just uses Gnu Make.
> Easy to write, easy to debug [...]
Alas, Make becomes hard to write and really hard to debug past a certain complexity threshold. And you reach that complexity threshold very quickly.
> Is it the syntax?
Yes, the syntax of Make is awful, and I'm not even talking about ergonomics. Thanks to Make's abysmal syntax, special characters in your files make it barf completely. And by 'special' I mean something as mundane as spaces.
But everything you mentioned is far from the worst. See eg https://news.ycombinator.com/item?id=17088328 for a more comprehensive overview of Make's sins.
--always-make/-B is more in line, but yeah. Make has grown imperative models within its vast declarative morass.
So do you just not use incremental builds at all? That's insane.
Of course I do, but this isn't a thread about all the things that fit well in the paradigm.
I agree, but `Just` as an incremental improvement is a much easier sell to teams than asking them to think about their builds completely differently and rewrite everything to fit that.
Offering a cave man a flashlight is probably more helpful than offering them a lightbulb and asking them to wire up the cave to power it :D
It is definitely a very fine incremental improvement over make. It's just incremental progress in a direction that I don't want to be headed.
I mainly don't understand how Just is any better than a run/ directory full of executable shell scripts.
If that works well for you, use it.
I did that for 10+ years and got fed up with having to remember which names I gave to my scripts that month. I gradually evolved my views and that got reflected in the names of the scripts.
`just` helped me finally move away from that. Now I have e.g. `just check` in projects in different languages that all do the same thing -- check types and/or run various linters. I can go into a directory and run `just check` and I know I have taken care to have all checks that I want in place. Similarly I can run `just test` and I know I'll have the test suite run, again regardless of the programming language or framework.
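Concretely, a Go project's justfile following that convention might look something like this (the linters are just examples of what `check` might run):

    check:
        go vet ./...
        staticcheck ./...

    test:
        go test ./...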
Absolutely nothing wrong with a directory full of scripts but I lost patience for having to scan what each does and moved away from them.
> Now I have e.g. `just check` in projects in different languages that all do the same thing -- check types and/or run various linters. I can go into a directory and run `just check` and I know I have taken care to have all checks that I want in place. Similarly I can run `just test` and I know I'll have the test suite run, again regardless of the programming language or framework.
How is that different from having a scripts dir, and a script called `check` or `test`?
How is `just -l` different to `ls scripts`?
Tab completion. `just -l<tab>` shows all the commands and their descriptions.
Aside from that, it has lots of built-in ergonomics like consistent argument parsing, functions to say what OS you’re on, an easy way to hide helper functions, the ability to execute a justfile in a great-grandparent directory, etc.
You can totally do any of those things with shell scripts. I prefer letting someone else invent all the bells and whistles there so I don’t have to.
I am a bit confused. If you have your scripts in `scripts/`, doing `scripts/TAB` will also auto-complete! The other things seem like really minor benefits to me, not trying to say you should also feel the same, just giving my opinion.
`scripts/<tab>` won't show you the documentation of each script explaining what it's for.
My genuine advice is to download it and play with it for an hour. If you don’t like it, you’ve learned a little about a tool you’re bound to come across sometime. If you do like it, now you’ve added another tool to your palette. Either way you learn something useful.
In my case I prefer all these utility scripts to be in one file because 90% of them are 1-2 lines anyway. Zero point dedicating a directory with several 5-line files.
I use "make", so I get most of this. But the one thing I would like is a sibling / parent / grandparent directory with ease, so I might switch.
I've used make for many things over the years. I'm competent with it. Just is such a breath of fresh air for the uses that don't involve actually incrementally building software. It's so much less verbose. It's hard to describe the feel of a thing, but imagine learning to program with Java and then finding Python. If you're building a giant app developed by thousands of people, maybe Java's complexity starts to show a benefit. If you just want to quickly script something up, Python gets the job done with a tenth the boilerplate.
There’s room for both. Neither replaces the other. But it turns out many of my projects need tools closer to Python/Just than Java/Make.
> Tab completion.
`./scripts/<tab>`
`./sc<tab><tab>`
I believe I already addressed that this is purely a matter of taste and convenience, not sure why you are not reading my comment and are asking for more.
And it was already said: if you like it more, use it. Nobody is holding a gun to your head. And I even explained that I used that in the past and moved away from it.
I also haven't seen in your previous response how Just is better than a subdir with shell scripts named according to a convention.
AFAICT, the productivity improvements you described came exclusively from using a consistent naming convention, not from Just. And since everyone's dev env supports subdirectories with shell scripts already, why not simply use that instead of requiring Just?
I got a down arrow on my comment that's your parent a minute before you responded. Coincidence, or you prefer to press it because you are not satisfied that I'm not your personal documentation agent?
Finally and additionally as a response: because it's also all in one place. I don't want 10+ scripts. For the third time: I used bespoke scripts and found them not good enough compared to Just, now for even more reasons clearly spelled out. Sigh.
I didn't downvote you, though I found your answer unhelpful. (I've now received 2 downvotes.)
10+ scripts with standard names ("clean", "test", "build", etc.) in a subdir added to $PATH seems to me to be easier to manage -- if the scripts are independent of each other. If they do have dependencies on each other, but the dependencies are "treelike" (meaning that for every target you might want to run, all of its transitive deps are reached via a unique path), it's still easier (than either make or Just) to have separate scripts, and turn each dep into a plain invocation at the top of each script. It's only when that approach starts to invoke deps multiple times (because it has become non-treelike) that either make or Just starts to offer an advantage.
I think if you look at this with clear eyes, you'll see that 100% of the value you feel you're getting from Just is actually coming from the naming convention that Just nudged you towards.
I like having individual files too as they can be independently managed by source control, linted, etc. And I've certainly been known to have a Makefile that simply forwards every target to a matching executable, and then fill my `tasks/` directory with individual executables.
Apologies for assuming you downvoted me then. :)
And I have not touched your comment btw. I rarely downvote these days and I have to be really pissed to do so. I was not pissed earlier, more like a little frustrated as you seemed to ask without reading, as if demanding a complete answer without willing to piece together the info given in several other comments.
So... you were talking about global scripts. I was not. I was talking about per-project directory with scripts because very often projects have their little quirks that make all their scripts frustratingly 99% identical but never 100%. I danced this tango dozens of times -- not exaggerating, I am a contractor (though I hope to finally stop, currently looking for a proper long-term job with good culture fit) and worked on many projects -- and ultimately got extremely frustrated.
At one point I did attempt to make those universal scripts you speak of. The even more maddening thing is that they worked for part of the projects... and didn't work for others. It was a rough 60/40 split. So you end up maintaining even more of them. So I gave up.
Shortly before that I found `just` and very quickly recognized the benefits: project-local commands / scripts; a centralized location (just one file); the ability to delegate to a parent Justfile; and a very easy syntax that still lets you do things that would make you pull your hair out in pure sh or bash unless you've memorized their specifics over a lifetime (which I attempted but gave up on, because it was mostly memorizing exceptions to the exceptions). The parent delegation means you can have, say, a dedicated folder for Golang projects whose Justfile has a `just lint` task that calls `go vet`, `staticcheck`, etc., without copy-pasting that into every project's Justfile. Nowadays I actually prefer copying it anyway -- completely self-contained tooling is better after all -- but for dev-specific stuff that does not belong in version control, the parent Justfile workflow is quite a good fit.
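A rough sketch of that parent-delegation setup (paths and recipe bodies are illustrative):

    # ~/projects/go/justfile -- shared parent justfile for the Go projects beneath it
    lint:
        go vet ./...
        staticcheck ./...

    # ~/projects/go/myservice/justfile -- per-project justfile;
    # `set fallback := true` makes `just lint` fall through to the parent justfile
    # when the recipe isn't defined here
    set fallback := true

    test:
        go test ./...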
Now, to address this:
> I also haven't seen in your previous response how Just is better than a subdir with shell scripts named according to a convention.
I am not impressed by conventions that are not enforced with a spiked club. Which means: we the people forget stuff easily. I suffered from that too. Conventions don't mean much when you misspell the script filename or put `-f` instead of `-e` in the `set` call at the top of the script. :)
I prefer loud failures and not silent mess-ups.
My position is informed by a lot of negative past experiences. Does not mean that my priorities are universal or unconditionally better. Not at all. It means that everything I got through in my career made me appreciate `just` and it was a near-perfect fit for my needs.
> I think if you look at this with clear eyes, you'll see that 100% of the value you feel you're getting from Just is actually coming from the naming convention that Just nudged you towards.
Sure, it encouraged me to finally settle on a naming convention but I've done this before as well. I still prefer the singular file approach + ability to delegate to parent files.
The fewer files in total, the better. I have found this rule makes me more productive.
If you have gotten this far: nowhere did I claim objective improvements. I had discussions in the past (might have been in other `just` threads even!) with curmudgeons who loudly proclaimed "skill issue!" on my non-preference towards make's bash-isms and weird rules. So for them `make` + other scripts (even Perl / Python ones) are working just fine and the rest are "kids running after shiny toys".
I don't mind them thinking that. I have my motivation and, as said above, it's well-motivated given my past and my way of work and mental preferences.
Hope that helps.
Thanks for going into more depth. I wasn't aware that Just could delegate like that, which does sound useful. And I certainly agree that bash and make are absolutely Byzantine at this point -- footguns on footguns. There's much value in using a tool that is powerful enough to do what you need, but not much more -- since that makes it much easier to reason about what a given instance/invocation of that tool could possibly be doing, without spending hours (years?) down in the detail.
And it sounds like Just is that tool for you! I'll probably keep using make, now that I've spent so much time wrestling with its many idiosyncrasies, but you never know.
Thanks for the productive response. <3
> And I certainly agree that bash and make are absolutely Byzantine at this point -- footguns on footguns.
Yeah, that's my problem. Not like I don't have memory in my brain, not like I can't learn make and bash -- I did so several times almost from scratch, but as I am not using them every day, the memories always fade. It's easier to relearn something without footguns than something full of them. Hence I am using `just`. It's straightforward and very easy to catch up with even if you forget it. Not so with make and bash.
If you are very invested in them and are feeling at home with them, great for you -- I am not claiming unquestionable and countless benefits. I am claiming it works well for my brain and my workflow, and most of all -- the frequency with which I have to do scripting.
Just so you can tend to your fragility around downvotes--you cannot downvote a reply to your own comment.
So he isn't the culprit.
I tried my best to get the discussion back on topic and off-topic low-effort replies like yours don't help.
Oh and I did not downvote him.
[flagged]
My comments are frustrated because I believe a response was already given to the question you asked. I'll be grateful if you at least don't misrepresent, even if it's difficult to find a common language. If you don't believe that I responded adequately then just ask a more detailed question.
But sure, here's one more reason for you, as said in a sibling subthread: I can have all my project's commands in one file.
Also it pays off to know what `just` does. As several other people were told (not only by me) in the bigger thread, it's an aggregating task runner, more or less. Not a dependency manager.
Honestly you're coming off a bit shit here, I don't read the other person's responses as defensive or insecure at all, so I suspect you're saying that to be rude.
yep just has good ergonomics for little things in my dev workflow
Agreed, particularly if you pipe to fzf.
(For those who haven't used it, fzf is a fuzzy-searchable menu for the command line. You pipe lines of input to it, and it shows them in a menu. You start typing and it fuzzy searches the menu and selects the best match. Then you press Enter to pipe that out, or Tab for multi-select. It's fantastic.)
I have convenience functions in my profile script that pipe different things to fzf...scripts, paths in the current directory to copy to the clipboard, etc. It's indispensable.
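E.g. something like this sketch (the function name and scripts path are whatever convention you pick):

    # fuzzy-pick a script from the project's scripts/ dir and run it
    pick() {
      local s
      s=$(ls scripts | fzf) && "./scripts/$s"
    }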
Bonus: progressive enhancement. If someone doesn't have fzf/those convenience functions, it's just a directory with shell scripts, so they don't have to do anything special to use them.
That works too. I've done both and I currently use Just because it collects the entrypoints to the project into a single file. This can provide an advantage where there's a bit of interdependence across your entrypoints.
E.g: You have a docker container, you might be `run`ning it, `exec`ing it etc. from the same compose-file. So Just gives you the ability to link those shared commands within the same file. Once the entrypoints get too numerous you can either break them into scripts (I do this partially depending on the level of behavioral complexity in the script) or partition your justfiles and import them into a single master.
Well, for one, your recipes can be in another language (e.g. Python).
You can build complex recipes out of simpler ones. Sure, you could do that by creating a new shell script that calls other shell scripts, but then you're reinventing just.
You don't need to be in the directory to run those scripts.
I think a better question for you: What's the benefit of putting .PHONY recipes in Makefiles, when you could just have a directory full of shell scripts. If you find yourself using .PHONY recipes, then you already have a reason to use just.
> Well, for one, your recipes can be in another language (e.g. Python).
Surely this is true for stuff in a ./bin or ./scripts folder - binaries, python with shebang etc?
Ah, I see there's:
https://just.systems/man/en/shebang-recipes.html
Which could be done in shell, but there you'd typically be limited to one-liners (invoking awk) rather than piping a heredoc to an interpreter.
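For illustration, a shebang recipe looks something like this (the recipe name and body are made up):

    # the whole body is handed to bash as one script instead of going line-by-line through `sh -c`
    report:
        #!/usr/bin/env bash
        set -euo pipefail
        count=$(git ls-files '*.md' | wc -l)
        echo "markdown files tracked: $count"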
> You don't need to be in the directory to run those scripts.
There's already an easy way to solve this: $PATH.
> I think a better question for you: What's the benefit of putting .PHONY recipes in Makefiles, when you could just have a directory full of shell scripts. If you find yourself using .PHONY recipes, then you already have a reason to use just.
Well, I think it's the same question, rather than a better question. And the answer is yes, if all you need from make, now and in the future, is a set of .PHONY targets, then by all means just use shell scripts. make is used because often you need slightly more than this -- or you may do so tomorrow, and don't want to change the syntax you use to accomplish tasks.
> There's already an easy way to solve this: $PATH.
I have 10 projects. Each with their own set of shell scripts. You want me (and all other developers) to pollute the $PATH with 10 directories?
And then you have a namespace problem. I usually have a "test" recipe in my justfiles. The analog would be a test.sh file. But with your solution, it will have to be projA-test.sh and projB-test.sh.
And if I dump them all into the $PATH, how do I quickly see the scripts relevant to a particular project?
I tend to work on different projects in different terminal sessions so I don't find this a problem, but OK, I can see the benefit of making the tasks a command line executes dependent on the current directory. (There are tools that can auto-adjust $PATH for you like this, but that would be a weak argument against Just (unless you're using them already) since it would mean swapping Just for that-other-tool.)
If you use git and don't need multiple "layers" of Justfiles (i.e., if you have all your scripts in a scripts folder at the top level of your repo), then in bash you can get what you want with a small helper function.
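A sketch of such a helper (the `run` name is arbitrary; it resolves the repo root via git):

    # run the named script from the repo's top-level scripts/ dir, from any subdirectory
    run() {
      "$(git rev-parse --show-toplevel)/scripts/$1" "${@:2}"
    }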
Now from any repo subdir, `run clean` will run a script named "clean" in the scripts folder at the top level.
Although I'm coming off as a strong just evangelist, I do want to point out that if someone already has a workflow with plain scripts, it's totally OK to continue with that. Personally, I think using just is simpler for those who don't already have that workflow.
Likewise, if you are using make as a command runner and already know make well enough - by all means continue! In my experience, though, someone who doesn't know make will be much more likely to learn just than make.
I tend to sneak justfiles into the projects I work on. They usually don't have any good automation (no make, perhaps some scripts with a doc/md file explaining which script is for what). I sneak the justfile in the repository, and when it's mature, start showing teammates how I use it. They typically then switch to it. I don't think they would switch to it if it were a Makefile.
You can put `./scripts` in your $PATH, if you want.
1. This requires that all your projects use "scripts" as the directory name.
2. This works only if you just happen to be in the directory above scripts.
Absolutely do not do this. That's all well and good until you clone a repo where `scripts/ls` => install_ransomware().
You can put ./scripts last in PATH (or first - whichever prevents scripts/ls from taking precedence over /usr/bin/ls)
You could put `.$MY-SECRET` in `PATH` and selectively symlink this to vetted script directories
Direnv is a great tool for this FWIW.
Won't work in Windows.
I think it really has to be emphasized: One of the great things about just is that it works in Windows with no hassles.
Are you sure it won't work?
I ask because cmd.exe has DOSKEY, which is basically a very slightly souped up version of bash's alias. I think it wouldn't be hard to use DOSKEY to replace CD and PUSHD with macros that run some command to update %PATH% and then change directory as usual.
There's probably a tool that will work similarly in Windows. I was saying it merely because that's what the direnv Github page implies.
But won't all the scripts break?
Are you writing scripts that will break?
just isn't something magical that will make scripts meant for Linux work in Windows, you know. Some people do actual development in Windows and have Windows scripts.
It's a different approach, none is better or worse, people simply have preferences.
And all other features aside, it seems to be able to call commands from any subdirectory in a project, which is actually nice compared with a normal shell. I mean, you can replicate this with a few lines of shell scripting, but not everyone maintains an elaborate $BIN of personal tools.
I do both. A script directory helps provide the underlying extensible tools. Just is a convenient ui for the most frequent dev use cases of a project.
1. The language is extremely simple and is consistent.
2. I agree on having to move away from imperative and go for declarative (if the latter was what you had in mind) -- any ideas for a better tool that does that and is just as easy to learn?
3. RE: cobbling together stuff with and around `just` is relatively trivial to fix f.ex. I have my own `just` recipes to bring up the entire set of dev dependencies for the project at hand, and then to tear them down. It's a very small investment and you get a lot of ROI.
4. RE: Nix, nah, if that's your sales pitch for it and against `just` then I'll just strongly disagree. Nix is a mess, has confusing cutesy naming terminology, has a big learning curve and a terrible language. All of that would be fine, mind you, and I could muscle through it easily but the moment I received several cryptic error messages that absolutely did not tell me what I did wrong and I had to go to forums and get yelled at, is the moment I gave up. `just` is simply much easier and I am not worried about not having Nix-like environments for my projects. Docker + compose work very well for this.
Finally, your point about an obscure single command that people forget about in the future applies to literally any and all task runners and dependency managers, Nix included. That's not a valid criticism towards `just` IMO.
1. It's a fine language but I have all kinds of "works on my machine" problems with it because it has no associated dependency manager. Other languages solve this with lockfiles and such, and it's likely that you're already doing that with one of those same languages in the same project. So just... Use the main language for whatever it is.
2. No, nothing's so easy, but you can get more if you're willing to work for it, and I think the juice is worth the squeeze.
3. For runtime state, I find that using just as a wrapper around Tilt or docker-compose or k3d or whatever just hides the perfectly adequate interfaces that those tools have. The wrapper discourages deeper tinkering with those tools. It's not a particularly difficult layer of abstraction to pierce, but it doesn't buy you enough to justify having an additional layer at all.
4. In the case I'm thinking of, the whole team was working happily because they had used a Just recipe to download a file from a different repo, and then somebody removed the recipe, but everyone (except the new guy) had the file from months ago, which worked. Nix wouldn't have let us accidentally get into a broken state and not know it. It would have broken as soon as we removed the derivation for the necessary file. I sent him the file through slack and then he was able to work, and only discovered later how it got there on my machine. That kind of uncertainty leads to expensive problems eventually.
1. I don't follow. I work with Elixir, Golang and Rust and I use their dependency managers just fine. F.ex. I have `just deps` that does `mix deps.get` in Elixir and `go get -u ./... && go mod tidy && go mod vendor` in Golang. Furthermore, `just` does not claim to do dependency management. So what do you mean here?
2. Sure, but I am not paid for it. Nobody will look at me with admiration if I delay an important milestone by 2 weeks (or, more likely, 2 years) to invent such a tool. :/ So not sure I get you here either.
3. We're veering into bikeshedding here and I will not argue; use whatever interface works best for you. I personally love having `just up` / `just down` / `just start` / `just stop` for the development dependencies of any project (sketched after this list). No more single big shared Postgres instance that, if I screw it up (and Homebrew did that a number of times!), forces me to dig through Time Machine for DB backups. I wised up eventually and started making scheduled exhaustive backups of each DB... and then said to myself "forget it" and just started using separate containers for each project. For my work I found wrapping the tools worth it for not having to remember their bespoke full command lines. I standardized my tasks so I can enter almost any directory, run the same `just ...` commands, and get what I expect as a result. To me that's valuable. But again, use whatever is convenient for you. No argument from me.
4. I don't disagree here and I am kind of 50/50, because on the one hand this is a failure of process + lack of proper dev/ops tooling (f.ex. deleting this or that should raise alarms, i.e. every such repository should have CI that makes sure everything important stays in place). On the other hand, if Nix or anything else spares you from having to install those guard rails, then sure, it's a good fit for you. For my work and hobbies Nix is a net negative; I gave it more than a fair chance and I've had enough of opinionated diva-like tools whose message is "learn everything about me to love me, baby". No thanks. But that's just a single example. Again, if there are tools that spare you from screwing up something accidentally, I usually vote strongly in favor of them.
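As a sketch, the per-project recipes from point 3 above are roughly this (the compose setup is an assumption; swap in whatever your project actually uses):

    up:
        docker compose up -d

    down:
        docker compose down

    start:
        docker compose start

    stop:
        docker compose stop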
People like Just when they're the one who is writing the recipes, because those recipes implicitly depend on whatever they have installed at the time of writing so everything is easy, but then other people come to the project and it has a culture of "IDK I just use the Just recipe," except that recipe doesn't work unless you've been around since it was written and have all of the right versions of things. For instance I've got all these errors like:
> This application uses version go1.20 of the source-processing packages but runs version go1.23 of 'go list'. It may fail to process source files that rely on newer language features. If so, rebuild the application using a newer version of Go.
They don't seem to be hurting anything but I'm not really sure how to reason about them since somebody packaged the commands together but didn't specify anything about the environment. The Justfile entry tells me that it's running some script in $FOO_DOWNLOAD_DIR but I've got some sleuthing to do to figure out where that dir actually is and how its contents were populated and what it has to do with `go list`.
This is of course bad practice, but Just is the rug under which it is hidden and made to look like good practice. It's good that Just doesn't claim to manage dependencies, since it doesn't, but this action could instead be a go program in which case go would be handling those dependencies for me.
I don't disagree. Your example is a good demonstration why Nix -- or a much more thorough Justfile -- would be needed.
In my case I also supply the `.tool-versions` file, so the only thing it mandates is that the other dev have Just and asdf / mise (for installing exactly the right versions of tools).
I also tried having full Dockerized development environment but that proved to be too much of a hassle.
But yep, in your scenario it seems like the other guys did sloppy work. Sadly 99% of everything can be misused by people who don't practice their craft well.
(EDIT: Golang programs should really be made to work with the latest version, all being said and done. Another example of sloppy work, if you don't mind me saying.)
> Nix [...] and a terrible language
I never get this criticism. Nix is a pretty nice, small, functional programming language and lazy evaluation makes it really powerful (see e.g. using fixed-points for overlays). I wonder if this criticism comes from people who have never done any functional programming?
When I first started with Nix six years ago, the language was one of the things I immediately liked a lot. What I didn't like was the lack of documentation for all the functions, hooks, etc. in nixpkgs, though it certainly got better with time.
I did say I could learn it, and I've been doing FP for 8.5 years now. It's not that.
It's the obscure error messages, mostly. And as you said, documentation even to this day leaves something to be desired, though that might be better nowadays; no idea, and I don't plan to revisit still.
Maybe because it's been many years since I used C or C++ for anything serious, but I don't get that impression from using make in the first place. I haven't seen it used for setting up a build environment per se, so there aren't any "packages" for it to manage. When I've written a Makefile, I saw it as describing the structure of cache files used by the project's build process. And it felt much more declarative than the actual code. At the leaves, you don't tell it to check file timestamps; you tell it which files' timestamps need to be up to date, and let it infer which timestamps need to be compared and what the results of those comparisons need to be in order to trigger a rule. Similarly, a rule feels composed of other rules, more than it feels implemented by invoking them.
> like make, it encourages an imperative mode for project tooling and I think we should distance ourselves from that a bit further.
Um, what? `make` is arguably the most common declarative tool in existence ...
Whenever people complain about Make in detail, it's almost always either because they're violating Paul's Rules of Makefiles or because they're actually complaining about autotools (or occasionally cmake).
It's quite easy to accidentally write makefiles that build something different when you run them a second time, or when some server that used to be reliable suddenly goes down. Or when the user upgrades something that you wouldn't think is related.
It does no validation of inputs. So suppose you're bisecting your way towards the cause of a failure related to the compiler version. Ideally there would be a commit which changed the compiler version, so your bisect would find a nice neat boundary in version history where the problem began. Make, by contrast, is just picking up whatever it finds on the PATH and hoping for the best. So the best you can do is exclude the code as the source of the bug and start scratching your head about the environment.
That willingness to just pick up whatever it finds and make changes wherever it wants, with no regard to whether the dependency created by these state changes is made explicit and transparent to the user, is what I mean by "imperative".
Make isn't, at all, declarative. It's almost entirely based on you writing out what to invoke, as opposed to what should exist and having the build system "figure that out".
That is, in make you say `$(CC) -c foo.c -o foo.o`, which is telling you, ultimately, how to compile the thing, while declarative build systems (bazel/nix/etc.) you say "this is a cc_binary" or "this is a cc_library" and you let it figure the rest out for you.
If your executable is named "foo" and there is a "foo.c" somewhere, your Makefile only needs to contain "foo:" and make will figure out how to build it using its default rules. If you have more than one file (ex: foo.c and bar.c), just write "foo: bar.c".
Modern build systems are more advanced and have better defaults, but the general idea is the same. They are all declarative. An imperative build system would be like a shell script.
Your "declarative" systems are no more declarative - they just hide the commands/flags being used, far worse than `include config.make`.
If you don't want to buy into the whole Nix philosophy, you can also use something like 'shake' (https://shakebuild.com/) to build your own buildsystem-like command line tooling.
just is not meant as a build tool, just a task runner. Those have vastly different goals.
make is a build system: it has targets, it has file deps, a dag resolver, etc.
But a task runner is basically a fancy aliaser with task deps and arg parsing/proxying.
And just is good at being that. Although I agree I'm not a fan of adding yet another DSL.
I love just. The main benefit for me at work is that it's much easier to convince others to use, unlike make.
I like make just fine, and it's useful to learn, but it's also a very opaque language to someone who may not even have very much shell experience. I've frequently found Makefiles scattered around a repo – which do still work, to be clear – with no known ownership, the knowledge of their creation lost with the person who wrote them, and subsequently left.
I'm hoping for this effect, as more and more I work with people who don't consider `make` the default (or, more often, have never heard of it).
But I think the hard part -- for any build system -- is achieving the ubiquity `make` had back in the day. You could "just" type "make" and you'd either build the project, or get fast feedback on how much that project cared about developers.
I've used Just at a workplace on a project I didn't start. It seemed slightly simpler than make when putting together task dependencies. But I couldn't figure out what justifies using it over make.
For me, it's a fit-for-purpose issue. Make is great when you're creating artifacts and want to rebuild based on changes. Just is a task runner, so while there's a notion of dependent tasks, there's no notion of dependent artifacts. If you're using a lot of .PHONY targets in a Makefile, you're mostly using it as a task runner -- it works, but it's not ergonomic.
I like that just will search upward for the nearest justfile, and run the command with its directory as the working directory (optional -- https://just.systems/man/en/attributes.html -- with fallback available -- https://just.systems/man/en/fallback-to-parent-justfiles.htm...). For example, I might use something like `just devserver` or `just testfe` to trigger commands, or `just upload` to push some assets -- these commands work from anywhere within the project.
My life wouldn't be that different if I just had to use Make (and I still use Make for some tasks), but I like having a language-agnostic, more ergonomic task runner.
Just a quick note for interested readers: you don't need to explicitly mark things as .PHONY in make, unless your Makefile lives next to files/folders with the same name as your targets. So unless you had some file called "install" in the same folder, you wouldn't need to have something like ".PHONY: install".
That's the right thing to do, so you should. Relying on the implicit condition that a specific file is missing from the current directory is very wrong IMO.
As a heavy Just user, I agree with all of this — great answer.
make is a build system and has a lot of complexity in it to make it optimal (or at least attempt to) for that use case.
just is a "command runner" and functionally the equivalent of packing up a folder full of short scripts into a single file with a little bit of sugar on top. (E.g., by default every script is executed with the CWD being the folder the justfile is in so you don't need to go search for that stackoverflow answer about getting the script's folder and paste that in the top of every script.)
If you use just as a build system, you're going to end up reimplementing half of make. If you try and use make as a command runner, you end up fighting it in many ways because you're not "building" things.
I've generally found the most value in just in situations where shell is a good way to implement whatever I'm doing, but it's grown large enough that it could benefit from some greater organization.
> search for that stackoverflow answer about getting the script's folder and paste that in the top of every script
Ah, a fellow Person of Culture.
Having recipes just for Windows/Linux.
Being able to write your recipes in another language.
Not having to be in the directory where the Makefile resides.
Being able to call a recipe after the current recipe with && syntax.
Overall lower mental burden than make. make is very complex. just is very simple. If you know neither of the two, you'll get going much faster with just.
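A quick sketch of a couple of those features (recipe names and commands are made up):

    # same recipe name, different implementation per platform
    [windows]
    open-report:
        explorer.exe coverage

    [linux]
    open-report:
        xdg-open coverage/index.html

    build:
        cargo build

    notify:
        echo "build finished"

    # `release` runs `build` first and `notify` afterwards (the && syntax)
    release: build && notify
        echo "packaging..."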
The manual lists the reasons why using it over make: https://just.systems/man/en/
The question is if those reasons are convincing to someone. The big advantage of Make is that it is probably already installed.
> You can disable this behavior for specific targets using make’s built-in .PHONY target name, but the syntax is verbose and can be hard to remember.
I think this is overstating things a bit. I first read `.PHONY` in a Makefile while I was a teenager and I figured out what it does just by looking at it in practice.
Makefiles do have some weirdness (e.g. tab being part of the syntax) but `.PHONY` is not one of them.
What does it offer over bazel?
Not making you want to shoot yourself in the head.
It does one thing very well, and it has well-written and useful documentation.
It literally just runs commands in a convenient way.
how well does it cache across a build farm?
Did you miss the part that says "it just runs commands in a convenient way" and "it is not a build system"?
[flagged]
?
I'm trying to understand what this tool is and why I should use it for my next project
You could have used two minutes to skim the top of the README. Not all projects make you want to commit suicide. :)
It's not a build system, it's a command runner, it is for different use cases
> The big advantage of Make is that it is probably already installed.
...unless you're on Windows, like me!
Make is installed on Windows if you install Microsoft's C/C++ dev stack (typically via installing Visual Studio). They just use nmake instead of GNU make. They also include CMake these days, as it's the common cross-platform option.
> if you install Microsoft's C/C++ dev stack (typically via installing Visual Studio).
So I have to install this huge dependency just to use make, when my project is in Python?
Way easier to install just :-)
You wouldn't use GNU Make (the thing that comes for "default" on Linux) with Python either.
What a weird way to converse.
> You wouldn't use GNU Make (the thing that comes for "default" on Linux) with Python either.
But people do use Make all the time for Python projects - as a command runner. Pelican projects, for example, come with a Makefile to start the server, publish, etc.
The whole point of this submission is that many, many people use Makefiles not for incremental builds, but as a convenient place to store commonly used commands. And just is a better and simpler tool than make for that. If you're on Windows, it's a pain to install make, compared to installing just.
Busybox comes with a vestigial make. I wager git might. Those are both in winget.
The manual states that "just is a command runner, not a build system," and mentions "no need for .PHONY recipes!" This seems to suggest that there's no way to prevent Just from rebuilding targets, even if they are up-to-date. For me, one of the key advantages of using Make is its support for incremental builds, which is a major distinction from using a plain shell script to run some commands.
Maybe it’s the stacks I’m using, but I’ve always had incremental happen with language-native tooling like `go` or `cargo`. So for me at least, having lazy eval features like that would be an unnecessary increase in scope and complexity. With Just, I can just throw together different commands and it just works cross platform. I love it.
I much prefer that than the other way, ie letting language tooling become command runners (looking at you npm). That’s the worst of both worlds.
> I’ve always had incremental happen with language-native tooling like `go` or `cargo`
That makes sense, but for me, Make is incredibly useful for incremental file processing outside of programming. I've written tiny Makefiles that use glob patterns to batch-convert thousands of SVGs into PNGs and WebPs, but only for the modified SVG files. I've used Make to batch-convert modified LaTeX files to PDFs and render modified Blender projects into WebM videos for the web. Rendering videos is very time-consuming, so only rendering modified video files is a huge win.
Your first sentence says:
> just is a command runner, not a build system
And then you go ahead and complain that it is poor at building.
If you need a build tool, don't use just. Use make or something else. The purpose of just is to stop putting non-build stuff in Makefiles. And of course, it has a nice set of features that make doesn't.
I think there's been a misunderstanding.
> Your first sentence says
My first sentence was me quoting the Just manual and my second sentence was my observation about what that suggests. I wasn't asserting whether it's true or not, just sharing my interpretation, as I'm not familiar with Just.
> And then you go ahead and complain that it is poor at building.
I did not "complain" I stated that incremental builds, regardless of whether Just has them or not, is one feature I personally like about Make.
Going by the responses I received, Just does not appear to support incremental builds and a simple acknowledgement, minus the vitriol, would have sufficed.
If you need incrementalism, Just is not for you.
The programming languages that I use don't need to be told to not rebuild from scratch so yours is a pretty strange argument.
The moment you need to build the same software on windows its already justified IMO
For me it's not needing to chain a lot of commands with && to ensure that it fails at the first command that fails. With just, if one of the commands in the recipe fails, the recipe stops.
I saw many projects like this a while ago, and, although they all seemed great, I kept wondering why I need such a complex thing just to save/run a bunch of scripts.
I ended up building my own script runner, fj.sh [1]. It's dead simple, you write your scripts using regular shell functions that accept arguments, and add your script files to your repos. Run with "fj myfunc myarg ...". Installation is basically downloading an executable shell script (fj.sh) and adding it to your PATH. Uninstall by removing it. That's all.
I'm not saying 'just' is bad—it is an awesome, very powerful tool, but you don't always need that much power, so keep an eye on your use case, as always.
[1] github.com/gutomotta/fj.sh
There is irony in “I don’t understand why people do X and ended up doing X but simpler”
There is nothing at all ironic, inconsistent or otherwise strange about "I don't understand why others' implementations of X are so complex".
'Just' too was simple at the beginning [1], but with time and usage things always become more complex than some script you write for your own specific use case.
[1]: https://github.com/casey/just/tree/v0.2.23
Can anyone with experience with just and tools like npm/yarn explain if there are any benefits to use just instead of codifying commands into the "scripts" field of the package.json? Commands can also be enumerated. How often would I benefit from just's other features?
We don't use Just, but we have a Makefile that doesn't take advantage of any of Make's dependency features just to easily be able to run several commands in sequence.
JSON is just a really bad format for script configuration—you either have to string commands together on one big line with && or you have to pair package.json with some other strategy for organizing commands. That may end up being a `scripts` directory with a file per script, it could be that you use a framework that bakes all the complexity into shorter wrapper commands (a la vite), or you could use something like Just to sequence them.
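As a rough illustration of that last option (the command names here are placeholders, not a recommendation): each step becomes its own small recipe, and the top-level recipe just depends on them instead of stringing everything together with `&&` inside a JSON string:
```
lint:
    npx eslint .

typecheck:
    npx tsc --noEmit

# `just ci` runs lint and typecheck first, then the tests.
ci: lint typecheck
    npm test
```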
It's not perfect but it gets the job done. Sometimes it's ugly but in the end it forces me to break commands down into subcommands, which can increase clarity.
But sometimes you do have to write a collection of script files for complex multi-line scripts. I assumed I would still do that with just? Is the idea for these to all live in a single just file? I like having larger programs separated as individual files. All good points, though. I like make too, but it can definitely be needlessly verbose. My main thing would be not wanting to need users to have another binary installed locally. Can just live in my repository?
Edit: Nevermind! https://just.systems/man/en/nodejs-installation.html
> `just-install` will install a local, platform-specific binary as part of the npm install command. This removes the need for every developer to install just independently using one of the processes mentioned above.
After digging into it more, it seems `just` requires `sh` to function, adding friction for Windows developers. I don't develop on Windows but that friction does reduce portability.
There's ambiguity in which package to use on Node. Both `just-install` and `rust-just` are recommended in the docs, with no disambiguation. `just-install` is maintained by another party and adds an attack surface I'm not sure I'm comfortable with given my current needs. The other recommended package, `rust-just` is also maintained by another party, has bad SEO and recommends being installed as a global dependency.
All of this just adds too much friction if one is already using a package.json. My monorepos frequently contain codebases in multiple languages and so far a package.json and workspaces workflow has met my needs.
I appreciate everyone for answering my questions and giving advice.
Actually, it was package.json scripts that pushed me toward just! I wanted that stuff in non-node projects (python/ruby/~), I wanted more complicated scripts, I wanted more logging output, I wanted comments... For whatever reason every project seems to have 10-20 little commands (often interdependent) and just makes that a breeze.
"yarn/npm install" has an artifact in the project directory, so here's one point for "make" instead of "just":
You can clone the repo and "make test", and it'll include "yarn install" automatically - then on subsequent "make test", it'll skip it because "node_modules" is already up-to-date. And then include it again later if someone updated the packages. The "touch" is so the last-modified timestamp on "node_modules" is updated even if "yarn install" doesn't add/remove anything, so make knows it succeeded.

"yarn install" is usually pretty fast when it has nothing to do, so I can see why people may not bother and just have it run every time, but patterns like this can be used for quite a bit. This way heavier commands don't need to be run repeatedly and devs don't need to know all the individual commands to run in sequence.
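A sketch of the pattern being described (my reconstruction of the idea, not necessarily the exact Makefile):
```
# node_modules is a real directory target that depends on the manifest files.
# touch bumps its timestamp so make sees it as up to date even when yarn
# had nothing new to install.
node_modules: package.json yarn.lock
	yarn install
	touch node_modules

test: node_modules
	yarn test

.PHONY: test
```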
The touch trick is nice. There's definitely merit to such an approach, since it provides a simple cross-platform way to check if node_modules exists.
Though things begin to get more complicated if, say, the project uses plug-n-play resolution. `yarn` handles both cases.
Another benefit of `"test": "yarn && <test command>"` is that you also make sure the project is in a buildable state when testing.
package.json is specific to node projects, just can be used for anything. Why learn the quirks of something you can only use with a single programming language? I'm also a fan of the shebang recipes: https://just.systems/man/en/shebang-recipes.html
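For anyone unfamiliar, a shebang recipe's body is handed to whatever interpreter its first line names rather than being run line-by-line in the shell. A minimal illustrative sketch (the recipe name is made up):
```
sysinfo:
    #!/usr/bin/env python3
    import platform
    print("running on", platform.system(), platform.release())
```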
I place package.json files into non-node projects all the time just for some organizational benefits like workspaces and scripts. As a web-first engineer this doesn't particularly bother me. I'll check out shebang recipes, thanks!
Task is in a similar problem space.
Unlike Just which clearly states it is not a build system [1], Task can be told about expected files so tasks can be skipped to avoid unnecessary work [2]. So if your task is to build software, IMO make and the others like Task would be better.
If your tasks only care about the success code from a process, and/or you are a Rust fan rather than a Go fan, then Just should be fine. Otherwise, for specific use-cases like CI, you are likely already coding in a proprietary YAML/JSON/XML format.
[1] https://github.com/casey/just/blob/e1b85d9d0bc160c1ac8ca3bca...
[2] https://taskfile.dev/usage/#prevent-unnecessary-work
The one thing that converted us from Taskfile to Justfile is how it handles parameters passed at invocation.
https://just.systems/man/en/recipe-parameters.html just works better for us than https://taskfile.dev/usage/#forwarding-cli-arguments-to-comm...
We use Docker Compose for our dev environment and were trying to do something like (notice the extra dash dash for separating the arguments out):
It was not working as we expected for some of the users because of the `--` argument separator: they kept forgetting it out of muscle memory. The just version works for everyone, and under the hood it ends up calling the equivalent command. Just's parameter handling is simply more ergonomic.
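For readers who haven't used either tool, the difference looks roughly like this (an illustrative sketch, not the actual setup described above): with Taskfile you forward extra flags via `{{.CLI_ARGS}}` and callers must remember to type `task up -- --build`, whereas a just recipe can declare a variadic parameter and take the flags directly:
```
# justfile: called as `just up --build --detach`, no separator needed
up *args:
    docker compose up {{args}}
```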
I agree with you: I automate many K8s commands with Task, but every time I forget about the `--`.
Having a better input system would be a great improvement from a usability perspective.
Currently, every few months, I switch between Task and Just. Started with Task, went to Just, now I'm using Task again. Procrastination at its best
Task creator here.
How do you evaluate each tool? What do you miss on each that keeps you switching between them?
I understand you, though. I keep switching between Firefox and Chrome-based browsers because each has its pros and cons...
Passing parameters kinda sucks; someone else made a comparison in another thread about named parameters and how easy it is to pass and define them in Just. Love Taskfile otherwise.
Input parameters with `--` are not really intuitive. It works, but the just way of handling input parameters is much easier to remember.
Personally I disagree, I think `--` is very intuitive.
Maybe it isn't super common knowledge, but `--` is in line with the POSIX argument parsing convention[0] and is used by many (most?) GNU/BSD tools and many other tools such as `kubectl`. This StackOverflow thread[1] also has some information about it.
[0] https://www.gnu.org/software/libc/manual/html_node/Argument-...
[1] https://unix.stackexchange.com/questions/11376/what-does-dou...
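A couple of everyday examples of the convention (illustrative, not from the thread above):
```
# `--` marks the end of options; everything after it is an operand,
# which is how you handle file names that start with a dash...
rm -- -oops.txt
grep -- '-v' notes.txt

# ...and how kubectl separates its own flags from the command run inside the pod:
kubectl exec mypod -- ls /tmp
```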
I unironically like the YAML format. It's very readable, imho, and most people (at least in the web space) already know it. It's better than the way just does attributes and descriptions.
On the other hand, what irks me is how fiddly parameters are to pass along. You have to define environment variables instead of just passing them directly in the call.
I'm surprised nobody mentioned Rake yet. Having the full capability of Ruby and whatever gem you want makes it a dream for these kind of tasks. Absolutely love it.
That's what I dropped in to say. I've used most of them, and I think Rake is my favorite.
Pretty much all of the others are shell command runners with a couple of extra bits bolted on. Well and good most of the time, but it's another language to learn, and you're mostly SOL if it doesn't support something you want to do nicely.
With Rake, you get the same basic ability to do pre-set shell commands as the others, a single one or a sequence. But you also have the full power of Ruby, a full-fledged programming language, if you want to do anything more complex.
I was looking for this comment, because rake is great. One big thing is it never felt good imposing Ruby on (say) a JS project (and I'm not sure of the current state of macOS's default Ruby), so next time this comes up, I will be taking a look at just.
I recently looked at various alternatives to make and landed on https://taskfile.dev/
It handles dependencies and conditions well without needing to be a full blown bash expert.
I've used make for years, even partially wrote my own make interpreter once, and I hate it as much as anybody else. But I don't feel confident investing in a new tool that lacks widespread industry adoption. I wish there was a 'better make' that tries to replace make the same way Zig wants to replace C, with great interop that makes it easy to rewrite code into the new language.
There are a ton of better makes. It still didn't matter.
Any examples besides the subject?
"just" is not a better make, it is, well, just a command runner.
Make is designed, well, to make stuff, it is a build system. But now, it is showing its age as a build system, and other, more advanced systems have taken over, these are the "better make" [1]. But it turns out that make is flexible and can be used for other things, namely running commands, and it has been rather popular for this. Problem is, make is still a build system at its core, and it has some quirks that make it less than ideal as a simple command runner, notably the ".PHONY" target. Just is like make, but it is explicitly not a build system, which allows it to do away with most of these quirks.
So is it a "better make"? As a build system, no, it is intentionally a "worse make", but as "just a command runner", then it is indeed a "better make", and I am not aware of a similar project.
[1] https://en.wikipedia.org/wiki/List_of_build_automation_softw...
Cmake, rake, gradle, snakemake, scons, bazel, blaze, sbt, cabal, stack, invoke. Blah blah blah blah
What tends to happen is that some new language comes out. Somebody decides they don't like the fact that make is actually more like Prolog than anything else. They don't like Prolog and just want to run some shell commands.
They then decide to demonstrate the productivity of their new language. They implement a build system in, and mostly for, that language.
People use it, a new language comes out and the cycle repeats.
Don't waste your time. Actually learn make and just be ok with the fact that it does look a lot like shell/bash, but it isn't.
Replacing make is like trying to replace the word "cool" in the English language. People have tried. It never succeeds
I’m old enough that Solaris came with SVR4 make as an alternative. On Windows, Borland make and nmake come to mind.
I started writing my tasks in mise (https://mise.jdx.dev/tasks/) instead of just, but I found that others didn’t want to install it. Something about mise being an all-in-one tool—combining asdf/direnv/virtualenv/global npm/task management—made installing it just for the task feature off-putting. At least that's my theory. So, I’m back to using just. I am happy that there isn't a ton of pushback on adding a justfile here and there. Maybe it’s the name—‘just’ feels lightweight and is known to be fast, so people are cool with it.
I'd be surprised if you weren't correct. Perhaps I could improve this a bit with the docs, but ultimately mise is complex and that will put people off no matter how good it is.
I think this is all fine though. I'm hard at work improving mise and will continue to do so for the foreseeable future. If someone is hesitant, I'd rather they wait a year until more kinks have been worked out, docs have been improved, feature gaps are closed, etc. I think this is especially true for tasks which only came out of experimental a few weeks ago.
Or people can just not use it. It's not like this is a business where I make more money when I have more DAU or anything. I just want to build a good tool for building's sake, after all.
I'm starting to use `mise` for tooling management and task running on greenfield projects, myself. Anything you feel `just` does better with regards to running tasks?
(author of mise)
The biggest advantage just has is that it's been around longer; mise tasks only came out of experimental like a month ago. mise tasks themselves are stable, but there are still experimental things and some portions that need to be used more, like Windows support. That said, most of the stuff that needs polish is features just doesn't even have.
I had a look at the top issues for just and pretty much all of them I've handled in mise: https://github.com/casey/just/issues?q=is%3Aissue+is%3Aopen+...
here are my unashamedly biased thoughts on why I like mise tasks compared to just:
* tool integration - this is the obvious benefit. If you run `mise run test` on CI or wherever it'll setup your toolchains and wire them up automatically
* parallel tasks - I saw this as table-stakes so it's been there since the very beginning
* flags+options - mise tasks are integrated with usage (https://usage.jdx.dev) which provides _very_ comprehensive CLI argument support. We're talking way more than things like flags and default options, as an example, you can even have mise tasks give you custom completion support so you can complete `mise run server --app=<tab><tab>`
* toml syntax - it's more verbose, but I think it's more obvious and easier to learn
* file sources/outputs - I suspect just doesn't want to implement this because it would make it more of a "build tool" and less of a "task runner". I chose to implement it despite taking the same position that mise tasks are also not a "build tool". Still, I think even in the world of running tasks you often want to run things only if certain files changed.
* `mise watch` - this is mostly just a simple wrapper around `watchexec -- mise run ...` for now, but it's an area of the codebase I plan to focus on sometime in the next few months. Still, even as a simple wrapper it's a nice convenience.
* "file tasks" - in mise you can define tasks just by being executable and in a directory like "./tasks". This is great for complex scripts since you don't also need to add them to mise.toml.
I have not used just very much, but I did go through the docs and there are a handful of things I like that it definitely does better:
* help customization - it looks like you can split tasks into separate sections which is nice, I don't have that
* invoking multiple recipes - I don't love how this is done in mise with `mise run task1 ::: task2` but I _also_ wanted to make it easy to pass arguments. At least for now, the ":::" won out in the design—but I don't like it. Probably too late to change it anyhow.
* [no-cd] flag - both just and mise run tasks in the directory they're defined, but I prefer how this is overridden in just vs mise.
* expression/substitutions - mise uses tera for templating, which is very flexible, but it requires a bit more verbosity. I like that in just you can just use backticks or reference vars with minimal syntax. Same thing with things like joining paths and coalescing. I have all of this, but the syntax is definitely more verbose in mise. Arguably though, mise's verbosity might be easier to read since it's more obvious what you're saying.
* confirmation - I love that in just you can just add `[confirm]` to get a confirmation dialog for the task. I'm sure we'll get around to this at some point, mise already has confirmation dialogs so it shouldn't be hard to add. The tricky part will be getting it to work right when running a bunch of stuff in parallel.
* task output - I haven't used just that much so I can't actually say that it's "better", but having more control over how tasks are output is definitely a weak part of mise right now and is in need of more functionality like in just how you can add/remove "@" to echo out the command that's running
I want to call out one very silly thing that from reading these github issues sounds crazy. It sounds like both just and taskfile have the same behavior with `.env` files. In just and taskfile, variables defined in .env are ignored if they're already defined. I don't think anyone would want that—nobody has asked for mise to behave that way—and it doesn't appear either tool even allows you to change it!
Hi Jeff, thanks for creating mise! I am gearing up to migrate from asdf, very excited to check it out. Not totally sure we can adopt mise for tasks (we use just) but willing to give it a whirl. Putting run commands into toml sounds like it might be challenging, I wonder if there's syntactic sugar that would help.
most people just put simple tasks into toml (like `npm run test` or something), for anything complex, file tasks are much better: https://mise.jdx.dev/tasks/file-tasks.html
file tasks are basically just a directory of bash (or whatever shebang) scripts, but special comments give them extra functionality like dependencies or defining flags/options.
I was half tempted to make a toy runner called `use` when I first learned of `just` just so I could say... just use make.
In the Python ecosystem there has been quite a bit of debate around workflow tools (Hatch, PDM, flit, Poetry etc.) I tried out Poetry starting in probably 2018 or so and eventually realized how much I hated it: it was lagging behind on standards and the install/uninstall process was a moving target. But more than that, it... was an all-in-one tool, with its own definition of "all", almost all of which was irrelevant to me and which I was simply ignoring. I never ended up trying other options because I realized I would still have that same experience - although their various definitions of "all" are not identical.
I very much see the need in the Python ecosystem for a fully integrated user-level tool - something that sets up environments and allows people to use dependencies in their own one-off scripts. Pipx is almost there, if you build some wrappers around it to deal with the fact that it artificially refuses to "install" what it considers "libraries" (i.e. packages that don't define any explicit entry points). But it still is a bit rough around the edges, and more importantly is still based on Pip which has many faults. (I don't blame the design of `venv` for very much if anything, even if it's not quite how I would do things if we could start completely fresh; but it could use some nicer wrappers.)
But for development I've always thought it makes more sense to take a "Unix way" approach. Developers need the user tool for the basic mechanics of setting up packages, and then an actual toolchain built around that, with the chance to select individual tools according to their needs and preferences.
From my perspective, Just would be more useful if it had some ability to skip steps where the input hasn't changed.
Like maybe a Justfile's recipe could produce a "<task>.complete" kind of file, and could decide whether to re-run the task based on whether the task's inputs (or its dependencies' inputs) have changed.
Also if that sounds like a useful feature, consider using Make.
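Something like that stamp-file idea can already be hand-rolled inside a recipe today; a rough sketch (the paths and the conversion script are made up):
```
# Re-run the conversion only when something under assets/ is newer than the stamp file.
convert:
    #!/usr/bin/env bash
    set -euo pipefail
    if [ -f .convert.complete ] && [ -z "$(find assets -newer .convert.complete -print)" ]; then
        echo "convert: up to date"
        exit 0
    fi
    ./scripts/convert-assets.sh   # hypothetical expensive step
    touch .convert.complete
```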
> Also if that sounds like a useful feature, consider using Make
Just not having that feature is _the_ defining difference in design between the two. If just were to ever add that it would likely kill its appeal. Not having that is what keeps the logic of a just invocation simple and what keeps Justfiles from devolving into the mess that Makefiles tend to with entangled build targets.
Make solves that problem. The problem that I have is that all of the tools I use day to day do their own dependency tracking and re-run tracking. Say I want to deploy a dotnet app to a k8s cluster - none of helm, docker, dotnet build, dotnet test expose their dependency tracking in a way that is straightforward to use with make. The most straightforward way to do it is to just run the commands anyway, IME.
I see more projects switching to Pixi, another Rust-written piece of software. Rerun is the one I follow the most: https://github.com/rerun-io/rerun
It looks like much more than just a command runner, but my projects happen to need much more than that too.
One reward you get for allowing yourself to become brainwashed by Bazel is you get a pretty nice task runner in every project that you've brought into the fold.
Several comments mention Task/Taskfile already, which is very similar in that you define tasks in YAML.
I think it's worth mentioning Mage/Magefile [1][2] as well, where your tasks are actual Go code. Similar to how Rake is for tasks in Ruby code.
It's useful when you have complex tasks.
It's like using Pulumi instead of Terraform.
[1] https://magefile.org/
[2] https://github.com/magefile/mage
I've been using babashka tasks [0] for a while. It has a nice api to run shell commands but it's all clojure based.
Am I missing out on justfiles? It seems to be quite popular among rust/nix circles but I'm afraid it's going to be yet another instance of Greenspun's tenth rule.
[0] https://book.babashka.org/#tasks
I switched from make a while ago because I was using it to run tasks in my Python projects, which doesn't require any of make's build tools.
I didn't like make's complicated syntax either. Everything just makes more sense now.
I already have a command runner, it's called a shell.
Apparently just also needs one to run.
I love `just` and have adopted it universally in all my projects. For what it does, it gets the job done fantastically.
That being said, I found myself needing a tool that builds a DAG of dependent tasks and automatically figures out what can be run in parallel and what cannot -- obviously you have to spell out all tasks and who depends on what first.
Anybody knows such a tool?
EDIT: Apparently people did not get the hint that I believe `make` is an over-engineered pile of metric tons of legacy and I'd sooner slash my wrists than learn it in full.
I did mean something ergonomic and easy to read and write. And no I'll never view `make` as such. I tried. Many times. I have better things to do in my life than to memorize exceptions of the exceptions.
I've not tried it, but this popped up on here a while back and sounds like it might fit the bill.
https://taskfile.dev/
Thanks, this one has been on my radar for a while; I'll absolutely get to it at some point.
I think this is exactly the intended use case for Ninja. It’s discussed in this recently posted article.
https://news.ycombinator.com/item?id=42268310
Thanks. That article is fairly disappointing for not having even one simple example file though...
Julia Evans has a Ninja introduction[0] with simple examples. I tried it for awhile, but ended up going back to GNU Make.
[0] https://jvns.ca/blog/2020/10/26/ninja--a-simple-way-to-do-bu...
Just gave this a read. Impressive. I'll give ninja a try soon, I have a possible use-case for it. Thanks!
I wrote frof [1] for exactly this purpose :)
Designed to be ultra-simple and with minimal "config-file acrobatics".
It looks like this [edit, formatting]:
https://github.com/j6k4m8/frof/

Can you explain that one a little bit more to me, please?
I don't quite get the first two lines of your example. They seem to show the dependency, but which one is the default task, and how do you ask for a task to be run?
You write the file and ALL steps are run in topological order so that a job never runs until its dependencies have run. i.e., in a tool I'll have `build.frof` as a separate frof file from `download-dependencies.frof`, perhaps. (If your preference is that those belong in the same file I'd be down to have PRs that support that! Should be very easy, I'm happy to try implementing this if there's interest.)
So for a file with those contents called `mygraph.frof`, you can (after installing) run `frof mygraph.frof` to kick off the jobs in the current shell (inheriting env vars etc).
[edit] maybe a clarifying example here: https://blog.jordan.matelsky.com/frof-render/
OK, so for the example in your comment upthread both `write` and `build` will be executed sequentially?
here they'll probably be executed simultaneously, since they both have zero dependencies and the machine can run multiple jobs at the same time. (can be disabled with `--max_jobs=1` or `-p=1`).
Here's another illustrative example:
In this situation, frof will schedule `Z` to run in a parallel thread ASAP, so it will likely run alongside A... and if Z takes longer to run than A, Z will continue running when A stops and B starts. But C will wait for all other jobs to finish before it can be scheduled.

Nice, thanks a lot. Unfortunately I am quite swamped recently, so I definitely cannot help you with feature requests and testing, but I have bookmarked frof and absolutely will be giving it a try.
Just one thing I would dislike... Python. How easy is it to run frof without having to fiddle with venvs and such?
no worries, good to know this would be a useful feature! I'll add it to my backlog.
and then it should work!

Was thinking about rewriting it in Go recently... :)
I've found prototyping in python followed by a rewrite in Go quite pleasant, would recommend
I'll try the vanilla Python route but knowing our mutual hatred, it'll crap the bed in 0.5s. :D We'll see.
> Was thinking about rewriting it in Go recently... :)
And then I might actually contribute. :)
Besides Make, I guess Bazel kind of fits the bill? It was very "Googly" last time I checked it out, but I think that was a decade ago and right when it was released, so it might be more fitting for not-Google nowadays.
I never looked at it but seen some fairly negative reviews here on HN. Any idea why? And why do you like it?
Imagine that instead of a make target listing its dependencies, you had to pull them out into a separately maintained BUILD file.
That’s not quite true, but it feels like it sometimes. Bazel is nice about seeing exactly what you need to rebuild if you touch a file. It’s very, very complex though.
In code terms, think of it as a framework that you have to embed your project into, not a Makefile or such that you’d drop into a project. That doesn’t make it bad and it has its niceties. You’ve gotta be prepared to pay for them with sweat equity.
Thank you. I heard similar sentiments RE: complexity and that's usually enough to turn me off of a tool.
Make?
Come on, be serious. If I wanted ancient sh-isms and bash-isms I would have learned make to 100% some 15 years ago.
I meant something ergonomic and easy to read and write.
> If I wanted ancient sh-isms and bash-isms
So don't, set make's shell to something else instead. It doesn't understand the recipes, it just dumps them to a file and runs $(SHELL) on them.
For a more extreme example, just to show what's possible:
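(Something along these lines; a minimal sketch assuming GNU make, where each recipe line is handed to `python3 -c` instead of to sh:)
```
SHELL := python3
.SHELLFLAGS := -c

hello:
	print("this line is executed by python3 -c, not by a shell")
```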
I was today years old when my life was revolutionized.
Not a bad idea, thanks. I did this a few times as well but when I analyzed the ROI I figured that just writing a simple-ish Golang program is just less confusing and more consistent in its totality when you ask yourself "do I really have to use Make and Python and, and, and...?".
So yeah, thanks for bringing visibility to this pretty decent compromising approach. It worked for me for a while but eventually I just went all-in to either use `just`, some _very_ short bash/zsh scripts, or jump all the way to Golang.
They are right, though, aren't they? I mean .. if you want something "modern", go ahead and learn Bazel. Make is quite a bit easier to learn, I'd say, and you don't need much (also no shell/bash) to express your DAG dependencies.
I'll agree on the DAG bit but I'll never use `make` again and I tried for no less than 10 years (on and off, not 24/7, otherwise I would have learned it long ago indeed).
I stay away from `make` almost religiously. Its complications _always_ find a way to creep into your file one day. Always. :(
So while they are technically correct and it's my fault for not saying I don't want `make` in the comment up-thread, I don't think my comment deserved the down arrows but oh well, I'll live through it.
Maven, Gradle, etc.
FYI to some people trivializing self-harm in a technical discussion is rather tasteless.
I disagree, fundamentally, that using a hyperbolic metaphor "trivializes" the underlying concept used.
Regardless, I have found over the last several years that attempting to scold people for not measuring up to your standards - ones they never signed up to uphold - without a serious attempt to justify them is strongly counterproductive.
Especially when it's couched in language that will readily be interpreted as snarky. One of the reasons people dislike the phrase "it's not my job to educate you" so much is that it takes for granted the presumption that the underlying idea is a subject of education, i.e., an objective fact rather than an article of someone's worldview. Prefacing a claim with "FYI" (i.e., "for your information") has the same issue. Taste, in the metaphorical sense used here, is definitionally not objective, and thus it is not possible in principle to "inform" others of what is or is not in good taste - only of some other community's standards for taste.
It's an exaggeration to illustrate a point. Still, thanks for bringing in the perspective.
Can anyone give me an example where I can actually replace my bash scripts with just? I don't see a point in using it if I can simply write a bash script (at least their examples are very easily replaceable)
I think if you're ok with "just writing a bash script" the tool is not for you.
Everything I do with Just can pretty easily be done with Bash. But doing it with Bash is yucky. Doing it with Just is comfortable. If you don't have the yuck factor then I'd say you can just stick with Bash!
Many reasons are already mentioned in other comments. I'd add the following nice-to-have. Sometimes you'd find it easier/preferable to run some scripts with some other shell.
You can set the shell for some commands, for example:
```
set shell := ["python3", "-c"]

# I can run python!
[no-cd]
foo-bar:
    print("this recipe line runs in python3, not in a shell")
```
And the API for your commands stays consistent for very little effort. Of course you can achieve all this with just bash scripts, but I find it faster and easier to provide a good devex this way.
I’ve been happy with Just at our workplace. It lets me focus more on the task at hand instead of Conan / Cmake incantations.
It’s consistent, easy to use and maintain, and keeps all relevant operations in one place.
I'm Stockholm syndrome with make at this point. I'm not sure I'd want it any other way.
I used this until AI became good enough. Now, for most purposes, I can just declare what I want to be done/executed and get perfect bash for it. I have a relatively complex Makefile that builds graphql schemas and sets them up. It'd have been a no-go given how weird bash syntax is, but now I can get it generated and working on pretty much the first try.
There is lots of bash around and it's a very simple language, so AI models are pretty good at it.
What does that mean? You have a huge LLM generated bash script instead of a human-readable makefile? I do not understand.
no flags, no parallel tasks, no skipping tasks unless files change, no watching for changes
come see a modern task manager: https://mise.jdx.dev/tasks/
This looks cool, but why not have vars in .env files?
you can do that too: https://mise.jdx.dev/environments.html#env-file
but sometimes you don't want to make it an env var—just supports this too through the `export` keyword
My favorite command runner setup is just a simple bash script and .envrc
I can put my commands in a run file, which sources a simple bash script, and use it like `run <command>`.
You can even `run help` to list all available commands.

The setup is explained here: https://olivernguyen.io/w/direnv.run/

Create a bash script `run` and source a simple script `_cli.sh`, as shown in the write-up linked above.

My favorite entry in this space is Argc. I like it because the only "new syntax" it introduces is metadata comments, and the rest is pure bash. The maintainer is also best-in-class in terms of responsiveness.
https://github.com/sigoden/argc
Why use this over .sh files?
IMHO you'd be right to be sceptical because, for me, it is only a slightly more ergonomic way to organise and run shell scripts. It's difficult to make the case that it's much better, but I found it interesting how "just being a bit nicer" for a common activity can be a really valuable quality-of-life improvement.
- easier - core benefit is making it nicer to implement multiple commands with arguments without inventing something equivalent in shell
- convenient - with "fallback" just will search up the folder tree to find the just command so I don't need to be in the right folder. I have justfiles at multiple levels in a project hierarchy and my cwd works as context to pick the right command
- polyglot - can use different languages as needed
- predictable - it's so nice when I return to a project and I have recipes for setting up my env, various types of build and test. The consequence of being a little more ergonomic means I capture more useful command lines that, for whatever reason, I would not have made into shell scripts because of the added friction.
> with "fallback" just will search up the folder tree to find the just command
So don't have just-files in your home directory?
You could if you want.
If you don't want just to search outside of your project folder then don't set fallback in your project root justfile and it stops there.
I don't know if this fixes the issues but some big problems with shell:
* Very bad UX on Windows
* Quoting is a disaster. I mean, the whole language is a disaster but quoting is an especially big wart. Make also has this issue; you literally can't use it with files containing things like spaces or colons.
* Shell scripts tend to start simple and reasonable and grow seamlessly into something that absolutely should not be a shell script.
My favourite solution is Deno. Zero faff to set up, easy to install, supports third party dependencies without metadata files or messing with environments, and you get to use a real programming language. Easily the best scripting tool for infrastructure tasks at the moment.
Unfortunately I'm forced to use Python at work which is nowhere near as good as Deno, but still beats the pants off shell scripting.
> Quoting is a disaster.
It surprises me a bit that, of all things that are a mess in shell, your comment mentions quoting. It’s one of the few things that absolutely make sense for me in shell scripting. Do you have an example for me where quoting feels messy to you?
> My favourite solution is Deno. Easily the best scripting tool for infrastructure tasks at the moment.
I don’t think there’s an objectively best technology for everyone. For example, how long-term are your infrastructure tasks? What are the chances your scripts are still going to work in 2 years? 5 years? 10 years?
Suppose you’re in a large enterprise embedded project which needs to work for 10 years or more, and the project uses shell scripts for infra tasks. Would you recommend to migrate those to Deno or Python?
Yes, this was literally from today:
https://programming.dev/post/22539101
Turn on shellcheck and you'll realise that nobody could get it right without tool assistance. In programming languages with "standard" quoting (Python, JavaScript, Rust, Go, C, etc.) you don't even really need to think about it.
> What are the chances your scripts are still going to work in 2 years? 5 years? 10 years?
100% because I'll maintain them.
> Suppose you’re in a large enterprise embedded project which needs to work for 10 years or more, and the project uses shell scripts for infra tasks. Would you recommend to migrate those to Deno or Python?
Absolutely yes. In fact the longer you expect it to last the stronger my recommendation would be. A shell script with 10 years of tech debt is a scary prospect.
> 100% because I'll maintain them.
What happens if you leave the project? Are your teammates going to maintain the scripts? What happens when one day, the Deno package gets updated and the script blows up? What if Deno becomes proprietary and closed source?
> A shell script with 10 years of tech debt is a scary prospect.
Several well-known executables on some Linux distros are really 20-year-old shell scripts. I haven't really seen them accumulate much tech debt.
>of all things that are a mess in shell, your comment mentions quoting. It’s one of the few things that absolutely make sense for me in shell scripting. Do you have an example for me where quoting feels messy to you?
1. The distinction that shell languages make between single-quotes and double quotes is unintuitive and not seen in other languages - wherein either they are interchangeable (like Python) or denote a completely separate type (like C and several others influenced by it).
2. I can't backslash-escape a single-quote within a single-quoted string. Single-quoting disables backslash-escapes that were already working outside of strings. I've lost count of the times I tried to input a command and was surprised to get a > continuation prompt because the shell thought I was still inside quotes, and then not had any good idea of how to fix my error on the previous line.
3. I can use backslash escapes in a double-quoted string, but then I'm also stuck with variable interpolations. It's difficult to produce a string that contains a literal double quote, literal dollar sign, literal at sign and literal double quote consecutively. Yes, by itself I can wrap that sequence in single quotes, but that doesn't generalize to contexts where I need more layers of quoting.
4. Really nothing generalizes very well to when you need multiple layers of quoting.
5. Not directly an issue with quoting, but there's implicit concatenation between quoted and non-quoted tokens if there's no space between the quote and the other part. This leads to many situations where you think you've gotten it right but you haven't, and don't notice until you either try to iterate on your script or carefully examine the output.
6. But you have to rely on that confusing behaviour if you need a single-quoted string that contains a literal single quote.
It's taken me quite a bit of practice to become able to do anything moderately complex, and I still have to check my notes sometimes. But really the underlying problem is that writing these things creates a demand to have some kind of internal structure within the string, so that parts of it can be further processed. It would be far nicer, for a start, if "interpolate values into the string" were an explicit operator rather than a magical property of double-quoted strings. But the main reason I end up using Python to orchestrate command-line tasks is just so I can have actual tuples or lists of strings and manipulate them on that level instead of at a textual level.
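To make a few of those points concrete (generic snippets, not tied to anything above):
```
# Point 2: a single-quoted string can't contain a literal single quote, so you
# close the quote, escape one, and reopen:
echo 'it'\''s awkward'

# Point 3: double quotes allow escapes but also force interpolation, so a
# literal dollar sign has to be escaped:
echo "literal \$HOME, not $HOME"

# Point 4: every extra layer (ssh, sh -c, ...) multiplies the escaping:
ssh host 'sh -c '\''echo "$HOME"'\'''
```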
Because shell is absolutely miserable to work with, whereas Just has decent syntax.
I know the OP said ".sh files", but you can have executable Python files (for instance) as well.
Why use .sh files over this?
When this tool becomes as available out of the box as POSIX sh (i.e. practically everywhere, including embedded systems and containers), then this reversed argument will make some sense. I'm willing to bet anything that POSIX sh will still be with us 50+ years from now, and 'just' will be long forgotten by then. You really should have a stronger argument for introducing another dependency into your build process (and onto your developers) than "it has a slightly simpler syntax compared to the industry standard".
It's great that POSIX sh is available everywhere except where it isn't (Windows).
In all SW teams I've been in except one, sh was available, and people preferred writing things in something else (usually Python/Perl). I have had an order of magnitude more success convincing teammates to use just than convincing them to use sh.
It may be ubiquitous, but it's useless if you can't convince non-shell gurus to use it.
It's not "slightly simpler", it's massively simpler. Shell scripts are pretty much the worst syntax in existence (barring esolangs that go out of their way to be weird).
While shell syntax may be quirky, it absolutely allows you to write scripts that are simple, easy to understand, and maintainable.
Because they don’t require installing a new tool and learning new syntax.
Fewer tools to manage. It seems like this could also be replaced by some aliases in a .bashrc file.
I don't like adding extra dependencies and complicating things if they aren't adding significant benefit. What am I missing here? It seems like an alias with extra steps.
I use Invoke-Build[1] everywhere and I highly recommend it. It's cross-platform, uses PowerShell so we have a serious programming language behind it, and is extremely simple yet powerful: dependencies, integrated help, good defaults for error handling and starting directory, VS Code support, DOT charts of task dependencies, incremental tasks, persistent builds, parallel stuff, etc. See example usage here [2]
[1]: https://github.com/nightroman/Invoke-Build
[2]: https://github.com/majkinetor/mm-docs
I use it as a somewhat more sane way of collecting my repetitive, project specific commands, without having to rely on shell history.
I'll just plop my project-specific workflows (series of shell commands) into a Justfile (that I don't commit, it's just for me). That allows me to be more rigorous and structured with how I'm iterating on a project.
It has syntax and semantics that are sufficiently saner than make, so I don't need to know a lot to be productive.
If I come back to a project after a couple weeks, I don't need to spelunk shell history. Just --list is enough to get back up to speed with how I was iterating.
Nice. I didn't know about Just.
Just (pun intended) a personal plug: I always liked Make's ease of use and the declarative GH Actions philosophy. I also like to have the same workflows locally and in my remote CI, so I recently wrote a task runner with (IMHO) the ease of use of Make and a GH Actions-like philosophy. It still lacks good docs, but I use it every day on my projects and it works like a charm.
https://github.com/luismedel/bluish/
Some day I need to do a proper Show HN :-)
I wonder how this kind of post can bother someone enough to downvote it. I think it's related to the posted link :shrug:
This topic appears to have sparked some furor, maybe your comment got caught in the crossfire?
You didn't mention it's written in Rust. Is that allowed?
I find it more powerful, and past a certain point easier, to create the tooling using the project's programming language. Every dev should be familiar with that language and ecosystem. E.g. for a project that had several tools - Rust (server), .NET and Node (CLI tools) and Svelte (frontend) - I wrote all operational tools in TypeScript and ran them using Deno. Very clean and powerful (typesafe, composable, Deno std lib). You can add all kinds of stuff like timings, logging, checks, whatever ...
Question - mise is also incorporating a command runner. Anyone tried it yet? We love just, of course. Always curious about new tools.
mise tasks (https://mise.jdx.dev/tasks/) are great!
IMO, mise tasks are much better than `just`. A few things that make mise superior:
— solid environment variables support
— can run tasks in parallel, has a watch task feature, support for skipping tasks,…
— mise supports file tasks (i.e., running shell scripts, as many comments are suggesting here - https://mise.jdx.dev/tasks/file-tasks.html)
— mise tasks are defined using `toml`, and not a custom syntax
I have been using this for months now - way easier than Taskfile.
The parameter injection and passing to commands was the thing that converted me.
Reliance on a POSIX shell basically prevents me from using Just. Using bash from Git on Windows is a very weird choice.
Just recipes accepting command line args and supporting documentation might be enough to finally push me away from Make.
Seems that I'm the only one who opened the website and didn't know what was going on. Give at least two sentences on what "just" is; otherwise it's "if you know, you know", and that isn't an inviting page.
Justfiles are really awesome for repos where you have to use a bunch of complex, long to type CLI integrations. Especially if you’re using Deno scripts that all have different permission flags…
I am really moving to spok.
It's Golang-based but very much like make.
https://github.com/FollowTheProcess/spok
Used it in my graduate internship. It really made using the garbage ASP.NET commands easier. Thanks!
Why is "Just" superior to any other e.g. bash script with a bunch of subcommands?
I'm also using a global justfile (`-g`) [1] to serve as a convenient location to aggregate any convenience functions, as well as call out to any standalone scripts as necessary.
You can also 'convert' all recipes to aliases so you get the best of both worlds, the ability to call with `just -g foo` or `foo`, from anywhere.
The docs example [2] uses a `user` justfile, but the principle is the same for global.
Most recently I've started using `fzf` and `bat` to allow interactive selection of recipes with syntax-highlighted previews. Now with a global `alias ji="just -g _choose"` I can interactively choose a recipe if I need a reminder of what I've set up.

This was inspired by the native `--choose` flag which does something similar, but by using `--summary` here, all recipes, including those that take arguments *, are listed, as well as any nested modules.
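A sketch of what such a `_choose` recipe can look like (illustrative rather than exact; it assumes `fzf` and `bat` are installed, and uses `bat`'s make highlighting as a close-enough stand-in for justfile syntax):
```
_choose:
    #!/usr/bin/env bash
    # List every recipe name, pick one with a syntax-highlighted preview, then run it.
    recipe="$(just -g --summary | tr ' ' '\n' | \
        fzf --preview 'just -g --show {} | bat --language make --color=always --plain')"
    if [ -n "$recipe" ]; then just -g "$recipe"; fi
```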
And because you can use any shebang, you can also write little python scripts to run with `uv`, including those with dependencies [5] declared in the shebang:
…here with inline metadata:

* interactively selected recipes that take arguments won't work by directly passing to `xargs` here, but in some cases where I do want that flexibility I just add a condition in the recipe to prompt for input, with `gum input` [4]. This is a belt and braces approach and only used where necessary, as the `fzf` preview will have made it clear that a recipe takes arguments.

[1] https://just.systems/man/en/global-and-user-justfiles.html
[2] https://just.systems/man/en/global-and-user-justfiles.html#r...
[3] https://just.systems/man/en/selecting-recipes-to-run-with-an...
[4] https://github.com/charmbracelet/gum?tab=readme-ov-file#inpu...
[5] https://docs.astral.sh/uv/guides/scripts/#running-a-script-w...
I think you'd like what I've done with mise. You can have tasks in your global config (~/.config/mise/config.toml) which by default are shown no matter where you are. `mise run` will show a selector by default of all tasks available, so no need to manually setup fzf. Shebangs work the same. Commonly, mise users would also put "uv" into their config so other users don't need to set that up separately from mise itself.
Interactive inputs are something I'm planning on shipping relatively soon. It would not be hard to do—I've got the ui components to do it and the data model supports it.
very neat but already hitting cases where it doesn't play nice with pwsh scripts, even using the shebang. Back to using a dir full of .ps1 files I guess lol
I love just! Any way to avoid remembering things is great.
Just use the programming language to build itself; it is even possible with C [0]
[0] https://github.com/tsoding/nob.h
If it is painful, ditch that language.
I built something similar a couple of years back. Glad to see I wasn't alone in my itches
https://github.com/shikaan/shmux
Uh, isn't this just Make? I'd rather people run `make this` and `make that` than install a new tool to do the same damned thing. Sometimes software is just "done" and doesn't need to be reinvented.
I love just, this is such a great piece of software.
I was thinking the other day: why don't we use just instead of Dockerfiles to define containers?
Can you set a variable from one task and use it from another, or is it a bad thing to want this?
use this in my projects and love it
Nobody: Let's write all of our scripts in YAML
Me: !
We recently switched pgai over to just. And are quite happy so far. The hierarchical nature is quite nice: https://github.com/timescale/pgai