outlore 2 days ago

Really love the ideas behind Deno, and I tried to do things the Deno way (deno.json, JSR, modern imports, Deno Deploy) for a monorepo project with Next.js, Hono, and private packages. Some things, like Hono, worked super well, but Next.js did not. Other things, like types, would sometimes break in subtle ways. The choice of deployment destination (e.g. Vercel for Next.js) also gave me issues.

Here is an example of a small papercut I faced (which might be fixed by now): https://github.com/honojs/hono/issues/1216

In contrast, Bun had less cognitive overhead and just "worked", even though it didn't feel as clean as Deno. Some things aren't perfect with Bun either, like the lack of a Bun runtime on Vercel.

  • WorldMaker a day ago

    You picked a stack that is still very npm-centric, especially private npm packages. The sweet spot for doing things the Deno way still seems to be choosing stacks that themselves are very Deno and/or ESM-native. I've had some great experiences with Lume, for instance, and targeting things like Deno Deploy over Vercel. (JSR scores are very helpful at finding interesting libraries with great ESM support.) Obviously "start with a fresh stack" is a huge ask and not a great way to sell Deno, given how much time/effort investment exists in stacks like Next.js. But I think in terms of "What does Deno do best?" there's a sweet spot where you 0-60 everything in Deno-native/ESM-native tools.

    Also, yeah, Deno's npm compatibility keeps getting better; as mentioned in these 2.4 release notes, there are a few new improvements. As another comment in these threads points out, for a full stack like the one you were trying, using Deno package.json-first can give a better compatibility feeling than deno.json-first, even if the deno.json-first approach is the nicer/cleaner one long term, or when you can go 0-60 in Deno-native/ESM-native greenfields.

    • qn9n 18 hours ago

      Lume is a wonderful development experience for creating static sites: you can get up and running nice and quick, and you have a lot of freedom over choices such as the templating language and data formats you want to use.

  • Ciantic 2 days ago

    It works surprisingly well when used in npm compatibility mode, much like how Bun is used.

    Running `deno install` in a directory with a package.json will create a leaner version of node_modules, and running `deno task something` will run the scripts defined in `package.json`.
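
    For example (a sketch; the "build" script name is illustrative):

        # given an existing package.json with a "scripts" section
        deno install      # installs dependencies into a leaner node_modules
        deno task build   # runs the "build" script from package.json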

    The Deno way of doing things is a bit problematic: I too find it is often a timesink where things don't work, and if you then have to escape back to node/npm, it becomes a bigger hassle. Using Deno with a package.json is easier.

  • qn9n 18 hours ago

    I would highly recommend giving Deno Fresh[1] a go; it has a lot of the same features as Next.js, but I find it results in a much cleaner codebase overall. Coupled with Deno's built-in KV store and hosted on Deploy, it makes for quite a zen workflow, to be honest.

    [1]: https://fresh.deno.dev
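
    As a taste of the KV part, a minimal sketch (the keys and values are made up; outside Deploy the --unstable-kv flag is currently needed):

        // main.ts -- run with: deno run --unstable-kv main.ts
        const kv = await Deno.openKv();
        await kv.set(["users", "alice"], { plan: "pro" });
        const entry = await kv.get(["users", "alice"]);
        console.log(entry.value); // { plan: "pro" }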

  • shepherdjerred 2 days ago

    100%. I was all-in on Deno, but there were just too many sharp edges. In contrast, Bun just works.

voat 2 days ago

People underestimate the Node compatibility that Deno offers. I think the compat env variable will do a lot for adoption. Maybe a `denon` command or something could enable it automatically? Idk.

  • efilife a day ago

    Deno's compatibility with Node has been a lie for me. I tried to port a simple project (100-200 LOC) to Deno, and what should have taken 5-10 minutes took an hour. It didn't support some of Node's methods, and where it did, that support was completely undocumented. I had to install basic functionality from some obscure URLs. Once it came to porting my test suite, I just gave up. The problem was the CJS -> ESM transition, which was way more painful than I anticipated, and definitely not as simple as Deno's docs make it out to be. I couldn't just port the whole library.
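
    For context, the CJS -> ESM move in question looks roughly like this (a generic illustration with made-up names, not the actual project):

        // CJS (what the project used)
        const { readFileSync } = require("fs");
        const loadConfig = () => JSON.parse(readFileSync("config.json", "utf8"));
        module.exports = { loadConfig };

        // ESM (what Deno expects)
        import { readFileSync } from "node:fs";
        export const loadConfig = () => JSON.parse(readFileSync("config.json", "utf8"));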

  • CuriouslyC 2 days ago

    Honestly, I was bullish on Deno back in the day, but I don't see why I'd use it over Bun now.

    • jitl 2 days ago

      Fewer segfaults, an improved security / capability model

      • spiffytech 2 days ago

        As a Bun user I don't really get segfaults anymore.

        • surajrmal 2 days ago

          I've written C for years. The only time it is safe from crashes is when the code doesn't churn and has consistent timing between threads. Bun has constant feature churn and constantly runs on new hardware that provides novel timings. It is very unlikely to be crash-free any time soon.

        • drewbitt a day ago

          Just got one today! But yes it is better.

      • tmikaeld 2 days ago

        The security model is very underestimated, IMO. Its value will become very evident when more Bun projects reach production rather than staying experimental.

    • pjmlp 2 days ago

      I have yet to find a reason to fight IT and architects to get anything besides Node onto CI/CD pipelines and container base images.

blinkingled 2 days ago

Crazy that Deno still isn't workable on FreeBSD because the Rust V8 bindings haven't been ported.

  • Mond_ 2 days ago

    How big is the intersection of modern Javascript developers and FreeBSD users?

    • blinkingled 2 days ago

      Not as big as Linux, but I know a few FreeBSD shops that run Node.js apps, so it's not entirely crazy to think there are more and that they would want to try Deno. Besides, making your OSS software compilable on *BSD/Linux/Mac/Win has historically been a good thing to do anyway.

      • whizzter 2 days ago

        For a low-level runtime (i.e. V8 itself) I can accept a certain lag, since there might be some low-level differences in how signals, etc. behave.

        However, for more generic code, Linuxisms often signal a certain "works-on-my-machine" mentality that might even hinder cross-distro compatibility, let alone getting things to work on Windows and/or macOS development machines.

        I guess a Rust binding for V8 is a tad borderline: not necessarily low-level, but still an indicator that there's a lack of care for getting things to work on other machines.

      • surajrmal 2 days ago

        Is it big enough to prioritize fixing though? The answer seems to be a no so far.

    • gr4vityWall 2 days ago

      Node.js is (maybe surprisingly) used a lot in less common operating systems like FreeBSD and Illumos.

  • shrubble 2 days ago

    It's more than a little surprising that portability between different Unices is not given more emphasis. "Back in the day", a program being portable between Sun Solaris, HP's HP-UX, Linux, and FreeBSD was considered a sign of clean code.

    • jitl a day ago

      Back in the day, Sun Solaris and HP-UX were not end-of-life, and FreeBSD was on more equal industry footing with Linux. Now Linux is the clear winner in server-side UNIX by a wide margin. Also, Ryan Dahl worked at Joyent, an Illumos/Solaris shop, when he built Node; perhaps that informed his lack of interest in supporting FreeBSD these days.

  • ctz 2 days ago

    Looks like it is in ports?

    • blinkingled 2 days ago

      Trying to compile it now. It's 2.2.0, but better than nothing. I haven't seen any upstream patches to Rust V8 for FreeBSD, so there may be out-of-tree ones in ports, if it does compile.

  • timhh a day ago

    I mean... you can probably see why they don't spend any effort on that.

aseipp 2 days ago

Nice list of solid changes. I really like Deno for scripting random glue code; I use it most places (maybe with the exception of random machine learning stuff, where python/uv fits.) Looking forward to gRPC support later this year, too, for some of my long-tail use cases. And the bundle command looks nice!

mcraiha 2 days ago

I really like that the bundle subcommand is back. No need to use workarounds.

duesabati 2 days ago

I really love where Deno is going; it really is what Node should've been.

My only concern is that they lose patience with their hype-driven competition and start doing hype-driven stuff themselves.

  • forty 2 days ago

    I thought that Deno was the hype-driven competition of nodejs ;)

eranation 2 days ago

I believe the reason Deno is not more widely used in production environments is the lack of a standardized vulnerability database (other than running in 100% npm-compatibility mode, which takes many popular Deno packages out of scope). The issue is that there is no real centralized package manager (by design), which makes it challenging. Has there been any development in that direction?

  • TheDong 2 days ago

    > I believe the reason Deno is not more widely used in production environments is the lack of a standardized vulnerability database

    If this were a real blocker, then C/C++ wouldn't be used in production either, since both just lean on the language-agnostic CVE/GHSA/etc databases for any relevant vulnerabilities there... and C also heavily encourages just vendoring in entire files from the internet with no way to track down versions.

    Anyway, doesn't "deno.lock" exist? Anyone who cares can opt in to that and use the versions in there to check vulnerability databases.

  • simantel a day ago

    Wouldn't this also be a problem for Go, which just imports from URLs (mostly GitHub) as well?

    • jitl a day ago

      Go imports are resolved through a Google-owned proxy, which has a vulnerability facility. All Go package installs go through that proxy unless you set GOPROXY=direct when running go commands.

      https://arc.net/l/quote/arrozgok
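
      Concretely, a sketch:

          # module downloads go through Google's proxy by default
          go env GOPROXY               # typically "https://proxy.golang.org,direct"
          # opt out and fetch straight from the origin VCS
          GOPROXY=direct go mod download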

drewbitt a day ago

The LSP works quite a bit better than when I tried it a few months ago. It had memory issues back then. Thanks!

impulser_ 2 days ago

Surprised they went with esbuild for bundling instead of the Rust-based Rolldown, which is about to hit v1.

  • jitl a day ago

    esbuild is very stable and mature at this point, Rolldown is still rapidly evolving.

deafpolygon 2 days ago

I keep hearing good things about Deno. It might just convince me to try js after all!

  • hn_throw2025 2 days ago

    These days it might be good to go straight to TS.

    • WorldMaker 2 days ago

      Which is what the Deno defaults guide towards as well.

bflesch 2 days ago

Big fan of deno, congrats on shipping.

From a security standpoint, it really irks me when projects prominently ask their users to do the `curl mywebsite.com/foo.sh | sh` thing. I know risk acceptance is different for many people, but if you download a file before executing it, at least you or your antivirus can check what it actually does.

As supply chain attacks are a significant security risk for a node/deno stack application, `curl | sh` is a red flag that signals to me that the author of the website prefers convenience over security.

With a curl request piped directly into sh, things like this can happen:

- the web server behind mywebsite.com/foo.sh serves malware for the first request from your IP, but when you request it again it shows a different, clean file without the malicious code

- a MITM attack gives you a different file than everyone else receives

Node/deno applications using the npm ecosystem put a lot of blind trust into npm servers, which are hosted by Microsoft and are therefore easily MITM'able by government agencies.

Looking at the official Deno docs at https://docs.deno.com/runtime/getting_started/installation/, the second option they offer behind `curl | sh` is the much more secure `npm install -g deno`. Here at least some file integrity checks and basic malware scanning are done by npm when downloading and installing the package.

Even though Deno has excellent programmers working on the main project, the deno.land website might not always be as secure as the main codebase.

Just my two cents; I know it's a slippery slope in terms of security risk, but I cannot say that `curl | sh` is good practice.
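
To make the download-first alternative concrete, a sketch (the inspection steps are up to you):

    # download first, inspect, then run
    curl -fsSL https://deno.land/install.sh -o install.sh
    less install.sh        # eyeball it; a file on disk can also be scanned by antivirus
    sha256sum install.sh   # compare against a hash obtained out-of-band
    sh install.sh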

  • dicytea 2 days ago

    I really never understood the threat model behind this often repeated argument.

    Most of these installation scripts are just simple bootstrappers that will eventually download and execute millions of lines of code authored and hosted by the same people behind the shell script.

    You simply will not be capable of personally auditing those millions of lines of code, so this problem boils down to your trust model. If you have so little trust in the project's authors that you'd suspect them of pulling absurdly convoluted ploys like:

    > the web server behind mywebsite.com/foo.sh provides malware for the first request from your IP, but when you request it again it will show a different, clean file without any code

    How can you trust them to not hide even more malicious code in the binary itself?

    I believe the reason this flawed argument has spread like a mind virus throughout the years is that it is easy to do and easy to parrot in every mildly relevant thread.

    It is easy to audit a 5-line shell script. But to personally audit the millions of lines of code behind the binary that the script will blindly download and run anyway? Nah, that's real security work, and no one wants to actually do hard work here. We're just here to score some easy points and signal to our peers that we're smart and security-conscious.

    > which are hosted by microsoft, and therefore easily MITM'able by government agencies.

    If your threat model includes government agencies maliciously tampering with your Deno binaries, you have far more to worry about than just curl | sh.

    • gr4vityWall 2 days ago

      I think bflesch's reasoning comes from the idea that the website developers may not hold their website to the same security standards as their software, not from a trust issue, nor from thinking the authors themselves are malicious.

      FWIW, I don't have a strong opinion here, beyond liking Debian's model the most. Just felt it was worth pointing out the above.

  • CJefferson 2 days ago

    The problem is getting new users onboarded. Telling people to use npm doesn't help if they don't have npm installed.

    How do I install npm? The npm webpage tells me to go and install nvm. And that tells me to use curl | sh.

    So for a new user, using npm still requires a curl | sh, just in a different place.

    • pxc 2 days ago

      If the actual installation process can be made simple, you can have users copy/paste the whole installation script rather than pulling it down with curl.

      See for instance...

      Setup instructions for Pkgsrc on macOS with the SmartOS people's binary caches: https://pkgsrc.smartos.org/install-on-macos/

      Spack installation instructions: https://spack-tutorial.readthedocs.io/en/latest/tutorial_bas...

      Guix setup used to look like this, but now they have a downloadable shell script. Even so, the instructions advise saving it first and walk you through what to expect while installing it.

      Anyway, my point is that there are other ways to instruct people about the same kind of install process.

    • calrain 2 days ago

      Security is either taken seriously, or it isn't.

      If security shortcuts are taken here, trust nothing else.

  • geysersam 2 days ago

    > much more secure `npm install -g deno`. Here at least some file integrity checks and basic malware scanning are done by npm when downloading and installing the package.

    It boils down to the question: "Is it more likely that the attacker can impersonate or control npm's servers, or our own servers?" If the answer to that question is "No", then curl pipe sh is not less secure than `npm install`.

    This is security theater. If you assume an attacker can impersonate anyone on the internet, your only secure option is to cut the cable.

  • bugtodiffer 2 days ago

    Using Deno isn't good security practice; their sandbox is implemented like stuff from the '90s.

    • homebrewer 2 days ago

      If you're writing server stuff, at the coarse-grained level of isolation that Deno provides, you're better off using just about anything else and restricting access to the network/disks/etc. through systemd. Unlike Deno, it can restrict access to specific filesystem paths and network addresses (whitelist or blacklist, your choice), and you're not locked into using just Deno or forced to write JS/TS.

      See `man systemd.exec`, `systemd-analyze security`, https://wiki.archlinux.org/title/Systemd/Sandboxing
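
      A minimal sketch of a unit-file excerpt (paths and addresses are illustrative; the IPAddress* directives are documented in systemd.resource-control):

          [Service]
          ExecStart=/usr/bin/myapp
          ProtectSystem=strict            # mount most of the filesystem read-only
          ReadWritePaths=/var/lib/myapp   # except this path
          InaccessiblePaths=/home
          IPAddressDeny=any
          IPAddressAllow=127.0.0.1 10.0.0.0/8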

      • crabmusket 2 days ago

        Deno can restrict access to specific files or directories, and to particular network domains; see the docs for examples: https://docs.deno.com/runtime/fundamentals/security/#file-sy...

        However, in general I don't think Deno's permission system is all that amazing, and I am annoyed that people sometimes call it "capability-based" (I don't know if that came from the Deno team or just from misinformed third parties).

        I do like that `deno run https://example.com/arbitrary.js` has a minimum level of security by default, and that I can e.g. restrict it to reading and writing my current working dir. It's just less helpful for combining components of varying trust levels into a single application.

        • bugtodiffer a day ago

          Yes it says it can do it, but it has been broken many times because it is shit

    • bflesch 2 days ago

      Is node "sandbox" different? Does it even have a sandbox?

      • throwitaway1123 a day ago

        Node does have a permissions system, but it's opt-in. Many runtimes/interpreters either have no sandbox at all or make it opt-in, which is why Deno's sandbox is an upgrade, even if it's not as hardened as iptables or Linux namespaces.
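
        To illustrate (a sketch; the Node flags are from the experimental permission model around Node 20 and have shifted across versions):

            # Deno: deny-by-default; grant only what the script needs
            deno run --allow-read=/app main.ts
            # Node: everything allowed unless you opt in to the permission model
            node --experimental-permission --allow-fs-read=/app main.js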

    • oblio 2 days ago

      Can you expand on this, please? Also curious which '90s tech it was inspired by.

      • bugtodiffer 2 days ago

        It matches strings instead of actually blocking things. That's how sandboxes were implemented when I was a kid.

        E.g. --allow-net --deny-net=1.1.1.1

        You cannot fetch "http://1.1.1.1", but any domain that resolves to 1.1.1.1 is a bypass...

        It's crap security
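
        A sketch of the bypass as described (one.one.one.one is Cloudflare's hostname for 1.1.1.1; poc.ts is hypothetical):

            # deny one IP, allow everything else
            deno run --allow-net --deny-net=1.1.1.1 poc.ts
            # inside poc.ts:
            # await fetch("http://1.1.1.1");          // blocked: literal string match
            # await fetch("https://one.one.one.one"); // resolves to 1.1.1.1, goes through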

        • whizzter 2 days ago

          If security principles are important, they should be deny-by-default with allow lists, rather than the other way around.

          If the Deno runtime implements the fetch module itself, then post-resolution checking definitely should be done. That's more of a bug than a principled security lapse, though.

        • jeltz 2 days ago

          That isn't 90s security, that is just bad code. And bad code was written in the 90s and is still written today.

        • oblio a day ago

          Ah, so by default it denies everything, but once you need to open up a category, you can't just allow exactly what you need within it? You have to allow the entire category and then deny everything you don't want/need?

          That's a bit of a silly model.

          • throwitaway1123 a day ago

            > you can't just allow exactly what you need within it? You have to allow the entire category and then deny everything you don't want/need?

            No, you can allow access to specific domains, IP addresses, filesystem paths, environment variables, etc, while denying everything else by default. You can for instance allow access to only a specific IP (e.g. `deno run --allow-net='127.0.0.1' main.ts`), while implicitly blocking every other IP.

            What the commenter is complaining about is the fact that Deno doesn't check which IP address a domain name actually resolves to using DNS resolution. So if you explicitly deny '1.1.1.1', and the script you're running fetches from a domain with an A record pointing to '1.1.1.1', Deno will allow it.

            In practice, I usually use allow lists rather than deny lists, because I very rarely have an exhaustive list on hand of every IP address or domain I'm expecting a rogue script to attempt to access.
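
            A sketch of that allow-list approach (the script and addresses are made up):

                // main.ts -- run with: deno run --allow-net=127.0.0.1:8000 main.ts
                await fetch("http://127.0.0.1:8000/health"); // permitted by the allow list
                await fetch("https://example.com");          // denied: prompts or throws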

            • oblio a day ago

              Yeah, that was my point: default deny vs. default allow.

              If you can default-deny, then you're good. Otherwise, I would say, it's kind of a junior-sysadmin mistake.

              • bugtodiffer a day ago

                There are use cases like SSRF where I want to allow any IP except my internal network. They promise they can do that, but they can't.

  • methyl 2 days ago

    Has any attack like this ever been seen in the wild? Not saying it's impossible, but I'm curious whether this vector has ever been successfully exploited.

    • bflesch 2 days ago

      I'm sure there are cases where a website CMS was hacked and malware was then served instead of the normal install script. The `curl | sh` approach has been around forever.

      And depending on what "interesting" IP address you are coming from, the NSA/Microsoft/Apple will MITM your npm install / Windows Update / iOS update accordingly.

      Same in the Linux ecosystem: if you look at the maintainers of popular distributions, some of them had .ru / .cn email addresses before switching to more official addresses using the project domain. IMO this change of email addresses happened due to public pressure on Russia after the Ukraine invasion. With access to the main package-signing keys for a Linux distribution, you can serve special packages from your mirror to interesting targets.

      All of these scenarios are extremely hard to prove after the fact and the parties involved are not the type of people who do public writeups.

      • oblio 2 days ago

        If the website CMS is hacked, they can just swap the installable binary for a hacked one, too.

        • pcl 2 days ago

          That’s why downloading and then executing is preferable: as the GP pointed out, you or your machine’s antivirus get an opportunity to inspect the file prior to execution, whereas that is not an option when the bytes are streamed directly into the interpreter.

  • jgalt212 2 days ago

    It would be great if curl could take a file-integrity hash as a command-line argument.

    • lioeters 2 days ago

      I'd like to practice verifying file integrity instead of running `curl | sh`. I see that sha256sum (or sha512sum) is the standard command people use.

          # Download package and its checksum
          curl -fsSLO https://example.com/example-1.0.0.tar.gz
          curl -fsSLO https://example.com/example-1.0.0.tar.gz.sha256
      
          # Verify the checksum
          sha256sum -c example-1.0.0.tar.gz.sha256
      
      But if the server is compromised, the malicious actor would likely be able to serve a matching hash for their file?

  • troupo 2 days ago

    > ask their users to do the `curl mywebsite.com/foo.sh | sh` thing.

    Because it's easier than maintaining packages across 10+ package managers. And in the case of Linux, it might not require sudo to install something.

  • oulipo 2 days ago

    How is it more or less good practice than running any untrusted binary on your system? The only real difference is if the script download breaks midway and becomes a "dangerous script", e.g. a `rm -rf /some/path` truncated into a `rm -rf /`. Other than that, it's just the same as downloading any executable onto your laptop and running it, which users routinely do; any of the attacks you described for the shell download would work with any other binary.
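
    For what it's worth, that truncation failure mode has a well-known mitigation that careful install scripts use; a sketch:

        # a truncated script either fails to parse (unclosed function)
        # or never reaches the final call, so no partial command runs
        main() {
          rm -rf /some/path   # runs only when the whole script arrived
          # ...rest of the installer...
        }
        main "$@"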

  • whyever 2 days ago

    All the attacks you described also apply to downloading and executing a file. I don't think `curl | sh` is worse in this regard.

    • bflesch 2 days ago

      With a downloaded file, your antivirus will run automated checks on it, you can calculate a hash and compare the value with others who downloaded the same file, and you will notice if the file changes after you execute it.

    • davedx 2 days ago

      If you download it first, you can at least eyeball what's been downloaded to check that it doesn't start by installing a bitcoin miner.

      • geysersam 2 days ago

        How often do people do that when they install a package from npm, PyPI, or another package repository? In practice, never.

sylware 2 days ago

[flagged]

  • bargainbin 2 days ago

    Deno is a JS runtime (written in Rust) on the V8 engine.

    What’s horrible about V8?

    • sylware 17 hours ago

      V8 is C++, mechanically an abomination.

  • frou_dh 2 days ago

    Hobby-horse trolling detected.