Every day we grow closer to my dream of having a WASM-based template engine for Python, similar to how Blazor takes Razor and builds it to WASM. I might have to toy with this when I get home.
Building packages with C/C++ extensions is still a bit tricky but you can see a list of all prebuilt packages for wasmer at https://pythonindex.wasix.org .
numpy is available there, scipy not (yet).
Wow, this is the key. If it just had Python that wouldn't be as useful, but the major frameworks are the real value. Definitely going to keep an eye on this. I built a sandbox with Deno for AI code generation. It works well enough, but there are some use cases where Python may make more sense. Nice!
How long should it take for "wasmer run python/python" to start showing me output? It's been hung for a while for me now (I upgraded to wasmer 6.1.0-rc.5).
"wasmer run python/python@=0.2.0" on the same machine gets me into Python 3.12.0 almost instantly.
OK got there in the end! I didn't time it but felt like around 10 minutes or more.
It did give me one warning message:
% wasmer run python/python
Python 3.13.0rc2 (heads/feat/dl-dirty:152184da8f, Aug 28 2025, 23:40:30) [Clang 21.1.0-rc2 (git@github.com:wasix-org/llvm-project.git 70df5e11515124124a4 on wasix
Type "help", "copyright", "credits" or "license" for more information.
warning: can't use pyrepl: No module named 'msvcrt'
>>>
The close to 'native' Python performance looks promising!
Just want to point out that this section avoids mentioning the best way to do it:
> AWS Lambda doesn't natively run unmodified Python apps:
>
> - You need adapters (such as https://github.com/slank/awsgi or https://github.com/Kludex/mangum) for running your WSGI sites.
> - WebSockets are unsupported.
> - Setup is complex, adapters are often unmaintained.
AWS provides https://github.com/awslabs/aws-lambda-web-adapter which is a) supported and b) written in Rust, providing a translation of Lambda requests back into HTTP so you can use your usual entry point to the WSGI app. It is simple to set up.
WebSockets are still not supported of course, but the issue of adapters is solved.
However, it's worth pointing out that due to the concurrency model of AWS Lambda (1 client request / ws message = 1 Lambda invocation; one process only ever handles one request at a time before it can handle the next), you would end up spawning many more AWS Lambda instances than you would with Cloudflare Workers or Wasmer Edge.
There are cost implications obviously, but AWS Lambda also works this way to make concurrency and scaling "simpler" by providing an easier mental model, though it's much more expensive in theory.
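As a rough illustration of that one-request-per-instance model (this is just Little's law applied to the thread's description; the numbers are made up for illustration):

```python
import math

def lambda_instances(requests_per_second, avg_duration_seconds):
    # With one request per instance at a time, the number of concurrent
    # instances needed is roughly arrival rate x average duration
    # (Little's law), rounded up.
    return math.ceil(requests_per_second * avg_duration_seconds)

# e.g. 200 req/s at 250 ms each keeps ~50 instances busy, while a
# runtime that multiplexes requests per worker could serve the same
# load on far fewer processes.
```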
FFI support (like they have) is essential for any alternative Python to be worthwhile because so much of what makes Python useful today is numpy and keras and things like that.
That said, there is a need for accelerating branchy pure-python workloads too, I did a lot of work with rdflib where PyPy made all the difference and we also need runtimes that can accelerate those workloads.
Nice, but every time I look into WASM I have to wonder whether containers and/or lightweight VMs wouldn't be simpler and have fewer restrictions. We seem to have forgotten about microkernels and custom runtimes (like the various Erlang ones) as well…
Still, that close to native Python is an interesting place to be.
Are we at the point where I can store arbitrary scripts in a sql database and execute them with arguments, safely in a python sandbox from a host language that may or may not be python, and return the value(s) to the caller?
I'd love to implement customer supplied transformation scripts for exports of data but I need this python to be fully sandboxed and only operate on the data I give it.
Wasmer's approach hints at faster cold starts and better overall performance; the benchmarking against pyodide is a bit unclear, and it's unclear to me whether that would make or break viability for a use case like this.
But one thing this does make possible is if your arbitrary script is actually a persistent server, you can deploy that to edge servers, and interact with your arbitrary scripts over the network in a safe and sandboxed way!
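The customer-supplied-transformation idea above could be sketched like this from a Python host, assuming the `wasmer` CLI from this thread is installed. The helper names and the JSON-over-stdio convention are my own for illustration, not part of any Wasmer API:

```python
import json
import subprocess

def sandbox_cmd(script_path):
    # Hypothetical invocation mirroring the "wasmer run python/python"
    # command shown elsewhere in the thread.
    return ["wasmer", "run", "python/python", "--", script_path]

def run_untrusted(script_path, payload, timeout=30):
    """Run an untrusted transformation script inside the WASM sandbox,
    passing data in as JSON on stdin and reading the result from stdout."""
    proc = subprocess.run(
        sandbox_cmd(script_path),
        input=json.dumps(payload),
        capture_output=True,
        text=True,
        timeout=timeout,  # bound runtime as part of the blast radius
    )
    if proc.returncode != 0:
        raise RuntimeError(proc.stderr)
    return json.loads(proc.stdout)
```

The script only ever sees the data you hand it on stdin, which matches the "only operate on the data I give it" requirement.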
That's almost exactly what I want to do too. I've experimented a bit with QuickJS for this - there's a Python module here that looks reasonably robust https://pypi.org/project/quickjs/ - but my ideal would be a WebAssembly sandbox since that's the world's most widely tested sandbox at this point.
Depending on the language, GC is either implemented in userspace using linear memory, or using the new GC extension to webassembly. The latter has some restrictions that mean not every language can use it and it's not a turnkey integration (you have to do a lot of work), but there are a bunch of implementations now that use wasm's native GC.
If you use wasm's native GC your objects are managed by the WASM runtime (in browsers, a JS runtime).
For things like goroutines you would emulate them using wasm primitives like exception handling, unless you're running in a host that provides syscalls you can use to do stuff like stack switching natively. (IIRC stack switching is proposed but not yet a part of any production WASM runtime - see https://webassembly.org/features/)
Based on what I read in a quick search, what Go does is generate each goroutine as a switch statement based on a state variable, so that you can 'resume' a goroutine by calling the switch with the appropriate state variable to resume its execution at the right point.
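A toy Python version of that switch-on-state trick (not Go's actual codegen, just the shape of it): each call to `step` resumes the "goroutine" at the point its state variable records.

```python
def step(frame):
    # Resume one "goroutine" at the point recorded in frame["state"];
    # return True while there is more work, False when finished.
    if frame["state"] == 0:        # entry point
        frame["n"] = 0
        frame["state"] = 1
        return True                # yield back to the scheduler
    if frame["state"] == 1:        # loop body, re-entered on each resume
        frame["n"] += 1
        if frame["n"] >= 3:
            frame["state"] = 2
        return True
    return False                   # state 2: done

# A trivial round-robin scheduler interleaving two such frames.
frames = [{"state": 0}, {"state": 0}]
while frames:
    frames = [f for f in frames if step(f)]
```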
Currently CPython's WASI build does not have asyncio support out of the box (at least according to [0]). This is, by my understanding, downstream of the asyncio implementation in the standard library being built on primitives around sockets and the like. And WASI, again by my understanding, does not support sockets.
In a browser environment there are theoretically ways you could piggyback off of the async support in the native ecosystem. But CPython is written to certain systems, so you're talking about CPython patches.
BUT the kind of beautiful thing is you can show up with your own asyncio event loop! So for example Pyodide just ships its own asyncio event loop[1]. This is possible thanks to Python's async infra just being built off of its generator concepts. async/await is, in itself, not something that "demands" I/O, just asyncio is.
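You can see that async/await is pure control flow with a tiny hand-rolled driver: no sockets, no OS I/O, just coroutine resumption. This is a sketch of the idea, not Pyodide's actual event loop:

```python
async def add(a, b):
    # No I/O anywhere: this coroutine completes on its first resumption.
    return a + b

def drive(coro):
    # A one-step "event loop": resume the coroutine and catch its result.
    # Coroutines that never await a pending operation finish immediately.
    try:
        coro.send(None)
    except StopIteration as stop:
        return stop.value
    raise RuntimeError("coroutine awaited something this toy loop can't serve")
```

A real loop like Pyodide's does the same resumption, but parks coroutines that await something and resumes them when the awaited event (a browser timer, a fetch) completes.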
Ideally, sure, but that would increase the already enormous burden of building a standards compliant web browser. For a healthy web ecosystem it's important that not only trillion dollar companies can contribute or compete.
Not every single website needs to support every single browser. This is a modern convenience; I was doing QA back in the day when we still had to support Internet Explorer.
Internet Explorer just didn't provide the same experience as Chrome.
You were supporting the tail end of an era that is universally agreed upon as an ecosystem failure. The internet didn't provide a consistent user experience for developers or for users, it generated mountains of legacy baggage, and it was frustrating for everyone.
For example if Firefox decides to add Rust support it doesn't mean every other browser needs to support it.
Just a handful of web experiences are going to be exclusive to Firefox. As is having Chrome as the only browser most people use isn't great for innovation.
Your comment is really relevant in the Helium browser discussion. It's so on point.
People want different browsers so that Chromium doesn't get to enforce a monopoly on web standards, but I mean, it's already happening. Like, if something runs on Chrome and doesn't run on Firefox and is used by a lot of people...
Effectively Firefox is ALSO forced to have those Chromium features...
Basically web standards are held hostage by Chromium, and we need a very heavy migration of large swathes of people away from Chrome to something like Firefox; that's what's being advocated, I suppose.
I use Zen / Firefox because I also don't want Chromium. I don't have a particular reason beyond the logic I shared above, honestly.
You simply can't expect to run Roblox games inside of Chrome
Roblox can't generally be used to file your taxes.
But you're visiting user-created experiences.
The big problem is it's all controlled by one super-company.
There's no reason we can't have an open source browser which allows you to play various games, or run other sandboxed applications. These applications could be programmed in a variety of different languages.
In this scenario, sure, I still need Chrome to handle certain important business, but I can use this alternate browser to engage with tons of other content.
I was actually thinking of creating a Roblox alternative, or at least proposing the idea of modifying Luanti (which is open source) to have Roblox-esque graphics.
So it would be the open source browser which allows you to play various games in some sense.
If you want sandboxed applications, there is libriscv, created by the legendary fwsgonzo, which can run on any device, or in WASM I suppose.
Maintaining a browser is already hard enough, it's a very tough sell to convince 3+ browser vendors to implement a new language with its own standard library and quirks in parallel without a really convincing argument. As of yet, nobody has come up with a convincing enough argument.
Part of why WebAssembly was successful is that it's a way of generating javascript runtime IR instead of a completely new language + standard library - browsers can swap out their JavaScript frontend for a WASM one and reuse all the work they've done, reusing most of their native code generator, debugger, caches, etc. The primitives WASM's MVP exposes are mostly stuff browsers already knew how to do (though over time, it accumulated new features that don't have a comparison point in JS.)
And then WASM itself has basically no standard library, which means you don't have to implement a bunch of new library code to support it, just a relatively small set of JS APIs used to interact with it.
Every modern implementation I know of at least partially reuses the internals of the JS runtime, which enables things like cross-language inlining between WASM and JS.
Since they compiled the Python interpreter to WebAssembly, yes, you can now totally do a <python></python> web component if you like.
Of course it requires the extra work of importing this interpreter.
Web browsers aren't going to come with multiple interpreters built-in, it would be too heavy.
I would be interested to see how short the time to run "Hello World" can be with python in a webpage, counting the time to load the whole page without cache.
If you transpile to JavaScript, the performance will never exceed that of JavaScript.
TypeScript is a bit silly in that aspect because it removes all the types that developers put in; they aren't used to improve time or memory performance at all.
FastHTML requires apsw (a SQLite wrapper) even if you don't use it.
We already compiled apsw to WASIX, but it also requires publishing a new version of Python to Wasmer (with SQLite dynamically linked instead of statically linked).
We will release a new Python version by the end of this week / beginning of next one, so by then FastHTML should fully work in Wasmer! (both runtime and Edge)
I tried to understand what "Wasmer Edge" is, but couldn't. They say on the front page "Make any app serverless. The cheapest, fastest and most scalable way to deploy is on the edge." and it seems like I can upload the source code of any app and they will convert it for me? That seems unlikely.
Also it says "Pay CDN-like costs for your cloud applications – that's Wasmer Edge." and I don't understand why I need to pay for the cloud if the app is serverless. Isn't that exactly the point of a serverless app, that you don't need to pay for servers because, well, the name implies there is no server?
Confusingly, "Serverless" doesn't mean there's no server. It means that you don't have to manage a server yourself.
My preferred definition of serverless is scale-to-zero - where if your app isn't getting any traffic you pay nothing (as opposed to paying a constant fee for having your own server running that's not actually doing any work), then you pay more as the traffic scales up.
Frustratingly there are some "serverless" offerings out there which DO charge you even for no traffic - "Amazon Aurora Serverless v1" did that, I believe they fixed it in v2.
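To make the scale-to-zero distinction concrete (the prices here are invented for illustration, not any provider's real rates):

```python
def scale_to_zero_cost(requests, price_per_request):
    # Serverless in the scale-to-zero sense: zero traffic really costs zero.
    return requests * price_per_request

def reserved_server_cost(requests, monthly_fee):
    # An always-on server bills the same flat fee whether it serves
    # zero requests or millions (requests is intentionally unused).
    return monthly_fee
```

At low or bursty traffic the scale-to-zero model wins; past some break-even request volume the flat fee becomes cheaper, which is why the billing model matters more than the "serverless" label.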
Still confusing, since infrastructure you don't have to manage yourself is sometimes called "managed". It makes sense from the perspective of "you are paying us to manage this for you".
It's a terrible name, but it's been around for over a decade now so we're stuck with it.
I mostly choose not to use it, because I don't like using ambiguous terminology if I can be more specific instead. So I'll say things like "scale-to-zero".
Normally, if you want to run your apps serverlessly you'll need to adapt your source code (both AWS Lambda and Cloudflare Workers require creating a custom HTTP handler).
In our case, you can run your normal server (let's say uvicorn) with no code changes required.
Of course, you can already do this in Docker-enabled workloads: Google Cloud or Fly.io, for example. But that means your apps will have long cold-start times at a way higher cost (not truly serverless).
Thank you for the explanation, now I can better see the differences between "serverless" platforms although I am still a little disappointed that so called "serverless" apps still require a (paid) server despite the name.
This bugs me all the time. Ethernet is serverless. Minesweeper is serverless. AWS Lambda is quite serverful, you're just not allowed to get a shell on that server.
I believe "serverless" in this sense means "like AWS lambda". Theoretically you upload some small scripts and they're executed on-demand out of a big resource pool, rather than you paying for/running an entire server yourself.
It seems like a horrible way to build a system with any significant level of complexity, but idk maybe it makes sense for very rarely used and light routes?
I’ve been looking at using lua for something like this: basically, users will be able to program robots in my lab (biotech) to do things, and I need a scripting language I can easily embed and control the runtime of in the larger system.
Lua is theoretically better in… almost every way, except everyone in bio uses python. So it could allow more easy modification of LLM generated scripts (not worried about the libraries because I mostly want to limit them: the scripts are mainly to just run robots, and you can have them webhook out if you need complicated stuff)
My question would be: would running a python sandbox vs a lua sandbox actually be appreciably better? Not sure yet, but will have to investigate with this new package (since it has Go bindings!)
Curious given you looked at both why you considered Lua to be better. I'd like to use Lua to teach freshmen and I need some arguments as to why it's better than Python.
Much better embeddability, and it's much easier to embed safely. It doesn't require a WASM compilation; it can be raw C, and Lua can directly integrate with host functions and vice versa, something even these WASM implementations struggle with.
JupyterLite also does this. It uses local storage and the Pyodide kernel (Python on WASM). It has a special version of pip, and WASM versions of a lot of libraries that usually use native code (numpy etc). Super impressive.
Philosophically speaking I believe we should not require a special version of pip to install packages, nor a "lite" version of Jupyter to run in WebAssembly.
We should be able to run Jupyter fully within the Wasmer ecosystem without requiring any changes on the package (to run either in the browser or the server).
I am so excited about Python on the edge supported by WASM, because I used Python on Cloudflare Workers and there are so many limitations: only simple pure-Python code is supported.
Yeah, when I see these kinds of headlines about Python, I'm always left wondering what they mean by "fast". In this case, "fast" means "still slower than Python usually is".
I'm not sure I understand correctly: is it a new serverless offering competing with the likes of Vercel and Fly.io, but with a different technology and pricing strategy? And does the WASM container mean that I can deploy my Streamlit or FastAPI ETL apps without the Docker overhead or the slowness of Streamlit Cloud?
Wouldn't it be better to have sandboxing built directly into CPython? Why is there no such thing already included in CPython? Or maybe a way to create some limited sandboxed venv?
Would it be possible to make it work on iOS or Android? I've always missed better support for Python on mobile. In the past I used PythonKit to rapidly prototype and interop with Swift, but it had a limited set of modules. I wish I could use this in React Native for interop between JS and Python.
Since LLMs have made me so lazy that I never bother to search or read on my own, can someone tell me whether I can use uv as my project management tool with wasmer? What's the story here?
I simply CANNOT go back to using packages without uv; it would be unthinkable to me.
Actually, now that I think of it, my laziness might have started when I learned perl 30 years ago.
Does your solution support interop between modules written in different languages? I would love to be able to pass POD objects between Python and JS inside the same runtime.
For a backend project in Java, I use Jep for Python interoperability and to make use of the Python ecosystem. It gives me a "non-restricted" Python to have in my Java code, something I'm quite happy with. Wondering how this compares to that.
thanks. it'd be great to have a quick tutorial on doing so.
this is close to my dream of creating Frankenstein apps with the web platform instead of graal :)
> Now, you can run any kind of Python API server, powered by fastapi, django, flask, or starlette, connected to a MySQL database automatically when needed
I assume this is targeting the standalone WebAssembly use case, we're not...running MySQL in browsers right?
OK this looks promising:
Running that gave me a Python 3.12 shell, apparently running entirely in a WebAssembly sandbox! I've been trying to find a robust, reliable and easy way to run a Python process in WebAssembly (outside of a browser) for a few years.
Thanks!
Forgot to put it in the article, but the latest Python requires Wasmer rc.5 to run! (the final release will be coming very soon)
I tried to run these commands. It downloads python@3.13.1, but then hangs without producing any output. However, it seems to work for
> I've been trying to find a robust, reliable and easy way to run a Python process in WebAssembly (outside of a browser) for a few years.
What’s the use case? Is it the sandboxing? Is it easier than running Python in a container?
I want to be able to run code from untrusted sources (other people, users of my SaaS application, LLMs) in an environment, where I can control the blast radius if something goes wrong.
What's wrong with Docker for this?
I keep on hearing that Docker isn't designed as a security boundary for this kind of thing.
Firecracker is meant to be secure but it's a lot harder to work with.
Hey Simon, given it's you ... are you concerned about LLMs attempting to escape from within the confines of a Docker container or is this more about mitigating things like supply chain attacks?
I'm concerned about prompt injection attacks telling the LLM how to escape the Docker container.
You can almost think of a prompt injection attack as a supply chain attack - but regular supply chain attacks are a concern too, what if an LLM installs a new version of an NPM package that turns out to have been deliberately infected with malware that can escape a container?
When you use Docker you already have full control over the networking layer, since you can bind a container's networking to another container that acts as a proxy/filter. How does WASM offer that?
With a reverse proxy you can log requests, filter them if needed, restrict the allowed domains, or do packet inspection if you want to go crazy mode.
And if an actor is able to tailor a prompt to escape Docker, I think you have bigger issues in your supply chain.
I feel WASM is a bad solution here. What does it bring that a VM or Docker can't do?
And escaping a Docker container is not that simple; it requires a lot of heavy lifting and isn't always possible.
Aside from my worries about container escape, my main problem with Docker is the overhead of setting it up.
I want to build software that regular users can install on their own machines. Telling them they have to install Docker first is a huge piece of friction that I would rather avoid!
The lack of network support for WASM fits my needs very well. I don't want users running untrusted code which participates in DDoS attacks, for example.
You have the same lack of network support with cgroups containers if you configure them properly. It isn't as if the container is connected and traffic is filtered out; it's as though it's disconnected entirely. You can configure it so that it has network support filtered through iptables, but that does seem more dangerous, though in practice that isn't where the escapes come from. A network namespace can be left empty, without network interfaces, and a process made to use that empty namespace. That way there isn't any traffic flowing from an interface to be checked against iptables rules.
Escaping a container is apparently much easier than escaping a VM.
I think that threat is generally overblown in these discussions. Yes, container escape is less difficult than VM escape, but it still requires a major kernel 0day; it is by no means easy to accomplish. Doubly so if you have some decent hygiene and don't run anything as root or do anything else dumb.
When was the last time we heard of a container escape actually happening?
Just because you haven't heard of it doesn't mean the risk isn't real.
It's probably better to make some kind of risk assessment and decide whether you're willing to accept this risk for your users / business, and what you can do to mitigate it. The truth is the risk is always there; it gets smaller as you add isolation mechanisms, until it becomes insignificant.
I think you meant “container escape is not as difficult as VM escape.” A malicious workload doesn’t need to be root inside the container; the attack surface is the shared Linux kernel.
Not allowing root in a container might mitigate a container getting root access outside of its namespace. But if an escape succeeds, the attacker could leverage yet another privilege escalation mechanism to go from non-root to root.
To quote one of HN's resident infosec experts: Shared-kernel container escapes are found so often they're not even all that memorable.
More here: https://news.ycombinator.com/item?id=32319067
apparently...
Like it's also possible in a VM.
What about running unprivileged containers? You really need to open some doors to make escape easier!
Better not rely on unprivileged containers to save you. The problem is:
Breaking out of a VM requires a hypervisor vulnerability, and those are rare.
Breaking out of a shared-kernel container requires a kernel syscall vulnerability, and those are common. The syscall attack surface is huge, and much of it is exploitable even by unprivileged processes.
I posted this thread elsewhere here, but for more info: https://news.ycombinator.com/item?id=32319067
Is Podman unescapable compared to Docker?
They both use the same fundamental isolation mechanisms, so no.
They both can be made quite hard to escape. The Podman community is smaller, but it's more focused on solving technical problems than Docker is at this point (Docker is focused on increasing subscription revenue). I have gotten a configuration for running something in isolation that I'm happy with in Podman, and while I think I could do exactly the same thing in Docker, it seems simpler in Podman to me.
Apologies for repeating myself all over this part of the thread, but the vulnerabilities here are something that Podman and Docker can't really do anything about as long as they're sharing a kernel between containers.
The vulnerability is in kernel syscalls. More info here: https://news.ycombinator.com/item?id=32319067
If you're going to make containers hard to escape, you have to host them under a hypervisor that keeps them apart. Firecracker was invented for this. If Docker could be made unescapable on its own, AWS wouldn't need to run their container workloads under Firecracker.
This same, not especially informative content is being linked to again and again in this thread. If container escapes are so common, why has nobody linked to any of them rather than a comment saying "There are lots" from 3 years ago?
I did apologize, didn't I? :-)
Perspective is everything, I guess. You look at that three year old comment and think it's not particularly informative. I look at that comment and see an experienced infosec pro at Fly.io, who runs billions of container workloads and doesn't trust the cgroups+namespaces security boundary enough so goes to the trouble of running Firecracker instead. (There are other reasons they landed there, but the security angle's part of it.)
Anyway if you want some links, here are a few. If you want more, I'm sure you can find 'em.
CVE-2022-0492: https://unit42.paloaltonetworks.com/cve-2022-0492-cgroups
CVE-2022-0847: https://www.datadoghq.com/blog/engineering/dirty-pipe-contai...
CVE-2023-2640: https://www.crowdstrike.com/en-us/blog/crowdstrike-discovers...
CVE-2024-21626: https://nvd.nist.gov/vuln/detail/cve-2024-21626
Some are covered off by good container deployment hygiene and reducing privilege, but from my POV it looks like the container devs are plugging their fingers in a barrel that keeps springing new leaks.
(To be fair, modern Docker's a lot better than it used to be. If you run your container unprivileged and don't give it extra capabilities and don't change syscall filters or MAC policies, you've closed off quite a bit of the attack surface, though far from all of it.)
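The hardening measures just mentioned translate into an invocation roughly like this (a sketch only; `untrusted-image` is a placeholder name, and even with all of these flags the container still shares the host kernel):

```shell
# Run unprivileged, drop all Linux capabilities, forbid privilege
# escalation, mount the root filesystem read-only, and give the
# container an empty network namespace.
docker run --rm \
  --user 65534:65534 \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --read-only \
  --network none \
  untrusted-image
```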
But keep in mind that shared-kernel containers are only as secure as the kernel, and today's secure kernel syscall can turn insecure tomorrow as the kernel evolves. There are other solutions to that (look into gVisor and ask yourself why Google went to the trouble to make it -- and the answer is not "because Docker's security mechanisms are good enough"), but if you want peace of mind I believe it's better to sidestep the whole issue by using a hypervisor that's smaller and much more auditable than a whole Linux kernel shared across many containers.
I mean, Docker runs with root privileges for the most part. Yes, I know Docker can run rootless too, but Podman does it out of the box.
So if your Docker container is vulnerable and something can somehow break out of it, with default rootful Docker you might end up with root privileges, whereas with default Podman it runs as a user executable and would need another zero-day or something to get root, y'know?
Docker would be hacky and cumbersome, especially when compared to anything assembly-like.
The sandboxing, especially for AI Agents.
Everyday we grow closer to my dream of having a WASM based template engine for Python, similar to how Blazor takes Razor and builds it to WASM. I might have to toy with this when I get home.
Ideally, something that we could share between JS and Python.
Hasn't Pyodide been available for some years now?
Yes but it works only in the browser - running Pyodide outside of a browser is a lot of extra work.
My previous attempts are described here:
- https://til.simonwillison.net/deno/pyodide-sandbox
- https://til.simonwillison.net/webassembly/python-in-a-wasm-s...
Not true.
I don't know what runtime it uses, but I have tests in nightly CI that run exactly like this. See https://pyodide.org/en/stable/development/building-packages-...
Interesting - I hadn't seen that before:
> Pyodide provides an experimental command line runner for testing packages against Pyodide. Using it requires nodejs version 20 or newer.
Looks like it's a recent addition?
No clue - I added that CI job around 6 months ago.
I tracked it down to this PR from September 2022, so it's been around for a while: https://github.com/pyodide/pyodide/pull/2976
What is the point of running python in webassembly outside browser?
See comment here: https://news.ycombinator.com/item?id=45365165
I want a robust sandbox I can run untrusted code in, outside of the browser.
Does this work for packages with C/C++ extensions e.g. numpy and scipy?
Building packages with C/C++ extensions is still a bit tricky but you can see a list of all prebuilt packages for wasmer at https://pythonindex.wasix.org . numpy is available there, scipy not (yet).
Wow, this is the key. If it just had Python that wouldn't be as useful; the major frameworks are the real value. Definitely going to keep an eye on this. I built a sandbox with Deno for AI code generation. It works well enough, but there are some use cases where Python may make more sense. Nice!
Any chance your deno thing is on GH?
Bummer. Back to Docker then.
Seems like it already does for some, assuming Pillow and FFMpeg are on the list.
```
╰─ wasmer run python/python
error: Spawn failed
╰─▶ 1: compile error: Validate("exceptions proposal not enabled (at offset 0x191a)")
```
You'll need the latest wasmer RC for proper exceptions support.
We unfortunately didn't get the final release out quite in time...
How long should it take for "wasmer run python/python" to start showing me output? It's been hung for a while for me now (I upgraded to wasmer 6.1.0-rc.5).
"wasmer run python/python@=0.2.0" on the same machine gets me into Python 3.12.0 almost instantly.
Compilation with LLVM takes quite a while (the final release will show a spinner...).
So please wait a bit - subsequent runs will be fast, since compiled Python will be cached.
Oh so it's actually compiling everything on my machine?
Any chance `wasmer run python/python` might download a pre-compiled version in the future?
Yeah, that's mentioned as a small side note in the blog post - we are working on it, and will hopefully have it ready in a week or two!
OK got there in the end! I didn't time it but felt like around 10 minutes or more.
It did give me one warning message:
That sounds like the compilation is accidentally triggering this old frustration [0].
[0] https://github.com/python/cpython/issues/131189
Compilation was slow for me on macOS too.
We'll improve this very soon, right now the experience is less than ideal.
Ideally we would download the compiled artifacts instead of compiling as Simon commented... it will be a much better experience for everyone!
It says the same thing in the pyodide repl for what it's worth.
The close to 'native' Python performance looks promising!
Just want to point out that this section avoids mentioning the best way to do it:
AWS provides https://github.com/awslabs/aws-lambda-web-adapter which is a) supported and b) written in Rust, providing a translation of Lambda requests back into HTTP so you can use your usual entry point to the WSGI app. It is simple to set up. WebSockets are still not supported of course, but the issue of adapters is solved.
You can also use Websockets with AWS Lambda through API Gateway.
And you can also use Websockets with Cloudflare workers: https://developers.cloudflare.com/workers/runtime-apis/webso...
However it's worth pointing out that due to the concurrency model of AWS Lambda (one client request / WS message = one Lambda invocation; a process only ever handles one request at a time before it can handle the next one), you would end up spawning many more Lambda instances than you would with Cloudflare Workers or Wasmer Edge.
There are cost implications obviously, but AWS Lambda also works this way to make concurrency and scaling "simpler" by providing an easier mental model, though it is much more expensive in theory.
FFI support (like they have) is essential for any alternative Python to be worthwhile because so much of what makes Python useful today is numpy and keras and things like that.
That said, there is a need for accelerating branchy pure-python workloads too, I did a lot of work with rdflib where PyPy made all the difference and we also need runtimes that can accelerate those workloads.
Nice, but every time I look into WASM I have to wonder if containers and/or lightweight VMs wouldn't be simpler and have fewer restrictions. We seem to have forgotten about microkernels and custom runtimes (like the various Erlang ones) as well…
Still, that close to native Python is an interesting place to be.
VMs are much more complex than running a wasm process.
Not at scale. At scale, kubevirt has you covered, even without lightweight hypervisors.
WASM runs in the browser which is why it receives so much consideration (rightfully so).
Yes, but I am more concerned with server-side, where you need to do a bit more and with better performance.
Are we at the point where I can store arbitrary scripts in a sql database and execute them with arguments, safely in a python sandbox from a host language that may or may not be python, and return the value(s) to the caller?
I'd love to implement customer supplied transformation scripts for exports of data but I need this python to be fully sandboxed and only operate on the data I give it.
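Much of that plumbing is buildable today. Here's a minimal sketch in Python: scripts live in a SQL database and each run happens in a separate process with data passed over stdin/stdout. Note the `sys.executable` subprocess below is only a stand-in; the actual sandbox would come from substituting a wasm runtime (e.g. something like `wasmer run python/python`) as the child process.

```python
import json
import sqlite3
import subprocess
import sys

# Store customer-supplied transformation scripts in a SQL database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE scripts (name TEXT PRIMARY KEY, body TEXT)")
db.execute(
    "INSERT INTO scripts VALUES (?, ?)",
    ("double",
     "import sys, json; data = json.load(sys.stdin); "
     "print(json.dumps([x * 2 for x in data]))"),
)

def run_script(name, payload):
    (body,) = db.execute(
        "SELECT body FROM scripts WHERE name = ?", (name,)).fetchone()
    # Stand-in for the sandbox: in production this subprocess would be
    # a wasm runtime, which is what actually contains the untrusted
    # code. The script only ever sees the data handed to it on stdin.
    proc = subprocess.run(
        [sys.executable, "-c", body],
        input=json.dumps(payload),
        capture_output=True, text=True, timeout=5, check=True)
    return json.loads(proc.stdout)

print(run_script("double", [1, 2, 3]))  # [2, 4, 6]
```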
Arguably/pedantically, Pyodide has had this for a while: see https://developer.nvidia.com/blog/sandboxing-agentic-ai-work... for a use case.
Wasmer's approach hints at faster cold starts and better overall performance; the benchmarking against pyodide is a bit unclear, and it's unclear to me whether that would make or break viability for a use case like this.
But one thing this does make possible is if your arbitrary script is actually a persistent server, you can deploy that to edge servers, and interact with your arbitrary scripts over the network in a safe and sandboxed way!
I hadn't seen that NVIDIA article before... turns out they're running Python inside Pyodide inside WebAssembly inside Chrome inside Playwright inside Node.js! https://github.com/JosephTLucas/wasm-plotly/blob/main/server...
I'm always on the lookout for ways to run Python in a sandbox but that feels like one too many levels for me.
Pyodide inside Deno removes at least the headless browser layer: https://til.simonwillison.net/deno/pyodide-sandbox
> where I can store arbitrary [python] scripts in a sql database and execute them with arguments
Please no :sob:
Why not?
That's almost exactly what I want to do too. I've experimented a bit with QuickJS for this - there's a Python module here that looks reasonably robust https://pypi.org/project/quickjs/ - but my ideal would be a WebAssembly sandbox since that's the world's most widely tested sandbox at this point.
Trino isn't too far from that. It runs a wasm build of Python on Java, via the Chicory wasm runtime.
How does WASM replace/implement language specific features like goroutines or Python's asyncio loop, or the specifics of each language's GC?
Depending on the language, GC is either implemented in userspace using linear memory, or using the new GC extension to webassembly. The latter has some restrictions that mean not every language can use it and it's not a turnkey integration (you have to do a lot of work), but there are a bunch of implementations now that use wasm's native GC.
If you use wasm's native GC your objects are managed by the WASM runtime (in browsers, a JS runtime).
For things like goroutines you would emulate them using wasm primitives like exception handling, unless you're running in a host that provides syscalls you can use to do stuff like stack switching natively. (IIRC stack switching is proposed but not yet a part of any production WASM runtime - see https://webassembly.org/features/)
Based on what I read in a quick search, what Go does is generate each goroutine as a switch statement based on a state variable, so that you can 'resume' a goroutine by calling the switch with the appropriate state variable to resume its execution at the right point.
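That switch-on-a-state-variable lowering can be sketched like this (Python for illustration only; this is the shape of the transform, not Go's actual generated code):

```python
# A coroutine such as
#     def coro():
#         yield "a"
#         yield "b"
# gets lowered to a resumable state machine: a function that branches
# on a saved program counter, so the host can re-enter it at the right
# point without needing native stack switching in the runtime.
def make_coro():
    state = {"pc": 0}

    def resume():
        if state["pc"] == 0:
            state["pc"] = 1
            return "a"   # first suspension point
        if state["pc"] == 1:
            state["pc"] = 2
            return "b"   # second suspension point
        return None      # finished

    return resume

step = make_coro()
print(step(), step(), step())  # a b None
```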
Currently CPython's WASI build does not have asyncio support out of the box (at least according to [0]). This is, by my understanding, downstream of the asyncio implementation in the standard library being built on primitives around sockets and the like. And WASI, again by my understanding, does not support sockets.
In a browser environment there are theoretically ways you could piggyback off of the async support in the native ecosystem. But CPython is written to certain systems, so you're talking about CPython patches.
BUT the kind of beautiful thing is you can show up with your own asyncio event loop! So for example Pyodide just ships its own asyncio event loop[1]. This is possible thanks to Python's async infra just being built off of its generator concepts. async/await is, in itself, not something that "demands" I/O, just asyncio is.
[0]: https://docs.python.org/3.13/library/asyncio.html
[1]: https://pyodide.org/en/stable/project/release-notes/v0.17.0....
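The point that async/await doesn't inherently demand I/O can be shown in a few lines of pure Python: a coroutine is driven with `send()` just like a generator, with no sockets and no asyncio anywhere.

```python
import types

# Pyodide can ship its own event loop because a coroutine is just a
# resumable generator underneath; here we drive one entirely by hand.
@types.coroutine
def pause():
    yield  # suspend with no I/O at all

async def work():
    await pause()
    return 42

coro = work()
coro.send(None)          # run until the first suspension
try:
    coro.send(None)      # resume; the coroutine finishes
except StopIteration as done:
    result = done.value

print(result)  # 42
```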
I actually want browsers to support other languages natively.
Brendan Eich (the creator of JavaScript) was kind enough to chime in that it would be impossible for a variety of reasons.
Obviously he knows more about this than me, but I think Google could put Dart in there if they really wanted.
WebAssembly is pretty close though.
Ideally, sure, but that would increase the already enormous burden of building a standards compliant web browser. For a healthy web ecosystem it's important that not only trillion dollar companies can contribute or compete.
Not every single website needs to support every single browser. This is a modern convenience, I was doing QA back in the day when we still had to support Internet explorer.
Internet explorer just didn't provide the same experience as Chrome.
You were supporting the tail end of an era that is universally agreed upon as an ecosystem failure. The internet didn't provide a consistent user experience for developers or for users, it generated mountains of legacy baggage, and it was frustrating for everyone.
I was doing building and qa when we had to support Netscape Navigator. Not having a varied set of options for browsers comes with clear downsides.
I think we agree ?
For example if Firefox decides to add Rust support it doesn't mean every other browser needs to support it.
Just a handful of web experiences are going to be exclusive to Firefox. As is having Chrome as the only browser most people use isn't great for innovation.
Back to Internet Explorer ActiveX times.
> Not every single website needs to support every single browser.
Most sane HN commenter. Jesus Christ.
HN: We want real diversity in browsers, not everything should be a Chromium reskin.
Me: Well different browsers could support different features.
HN: NOT LIKE THAT, we want every website to be compatible with everything!
Your comment is really relevant to the Helium browser discussion. It's so on point.
People want different browsers so that Chromium doesn't get to enforce its monopoly on web standards, but I mean, it's already happening. Like, if something runs on Chrome, doesn't run on Firefox, and is used by a lot of people...
Effectively Firefox is ALSO forced to have those Chromium features.
Basically web standards are held hostage by Chromium, and we need a very heavy migration of large swathes of people away from Chrome to something like Firefox; that's what's being advocated, I suppose.
I use Zen / Firefox because I also don't want Chromium. I don't have a particular reason beyond the logic I shared above, honestly.
Roblox is kinda like an alternative web browser.
You simply can't expect to run Roblox games inside of Chrome
Roblox can't generally be used to file your taxes.
But you're visiting user-created experiences.
The big problem is it's all controlled by one super company .
There's no reason we can't have an open source browser like which allows you to play various games, or run other sandbox applications. These applications could be programmed in a variety of different languages.
In this scenario, sure, I still need Chrome to handle certain important business, but I can use this alternate browser to engage with tons of other content.
Yes roblox could be interpreted in that way.
Minecraft in a similar vein too.
I was actually thinking of creating a Roblox alternative, or at least proposing the idea of modifying Luanti (which is open source) to have Roblox-esque graphics.
So it would be the open source browser which allows you to play various games in some sense.
If you want sandboxed applications, there is libriscv, created by the legendary fwsgonzo, which can run on any device or in wasm I suppose:
https://www.luanti.org/
https://github.com/libriscv/libriscv
Diversity of implementations and opinions, not diversity of standards. Big difference.
Maintaining a browser is already hard enough, it's a very tough sell to convince 3+ browser vendors to implement a new language with its own standard library and quirks in parallel without a really convincing argument. As of yet, nobody has come up with a convincing enough argument.
Part of why WebAssembly was successful is that it's a way of generating javascript runtime IR instead of a completely new language + standard library - browsers can swap out their JavaScript frontend for a WASM one and reuse all the work they've done, reusing most of their native code generator, debugger, caches, etc. The primitives WASM's MVP exposes are mostly stuff browsers already knew how to do (though over time, it accumulated new features that don't have a comparison point in JS.)
And then WASM itself has basically no standard library, which means you don't have to implement a bunch of new library code to support it, just a relatively small set of JS APIs used to interact with it.
Webassembly does not generate JavaScript IR. Not sure where you got that idea. Maybe you're thinking of asm.js?
Every modern implementation I know of at least partially reuses the internals of the JS runtime, which enables things like cross-language inlining between WASM and JS.
What would "support other languages natively" give you that WebAssembly doesn't?
Inline Python where all you have to do is put in <python></python>
You can do that today with PyScript:
Demo: https://static.simonwillison.net/static/2025/pyscript-demo.h...
Or you can use MicroPython which is much smaller:
Demo: https://static.simonwillison.net/static/2025/pyscript-microp...
What kind of safeguards would we need in place with this sort of feature in HTML? What are the security implications?
Since they compiled the python interpreter to webassembly, yes you can now totally do a <python></python> webcomponent if you like. Of course it requires the extra work of importing this interpreter. Web browsers aren't going to come with multiple interpreters built-in, it would be too heavy.
I would be interested to see how short the time to run "Hello World" can be with python in a webpage, counting the time to load the whole page without cache.
Try benchmarking https://static.simonwillison.net/static/2025/pyscript-microp... and see. It's pretty minimal.
DOM access without JS interop?
Dart transpiles to Javascript already - not exactly native support, but practically the next best thing.
That being said, I'm also 100% behind the effort to standardize WASM as the cross-platform compilation target of choice.
If you transpile to javascript the performance will never exceed that of Javascript. Typescript is a bit silly in that aspect because it removes all the types that the developers put in, they aren't used to improve time or memory performance at all.
Would the app have outbound network access to do some Python scheduling stuff that involves pulling from another endpoint?
Eg something like this flask-based app? (Yes the code is shit, I’m just a sysadmin learning Python with some AI support at that time).
https://github.com/jgbrwn/my-upc/blob/main/app.py
Also, if wasmer supports Starlette, I assume it would support FastHTML (web framework that uses Starlette under the hood) ?
Yes, since it supports Starlette/ASGI, FastHTML should work just fine.
FastHTML requires apsw (a SQLite wrapper) even if you don't use it. We already compiled apsw to WASIX, but it also requires publishing a new version of Python to Wasmer (with SQLite dynamically linked instead of statically linked).
We will release a new Python version by the end of this week / beginning of next, so by then FastHTML should fully work in Wasmer (both the runtime and Edge)!
Sounds awesome!
Actually I’d imagine probably scheduling won’t work at all with wasmer?
Wasmer already supports jobs (cron jobs, and jobs after certain triggers: deployment, app creation, ...), although this is not fully documented yet.
We'll be improving our docs soon!
I tried to understand what is "Wasmer Edge" but couldn't. They say on the front page "Make any app serverless. The cheapest, fastest and most scalable way to deploy is on the edge." and it seems like I can upload the source code of any app and they will convert it for me? Unlikely so.
Also it says "Pay CDN-like costs for your cloud applications – that’s Wasmer Edge." and I don't understand why I need to pay for the cloud if the app is serverless. That's exactly the point of serverless app that you don't need to pay for the servers because, well, the name implies that there is no server.
Confusingly, "Serverless" doesn't mean there's no server. It means that you don't have to manage a server yourself.
My preferred definition of serverless is scale-to-zero - where if your app isn't getting any traffic you pay nothing (as opposed to paying a constant fee for having your own server running that's not actually doing any work), then you pay more as the traffic scales up.
Frustratingly there are some "serverless" offerings out there which DO charge you even for no traffic - "Amazon Aurora Serverless v1" did that, I believe they fixed it in v2.
Then it should be called manageless?
Still confusing, since infrastructure you don't have to manage yourself is sometimes called "managed". It makes sense from the perspective of "you are paying us to manage this for you".
Back in the day it was called PHP. You uploaded a PHP file. No server to manage. Nothing else to do!
It's a terrible name, but it's been around for over a decade now so we're stuck with it.
I mostly choose not to use it, because I don't like using ambiguous terminology if I can be more specific instead. So I'll say things like "scale-to-zero".
Scale to zero covers a lot of turf from AWS Lambda to CDNs to Fargate to DO app platform.
these are just automanaged cloud servers, I guess?
Thanks for the feedback.
Normally, if you want to run your apps serverlessly, you'll need to adapt your source code (both AWS Lambda and Cloudflare Workers require creating a custom HTTP handler).
In our case, you can run your normal server (let's say uvicorn) without any code changes required on your side.
Of course, you can already do this in Docker-enabled workloads: Google Cloud or Fly.io, for example. But that means your apps will have long cold-start times at a much higher cost (not serverless).
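To make the "no code changes" point concrete, here's the kind of app in question: a stock ASGI callable with no platform-specific handler, driven by hand below to show there's nothing Lambda-shaped in it. (Deploying it unchanged under uvicorn is the claim above; this sketch only demonstrates the app itself.)

```python
import asyncio

# A plain ASGI application: the exact same callable you would hand to
# `uvicorn module:app` locally. No framework and no adapter required.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({"type": "http.response.start", "status": 200,
                "headers": [(b"content-type", b"text/plain")]})
    await send({"type": "http.response.body",
                "body": b"Hello from plain ASGI"})

# Drive one request through it by hand, no server involved.
async def demo():
    sent = []
    async def receive():
        return {"type": "http.request"}
    async def send(message):
        sent.append(message)
    await app({"type": "http"}, receive, send)
    return sent

messages = asyncio.run(demo())
print(messages[1]["body"])  # b'Hello from plain ASGI'
```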
Hope this makes things clear!
Thank you for the explanation, now I can better see the differences between "serverless" platforms although I am still a little disappointed that so called "serverless" apps still require a (paid) server despite the name.
This bugs me all the time. Ethernet is serverless. Minesweeper is serverless. AWS Lambda is quite serverful, you're just not allowed to get a shell on that server.
I believe "serverless" in this sense means "like AWS lambda". Theoretically you upload some small scripts and they're executed on-demand out of a big resource pool, rather than you paying for/running an entire server yourself.
It seems like a horrible way to build a system with any significant level of complexity, but idk maybe it makes sense for very rarely used and light routes?
"Serverless" means Function-as-a-service, think of it like CGI-bin scripts but you pay per execution.
I'm kinda confused as to the licensing on this. Can I use this as a sandboxed Python for random (non-web) projects like I would Pyodide?
I get an https error at https://docs.wasmer.io/: net::ERR_CERT_AUTHORITY_INVALID
Very cool!
I’ve been looking at using lua for something like this: basically, users will be able to program robots in my lab (biotech) to do things, and I need a scripting language I can easily embed and control the runtime of in the larger system.
Lua is theoretically better in… almost every way, except everyone in bio uses python. So it could allow more easy modification of LLM generated scripts (not worried about the libraries because I mostly want to limit them: the scripts are mainly to just run robots, and you can have them webhook out if you need complicated stuff)
My question would be: would running a python sandbox vs a lua sandbox actually be appreciably better? Not sure yet, but will have to investigate with this new package (since it has Go bindings!)
Curious given you looked at both why you considered Lua to be better. I'd like to use Lua to teach freshmen and I need some arguments as to why it's better than Python.
Much better embeddability, and it's much easier to embed safely. It doesn't require a wasm compilation step, it can just be raw C, and that Lua can directly integrate with host functions and vice versa, something even these wasm implementations struggle with.
Also, with LuaJIT, it is much faster.
JupyterLite also does this. It uses local storage and the Pyodide kernel (Python on wasm). It has a special version of pip, and wasm versions of a lot of libraries which usually use native code (numpy etc). Super impressive.
https://jupyter.org/try-jupyter/lab/
We are actually going in another direction.
Philosophically speaking I believe we should not require a special version of pip to install packages, nor a "lite" version of Jupyter to run in WebAssembly.
We should be able to run Jupyter fully within the Wasmer ecosystem without requiring any changes on the package (to run either in the browser or the server).
How close is this to working? Can this new stack run a Jupyter kernel within a browser and get the front end to talk to it in the same browser?
Are dependencies easier to install or does it work only for packages that have pure wheel support?
Wondering how this compares to e.g. Jep for Java/Python interoperability (https://github.com/ninia/jep).
Would be way more exciting if it could _compile_ Python to Wasm (or does it?).
wait why is this the first time I saw something like this!
I remember when I was in 6th grade wanting to make some minecraft mods but didn't want to learn java for it and wanted something like python.
I also remember a video that did something like that but it was highly limited or smth iirc.
Could this be used to create a minecraft plugin in python?
Edit: Damn, there is this pyspigot thing which seems really fascinating..
https://pyspigot-docs.magicmq.dev/#getting-started
Tbh, nowadays I would prefer writing minecraft mods in kotlin but the pyspigot code is also definitely really interesting!
I am so excited about Python on the edge supported by wasm, because I used Python on Cloudflare Workers and there are so many limitations: only simple pure-Python code is supported.
Words "Python" and "fast" do not belong in the same sentence.
python is fast because computers are fast, but yea compared to rust it isn't, and in a lot of cases it doesn't have to be
Yeah, when I see these kinds of headlines about Python, I'm always left wondering what they mean by "fast". In this case, "fast" means "still slower than Python usually is".
I'm not sure I understand correctly: is it a new serverless offering competing with the likes of Vercel and Fly.io, but with a different technology and pricing strategy? And does the wasm container mean that I can deploy my Streamlit or FastAPI ETL apps without the Docker overhead or the slowness of Streamlit Cloud?
Wouldn't it be better to have sandboxing built directly into CPython? Why is there no such thing already included in CPython? Or maybe some limited sandboxed venv could be created?
Would it be possible to make it work on iOS or Android? I always missed better support for Python on mobile. In the past I used PythonKit to rapidly prototype and interop with Swift, but it had a limited set of modules. I wish I could use this in React Native for interop between JS and Python.
Yes, running Wasmer Python package on iOS or Android is 100% doable.
In fact, we want to even run it on browsers.
We are a small team, so we have to pick our battles very carefully, but we would welcome any patch to make it work (if it doesn't already!).
I don't see a mention of uv here.
Since LLMs have made me so lazy that I never bother to search or read on my own, can someone tell me whether I can use uv as my project management tool with wasmer? What's the story here?
I simply CANNOT go back to use packages without uv, it would be unthinkable to me.
Actually, now that I think of it, my laziness might have started when I learned perl 30 years ago.
It may be built in different ways, but uv is one of them yes. Check out the fastapi + websockets example here: https://github.com/wasmer-examples/python-fastapi-websockets
This rocks. Using Python as an embedded scripting language is Easy and Cool but also Dangerous and Evil; feels like this could make it less dangerous.
Isn't it just Python built with Emscripten?
Wasmer doesn't use Emscripten, but WASIX. See https://wasix.org/
Does your solution support interop between modules written in different languages? I would love to be able to pass POD objects between Python and JS inside the same runtime.
For a backend project in Java, I use Jep for Python interoperability and making use of Python ecosystem. It gives me a "non-restricted" Python to have in my Java code, something I'm quite happy with. Wondering how this compares to that .
See https://github.com/ninia/jep
Hmm, I tried it out.
> wasmer app create --template=static-website
gets you from empty folder to initialized template and deployed static website in like 10 seconds when logged in.
Pretty nice.
What does that do? A static website with some language compiled to wasm running in the browser?
How would i use numpy from javascript using this?
Using Wasmer-JS that should be doable. We just need to release a new version!
thanks. it'd be great to have a quick tutorial on doing so. this is close to my dream of creating Frankenstein apps with the web platform instead of graal :)
https://www.graalvm.org/latest/reference-manual/polyglot-pro...
WASMBots: Fast, Cheap, and Out Of Control!
https://people.csail.mit.edu/brooks/papers/fast-cheap.pdf
Interesting - do they compile Python to WebAssembly or just have a Python interpreter compiled to WebAssembly and running as usual?
This time we compiled the Python interpreter to WebAssembly.
If you are curious on compiling Python to WebAssembly please check py2wasm: https://wasmer.io/posts/py2wasm-a-python-to-wasm-compiler
Interesting; I wonder what the limitations of WebAssembly are in this context, as compared to the machine code
How are you going to sandbox WebAsm when it's not even defined?
> Now, you can run any kind of Python API server, powered by fastapi, django, flask, or starlette, connected to a MySQL database automatically when needed
I assume this is targeting the standalone WebAssembly use case, we're not...running MySQL in browsers right?
To be clear "fast" means "almost as fast as native Python", not "actually fast". Impressive achievement anyway.
"fast" is not "blazing fast"
“Fast” applies to Java/C#/Node, if it’s native Python then it is anything but fast.