Chicory seems like it'll be pretty useful. Java doesn't have easy access to the platform-specific security mechanisms (seccomp, etc) that are used by native tools to sandbox their plugins, so it's nice to have WebAssembly's well-designed security model in a pure-JVM library.
I've used it to experiment with using WebAssembly to extend the Bazel build system (which is written in Java). Currently there are several Bazel rulesets that need platform-specific helper binaries for things like parsing lock files or Cargo configs, and that's exactly the kind of logic that could happily move into a WebAssembly blob.
https://github.com/jmillikin/upstream__bazel/commits/repo-ru...
https://github.com/bazelbuild/bazel/discussions/23487
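For a sense of what the embedding looks like, it's roughly this (a sketch from my memory of the Chicory README, so exact class names may be off; "lockfile_helper.wasm" and its "parse" export are made-up names):

    import com.dylibso.chicory.runtime.ExportFunction;
    import com.dylibso.chicory.runtime.Instance;
    import com.dylibso.chicory.wasm.Parser;
    import java.io.File;

    public class LockfileHelperDemo {
        public static void main(String[] args) {
            // Parse a compiled guest module (hypothetical file name).
            var module = Parser.parse(new File("lockfile_helper.wasm"));

            // Instantiate it. With no imports supplied, the guest gets no
            // access to the host filesystem, network, or clock.
            var instance = Instance.builder(module).build();

            // Call an exported function; core Wasm values map to Java longs.
            ExportFunction parse = instance.export("parse");
            long[] result = parse.apply(0L, 0L);
            System.out.println(result[0]);
        }
    }

The point is that the helper logic ships as one architecture-independent blob instead of N platform binaries.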
I don't understand the logic and layers of abstraction here.
Chicory runs on the JVM. Bazel runs on the JVM. How will inserting a WebAssembly layer help eliminate platform-specific helper binaries? These binaries, compiled to WebAssembly, will effectively run on the JVM (through one additional layer of APIs provided by Chicory), right? Why can't you write these helpers directly in a JVM language (Java, Kotlin, Clojure, anything)? Why do you need the additional Chicory layer?
You can't just easily rewrite everything. Being able to just re-use existing code is the trick!
Exactly.
Why would you rewrite (parts of) Cargo from Rust to something that runs on the JVM, when you can use Wasm as basically an intermediate target to compile the Rust down to JVM bytecode?
Or how about running something like Shellcheck (written in Haskell) on the JVM as part of a build process?
You can see the same idea for the Go ecosystem (taking advantage of the Go build system) on the many repos of this org: https://github.com/wasilibs
This is great. The future is made of libraries packaged as WASM Components.
Aren't WASM Components pretty constrained? My (very fuzzy) understanding is that they must basically manage all of their own memory, and they can only interact by passing around integer handles corresponding to objects they manage internally.
Part of the component model is codegen to build object structures in each language so that you can pass by reference objects that have an agreed upon shape.
Yes, they each have their own linear memory; that's one of the advantages of the component model. It provides isolation at the library level, and you don't have to implicitly agree that each library gets the level of access your application does. It provides security against supply-chain attacks.
Having said that, the component model isn't supported by all runtimes, and since its bindings and codegen are static at compile time, it's not useful for every situation. Think of it like a C FFI more than a web API receiving JSON, for example. Upgrading the library version would mean upgrading your bindings and rebuilding your app binary too; the two must move in lock-step.
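To make the handle-passing point concrete, the host-side pattern being described looks roughly like this (purely illustrative Java; not any particular runtime's API):

    import java.util.HashMap;
    import java.util.Map;

    // The guest only ever sees opaque integer handles; the host keeps the
    // real objects in a table and resolves handles at each call boundary.
    final class HandleTable<T> {
        private final Map<Integer, T> objects = new HashMap<>();
        private int nextHandle = 1;

        int register(T obj) {      // host creates the object, guest gets an int
            objects.put(nextHandle, obj);
            return nextHandle++;
        }

        T resolve(int handle) {    // looked up whenever the guest calls back in
            T obj = objects.get(handle);
            if (obj == null) {
                throw new IllegalArgumentException("stale handle: " + handle);
            }
            return obj;
        }

        void drop(int handle) {    // guest signals it is done with the resource
            objects.remove(handle);
        }
    }

As I understand it, the WIT-generated bindings automate exactly this bookkeeping on both sides of the boundary.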
Oh, these tools are written in languages which can be directly compiled to WebAssembly without any changes? Yes, then it makes sense, thank you for the clarification.
Yeah, pretty much all of them are written in either Go or Rust. The Go tools pull in the Go standard library's Go parser to do things like compute dependencies via package imports, and the Rust ones use the Cargo libraries to parse Cargo.toml files.
From the perspective of a Bazel ruleset maintainer, precompiled helper tools are much easier to provide if your language has easy cross-compilation. So maybe one day Zig will start to make an appearance too.
Java already has plenty of FFI variants for that.
Yes, but WASM gives you more, especially WASM Components. E.g., FFI doesn't offer sandboxing, and unloading symbols is tricky. The WIT (WebAssembly Interface Types) IDL (+ bindings codegen) makes objects' exports explicit, but more importantly, their imports too (i.e., dependencies).
Basically CORBA, DCOM, PDO, RMI, Jini and .NET Remoting for a new generation.
None of what 'jcmfernandes lists is part of WebAssembly. At best they can be considered related technologies, like the relationship between the JVM and JavaBeans.
And in terms of design, they're closer to COM or OLE. The modern replacement for CORBA/DCOM/etc is HTTP+JSON (or gRPC), which doesn't try to abstract away the network.
They are certainly not much different from WIT (WebAssembly Interface Types) IDL (+ bindings codegen).
I've had the misfortune of working professionally with CORBA, and I've spent some time trying to keep up with WIT/WASI/that whole situation. Whatever WIT is going to be, I can assure you it's very different from CORBA.
The best way I think to describe WIT is that it seems to be an attempt to design a new ABI, similar to the System V ABI but capable of representing the full set of typesystems found in every modern language. Then they want to use that ABI to define a bunch of POSIX-ish syscalls, and then have WebAssembly as the instruction set for their architecture-independent executable format.
The good news is that WIT/WASI/etc is an independent project from WebAssembly, so whether it succeeds or fails doesn't have much impact on the use of WebAssembly as a plugin mechanism.
Correct, they are a part of WASI. Indeed, different things, but well, tightly related. Made sense to talk about them given the chat on bridging gaps in bazel using WASM.
Yes, the concept is old. I may be wrong, but to me, this really seems like it, the one that will succeed. With that said, I'm sure many said the same about the technologies you enumerated... so let's see!
I really don't want to sound flamewar-y, but how is WebAssembly's security model well-designed compared to a pure Java implementation of a brainfuck interpreter? Similarly, Java bytecode is 100% safe if you just don't plug in filesystem/OS capabilities.
It's trivial to be secure when you are completely sealed off from everything. The "art of the deal" is making it safe while having many capabilities. If you add WASI to the picture it doesn't look all that safe, but I might just not be too knowledgeable about it.
It's really difficult to compare the JVM and wasm because they are such different beasts with such different use cases.
What wasm brings to the table is that the core tech focuses on one problem: abstract sandboxed computation. The main advantage it brings is that it _doesn't_ carry all the baggage of a full fledged runtime environment with lots of implicit plumbing that touches the system.
This makes it flexible and applicable to situations where java never could be - incorporating pluggable bits of logic into high-frequency glue code.
Wasm + some DB API is a pure stored procedure compute abstraction that's client-specifiable and safe.
Wasm + a simple file API that assumes a single underlying file + a stream API that assumes a single outgoing stream, that's a beautiful piece of plumbing for an S3-like service that lets you dynamically process files on the server before downloading the post-processed data.
There are a ton of use cases where "X + pluggable sandboxed compute" is power-multiplier for the underlying X.
I don't think the future of wasm is going to be in the use case where we plumb a very classical system API onto it (although that use case will exist). The real applicability and reach of wasm is the fact that entire software architectures can be built around the notion of mobile code where the signature (i.e. external API that it requires to run) of the mobile code can be allowed to vary on a use-case basis.
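As a sketch of how small such a "pluggable compute" surface can be, the single-file/single-stream example above needs a host API of only two methods (hypothetical names, illustrative only):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    // The entire world the guest can touch: one readable file and one
    // outgoing stream. Everything else simply has no import to call.
    interface SingleFileWorld {
        InputStream openInput() throws IOException;   // the one underlying file
        OutputStream openOutput() throws IOException; // the one outgoing stream
    }

    // A toy in-memory host implementation, for illustration only.
    final class InMemoryWorld implements SingleFileWorld {
        private final byte[] file;
        final ByteArrayOutputStream sink = new ByteArrayOutputStream();

        InMemoryWorld(byte[] file) { this.file = file; }

        @Override public InputStream openInput() { return new ByteArrayInputStream(file); }
        @Override public OutputStream openOutput() { return sink; }
    }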
> What wasm brings to the table is that the core tech focuses on one problem: abstract sandboxed computation. The main advantage it brings is that it _doesn't_ carry all the baggage of a full fledged runtime environment with lots of implicit plumbing that touches the system.
Originally, but that's rapidly changing as people demand more performant host application interfacing. Sophisticated interfacing + GC + multithreading means WASM could (and likely will) fall into the same trap as the JVM. For those too young to remember, Java Applet security failed not because the model was broken, but because the rich semantics and host interfacing opened the door to a parade of implementation bugs. "Memory safe" languages like Rust can't really help here, certainly not once you add JIT into the equation. There are ways to build JIT'd VMs that are amenable to correctness proofs, but it would require quite a lot of effort, and the most popular and performant VMs just aren't written with that architectural model in mind. The original premise behind WASM was to define VM semantics simple enough that that approach wouldn't be necessary to achieve correctness and security in practice; in particular, while leveraging existing JavaScript VM engines.
The thing is, sophisticated interfacing, GC, and multithreading - assuming they're developed and deployed in a particular way - only apply in the cases where you're applying it to use cases that need those things. The core compute abstraction is still there and doesn't diminish in value.
I'm personally a bit skeptical of the approach to GC that's being taken in the official spec. It's very design-heavy and tries to bring in a structured heap model. When I was originally thinking of how GC would be approached on wasm, I imagined that it would be a few small hooks to allow the wasm runtime to track rooted pointers on the heap, and then some API to extract them when the VM decides to collect. The rest can be implemented "in userspace" as it were.
But that's the nice thing about wasm. The "roots-tracker" API can be built on plain wasm just fine. Or you can write your VM to use a shadow stack and handle everything internally.
The bigger issue isn't GC, but the ability to generate and inject wasm code that links into the existing program across efficient call paths - needed for efficient JIT compilation. That's harder to expose as a simple API because it involves introducing new control flow linkages to existing code.
The bespoke capability model in Java has always been so fiddly that it has made me question the concept of capability models. There was, for a long time, a constant stream of new privilege escalations, mostly caused by new functions being added that didn't necessarily break the model themselves, but they returned objects that contained references to objects that contained references to data that the code shouldn't have been able to see. Nobody, to my recollection, ever made an obvious back door, but non-obvious ones were fairly common.
I don’t know where things are today because I don’t use Java anymore, but if you want to give some code access to a single file then you’re in good hands. If you want to keep them from exfiltrating data you might find yourself in an Eternal Vigilance situation, in which case you’ll have to keep on top of security fixes.
We did a whole RBAC system as a thin layer on top of JAAS. Once I figured out a better way to organize the config it wasn’t half bad. I still got too many questions about it, which is usually a sign of ergonomic problems that people aren’t knowledgeable enough to call you out on. But it was a shorter conversation with fewer frowns than the PoC my coworker left for me to productize.
WASI does open up some holes you should be mindful of. But it's still much safer than other implementations. We don't allow you direct access to the FS; we use jimfs: https://github.com/google/jimfs
I typically recommend that people don't allow wasm plugins to talk to the filesystem, though, unless they really need to read some things from disk, like a Python interpreter. You don't usually need to.
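For anyone who hasn't seen jimfs, it's an in-memory java.nio filesystem; usage is roughly this (how Chicory's WASI layer mounts it is my assumption, so treat the wiring comment as such):

    import com.google.common.jimfs.Configuration;
    import com.google.common.jimfs.Jimfs;
    import java.nio.file.FileSystem;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class JimfsDemo {
        public static void main(String[] args) throws Exception {
            // A POSIX-style filesystem living entirely on the heap.
            try (FileSystem fs = Jimfs.newFileSystem(Configuration.unix())) {
                Path root = fs.getPath("/sandbox");
                Files.createDirectories(root);
                Files.writeString(root.resolve("input.txt"), "hello");
                // Hand only `root` to the WASI layer as the guest's preopened
                // directory; the host's real disk never enters the picture.
                System.out.println(Files.readString(root.resolve("input.txt")));
            }
        }
    }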
I wouldn't say 100% safe. I was able to abuse the JVM to use Spectre gadgets to find secret memory contents (aka private keys) on the JVM. It was tough, but let's not exaggerate JVM safety.
You can have some fun with WebAssembly as well regarding Spectre.
> Unfortunately, Spectre attacks can bypass Wasm's isolation guarantees. Swivel hardens Wasm against this class of attacks by ensuring that potentially malicious code can neither use Spectre attacks to break out of the Wasm sandbox nor coerce victim code—another Wasm client or the embedding process—to leak secret data.
https://www.usenix.org/conference/usenixsecurity21/presentat...
People have to stop putting WebAssembly on some pedestal of bytecode formats.
WebAssembly doesn't have access to the high-resolution timers needed for Spectre attacks unless the host process intentionally grants that capability to the sandboxed code.
See this quote from the paper you linked:
""" Our attacks extend Google’s Safeside [24] suite and, like the Safeside POCs, rely on three low-level instructions: The rdtsc instruction to measure execution time, the clflush instruction to evict a particular cache line, and the mfence instruction to wait for pending memory operations to complete. While these instructions are not exposed to Wasm code by default, we expose these instructions to simplify our POCs. """
The security requirements of shared-core hosting that want to provide a full POSIX-style API are unrelated to the standard use of WebAssembly as an architecture-independent intermediate bytecode for application-specific plugins.
'gf000 correctly notes that WebAssembly's security properties are basically identical to any other interpreter's, and there are many options for bytecodes (or scripting languages) that can do some sort of computation without any risk of a sandbox escape. WebAssembly is distinguished by being a good generic compilation target and being easy to write efficient interpreters/JITs for.
WebAssembly doesn't exist in isolation; it needs a host process to actually execute.
So whatever security considerations are drawn from the bytecode semantics are useless in practice on their own, which its advocates keep forgetting.
As they, and you point out, "WebAssembly's security properties are basically identical to any other interpreter,..."
The implementation makes all the difference.
The WebAssembly bytecode semantics are important to security because they make it possible to (1) be a compilation target for low-level languages, and (2) implement small secure interpreters (or JITs) that run fast enough to be useful. That's why WebAssembly is being so widely implemented.
Java was on a path to do what WebAssembly is doing now, back in the '90s. Every machine had a JRE installed, every browser could run Java applets. But Java is so slow (and its sandboxing design so poor) that the world gave up on Java being able to deliver "compile once run anywhere".
If you want to take a second crack at Sun's vision, then you can go write your own embedded JVM and try to convince people to write an LLVM backend for it. The rest of us gave up on that idea when applets were removed from browsers for being a security risk.
People talk all the time about Java, while forgetting that this kind of polyglot bytecode has existed since 1958; there are others that would be quite educational to learn about instead of always using Java as an example.
Ok, show me a bytecode from the 60s (or 90s!) to which I can compile Rust or Go and then execute with near-native performance with a VM embedded in a standard native binary.
The old bytecodes of the 20th century were designed to be a compilation target for a single language (or family of closely-related languages). The bytecode for Erlang is different from that of Smalltalk is different from that of Pascal, and that's before you start getting into the more esoteric cases like embedded Forth.
The closest historical equivalent to today's JVM/CLR/WebAssembly I can think of is IBM's hardware-independent instruction set, which I don't think could be embedded and definitely wasn't portable to microcomputer architectures.
The extent of how each bytecode was used doesn't invalidate their existence.
Any bytecode can be embedded, it is a matter of implementation.
> The Architecture Neutral Distribution Format (ANDF) in computing is a technology allowing common "shrink wrapped" binary application programs to be distributed for use on conformant Unix systems, translated to run on different underlying hardware platforms. ANDF was defined by the Open Software Foundation and was expected to be a "truly revolutionary technology that will significantly advance the cause of portability and open systems",[1] but it was never widely adopted.
https://en.wikipedia.org/wiki/Architecture_Neutral_Distribut...
> The ACK's notability stems from the fact that in the early 1980s it was one of the first portable compilation systems designed to support multiple source languages and target platforms
https://en.wikipedia.org/wiki/Amsterdam_Compiler_Kit
> More than 20 programming tools vendors offer some 26 programming languages — including C++, Perl, Python, Java, COBOL, RPG and Haskell — on .NET.
https://news.microsoft.com/2001/10/22/massive-industry-and-d...
Plenty more examples are available to anyone that cares to dig into what happened after the UNCOL idea came to be in 1958.
Naturally one can always advocate that since 60 years of history have not provided that very special feature XYZ, we should now celebrate WebAssembly as the be all end all of bytecode, as startups with VC money repurpose old ideas newly wrapped.
> The extent of how each bytecode was used doesn't invalidate their existence.
It does, because uptake is the proof of suitability to purpose. There's no credit to just being first to think of an idea, only in being first to implement it well enough that everyone wants to use it.
> Any bytecode can be embedded, it is a matter of implementation.
Empty sophistry is a poor substitute for thought. Are you going to post any evidence of your earlier claim, or just let it waft around like a fart in an elevator?
In particular, your reference to ANDF is absurd and makes me think you're having this discussion in bad faith. I remember ANDF, and TenDRA -- I lost a lot of hours fighting the TenDRA C compiler. Nobody with any familiarity with ANDF would put it in the same category as WebAssembly, or for that matter any other reasonable bytecode.
For anyone who's reading this thread, check out the patent (https://patents.google.com/patent/EP0464526A2/en) and you'll understand quickly that ANDF is closer to a blend of LLVM IR and Haskell's Cmm. It's designed to be used as part of a multi-stage compiler, where part of the compiler frontend runs on the developer system (emitting ANDF) and the rest of the frontend + the whole backend + the linker runs on the target system. No relationship to WebAssembly, JVM bytecode, or any other form of bytecode designed to be executed as-is with predictable platform-independent semantics.
> More than 20 programming tools vendors offer some 26 programming languages — including C++, Perl, Python, Java, COBOL, RPG and Haskell — on .NET.
I want to see you explain why you think the CLR pre-dates the JVM. Or explain why you think C++/CLI is the same as compiling actual standard C/C++ to WebAssembly.
> Naturally one can always advocate that since 60 years of history have not provided that very special feature XYZ, we should now celebrate WebAssembly as the be all end all of bytecode, as startups with VC money repurpose old ideas newly wrapped.
Yes, it is in fact normal to celebrate when advances in compiler implementation, security research and hardware performance enable a new technology that solves many problems without any of the downsides that affected previous attempts in the same topic.
If you reflexively dislike any technology that is adopted by startups, and then start confabulating nonsense to justify your position despite all evidence, then the technology isn't the problem.
> It does, because uptake is the proof of suitability to purpose. There's no credit to just being first to think of an idea, only in being first to implement it well enough that everyone wants to use it.
Depends on how the sales pitch of those selling the new stack goes.
> Empty sophistry is a poor substitute for thought. Are you going to post any evidence of your earlier claim, or just let it waft around like a fart in an elevator?
Creative writing, some USENET flavour, loving it.
> In particular, your reference to ANDF is absurd and makes me think you're having this discussion in bad faith. I remember ANDF, and TenDRA -- I lost a lot of hours fighting the TenDRA C compiler. Nobody with any familiarity with ANDF would put it in the same category as WebAssembly, or for that matter any other reasonable bytecode.
It is a matter of prior art, not what they achieved in practice.
> I want to see you explain why you think the CLR pre-dates the JVM. Or explain why you think C++/CLI is the same as compiling actual standard C/C++ to WebAssembly.
I never wrote that the CLR predates the JVM; where is that? Can you please point it out for us?
C++/CLI is as much standard C and C++ as using Emscripten's clang extensions for WebAssembly integration with JavaScript is.
But I tend to forget that in the eyes of FOSS folks, clang and GCC language extensions are considered regular C and C++, as if defined by ISO themselves.
> Yes, it is in fact normal to celebrate when advances in compiler implementation, security research and hardware performance enable a new technology that solves many problems without any of the downsides that affected previous attempts in the same topic.
Naturally, when folks are honest about the actual capabilities and the past they build upon.
I love WebAssembly Kubernetes clusters reinventing application servers, by the way, what a cool idea!
I love that I can get this kind of depth of conversation on HN.
Pssst, it is the usual WebAssembly sales pitch.
Linear memory accesses aren't bounds-checked inside the linear memory segment, thus data can still be corrupted even if it doesn't leave the sandbox (see the sketch after the links below).
Also, just like many other bytecode-based implementations, it is only as safe as its implementations, which can be equally attacked.
https://webassembly.org/docs/security/
https://www.usenix.org/conference/usenixsecurity20/presentat...
https://www.usenix.org/conference/usenixsecurity21/presentat...
https://www.usenix.org/conference/usenixsecurity22/presentat...
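To make the first point concrete: from the host's perspective a linear memory is one flat byte array, so a guest-side overflow stays "in bounds" while still trashing the guest's own data (illustrative Java; the offsets are invented):

    public class LinearMemoryDemo {
        public static void main(String[] args) {
            // The guest's entire address space, as the host sees it.
            byte[] linearMemory = new byte[64 * 1024];

            // Two guest-side "allocations" at fixed offsets inside that array.
            int bufA = 0;    // the guest believes this buffer is 128 bytes long
            int bufB = 128;  // an unrelated guest buffer placed right after it

            // A guest bug writing 200 bytes through bufA silently tramples bufB.
            // No host-level bounds check fires, since every index stays inside
            // linearMemory; the corruption never leaves the sandbox.
            for (int i = 0; i < 200; i++) {
                linearMemory[bufA + i] = (byte) 0x41;
            }
            System.out.println("bufB[0] is now: " + linearMemory[bufB]);
        }
    }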
WebAssembly being described as a sandbox is perfectly valid. Applications with embedded sandboxes for plugins use the sandbox to protect the application from the plugin, not to protect the plugin from itself. The plugin author can protect the plugin from itself by using a memory-safe language that compiles to WebAssembly; that's on them and not on the embedding application.
Except for the tiny detail that the whole application is responsible for everything it does, including the behaviour of the plugins it decides to use. So if a plugin can be induced into faulty behaviour on its outputs, that will influence the expected behaviour of the host logic building on those outputs, and someone will be very happy and write a blog post with a funny name.
Looking forward to seeing more Chicory in Bazel, is a great use-case! Thanks for spearheading it!
> Java doesn't have easy access to the platform-specific security mechanisms (seccomp, etc) that are used by native tools to sandbox their plugins, so it's nice to have WebAssembly's well-designed security model in a pure-JVM library.
I thought Java had all of this sandboxing stuff baked in? Wasn't that a big selling point for the JVM once upon a time? Every other WASM thread has someone talking about how WASM is unnecessary because JVM exists, so the idea that JVM actually needs WASM to do sandboxing seems pretty surprising!
The JVM was designed with the intention of being a secure sandbox, and a lot of its early adoption was as Java applets that ran untrusted code in a browser context. It was a serious attempt by smart people to achieve a goal very similar to that of WebAssembly.
Unfortunately Java was designed in the 1990s, when there was much less knowledge about software security -- especially sandboxing of untrusted code. So even though the goal was the same, Java's design had some flaws that made it difficult to write a secure JVM.
The biggest flaw (IMO) was that the sandbox layer was internal to the VM: in modern thought the VM is the security boundary, but the JVM allows trusted and untrusted code to execute in the same VM, with java.lang.SecurityManager[0] and friends as the security mechanism. So the attack surface isn't the bytecode interpreter or JIT, it's the entire Java standard library plus every third-party module that's linked in or loaded.
During the 2000s and 2010s there were a lot of Java sandbox escape CVEs. A representative example is <https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-0422>. Basically the Java security model was broken, but fixing it would break backwards compatibility in a major way.
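To illustrate the in-VM boundary: the old model looked roughly like this (the API is deprecated and non-functional in current JDKs, shown here purely as history):

    // Legacy mechanism: policy is enforced only at call sites that consult it.
    System.setSecurityManager(new SecurityManager() {
        @Override
        public void checkRead(String file) {
            // Vetoes reads only when a library method remembers to call this.
            throw new SecurityException("untrusted code may not read " + file);
        }
    });
    // Any standard-library path that forgets the check, or that hands
    // untrusted code a reference to a capable object, is a potential escape.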
--
Around the same time (early-mid 2010s) there was more thought being put into sandboxing native code, and the general consensus was:
- Sandboxing code within the same process space requires an extremely restricted API. The original seccomp only allowed read(), write(), exit(), and sigreturn() -- it could be used for distributed computation, but compiling existing libraries into a seccomp-compatible dylib was basically impossible.
- The newly-developed virtualization instructions in modern hardware made it practical to run a virtual x86 machine for each untrusted process. The security properties of VMs are great, but the x86 instruction set has some properties that make it difficult to verify and JIT-compile, so actually sitting down and writing a secure VM was still a major work of engineering (see: QEMU, VMWare, VirtualBox, and Firecracker).
Smartphones were the first widespread adoption of non-x86 architectures among consumers since PowerPC, and every smartphone had a modern web browser built in. There was increasing desire to have something better than JavaScript for writing complex web applications executing in a power-constrained device. Java would have been the obvious choice (this was pre-Oracle), except for the sandbox escape problem.
WebAssembly combines architecture-independent bytecode (like JVM) with the security model of VMs (flat memory space, all code in VM untrusted). So you can take a whole blob of legacy C code, compile it to WebAssembly, and run it in a VM that runs with reasonable performance on any architecture (x86, ARM, RISC-V, MIPS, ...).
What a thorough, excellent response. I learned a lot, thank you for taking the time to write this up!
A few cool things based on Chicory:
OPA: https://github.com/StyraInc/opa-java-wasm
Integration with Debezium has been launched today too: https://debezium.io/blog/2025/02/24/go-smt/
And SQLite will come next: https://github.com/roastedroot/sqlite4j
Some more interesting use cases in production:
Running python UDFs in Trino: https://trino.io/docs/current/udf/python.html
Running the Ruby parser in Jruby: https://blog.enebo.com/2024/02/23/jruby-prism-parser.html
Looking forward to this reviving NestedVM's pure Java SQLite. It's only been (checks notes…) 20 years.
http://nestedvm.ibex.org/
https://benad.me/blog/2008/1/22/nestedvm-compile-almost-anyt...
To be clear: I'm fully supportive of this effort. NestedVM's SQLite is 100% my inspiration for my Wasm based Go SQLite driver.
also the chicory Extism SDK https://github.com/extism/chicory-sdk and the mcpx4j library used for mcp.run Java integration, see e.g. https://docs.mcp.run/tutorials/mcpx-spring-ai-java
...and Chicory works on Android too https://docs.mcp.run/tutorials/mcpx-gemini-android
How does it compare to graal wasm? https://github.com/oracle/graal/blob/master/wasm/README.md/
take a look at this blog post, these are early results but we collaborated with the Graal team for a fair comparison https://chicory.dev/blog/chicory-1.0.0#the-race-day
Also note, we have the AOT compiler, which can target JVM bytecode directly, as well as Dalvik/Android, which is experimental but nearly spec-complete :)
Wizard's slow interpreter also runs on the JVM, albeit very, very slowly. Have you done any benchmarking against Wizard?
we should!
It'd be interesting to see a benchmark for what the total overhead is for Rust->WASM->Chicory AoT->native-image versus native Rust; I've been pleasantly surprised by the JVM in the past, so I'd hope it'd be a relatively small hit.
Even in interpreter mode, rust wasm programs seem very fast for me on Chicory. I'm not sure if we have any specific benchmarks but the graal team did some and i think it's based on a rust guest program https://chicory.dev/blog/chicory-1.0.0/#the-race-day
Ahaha, that's intriguing! I think there are still some gaps, but we are comparing results (with GraalWasm) on Photon here: https://github.com/shaunsmith/wasm-bench Should be easy to build a native image and compare!
Related. Others?
Chicory 1.0.0-M1: First Milestone Release - https://news.ycombinator.com/item?id=42086590 - Nov 2024 (3 comments)
A Zero-Dependency WebAssembly Runtime for the JVM - https://news.ycombinator.com/item?id=38759030 - Dec 2023 (1 comment)
I'd like to take a moment to appreciate how cute the name is.
Love stuff like that.
Glad you appreciate it! On top of being a Java joke, it's an homage to my home, New Orleans. We still drink coffee with chicory here due to events during the Civil War and a changed cultural taste afterward, though the history isn't exactly clear, I think: https://neworleanshistorical.org/items/show/1393
Came here to say the same thing, excellent name.
For people who aren't aware, Chicory has long been used (e.g. in Europe during WW2) as a coffee substitute, and Java is another name for coffee, thus Chicory is a substitute for Java.
Edit: I originally thought Chicory was a JVM replacement using WebAssembly (e.g. to run Java applets in modern browsers, using WebAssembly). It appears that it's actually a WebAssembly runtime, to run WebAssembly code on the JVM. So the name is a lot less cool than I thought it was.
it really is a perfect name. credit to u/bhelx!
For some reason, I think that instead a Java runtime written in WebAssembly would be more useful.
https://cheerpj.com/
https://thenewstack.io/cheerpj-3-0-run-apps-in-the-browser-w...
Thanks for explaining ;)
There are a few, and they are really interesting! The reason we wrote Chicory though is we're interested in extending the capabilities of existing Java applications through plugins. The intro of this talk explains some of this reasoning: https://www.youtube.com/watch?v=00LYdZS0YlI
Not sure why you're being downvoted. One of the best tools Microsoft made regarding WebAssembly and C# is Blazor. Developers can focus on building web applications, use C# on both the front end and back end, and drive the UI either server-side or via WASM without missing a beat, essentially bypassing the need for JavaScript.
I can only imagine such a capability for Java or other languages would be infinitely useful.
Google Web Toolkit was released 18 years ago; it essentially allowed you to create early Web 2.0 apps (like Gmail) in Java. AJAX and a lot of Web 2.0 innovations essentially originated from GWT.
https://en.wikipedia.org/wiki/Google_Web_Toolkit
Arguably, GWT was too ambitious. That made it somewhat of a PITA to work with.
J2CL is a much better approach (IMO) but is somewhat too little too late.
The best analogy to what GWT was is ASP.NET WebForms, but run on the client. That extra baggage (along with an antiquated build system and setup) made it really hard to keep GWT up to date.
I'm excited to see Java bytecode->WASM though. Now that WASM ships with a GC, we should see some really neat stuff in terms of the size of the runtime needed for AOT bytecode->wasm compilation.
I would even argue that large scale JS web apps were plain impossible without Google Closure (the compiler they used to compile both Java and JS to JS, and to add types to JS) at the time.
TeaVM https://www.teavm.org/
Then you'd be running Java in the browser! Wait, isn't that applets?
A Java runtime without multithreading? No thanks.
This looks very cool - I'm going to read into the implementation, there's something about producing JVM bytecode from WASM instructions and then having the JVM JIT compile it into native instructions that amuses me.
It's very amusing to me as well. The first thing I did was run an SNES emulator, and it definitely made me chuckle https://x.com/bhelx/status/1809235314839281900
thinking about that makes me want to see a performance comparison of WASM code running in Chicory vs running on other non-Java WASM hosts
Chicory is how we're able to run newly popular MCP servers on Android!
https://docs.mcp.run/blog/2024/12/27/running-tools-on-androi...
How far is this from the hypothetical (I think, for now) scenario of including a WASM build as a fallback "platform" in jars that include some native code for a number of platforms? A number of platforms that will never be complete, not when you include the future?
I would say pretty close; check this, for example: https://github.com/roastedroot/sqlite4j
Is it only me who is interested in creating Minecraft plugins in Go, compiled to wasm and now run in Chicory natively in the JVM itself? Sure, there is some overhead, but oh, I don't mind that. I think somebody would want to hook the Minecraft API to be callable from wasm, and then some more shenanigans on the wasm side as well as on the Go side, but oh boy, it could be fun. I really wanted to create a Python Minecraft plugin when I was really young / just starting out programming; I was really close to learning Kotlin just for Minecraft. Minecraft holds dear to me.
I'm not sure where they congregate, but I've encountered some Minecraft people using Chicory, probably for this purpose. I don't know how Minecraft plugins currently work, but I'd imagine Chicory is more secure, and the overhead may be worth it in some cases.
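For anyone wanting to try it, wiring a host API into a wasm plugin with Chicory looks roughly like this. A sketch only: the minecraft/send_chat import and its pointer+length string convention are made up for illustration, and the exact Chicory class names may differ between versions:

    import com.dylibso.chicory.runtime.HostFunction;
    import com.dylibso.chicory.runtime.ImportValues;
    import com.dylibso.chicory.runtime.Instance;
    import com.dylibso.chicory.wasm.types.FunctionType;
    import com.dylibso.chicory.wasm.types.ValType;
    import java.util.List;

    // Hypothetical import the Go plugin would call as minecraft.send_chat(ptr, len).
    var sendChat = new HostFunction(
            "minecraft",
            "send_chat",
            FunctionType.of(List.of(ValType.I32, ValType.I32), List.of()),
            (inst, args) -> {
                // The guest passes a pointer and a length into its own linear memory.
                String msg = inst.memory().readString((int) args[0], (int) args[1]);
                System.out.println("[chat] " + msg); // stand-in for the real server API
                return null; // no return values
            });

    var instance = Instance.builder(pluginModule) // pluginModule: the parsed plugin wasm
            .withImportValues(ImportValues.builder().addFunction(sendChat).build())
            .build();

The security win is that the plugin only gets the functions you explicitly hand it; there's no classpath or reflection for it to escape through.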
I'd think an RPC would be the cleaner pattern, but I don't know how to deploy a service-oriented Minecraft server.
I want to do the opposite: I want to run JVM languages on wasm.
There are a few efforts in this direction: TeaVM, but also Graal. I think you just need to stay tuned.
also https://github.com/mirkosertic/Bytecoder and https://github.com/i-net-software/JWebAssembly as well as the already mentioned (in some other comment) https://cheerpj.com/
most of them are still experimental, while CheerpJ offers Enterprise support (but it targets the browser).
oh, and let's not forget j2cl https://github.com/google/j2cl
Would Scala be able to run on this?
Yes, I think so. It's just a jar.
Scala needs the Garbage Collection and Exception Handling proposals (AFAIK). They are planned but not yet implemented in Chicory.
Wait, was the question running Scala in wasm or running Chicory in Scala?
To wasm or not to wasm.
Well, running Chicory in Scala is seamless, I thought this comment was targeting the effort to compile Scala to Wasm in the Scala.js project.
Can we use any JVM language, like Clojure?
I think it's the other direction isn't it? As in, it's a runtime written in Java that runs WebAssembly, not a JVM that runs on WebAssembly.
I could be wrong but that's the impression I got.
that is correct
Yes, you should be able to call wasm stuff from Clojure with it too.
This is distributed as just a jar, so you can invoke it from Clojure, if that is what you mean.
I feel like a WASM-native JVM runtime would make more sense these days
There is TeaVM for whatever it’s worth.
JavaScript and WASM really seem more portable. They have now approximated what Java web applets tried to achieve. And now WASM can run on mainframe IBM JVMs; Nashorn or Rhino seems to cover JavaScript there.
JavaScript and WASM are now getting close to COBOL's importance. That’s no mean feat.
fun fact: there is an ongoing effort to bring Wasm to Rhino via Chicory
https://github.com/mozilla/rhino/discussions/1485
There are a few! But also there is lots of Java software out there and Wasm is a great way to extend it and bring new functionality.