nayajunimesh 2 days ago

Most validation libraries like Zod create deep clones of your data during validation, which can impact performance in high-throughput applications. I built decode-kit to take a different approach: assertion-based validation that validates and narrows TypeScript types in-place, without any copying or transformation. Here's what the API looks like in practice:

import { object, string, number, validate } from "decode-kit";

// Example of untrusted data (e.g., from an API)
const input: unknown = { id: 123, name: "Alice" };

// Validate the data (throws if validation fails)
validate(input, object({ id: number(), name: string() }));

// `input` is now typed as { id: number; name: string }
console.log(input.id, input.name);

When validation fails, decode-kit avoids being prescriptive about error formatting. Instead, it exposes a structured error system with an AST-like path that pinpoints where validation failed. It includes a sensible default error message for debugging, but you can also traverse the error path to build whatever error handling fits your application, from simple logging to sophisticated user-facing messages.
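
To make that concrete, here is a rough sketch of consuming a validation failure. The error shape shown (a `path` array plus a default `message`) is illustrative, not the library's literal API:

import { object, string, number, validate } from "decode-kit";

const input: unknown = { id: 123, name: 42 }; // `name` should be a string

try {
  validate(input, object({ id: number(), name: string() }));
} catch (err) {
  // Illustrative error shape: a path of keys/indices plus a default message.
  const { path, message } = err as { path: (string | number)[]; message: string };
  console.error(`validation failed at ${path.join(".")}: ${message}`);
}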

The library also follows a fail-fast approach, immediately throwing when validation fails. Focusing on the first issue encountered keeps the happy path fast and the error message clear.

I'd love to hear your thoughts and feedback on this approach.

  • kenward 9 hours ago

    > fail-fast approach, immediately throwing when validation fails

    would this mask any errors that would occur later in the validation?

    • nayajunimesh 7 hours ago

      With the fail-fast approach, yes - unless we introduce an option to collect all errors. In my own applications, I have found this to be a better default because the 'average' request is valid, and paying a constant overhead just to be thorough on rare invalid cases can be wasteful.

      My overall takeaway has mostly been not to optimize for the worst case by default: keep fail-fast as the baseline for boundaries and hot paths, and selectively enable “collect all” where it demonstrably saves human time.
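
      If you do need collect-all at a particular boundary, it is easy to layer on top. A purely illustrative wrapper (not part of decode-kit):

        // Hypothetical helper: run several fail-fast checks and gather
        // every failure instead of stopping at the first one.
        function collectErrors(checks: Array<() => void>): unknown[] {
          const errors: unknown[] = [];
          for (const check of checks) {
            try {
              check(); // e.g. () => validate(value, schema)
            } catch (err) {
              errors.push(err);
            }
          }
          return errors;
        }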

sizediterable 9 hours ago
  • tomjakubowski 8 hours ago

    Libraries like runtypes, zod, et al. market themselves as validation libraries, but they function as parsing libraries in the sense this article means: with them you "parse" untyped POJOs at the I/O boundary and get typed values (or a raised exception) out the other end.

    TypeScript language features like branded types and private constructors can make it so those values can only be constructed through the parse method.
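
    A minimal sketch of the branded-type half of that, in plain TypeScript (not tied to any particular library):

      // Structurally a string, but nominally distinct at compile time.
      type Email = string & { readonly __brand: "Email" };

      // The brand is applied only here, so an Email can only come out
      // of a successful parse.
      function parseEmail(input: string): Email {
        if (!/^[^\s@]+@[^\s@]+$/.test(input)) throw new Error("not an email");
        return input as Email;
      }

      declare function send(to: Email): void;

      send(parseEmail("alice@example.com")); // ok
      // send("alice@example.com");          // compile error: string is not Email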

    They're really not much different, in terms of type safety*, from something like Serde.

    *: they are of course different in other important ways -- like that Serde can flexibly work with all kinds of serialized formats.

  • nayajunimesh 7 hours ago

    I understand completely, and the library is intentionally unopinionated in that regard. We simply ensure that the value passed matches the provided schema and ruleset and refine the type in-place.

    In certain cases (like validating that an input is in ISO 8601 format), we refine the input type to a branded type (we have an Iso8601 branded type). At runtime it's just a string, but at compile time TypeScript treats it as a distinct type that can only be obtained through validation. But it is still not transforming or parsing the data in the way the blog post intends, which is by design.

    https://github.com/nimeshnayaju/valleys?tab=readme-ov-file#i...

  • ale 9 hours ago

    Libraries like these are meant for runtime validation. I agree though; I prefer using the compiler itself (tsc --noEmit) rather than recreating the validation logic.

    • mirekrusin 6 hours ago

      It doesn't compete with the static type system, it complements it. TypeScript's static type system can't do anything with unknown/any values crossing the I/O boundary - they require a runtime assertion to bring them into the statically typed, safer world.
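
      In TypeScript terms, that runtime assertion is what `asserts` signatures model. A generic sketch (not decode-kit's actual internals):

        type User = { id: number };

        // If this returns without throwing, the compiler narrows `x` to User.
        function assertIsUser(x: unknown): asserts x is User {
          if (typeof x !== "object" || x === null ||
              typeof (x as { id?: unknown }).id !== "number") {
            throw new Error("not a User");
          }
        }

        const data: unknown = JSON.parse('{"id": 1}');
        assertIsUser(data);
        console.log(data.id); // `data` is now typed as User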

scottmas a day ago

The whole benefit over zod seems to be perf, so could you do some benchmarking? I wonder if it’s worth it

  • nayajunimesh a day ago

    Yes, the primary focus is memory efficiency; the performance improvement is a side effect of that. From my own benchmarks, I have found that to be the case. If you're validating thousands of objects per second or working under memory constraints, the difference becomes quite significant. Happy to share the full benchmark code if you'd like to run it yourself!
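
    For reference, the harness is roughly shaped like this (illustrative; it assumes schemas built with object() can be reused across calls, and the Zod side uses its standard parse API):

      import { performance } from "node:perf_hooks";
      import { object, string, number, validate } from "decode-kit";
      import { z } from "zod";

      const input: unknown = { id: 123, name: "Alice" };
      const dkSchema = object({ id: number(), name: string() });
      const zodSchema = z.object({ id: z.number(), name: z.string() });

      function bench(label: string, fn: () => void, iterations = 1_000_000): void {
        const start = performance.now();
        for (let i = 0; i < iterations; i++) fn();
        console.log(`${label}: ${(performance.now() - start).toFixed(1)}ms`);
      }

      bench("decode-kit", () => { validate(input, dkSchema); });
      bench("zod", () => { zodSchema.parse(input); });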

    • typeofhuman 6 hours ago

      You should include this benchmark in your repo and README if you want to build trust.

      I think anything that declares itself as a performance improvement over the competition ought to prove it!

seabass 10 hours ago

I was surprised to see the source include a bunch of try/catch, which as far as I understand results in deopts for that code path, given that the stated benefit over Zod and other validators is suitability for performance-critical code. I'd be curious to see benchmarks showing whether this is faster than Zod, Valibot, and Zod 4 Mini in hot code paths.

  • nayajunimesh 7 hours ago

    We do use try/catch in a few places. However, in normal operations (valid input), no exceptions are thrown - the try/catch blocks are present in certain validators but do not execute their catch clauses. AFAIK, modern engines generally don’t impose large steady-state penalties merely for the presence of a try/catch when no exception is thrown; the measurable cost is usually when exceptions are actually thrown. When an element/property fails, the catch is used to construct precise error paths. That’s intentionally trading some failure-path overhead for better developer diagnostics.
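
    Simplified, the pattern looks roughly like this (a sketch of the technique, not the actual source):

      class PathError extends Error {
        constructor(public path: (string | number)[], message: string) {
          super(message);
        }
      }

      function validateArray(items: unknown[], check: (v: unknown) => void): void {
        for (let i = 0; i < items.length; i++) {
          try {
            check(items[i]); // happy path: the catch below never executes
          } catch (err) {
            // failure path: prepend the index so the final error carries a full path
            const inner = err instanceof PathError ? err : new PathError([], String(err));
            throw new PathError([i, ...inner.path], inner.message);
          }
        }
      }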

    In the next few days, I'll prepare benchmarks to compare with Zod and Valibot!

    • seabass 3 hours ago

      The information I have on this could be outdated, so take it with a grain of salt, but it used to be the case that in hot code paths the presence of a try/catch would force a deoptimization whether or not you throw. The optimizing compiler in v8, for example, would specifically not run on any function containing try/catch due to its inability to speculatively inline the optimized code.

      If you're feeling up to it, you can prove whether that is still the case with `d8 --allow-natives-syntax --trace-deopt ./your-script.js` and sprinkle in some `%OptimizeFunctionOnNextCall` in your code.

      I did a quick search for `try {` in the zod 4 source and didn't see anything, so I suspect the performance issues surrounding try/catch are still at least somewhat around, unless they are simply avoiding try/catch for code cleanliness, which could totally be the case.

      Regardless, I'd encourage you to look into whether plain old boolean return values in your validators would work for your project. Just include the `throw` part without all the `try/catch` and the code itself will likely be simpler, faster, and easier for the JIT to optimize. Good luck on those benchmarks.
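
      Concretely, something like this, where the hot path never enters a try/catch (a rough, untested sketch):

        type User = { id: number; name: string };

        // Inner check returns a boolean; no try/catch anywhere in the hot path.
        function isUser(x: unknown): x is User {
          return (
            typeof x === "object" && x !== null &&
            typeof (x as { id?: unknown }).id === "number" &&
            typeof (x as { name?: unknown }).name === "string"
          );
        }

        // Only the outermost entry point throws.
        function assertUser(x: unknown): asserts x is User {
          if (!isUser(x)) throw new Error("not a User");
        }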

gr4vityWall 6 hours ago

It would be nice to provide some benchmarks comparing it to Zod, arktype, etc. Comparing it across different runtimes (Node.js, Bun, web browsers, etc.) would be great, too.

cluckindan 6 hours ago

This would be way more powerful if there was a way to infer a validator from a TypeScript type.

orangee 8 hours ago

assertion-based API is a neat concept