Show HN: I built a tensor library from scratch in C++/CUDA
Hi HN,
Over the past few months, I've been building `dsc`, a tensor library from scratch in C++/CUDA. My main focus has been on getting the basics right, prioritizing a clean API, simplicity, and clear observability for running small LLMs locally.
The key features are:
- C++ core with CUDA support, written from scratch.
- A familiar, PyTorch-like Python API.
- Runs real models: it's complete enough to load a model like Qwen from HuggingFace and run inference on both CUDA and CPU with a single-line change [1] (sketched below).
- Simple, built-in observability for both Python and C++.
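To make the "single-line change" concrete, here is a hypothetical sketch of what a PyTorch-like API could look like. The names below (`dsc.from_numpy`, `.to("cuda")`) are illustrative assumptions, not dsc's actual API; the linked Qwen example shows the real thing.

```python
import numpy as np
import dsc  # hypothetical import; names below are assumptions, not the real API

x = dsc.from_numpy(np.random.randn(4, 8).astype(np.float32))
w = dsc.from_numpy(np.random.randn(8, 8).astype(np.float32))

# The "single line change": move the tensors (or a whole model) to the GPU.
x, w = x.to("cuda"), w.to("cuda")

# Ops now dispatch to the CUDA backend instead of the CPU one.
y = x @ w
```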
Next on the roadmap is adding BF16 support and then I'll be working on visualization for GPU workloads.
The project is still early and I would be incredibly grateful for any feedback, code reviews, or questions from the HN community!
GitHub Repo: https://github.com/nirw4nna/dsc
[1]: https://github.com/nirw4nna/dsc/blob/main/examples/models/qw...
I noticed you interface with the native code via ctypes. I think cffi is generally preferred (e.g., https://cffi.readthedocs.io/en/stable/overview.html#api-mode...). Although you'd have more flexibility if you built your own Python extension module (e.g. using pybind), which would free you from a simple/strict ABI. Curious if this strict separation of C & Python was a deliberate design choice.
Yes, when I designed the API I wanted to keep a clear distinction between Python and C. At some point I had two APIs: one in Python and the other in high-level C++, and they both shared the same low-level C API. I find this design quite clean and easy to work with when multiple languages are involved. When I get to perf I plan to experiment a bit with nanobind (https://github.com/wjakob/nanobind) and see if there's a noticeable difference wrt ctypes.
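For readers unfamiliar with the setup being discussed, here is a minimal sketch of that layering with ctypes: a thin Python front-end calling into a flat C API. The library name and the `dsc_add` symbol are hypothetical, not dsc's actual exports.

```python
import ctypes

# Hypothetical shared library and symbol names, for illustration only.
_lib = ctypes.CDLL("./libdsc.so")

# Declare the C signature once so ctypes marshals arguments correctly.
_lib.dsc_add.argtypes = [ctypes.c_void_p, ctypes.c_void_p, ctypes.c_void_p]
_lib.dsc_add.restype = None

def add(out, a, b):
    # Every Python-level op is one FFI call through libffi; this per-call
    # overhead is what the ctypes-vs-nanobind discussion below is about.
    _lib.dsc_add(out, a, b)
```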
The call overhead of using ctypes vs nanobind/pybind is enormous
https://news.ycombinator.com/item?id=31378277
Even if the number reported there is off, it's not far off, because ctypes just calls out to libffi, which is known to be the slowest way to do FFI.
Thanks for pointing this out! I'll definitely have to investigate other approaches. nanobind looks interesting, but I don't need to expose complex C++ objects, I just need the 'fastest' way of calling into a C API. I guess the go-to for this is CFFI?
It's the same thing, both nanobind and cffi compile the binding. The fact that nanobind lets you expose C++ doesn't prevent you from only exposing C. And IMHO nanobind is better because you don't need to learn another language to use it (i.e. you don't need to learn cffi's DSL).
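For context, the "DSL" mentioned here is cffi's API-mode build script, where the C declarations are passed as strings to cdef(). A minimal sketch, with the dsc header, library and function names being hypothetical placeholders:

```python
from cffi import FFI

ffibuilder = FFI()

# The "DSL": plain C declarations passed as a string.
ffibuilder.cdef("void dsc_add(void *out, const void *a, const void *b);")

# API mode compiles a real extension module against the header/library,
# avoiding libffi at call time. Header and library names are hypothetical.
ffibuilder.set_source(
    "_dsc_cffi",
    '#include "dsc.h"',
    libraries=["dsc"],
)

if __name__ == "__main__":
    ffibuilder.compile(verbose=True)
```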
This is very cool. I'm wondering if some of the templates and switch statements would be nicer if there was an intermediate representation and a compiler-like architecture.
I'm also curious about how this compares to something like Jax.
Also curious about how this compares to zml.
You are absolutely correct! I started working on a sort of compiler a while back but decided to get the basics down first. The templates and switches are not really the issue, but rather going back and forth between C & Python. This is an experiment I did a few months ago (https://x.com/nirw4nna/status/1904114563672354822): as you can see, there is a ~20% perf gain just from generating a naive C++ kernel instead of calling 5 separate kernels in the case of softmax.
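To illustrate the point with a rough NumPy sketch (not dsc's actual kernels): eager-mode softmax decomposes into several separate ops, each of which would normally be its own kernel launch plus a trip across the Python/C boundary, whereas a generated kernel does the whole thing in one pass.

```python
import numpy as np

def softmax_eager(x):
    # Each step here roughly corresponds to one separate kernel launch
    # in an eager framework: max, subtract, exp, sum, divide.
    m = x.max(axis=-1, keepdims=True)   # 1. reduce-max
    y = x - m                           # 2. subtract
    e = np.exp(y)                       # 3. exp
    s = e.sum(axis=-1, keepdims=True)   # 4. reduce-sum
    return e / s                        # 5. divide

# A generated (fused) C++ kernel would compute the same result in a single
# launch with no intermediate buffers crossing the language boundary;
# NumPy can't express that fusion, this just shows the dataflow being fused.
```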
Cool stuff! Is the goal of this project personal learning, inference performance, or something else?
Would be nice to see how inference speed stacks up against say llama.cpp
Thanks! To be honest, it started purely as a learning project. I was really inspired when llama.cpp first came out and tried to build something similar in pure C++ (https://github.com/nirw4nna/YAMI), mostly for fun and to practice low-level coding. The idea for DSC came when I realized how hard it was to port new models to that C++ engine, especially since I don't have a deep ML background. I wanted something that felt more like PyTorch, where I could experiment with new architectures easily.

As for llama.cpp, it's definitely faster! They have hand-optimized kernels for a whole bunch of architectures, models and data types. DSC is more of a general-purpose toolkit. I'm excited to work on performance later on, but for now, I'm focused on getting the API and core features right.
If someone wanted to learn the same thing, what material would you suggest is a good place to start?
You just need a foundation of C/C++. If you already have that then just start programming, it's way better than reading books/guides/blogs (at least until you're stuck!). Also, you can read the source code of other similar projects on GitHub and get ideas from them, this is what I did at the beginning.
Both use cuBLAS under the hood, so I think it is similar for prefilling (of course, this framework is too early and doesn't have FP16/BF16 support for GEMM, it seems). Hand-rolled GEMV is faster for token generation, hence llama.cpp is better there.
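A small NumPy sketch of why prefill and token generation hit different code paths (illustrative shapes only, not either library's code):

```python
import numpy as np

d_model = 4096
W = np.random.randn(d_model, d_model).astype(np.float32)

# Prefill: the whole prompt goes through at once -> a real GEMM,
# which cuBLAS handles well for both libraries.
prompt = np.random.randn(512, d_model).astype(np.float32)
prefill_out = prompt @ W        # (512, 4096) @ (4096, 4096)

# Token generation: one token per step -> the matmul degenerates into a
# matrix-vector product (GEMV), where hand-rolled kernels tend to win.
token = np.random.randn(1, d_model).astype(np.float32)
decode_out = token @ W          # (1, 4096) @ (4096, 4096)
```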
Unrelated: my man, I loved your C vision library back in the day.
Do you have any plans for serialization and deserialization in your tensor and nn library?
Right now I can load tensors directly from a safetensors file or from a NumPy array, so I'm not really planning to add my own custom format, but I do plan to support GGUF files.
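For reference, this is roughly what the safetensors-to-NumPy path looks like with the standard `safetensors` library (a hedged sketch of the general approach, not dsc's own loading code):

```python
from safetensors.numpy import load_file

# Loads every tensor in the file into a dict of NumPy arrays.
state_dict = load_file("model.safetensors")   # assumed local file path

for name, arr in state_dict.items():
    print(name, arr.shape, arr.dtype)
    # Each array can then be handed to any tensor library that accepts
    # NumPy arrays, as described above.
```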
Super n00b question: what kind of laptop do you need to do a project like this? Is a Mac OK, or do you need a dedicated Linux laptop?
I developed this on an HP Omen 15 with an i7-8750H, a GTX 1050 Ti, and 32GB of RAM, with Linux Mint as my OS.
Any laptop with an Nvidia card
Does a gaming laptop work with Windows? I've always used a Mac for development because the toolchain is so much easier; I'm wondering if there is a difference between Windows and Linux for CUDA development.
You can always use WSL
Why not Zig?
Because I happen to know C++ and I just wanted to build something rather than learn a new language. Zig looks very interesting though; there are already other projects in this space that use it with great success (see: https://github.com/zml/zml).
It's very C-like: heavy use of macros, prefixes instead of namespaces, raw pointers for arrays, etc. Technically you're compiling C++, but... not really.
No negative or positive comment on its usability though; I'm not an ML/neural-network simulation person.
I've found adherence to C++ conventions in low-level software to be a rather contentious issue, most recently when working in an ML compiler group. One camp abhorred the use of macros, the other any kind of polymorphism or modern C++ feature.
Coming from a background of working with OS kernels and systems software, I don't mind the kind of explicit "C++ lite" style used by the OP. Left to my own devices, I usually write things that way. I would think twice if I was trying to design a large framework, but ... I try to avoid those.
If you think that, I encourage you to check out this presentation:
https://www.youtube.com/watch?v=zBkNBP00wJE
It's about writing a Commodore 64 game in modern(ish) C++.
Maybe it will sway you a bit :-)
Yes! This was actually one of my initial goals. I like to work in a C-style C++, let's say, where I turn off the C++ features I don't need and just use the ones I do, like templates, objects, etc. I find this style easy to reason about when it comes to performance.
The proper way to reason about performance is to use a profiler, not second-guessing what C-like code generates.