Show HN: I built a tensor library from scratch in C++/CUDA

github.com

117 points by nirw4nna a day ago

Hi HN,

Over the past few months, I've been building `dsc`, a tensor library from scratch in C++/CUDA. My main focus has been on getting the basics right, prioritizing a clean API, simplicity, and clear observability for running small LLMs locally.

The key features are:

- C++ core with CUDA support, written from scratch.
- A familiar, PyTorch-like Python API.
- Runs real models: it's complete enough to load a model like Qwen from HuggingFace and run inference on both CUDA and CPU with a single line change[1].
- Simple, built-in observability for both Python and C++.

Next on the roadmap is adding BF16 support and then I'll be working on visualization for GPU workloads.

The project is still early and I would be incredibly grateful for any feedback, code reviews, or questions from the HN community!

GitHub Repo: https://github.com/nirw4nna/dsc

[1]: https://github.com/nirw4nna/dsc/blob/main/examples/models/qw...

aklein a day ago

I noticed you interface with the native code via ctypes. I think cffi is generally preferred (e.g., https://cffi.readthedocs.io/en/stable/overview.html#api-mode...). Although you'd have more flexibility if you built your own Python extension module (e.g. using pybind), which would free you from a simple/strict ABI. Curious if this strict separation of C & Python was a deliberate design choice.

  • nirw4nna a day ago

    Yes, when I designed the API I wanted to keep a clear distinction between Python and C. At some point I had two APIs: one in Python and the other in high-level C++, and they both shared the same low-level C API. I find this design quite clean and easy to work with when multiple languages are involved. When I get to perf I plan to experiment a bit with nanobind (https://github.com/wjakob/nanobind) and see if there's a noticeable difference wrt ctypes.
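
    Roughly, the layering looks like this (a simplified sketch, not the exact dsc signatures):

      // Hypothetical flat C API that both the Python (ctypes) binding and a
      // high-level C++ wrapper can sit on top of. Names are illustrative only.
      #include <stddef.h>

      #ifdef __cplusplus
      extern "C" {
      #endif

      typedef struct dsc_ctx dsc_ctx;       // opaque library context
      typedef struct dsc_tensor dsc_tensor; // opaque tensor handle

      dsc_ctx *dsc_ctx_init(size_t mem_size);
      void dsc_ctx_free(dsc_ctx *ctx);

      dsc_tensor *dsc_tensor_new(dsc_ctx *ctx, int n_dims, const int *shape);
      void dsc_add(dsc_ctx *ctx, const dsc_tensor *a, const dsc_tensor *b, dsc_tensor *out);

      #ifdef __cplusplus
      } // extern "C"
      #endif

    Anything above this line (Python via ctypes today, the high-level C++ API earlier) only ever sees opaque handles and plain C types, which is what keeps the ABI simple.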

    • almostgotcaught a day ago

      The call overhead of using ctypes vs nanobind/pybind is enormous

      https://news.ycombinator.com/item?id=31378277

      Even if the number reported there is off, it's not far off, because ctypes just calls out to libffi, which is known to be the slowest way to do FFI.

      • nirw4nna 19 hours ago

        Thanks for pointing this out! I'll definitely have to investigate other approaches. nanobind looks interesting but I don't need to expose complex C++ objects, I just need the 'fastest' way of calling into a C API. I guess the go-to for this is CFFI?

        • almostgotcaught 15 hours ago

          It's the same thing - both nanobind and cffi compile the binding. The fact that nanobind lets you expose C++ doesn't prevent you from only exposing C. And IMHO nanobind is better because you don't need to learn another language to use it (i.e. you don't need to learn cffi's DSL).
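
          Exposing a plain C entry point through nanobind is only a few lines of C++, something like this (sketch only, c_dot is a made-up stand-in for whatever C API you have):

            // Minimal nanobind module wrapping a plain C function.
            #include <cstdint>
            #include <nanobind/nanobind.h>

            extern "C" float c_dot(const float *a, const float *b, int n);

            NB_MODULE(_fastcalls, m) {
                // Pass raw addresses plus a length so the hot path stays a
                // single compiled call; wrap ndarrays instead if you want
                // bounds/type checking.
                m.def("dot", [](uintptr_t a, uintptr_t b, int n) {
                    return c_dot(reinterpret_cast<const float *>(a),
                                 reinterpret_cast<const float *>(b), n);
                });
            }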

helltone a day ago

This is very cool. I'm wondering if some of the templates and switch statements would be nicer if there was an intermediate representation and a compiler-like architecture.

I'm also curious about how this compares to something like Jax.

Also curious about how this compares to zml.

  • nirw4nna a day ago

    You are absolutely correct! I started working on a sort of compiler a while back but decided to get the basics down first. The templates and switches are not really the issue, but rather going back and forth between C & Python. This is an experiment I did a few months ago: https://x.com/nirw4nna/status/1904114563672354822. As you can see, there's a ~20% perf gain just from generating a naive C++ kernel instead of calling 5 separate kernels in the case of softmax.
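
    To give an idea of what that amounts to, a naive fused softmax is basically one launch that does max, exp, sum and normalize per row (a rough sketch, not the actual generated code):

      // One thread handles one row: max, exp+sum, normalize in a single
      // kernel launch instead of five separate ones. Sketch only.
      __global__ void softmax_fused(const float *x, float *out, int rows, int cols) {
          const int row = blockIdx.x * blockDim.x + threadIdx.x;
          if (row >= rows) return;

          const float *in = x + (size_t) row * cols;
          float *res = out + (size_t) row * cols;

          float max_val = in[0];                          // 1. row max (stability)
          for (int i = 1; i < cols; ++i) max_val = fmaxf(max_val, in[i]);

          float sum = 0.f;                                // 2. exp + sum in one pass
          for (int i = 0; i < cols; ++i) {
              const float e = expf(in[i] - max_val);
              res[i] = e;
              sum += e;
          }

          const float inv_sum = 1.f / sum;                // 3. normalize
          for (int i = 0; i < cols; ++i) res[i] *= inv_sum;
      }

    The win comes from doing one launch (and one trip through the Python/C layer) instead of five.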

kajecounterhack a day ago

Cool stuff! Is the goal of this project personal learning, inference performance, or something else?

Would be nice to see how inference speed stacks up against, say, llama.cpp.

  • nirw4nna a day ago

    Thanks! To be honest, it started purely as a learning project. I was really inspired when llama.cpp first came out and tried to build something similar in pure C++ (https://github.com/nirw4nna/YAMI), mostly for fun and to practice low-level coding. The idea for DSC came when I realized how hard it was to port new models to that C++ engine, especially since I don't have a deep ML background. I wanted something that felt more like PyTorch, where I could experiment with new architectures easily. As for llama.cpp, it's definitely faster! They have hand-optimized kernels for a whole bunch of architectures, models and data types. DSC is more of a general-purpose toolkit. I'm excited to work on performance later on, but for now, I'm focused on getting the API and core features right.

    • NalNezumi 20 hours ago

      If someone wanted to learn the same thing, what material would you suggest is a good place to start?

      • nirw4nna 19 hours ago

        You just need a foundation of C/C++. If you already have that, then just start programming; it's way better than reading books/guides/blogs (at least until you're stuck!). Also, you can read the source code of other similar projects on GitHub and get ideas from them; this is what I did at the beginning.

  • liuliu a day ago

    Both use cuBLAS under the hood, so I think it is similar for prefill (of course, this framework is still early and doesn't seem to have FP16 / BF16 support for GEMM). A hand-rolled GEMV is faster for token generation, hence llama.cpp is better there.
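
    For context, token generation is a matrix-vector product (GEMV), and a hand-written kernel along these lines is usually enough to beat a generic GEMM path there (an illustrative sketch, not llama.cpp's or dsc's actual code):

      // y = W * x with one warp per output row; each lane strides over the
      // columns and the partial sums are reduced with warp shuffles.
      // Launch with blockDim.x as a multiple of 32.
      __global__ void gemv_f32(const float *W, const float *x, float *y,
                               int rows, int cols) {
          const int row  = blockIdx.x * (blockDim.x / 32) + threadIdx.x / 32;
          const int lane = threadIdx.x % 32;
          if (row >= rows) return;

          float acc = 0.f;
          for (int col = lane; col < cols; col += 32)
              acc += W[(size_t) row * cols + col] * x[col];

          for (int offset = 16; offset > 0; offset >>= 1)  // warp reduction
              acc += __shfl_down_sync(0xffffffffu, acc, offset);

          if (lane == 0) y[row] = acc;
      }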

    • kajecounterhack 2 hours ago

      Unrelated: my man, I loved your C vision library back in the day.

rrhjm53270 a day ago

Do you have any plan for the serialization and deserialization of your tensor and nn library?

  • nirw4nna a day ago

    Right now I can load tensors directly from a safetensors file or from a NumPy array, so I don't really have in mind to add my own custom format, but I do plan to support GGUF files.

amtk2 a day ago

Super n00b question: what kind of laptop do you need to do a project like this? Is a Mac OK, or do you need a dedicated Linux laptop?

  • nirw4nna a day ago

    I developed this on an HP Omen 15 with an i7-8750H, a GTX 1050 Ti, and 32GB of RAM, with Linux Mint as my OS.

  • kadushka a day ago

    Any laptop with an Nvidia card

    • amtk2 a day ago

      Does a gaming laptop work with Windows? I have always used a Mac for development because the toolchain is so much easier; wondering if there is a difference between Windows and Linux for CUDA development.

revskill 20 hours ago

Why not Zig?

  • nirw4nna 19 hours ago

    Because I happen to know C++ and I just wanted to build something rather than learn a new language. Zig looks very interesting though, there are already other projects in this space that use it with great success (see: https://github.com/zml/zml).

einpoklum a day ago

It's very C-like: heavy use of macros, prefixes instead of namespaces, raw pointers for arrays, etc. Technically you're compiling C++, but... not really.

No negative or positive comment on its usability though; I'm not an ML/neural network simulation person.

  • caned a day ago

    I've found adherence to C++ conventions in low-level software to be a rather contentious issue, most recently when working in an ML compiler group. One set abhorred the use of macros, the other any kind of polymorphism or modern C++ feature.

    Coming from a background of working with OS kernels and systems software, I don't mind the kind of explicit "C++ lite" style used by the OP. Left to my own devices, I usually write things that way. I would think twice if I was trying to design a large framework, but ... I try to avoid those.

  • nirw4nna 19 hours ago

    Yes! This was actually one of my initial goals! I like to work in a C-style C++, let's say, where I turn off the C++ features I don't need and just use the ones I actually need, like templates, objects, etc. I find this style easy to reason about when it comes to performance.
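
    Concretely, that style ends up looking something like this (an illustration of the idea, not code from the repo):

      // Prefixed free functions instead of namespaces, raw pointers for
      // buffers, and a template only where it pulls its weight.
      #include <cstddef>

      struct dsc_buf {
          float *data;
          size_t len;
      };

      template <typename Op>
      static void dsc_for_each(dsc_buf *buf, Op op) {
          for (size_t i = 0; i < buf->len; ++i)
              buf->data[i] = op(buf->data[i]);
      }

      static void dsc_scale(dsc_buf *buf, const float k) {
          dsc_for_each(buf, [k](const float v) { return v * k; });
      }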

    • pjmlp 18 hours ago

      The proper way to reason about performance is to use a profiler, not second-guessing what C-like code generates.