Looks like this uses mutation of global/shared state. Take the example z = x*y + 3: what if another function computes w = x + 2*y, and then both functions run a backward pass (simultaneously, perhaps in different threads, or otherwise)? It seems dangerous to collect the results of the backward pass (the partial derivatives) in the shared variables x and y and make them accessible through x.get_grad() and y.get_grad(). Imho, in a better design you'd say z.get_grad(x) and z.get_grad(y), and w.get_grad(x) and w.get_grad(y), to get the partial derivatives.
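To make the hazard concrete, here is a minimal sketch (a hypothetical Var type, not the post's actual API) of what happens when two backward passes both accumulate into the shared gradient slots without zeroing in between:

    use std::cell::Cell;

    // Hypothetical minimal Var: the gradient lives *inside* the shared variable.
    struct Var {
        value: f64,
        grad: Cell<f64>,
    }

    fn main() {
        let x = Var { value: 2.0, grad: Cell::new(0.0) };
        let y = Var { value: 5.0, grad: Cell::new(0.0) };

        // backward pass for z = x*y + 3: dz/dx = y, dz/dy = x
        x.grad.set(x.grad.get() + y.value);
        y.grad.set(y.grad.get() + x.value);

        // backward pass for w = x + 2*y, without zeroing first:
        // dw/dx = 1, dw/dy = 2
        x.grad.set(x.grad.get() + 1.0);
        y.grad.set(y.grad.get() + 2.0);

        // x.grad is now dz/dx + dw/dx = 5 + 1 = 6 -- neither partial derivative.
        println!("x.grad = {}", x.grad.get());
    }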
That's a great point; it would be better to keep the gradients separate from the Scalars. However, I think PyTorch does it the same way (?), at least their docs say something along these lines:
"This function accumulates gradients in the leaves - you might need to zero .grad attributes or set them to None before calling it." - https://docs.pytorch.org/docs/stable/generated/torch.autogra...
The Rust burn crate does it better: it stores the backpropagated gradients in a separate container and returns them: https://github.com/tracel-ai/burn/blob/af381ee18566fc27f5c98...
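Roughly this design (illustrative names, not burn's actual API): backward returns its own Gradients container keyed by node id, so two passes can never clobber each other:

    use std::collections::HashMap;

    type NodeId = usize;

    // Gradients live in their own container, keyed by node id,
    // instead of inside the shared variables.
    struct Gradients(HashMap<NodeId, f64>);

    impl Gradients {
        fn wrt(&self, id: NodeId) -> f64 {
            self.0.get(&id).copied().unwrap_or(0.0)
        }
    }

    fn main() {
        let (x_id, x_val) = (0usize, 2.0);
        let (y_id, y_val) = (1usize, 5.0);

        // backward for z = x*y + 3 builds its own container...
        let mut gz = HashMap::new();
        gz.insert(x_id, y_val); // dz/dx = y
        gz.insert(y_id, x_val); // dz/dy = x
        let gz = Gradients(gz);

        // ...and backward for w = x + 2*y builds a separate one,
        // so concurrent passes can't mix their results.
        let mut gw = HashMap::new();
        gw.insert(x_id, 1.0); // dw/dx = 1
        gw.insert(y_id, 2.0); // dw/dy = 2
        let gw = Gradients(gw);

        assert_eq!(gz.wrt(x_id), 5.0); // dz/dx
        assert_eq!(gw.wrt(x_id), 1.0); // dw/dx
    }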
Nice! I made a small toy version myself to learn Rust and freshen up on ML. https://github.com/tnlogy/telegrad
I wanted to store the graph in a heap to be able to send it to the GPU later on, but then I got lazy and abandoned it. But you always learn something. :)
That sounds interesting; what do you mean by "in a heap"? Is the stack they're currently linearized into not GPU-friendly? I don't know much about GPU programming, so this might be a dumb question.
My idea was to make a Vec of nodes where the nodes refer to each other by index into the Vec, so it would be easier to send the array to the GPU. I wanted to make a minimal example of running a micrograd network on the GPU, with wgpu or macroquad, but I didn't complete it, so it would be nice if someone else did. :)
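Something like this (an illustrative sketch, not the actual telegrad code): nodes live in one flat Vec and reference each other by index, so the whole graph is a contiguous buffer you could copy into a GPU buffer:

    #[derive(Clone, Copy)]
    enum Op {
        Leaf,
        Add(usize, usize), // operand indexes into `nodes`
        Mul(usize, usize),
    }

    struct Node {
        op: Op,
        value: f64,
    }

    fn main() {
        // z = x*y + 3 as a flat arena; children always precede parents
        let mut nodes = vec![
            Node { op: Op::Leaf, value: 2.0 },      // 0: x
            Node { op: Op::Leaf, value: 5.0 },      // 1: y
            Node { op: Op::Mul(0, 1), value: 0.0 }, // 2: x*y
            Node { op: Op::Leaf, value: 3.0 },      // 3: constant 3
            Node { op: Op::Add(2, 3), value: 0.0 }, // 4: z
        ];

        // the forward pass is a single linear sweep over the Vec
        for i in 0..nodes.len() {
            let v = match nodes[i].op {
                Op::Leaf => nodes[i].value,
                Op::Add(a, b) => nodes[a].value + nodes[b].value,
                Op::Mul(a, b) => nodes[a].value * nodes[b].value,
            };
            nodes[i].value = v;
        }
        assert_eq!(nodes[4].value, 13.0); // 2*5 + 3
    }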
I see! I thought that was the stack.
It would probably be good to put "backward-mode" (i.e., reverse-mode) in the title.