kengoa 6 hours ago

Amazing work! Reminded me of LLM Visualization (https://bbycroft.net/llm), except this is a lot easier to wrap my head around and I can actually run the training loops, which makes sense given the simplicity of the original microgpt.

To give a sense of what the loss value means, maybe you could add a small explainer section as a question, with this explanation from Karpathy's blog:

> Over 1,000 steps the loss decreases from around 3.3 (random guessing among 27 tokens: −log(1/27)≈3.3) down to around 2.37.

to reiterate that the model is being trained to predict the next token out of 27 possible tokens and is now doing better than the baseline of random guessing.
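
For anyone who wants to sanity-check that baseline, a minimal sketch in Python (the 27-token count being 26 letters plus an end-of-name marker is taken from the quote above):

    import math

    vocab_size = 27  # 26 letters + end-of-name token (per the quoted blog)
    baseline = -math.log(1 / vocab_size)  # cross-entropy of uniform random guessing
    print(f"{baseline:.2f}")  # ~3.30, matching the ~3.3 starting loss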

krackers 15 hours ago

There used to be a page that showed the activations/residual stream from GPT-2 visualized as a black-and-white image. I remember it being neat how you could slowly see order forming from seemingly random activations as it progressed through the layers.

Can't find it now though (maybe the link rotted?). Anyone happen to know what that was?
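
In case anyone wants to recreate the effect in the meantime, a rough sketch with huggingface transformers (an approximation, not the original page):

    import matplotlib.pyplot as plt
    import torch
    from transformers import GPT2Model, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True).eval()
    inputs = tok("The quick brown fox", return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states  # 13 tensors: embeddings + 12 layers

    fig, axes = plt.subplots(1, len(hidden), figsize=(2 * len(hidden), 2))
    for ax, h in zip(axes, hidden):
        ax.imshow(h[0].numpy(), cmap="gray", aspect="auto")  # tokens x 768 dims
        ax.axis("off")
    plt.show()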

RugnirViking 5 hours ago

I was a little confused by "see, its much better" when the output is stuff like isovrak and kucey. What is it supposed to be generating?

  • b44 4 hours ago

    the untrained model is literally just generating random characters, whereas your examples are at least pronounceable. you can add more layers to get progressively better results.
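
    for intuition, an untrained model is roughly uniform over its tokens, so its samples look like this (a sketch, with an assumed 26-letter alphabet):

        import random

        # an untrained model is ~uniform over tokens, so sampling is just this
        print("".join(random.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(8)))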

  • lucrbvi 5 hours ago

    It's just hallucinating training data; the model is very small, so it can't be useful at all.

ramon156 4 hours ago

My Android phone was not a fan of this site, but on my desktop it works great! Cool stuff

keepamovin 4 hours ago

I can't help but think there has to be a cheaper way to LLM.

umairnadeem123 9 hours ago

nice. visualizing the prompt->tool->output graph is underrated; it makes failure modes (and cost) obvious. do you track token/call cost per node + cache hits, or is it purely structural right now? also curious if you let users diff two runs (same prompt, different model/tool) and see which node diverged first.

youio 4 hours ago

really well done

kfsone 18 hours ago

Minor nit: in the familiarity section, you gloss over the fact that it's character- rather than token-based, which might be worth a shout-out:

"Microgpt's larger cousins using building blocks called tokens representing one or more letters. That's hard to reason about, but essential for building sentences and conversations.

"So we'll just deal with spelling names using the English alphabet. That gives us 26 tokens, one for each letter."

  • mips_avatar 16 hours ago

    Using ASCII characters is a simple form of tokenization with less compression.

  • b44 17 hours ago

    hm. the way i see things, characters are the natural/obvious building blocks and tokenization is just an improvement on that. i do mention chatgpt et al. use tokens in the last q&a dropdown, though

msla 17 hours ago

About how many training steps are required to get good output?

  • alansaber 5 hours ago

    Depends on the model size, batch size, input sequence length, etc. With a small model like this you'll never get a 'good' output, but you can maximise its potential.

  • WatchDog 15 hours ago

    I trained 12,000 steps at 4 layers, and the output is kind of name-like, but it didn't reproduce any actual name from its training data after 20 or so generations.

  • b44 17 hours ago

    not many. diminishing returns start before 1000 and past that you should just add a second/third layer

GaggiX 9 hours ago

Wtok and Wpos should be 26-dim along one of the axes, but it shows a 16x16 matrix by default; fc1 should instead be 16x64 with the default settings (not 16x16).
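
For reference, the shapes I'd expect under the thread's assumed defaults (n_embd=16, the usual 4x MLP expansion; the exact vocab/position size is the point under discussion):

    import torch

    vocab_size, n_embd = 26, 16              # assumed defaults from this thread
    wtok = torch.zeros(vocab_size, n_embd)   # 26 x 16, not 16 x 16
    wpos = torch.zeros(26, n_embd)           # also 26 along one axis
    fc1 = torch.zeros(n_embd, 4 * n_embd)    # 16 x 64 MLP up-projection
    print(wtok.shape, wpos.shape, fc1.shape)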

  • b44 9 hours ago

    good catch - i intentionally cap node visualizations at 16 so it doesn't get super long, but the sidebar shouldn't have that