firefax 2 days ago

Can someone elaborate on the meaning of "7m model"?

I'm new to AI, and had an LLM spit out an explanation of why some of the "local" models don't work in Ollama on my Air, but... I don't know how accurate the AI is, heh.

It's my understanding most models are more like 1-30b (as in Billion)

  • magicalhippo 2 days ago

    They have just four small layers, rather than several dozen large ones. Off the top of my head, Gemma 3 27B has 63 layers or so, and each of those layers is also much larger, since Gemma has a far higher embedding dimension.

    Hence they end up with ~7 million weights or parameters, rather than billions.
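
    A rough back-of-the-envelope sketch in Python of where that gap comes from (the layer counts and embedding dims below are illustrative assumptions, not the models' exact configs):

      # Rough transformer estimate: each layer carries roughly
      # 4*d^2 attention weights plus ~8*d^2 MLP weights, so ~12*d^2.
      def approx_params(n_layers, d_model):
          return 12 * n_layers * d_model ** 2

      # Tiny model: 4 small layers, small embedding dim (assumed)
      print(approx_params(4, 384))    # ~7.1M parameters

      # Gemma-3-27B-class: ~63 big layers, wide embedding dim (assumed)
      print(approx_params(63, 5376))  # ~21.8B, before embeddings etc.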

  • p1esk 2 days ago

    7 million parameters

    • firefax 2 days ago

      ty to you and the other poster.

ripped_britches 2 days ago

Wow this is legitimately nuts

  • koakuma-chan 2 days ago

    Why?

    • magnio 2 days ago

      ARC-AGI is one of the few benchmarks that humans complete easily while LLMs still struggle. This model scores 45% on ARC-AGI-1 and 8% on ARC-AGI-2; the latter is comparable to Claude Opus 4 and GPT-5 High, behind only Claude Sonnet 4.5 and Grok 4 Thinking, from a model roughly 0.001% the size of commercial models.
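
      For scale, that size ratio is easy to check (the ~700B figure for a commercial model is an assumption; vendors don't publish parameter counts):

        tiny = 7e6        # ~7M-parameter model
        frontier = 7e11   # assumed ~700B-parameter commercial model
        print(f"{tiny / frontier:.3%}")  # -> 0.001%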

byyoung3 2 days ago

Seems like they just stole the original HRM (only glanced at this, though)