canyon289 2 days ago

Hey all, I created this model with a top notch team. I answered many questions last week when this hit the front page, and I'm happy to answer more here as well.

https://news.ycombinator.com/item?id=44902148

Personally I'm excited that you all have access to this model now and hope you all get value out of using it.

  • WithinReason 2 days ago

    I would like to know your thoughts on using 2/3 of such a small model's parameter budget for embeddings. What would be different if you used a byte-level vocabulary and spent that budget on transformer parameters instead? I think you would lose performance (tok/s) but might gain accuracy.

    • canyon289 2 days ago

      At this small scale the embeddings indeed were a big focus. Consider this thought process.

      The tokens themselves are a form of compression. Let's say we have the word "WaffleHouse": at the character level this would be 11 tokens, but with a subword vocabulary it would be perhaps 2 or 3 tokens (I didn't actually run it through the tokenizer, but we could verify precisely). This matters a lot for on-device processing especially.
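
      If you want to verify the count yourself, a minimal sketch with the Hugging Face tokenizer would look like this (the checkpoint id is an assumption; run it to see the actual split):

        from transformers import AutoTokenizer

        # assumed checkpoint id; any Gemma tokenizer illustrates the same point
        tok = AutoTokenizer.from_pretrained("google/gemma-3-270m")
        ids = tok("WaffleHouse", add_special_tokens=False)["input_ids"]
        print(len(ids), tok.convert_ids_to_tokens(ids))  # a few subword tokens vs 11 characters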

      So while we could get more intelligence out of the model by bumping up the "knowledge" parameters, the device would need to process more input and output tokens.

      Another advantage on small devices is that the embeddings are just a lookup table, which requires little to no computation. It's the rest of the parameters that have the expensive matrix multiplications, so if we increased those we'd also be increasing the number of FLOPs needed for a forward pass.
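
      As a rough sketch of that asymmetry (the shapes below are illustrative, not the actual Gemma 270m configuration):

        import torch
        import torch.nn as nn

        emb = nn.Embedding(256_000, 640)   # most of the parameters live here, but lookup is just indexing
        lin = nn.Linear(640, 2048)         # every token pays a matrix multiply in layers like this

        tokens = torch.randint(0, 256_000, (1, 8))
        x = emb(tokens)                    # a memory fetch, essentially no FLOPs
        y = lin(x)                         # ~2 * 640 * 2048 FLOPs per token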

      This blog post explains it well. https://www.adamcasson.com/posts/transformer-flops

      All this to say: there are definite tradeoffs between model size, performance on evals, and compute cost. We ran many internal experiments with different choices to see what could work well, and then picked what we believed would work best for the open community.

      • Scene_Cast2 2 days ago

        How would this matrix get trained with PyTorch? I currently have a toy Transformer network - I ended up marking the matrix as sparse and using SparseAdam - gives a bit of a performance boost, but at the same time I can't use torch.compile() on the fetch from this matrix.
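
        For reference, my current setup looks roughly like this (toy shapes; a separate dense optimizer handles the rest of the network, since SparseAdam only deals with the sparse-gradient parameter):

          import torch
          import torch.nn as nn

          emb = nn.Embedding(50_000, 512, sparse=True)  # sparse gradients for the lookup table
          body = nn.Linear(512, 512)                    # stand-in for the transformer body

          opt_sparse = torch.optim.SparseAdam(emb.parameters(), lr=1e-3)
          opt_dense = torch.optim.Adam(body.parameters(), lr=1e-3)

          x = torch.randint(0, 50_000, (4, 16))
          loss = body(emb(x)).pow(2).mean()
          loss.backward()
          opt_sparse.step(); opt_dense.step()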

      • PoignardAzur a day ago

        Does Gemma use any specific scheme to compress embeddings? Which have you considered?

        For instance, it's well-known that transformer embeddings tend to form clusters. Have you considered splitting the embedding table into "cluster centroid" and "offset from centroid" tables, where the later would presumably have a smaller range and precision?
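
        A hypothetical sketch of what I mean (toy sizes; the "clustering" here is just a nearest-centroid pass, a real implementation would run proper k-means):

          import torch

          E = torch.randn(32_000, 256)                         # original embedding table
          centroids = E[torch.randperm(E.size(0))[:1024]]      # placeholder centroids
          assign = torch.cdist(E, centroids).argmin(dim=1)     # nearest centroid per row
          offsets = (E - centroids[assign]).to(torch.float16)  # smaller-range residuals

          def lookup(token_ids):
              return centroids[assign[token_ids]] + offsets[token_ids].float()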

      • 3abiton a day ago

        Beautiful writeup! Thanks for your service!

  • riedel 2 days ago

    Very stupid question: why does the tflite model output only '[multimodal][multimodal]' when executed on the GPU in the AI Edge Gallery app, while working fully on the CPU?

  • tarruda 2 days ago

    Thanks for your work, it is really an amazing small LM.

    Can you share what kind of hardware is necessary to train it, and how long it took?

    • canyon289 2 days ago

      Thank you!

      The Gemma3 technical report contains many details on training setup https://arxiv.org/pdf/2503.19786

      This was released with the initial batch of Gemma3, so it doesn't contain the 270m-specific details; nonetheless you'll get a good idea of what it takes to build these models.

  • dcreater 2 days ago

    As a non-MLE, what are the pros/cons of OP's PyTorch re-implementation?

    • hodgehog11 2 days ago

      It is extremely valuable for researchers that commonly prototype theories using PyTorch on less powerful devices. Many of my colleagues run theory experiments using GPT-2 models. This allows for an easy transition to testing on a SOTA model instead.

    • mdaniel 2 days ago

      I'm not an ML engineer, so I can speak to the "non-MLE" bit from my perspective

      (literal tl;dr: learning and experimentation opportunity)

      1. Since it's just PyTorch, that means one can run it locally upon whatever accelerator you have that PyTorch supports. For quite a few people that includes Metal Performance Shaders: https://docs.pytorch.org/docs/stable/mps.html

      I can attest that building PyTorch from git is achievable in about 15 minutes on my M1 Pro, if you really want to chase the rabbithole. Cloning PyTorch is its own special 'please. wait.', but building it is fine

      2. Since it's (of the ones that I've looked at) approximately 500 lines long, it's much, much, much more digestible than a lot of the vomit that comes out of so-called production systems. Those systems usually have only heard about typed Python in passing, and they believe it is a fad that will blow over. The ones in this repo aren't stellar about it, but at 500 lines it's easily achievable to type hint the code yourself, which can serve as an excellent learning opportunity

      3. PyTorch offers some fun conversion tools, also, allowing one to compare-and-contrast how it executes under Torch versus ONNX <https://docs.pytorch.org/docs/stable/onnx.html>, TorchScript <https://docs.pytorch.org/docs/stable/generated/torch.jit.sav...>, CoreML <https://apple.github.io/coremltools/docs-guides/source/conve...>, or a bazillion other competing frameworks

      4. Related, one can play around with quantization and other "inference related" concerns (e.g. https://github.com/pytorch/ao#pytorch-native-training-to-ser... )

      5. Further related, one can play around with the fine-tuning mentioned elsewhere, to better understand what is and isn't possible to achieve using that process. Because the code is digestible, and the models are reasonably sized (Qwen 0.6B weighs only 1.4GB and is Apache 2), it brings FAFO opportunities in ways that gpt-oss-20b (or bigger!) won't

      I do appreciate that some of what I said may skate close to "ML engineer" concerns, so obviously your situation will be different, but for me having a better grip on how these things work enables me to have better conversations with my colleagues and also helps trip my bullshit detector when someone claims they're the second coming and are going to cure cancer or whatever
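
      To make point 1 concrete, the device selection really is as small as it sounds (a sketch, nothing Gemma-specific):

        import torch

        # prefer Metal (MPS) if the local PyTorch build supports it, else fall back to CPU
        device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
        x = torch.randn(2, 4, device=device)
        print(device, (x @ x.T).shape)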

  • khalic 2 days ago

    I'm going to have so much fun tinkering with it, thank you!!!

  • owebmaster 2 days ago

    Does it have function calls? Can we use it with MCP?

    • canyon289 2 days ago

      It can possibly perform basic prompted FC, but I wouldn't get your hopes up. It should be able to become a solid FC model if trained on specific tools and formats. I would not expect great MCP performance because the context window is 32k and most MCP servers I've seen implicitly assume massive context windows.

  • bigyabai 2 days ago

    Thanks for making this! One of my favorite projects was having a Discord chatbot powered by the original BERT model - these 270M weights are a fine upgrade.

dboon 2 days ago

First, thanks for doing everything you do! I, and I’m sure countless others, genuinely benefit from you.

How would you recommend someone with a strong background in undergraduate level traditional ML get into deep learning? I use that as a broad term to encompass all the knowledge needed to understand how these models work, starting from the deep learning models of a decade ago, plus the practical ability to collect data or build RL gyms and fine tune them.

I understand ML math well enough that I’m confident I could follow a modern white paper after a lot of effort and research. But there are so many pieces: quantization, flash attention, MoE, batch sizes, layer sizes, model sparsity. I feel very overwhelmed trying to piece together how all of these pieces arose, and even more overwhelmed trying to figure out how one even goes about fine-tuning one. I (like most people here) am extremely technical, and it’s not often I feel this way about a field.

Thanks again! Best of luck on your work

  • hodgehog11 2 days ago

    As someone who has students that work in deep learning, I can say that it is unwise to approach deep learning in the same way as traditional ML. Most classical methods are strongly mathematically motivated and have excellent theory to accompany them. Deep learning is still alchemy; it is a matter of experience, trying things out and getting a feel for how the pieces fit together in a modular format. Once you are experienced with the common building blocks, you can develop an intuition for how they might be improved.

    I would start with training a basic MLP on tabular data. Then switch to CNNs: LeNet, VGG, then ResNet. Understand each of the new blocks that are incorporated into each architecture and how they improve stability and training efficiency. There are good PyTorch tutorials for these. Use these as a playground to understand what each of the training knobs do. Look at how their implicit biases induce double descent; this should give you confidence that overfitting is rarely an issue anymore. Give finetuning a try by taking a pretrained ResNet on ImageNet, adding layers to the start and end, and training only these to adapt the model to another image dataset. This should demonstrate the power of finetuning and why pretrained models are so powerful.
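
    A minimal sketch of that finetuning exercise, just to show how little code it takes (here only a new head is trained; adding layers at the input side follows the same pattern):

      import torch.nn as nn
      from torchvision import models

      model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
      for p in model.parameters():
          p.requires_grad = False                      # freeze the pretrained backbone
      model.fc = nn.Linear(model.fc.in_features, 10)   # new trainable head for the target dataset
      # train only model.fc with your usual loop; the frozen features do most of the work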

    Next, briefly consider a tutorial on LSTMs, recognizing the exploding and vanishing gradient problems and the traditional challenges with sequential data.

    Then move to transformers. Work with language first, starting from Andrej Karpathy's excellent YouTube tutorials. Train the model in full for a bit, then see about using an existing GPT2 checkpoint. Try adapting NanoGPT to a mathematical dataset as an exercise. Then take a look at llm.c to see how to really improve performance.

    Finally, take a look at ViT and DETR. Use pretrained models and finetune them on smaller datasets again.

    By this point, you should have a good grounding to start reading much of the surrounding literature and understand them. You should also understand that models are never built from scratch anymore, and every model is a collection of individual pieces built elsewhere for a particular purpose.

    • dboon 14 hours ago

      Thank you, that was a wonderful reply!

  • Quarrel 2 days ago

    > I’m confident I could follow a modern white paper after a lot of effort and research.

    Without having done it for deep learning, I'm sure it is like any other area of computer science. You get to exactly the level you're at now, and then you put in that effort following modern papers, and each one gets easier and easier. A year later you've done the literature review for your Phd. :)

shekhar101 2 days ago

Can someone (or OP) point me to a recipe to fine tune a model like this for natural language tasks like complicated NER or similar workflows? I tried finetuning Gemma3 270M when it came out last week without any success. A lot of tutorials are geared towards chat applications and role playing, but I feel this model could be great for use cases like mine, where I am trying to clean up and extract data from PDFs with entity identification and such.

lsb 2 days ago

That’s wild that with a KV cache and compilation on the Mac CPU you are faster than on an A100 GPU.

  • ladberg 2 days ago

    Given that the compiled version is slower than the eager version on A100, there's definitely something suboptimal happening there

    • ModelForge 2 days ago

      No, the compiled version is actually faster.

      From that table, the A100 tok/sec (larger is faster) numbers are:

      - Eager: 28

      - Compiled: 128

      And

      - KV cache eager: 26

      - KV cache compiled: 99

      The reason the KV cache version is slower is likely that the cache code isn't GPU-optimized; on CPU the KV cache is faster. To make it faster on GPU, you would, for example, pre-allocate the cache tensors on the device instead of `torch.cat`ting them on the fly.
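
      The difference looks roughly like this (toy shapes):

        import torch

        B, H, D, MAX_T = 1, 4, 64, 1024
        new_kv = torch.randn(B, H, 1, D)

        # cat-style: reallocates and copies the whole cache on every decode step
        cache = torch.empty(B, H, 0, D)
        cache = torch.cat([cache, new_kv], dim=2)

        # pre-allocated: write into a fixed buffer at position t, no reallocation
        buf = torch.empty(B, H, MAX_T, D)
        t = 0
        buf[:, :, t:t + 1, :] = new_kv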

      • ladberg 2 days ago

        Ah yep read the labels backwards and meant that - ty for catching and for the explanation

  • punnerud 2 days ago

    Because on Mac the CPU and GPU share memory, but the A100 needs to transfer to RAM/CPU for the parts that aren’t supported by the GPU?

    (My first guess)

  • Weryj 2 days ago

    This would be because the GPU can’t fill its wavefronts and hide memory latency, no? I’m curious for a reason why.

keeeba 2 days ago

What use-cases do you see for the 270M’s embeddings, and should we be sticking to token embeddings or can we meaningfully pool for sentence/document embeddings?

Do we need to fine-tune for the embeddings to be meaningful at the sentence/document level?

kace91 2 days ago

This might be a very basic question, but as a dev whose only interaction with models is using the main commercial ones (sonnet, ChatGPT and the like), what are some usecases for these smaller local models?

What usages can be reasonable to expect from them? Are there uses out of the box or does one have to go through some custom post-training to get useful behavior?

I feel like there is a huge gap between understanding models as a user of commercial tools and the kind of discussions happening in these threads, but I’m not sure what the in-between steps are.

  • canyon289 2 days ago

    It's a crucial question. I wrote up a long answer here. Let me know if it helps:

    https://news.ycombinator.com/item?id=44913558

    • kace91 2 days ago

      Thanks for the reply!

      It does help to figure out where in the space this model fits. I'm still a bit confused about this part:

      >since it needs to be shaped to match specific tasks, we did our best to design it to be a flexible starting point for LLM-style tasks and worked with partners to put it into the right frameworks and places for you all to be able to shape it to what you need it to be.

      What does shaping mean in this case? What tools are used, what requirements are there, both in terms of hardware and knowledge?

      I would like to go beyond being spoonfed by large companies' high-usability products, both to improve my knowledge and not be a victim of potential future rug pulls. In the classic software world, I guess the equivalent would be someone who runs open source software, navigating the extra complexity, and occasionally collaborates with the projects.

      But I don't know what that looks like in the AI world. I've gone through some courses on machine learning, but learning the basics about Hessian matrices and gradient descent seems as detached from the practical point I'm after as taking a compilers class is from learning React, so I think I've been looking in the wrong places (?).

      • canyon289 2 days ago

        > What does shaping mean in this case? What tools are used, what requirements are there, both in terms of hardware and knowledge?

        I'll try making an analogy to another task I like, which is cooking. In cooking the chef has to make high-level decisions, like what the overall meal is going to look like, but then also detailed decisions, like what the main course is versus the side, and even more detailed ones: the proportion of side dish to main dish, which ingredients, how long to cook something, etc.

        It's kind of the same with ML models, whether AI or not. When I build smaller bayesian models I make specific choices about the model architecture, which data I use, the array shape of the output etc.

        The tools used here are largely JAX or PyTorch, often in a framework like Flax or another higher-level NN package. You often then pair it with libraries which have NN optimizers, data loaders, etc. PyTorch is more batteries-included than the JAX ecosystem, which separates these out.

        One of the best ways to get a grasp of all of this is to implement some small models yourself. These pieces will start to become more apparent and concrete, especially because as an end user you're not exposed to them, the same way most end users are not exposed to compilers.
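
        Even something this small is worth writing end to end, because every larger system is the same pieces (model, data, loss, optimizer) scaled up:

          import torch
          import torch.nn as nn

          model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
          opt = torch.optim.Adam(model.parameters(), lr=1e-3)
          x, y = torch.randn(256, 8), torch.randn(256, 1)

          for step in range(100):
              loss = nn.functional.mse_loss(model(x), y)
              opt.zero_grad()
              loss.backward()
              opt.step()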

  • ModelForge 2 days ago

    I'd say the common ones (besides educational) are

    - private, on-device models (possibly with lower latency than models via web API); also edge devices

    - algorithm research (faster and cheaper to prototype new ideas)

    - cheap tasks, like classification/categorization; sure, you don't need a decoder-style LLM for that, but it has the advantage of being more free-form, which is useful in many scenarios; or maybe a sanity checker for grammar; or even a router to other models (GPT-5 style)

  • barrkel 2 days ago

    Summarization, very basic tool use, without needing to go across the internet and back, and zero cost because of edge compute.

  • _giorgio_ 2 days ago

    Maybe also secrecy and privacy.

eachro 2 days ago

If you wanted to train it from scratch, how long would it take on a reasonable GPU setup?

  • rck 2 days ago

    For the sake of comparison, you can train a 124M model on a 3090 (see nanoGPT). In that case, each batch ends up having about 500,000 tokens and takes around 10 seconds to run forward and backward. The 6 trillion tokens that this model was trained on would then take approximately 4 years. Or just "too long" for a shorter answer.
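
    Back-of-envelope version of that estimate:

      tokens_total = 6e12        # reported training tokens
      tokens_per_step = 5e5      # ~500k tokens per batch on one 3090
      secs_per_step = 10
      years = tokens_total / tokens_per_step * secs_per_step / (3600 * 24 * 365)
      print(round(years, 1))     # ~3.8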

  • canyon289 2 days ago

    The word reasonable is vague, but assuming you mean something that could be run in a residential unit, it would take a very long time if training from pure scratch.

    This is part of the rationale for releasing this model. Now you don't have to start from scratch, and finetuning is feasible on a wide variety of hardware, including reasonable GPU setups (and smaller).

mattfrommars 2 days ago

Is this the same thing as people did in the past with "<model> inference written in vanilla Go, Python, Java, etc."?

n0vella 2 days ago

Do you think these very small models have some utility in the real world? Apart from learning and academic purposes of course.

  • canyon289 2 days ago

    Yes! To me the primary value is not just as a teaching or toy model. I see a lot of value in repeatable tasks if we think about enterprise use, and as a fast local developer model for individual usage.

    Here's some examples that are inspired by previous roles I had outside of Google, where a business I was working in needed real time text processing.

    These tutorials were made with Gemma versions from a year ago, but could now be recreated with Gemma 270m.

    https://developers.googleblog.com/en/gemma-for-streaming-ml-... https://www.youtube.com/watch?v=YxhzozLH1Dk

  • yawnxyz 2 days ago

    If you LoRA them you can make them VERY VERY good at a small, narrow set of tasks, e.g.:

    - reply in a specific way, like a specific JSON schema, or in the voice of a character

    - be very good at classifying text (e.g. emails, or spam)

    - be a great summarizer for large amounts of text, e.g. turn emails into short titles or url slugs

    - adding tags/categories per your pre-defined rules (e.g. for communities, tagging content, marketing)

    - for detecting spam, or duplicates, or flagging things

    You won't be able to write code or prose with these, but they're great for a huge array of very narrow use cases.

    What's neat about "stupid" models like this is that they're less likely to go off and dream up a bunch of irrelevant content, because they don't know much about the world / won't have too much context to pull from
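
    A rough sketch of the LoRA setup with Hugging Face peft (the model id, target modules, and hyperparameters are illustrative; tune them for your task and data):

      from transformers import AutoModelForCausalLM
      from peft import LoraConfig, get_peft_model

      base = AutoModelForCausalLM.from_pretrained("google/gemma-3-270m")
      cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
      model = get_peft_model(base, cfg)
      model.print_trainable_parameters()  # only a tiny fraction of the 270M is trained
      # then fine-tune on your narrow task with the usual Trainer or a plain torch loop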

  • colechristensen 2 days ago

    Sure, interacting with natural language without the expectation that the model contains knowledge. Good for things like tool use and embeddings where the information is all retrieved.

    • throw310822 2 days ago

      Are these small models trained to privilege "raw intelligence" over factual knowledge? Is there any indication of how much of a current model is dedicated to knowledge of multiple languages and tons of facts rather than pure understanding and reasoning?

      • canyon289 2 days ago

        The evaluations provide this indication. You'll see MMLU, GPQA, Big Bench etc in reports for many models. Those numbers provide the indication you're looking for.

        To answer a question you didn't ask: with small models especially, we need to make choices about what to focus on. For this model we focused on text summarization and instruction following, with the idea that users would finetune to gain performance on the task set that is relevant to them.

  • numpad0 2 days ago

    It seems to be more often correct than wrong for multilingual translation tasks (source text from [1][2]). Rough, but probably useful as a traveler's phrase book.

      You 
      Translate to English: Рошешуа́р (фр. Rochechouart) — місто, муніципалітет у Франції, у регіоні Нова Аквітанія, департамент Верхня В'єнна. Населення — 3637 осіб (2022)[1]. Муніципалітет розташований на відстані[2] близько 360 км на південь від Парижа, 34 км на захід від Ліможа.
    
      gemma-3-270m-it-qat
      Here's the translation:
    
      "Rochechouart is a city in France, a municipality located in the region of New Aeterna, department within the Upper Vienne Province. The population is 3637 people (2022). The city is situated at a distance of approximately 360 kilometers from Paris, 34 km from Lyon.
    
      You
      Translate to English: Bộ Công Thương đề xuất hỗ trợ hộ gia đình lắp điện mặt trời mái nhà, có hệ thống lưu trữ tối đa 2-3 triệu đồng, hỗ trợ lãi suất vay trong tối đa ba năm. Nội dung được Bộ Công Thương đưa ra tại dự thảo lần 3 Quyết định của Thủ tướng về chính sách hỗ trợ cho hộ gia đình lắp đặt điện mặt trời mái nhà tự sản, tự tiêu và hệ thống lưu trữ điện.
    
      gemma-3-270m-it-qat
      The Ministry of Industry and Trade proposes supporting households with solar panels in their homes, with a maximum savings of 2-3 million đồng, providing loan assistance for up to 3 years. The content of the Ministry's decision at the Third Review of the Government's Policy on Support for Households Installing Solar Panels in Homes, approved by the Prime Minister about the policy support for households installing solar panels themselves, and the system storage capacity is maximum 2-3 million đồng.
    
    1: https://uk.wikipedia.org/wiki/%D0%A0%D0%BE%D1%88%D0%B5%D1%88...

    2: https://vnexpress.net/lap-dien-mat-troi-mai-nha-tu-dung-co-t...

    • magicalhippo 2 days ago

      For comparison, here's what I got from the 27B variant:

        gemma3:27b-it-qat
        Rochechouart (French: Rochechouart) is a town and commune in France, in the Nouvelle-Aquitaine region, Department of Haute-Vienne. The population is 3,637 (2022)[1]. The commune is located approximately 360 km south of Paris, 34 km west of Limoges.
      
        gemma3:27b-it-qat
        The Ministry of Industry and Trade proposes supporting households installing rooftop solar power systems, with a maximum support of 2-3 million VND for systems including energy storage. This support would also include interest rate subsidies on loans for a maximum of three years. This content was presented by the Ministry of Industry and Trade in the third draft of a Decision by the Prime Minister regarding support policies for households installing self-generated, self-consumed rooftop solar power systems and energy storage systems.

quesne 2 days ago

Thought it was a new 3270 interface, bummed.

PunchTornado a day ago

Can anybody help me with some tutorials on how to use this for mechanistic interpretability?