Show HN: Finetune Llama-3.1 2x faster in a Colab

colab.research.google.com

16 points by danielhanchen 3 months ago

Just added Llama-3.1 support! Unsloth https://github.com/unslothai/unsloth makes finetuning Llama, Mistral, Gemma & Phi 2x faster and uses 50 to 70% less VRAM, with no accuracy degradation.
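A big part of the VRAM saving comes from finetuning via low-rank (LoRA) adapters rather than full weights. A minimal sketch of the parameter-count arithmetic (illustrative only; the function name and dimensions are mine, not Unsloth's API):

```python
# LoRA idea: instead of updating a full d_out x d_in weight matrix W,
# train two small matrices A (r x d_in) and B (d_out x r) and use W + B @ A.
# Trainable parameters drop from d_out*d_in to r*(d_in + d_out).

def trainable_params(d_in: int, d_out: int, r: int) -> tuple[int, int]:
    """Return (full finetune params, LoRA params) for one linear layer."""
    full = d_in * d_out          # every entry of W is trainable
    lora = r * (d_in + d_out)    # only the low-rank factors A and B
    return full, lora

# A 4096x4096 projection (typical Llama hidden size) with rank r=16:
full, lora = trainable_params(4096, 4096, 16)
print(full, lora)  # 16777216 vs 131072 -> under 1% of the original
```

Optimizer state (e.g. Adam moments) is only kept for the small adapter matrices, which is where most of the memory saving shows up.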

There's a custom backprop engine that reduces actual FLOPs, and all kernels are written in OpenAI's Triton language to reduce data movement.
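"Reduce data movement" here means kernel fusion: doing several elementwise steps in one pass over memory instead of materializing intermediates. A toy pure-Python sketch of the idea (not Triton code, just the memory-traffic argument):

```python
# Unfused: y = relu(x * w) as two passes over memory.
def unfused(x: list[float], w: list[float]) -> list[float]:
    tmp = [xi * wi for xi, wi in zip(x, w)]  # pass 1: write an intermediate buffer
    return [max(t, 0.0) for t in tmp]        # pass 2: read it back and apply relu

# Fused: one pass, no intermediate buffer — what a fused Triton kernel does on GPU.
def fused(x: list[float], w: list[float]) -> list[float]:
    return [max(xi * wi, 0.0) for xi, wi in zip(x, w)]

print(fused([1.0, -2.0], [3.0, 4.0]))  # [3.0, 0.0], same result, half the traffic
```

On a GPU, elementwise ops are memory-bound, so halving the reads/writes roughly halves their runtime even though the FLOPs are identical.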

There's also a 2x faster inference-only notebook in a free Colab: https://colab.research.google.com/drive/1T-YBVfnphoVc8E2E854...

smcleod 3 months ago

Does it support multiple GPUs yet? That's been the reason most folks I know don't end up using Unsloth.

  • danielhanchen 3 months ago

    We're currently running a beta of multi-GPU support with some of our community members!