batterseapower 16 days ago

The other recent improvement suggested for LoRA is DoRA: https://magazine.sebastianraschka.com/p/lora-and-dora-from-s.... It really does seem to strongly outperform LoRA - see also https://www.answer.ai/posts/2024-04-26-fsdp-qdora-llama3.htm...
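
For anyone who hasn't read the DoRA paper: the core idea is to decompose each pretrained weight matrix into a magnitude vector and a directional component, apply LoRA only to the direction, and train the magnitude separately. A minimal sketch of that decomposition as I read it (my own code, not the authors' reference implementation; shapes follow PyTorch's nn.Linear convention, bias omitted):

  import torch
  import torch.nn as nn

  class DoRALinear(nn.Module):
      def __init__(self, base: nn.Linear, rank: int = 8):
          super().__init__()
          self.weight = base.weight                    # frozen pretrained W, shape (out, in)
          self.weight.requires_grad_(False)
          out_f, in_f = self.weight.shape
          self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)  # LoRA down-projection
          self.B = nn.Parameter(torch.zeros(out_f, rank))        # LoRA up-projection
          # trainable magnitude, initialised to the column norms of W
          self.m = nn.Parameter(self.weight.norm(dim=0, keepdim=True))

      def forward(self, x):
          w = self.weight + self.B @ self.A            # W + BA
          direction = w / w.norm(dim=0, keepdim=True)  # unit-norm columns
          return x @ (self.m * direction).T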

  • josalhor 16 days ago

    I just skimmed over LoRA+ and DoRA and I see no reason why these improvements could not go hand in hand. Actually, LoRA+ seems to be about efficient training, while DoRA seems to be about improving the ability to actually learn, making it significantly more robust. I still have questions about how the improvements of LoRA+ would be applied to the magnitude vector.
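
    For reference, the LoRA+ change itself is small: train the B matrices with a learning rate that is a fixed multiple of the one used for the A matrices. Combining it with DoRA might then just be a question of which learning rate the magnitude vector gets. A rough sketch of the parameter grouping (the "lora_A"/"lora_B" name matching and the ratio of 16 are my own illustrative choices, not from either paper):

      import torch

      def lora_plus_param_groups(model, base_lr=1e-4, ratio=16.0):
          a_params, b_params, other = [], [], []
          for name, p in model.named_parameters():
              if not p.requires_grad:
                  continue
              if "lora_A" in name:
                  a_params.append(p)
              elif "lora_B" in name:
                  b_params.append(p)
              else:
                  other.append(p)  # e.g. a DoRA magnitude vector
          return [
              {"params": a_params, "lr": base_lr},
              {"params": b_params, "lr": base_lr * ratio},  # B gets the larger step
              {"params": other, "lr": base_lr},
          ]

      # optimizer = torch.optim.AdamW(lora_plus_param_groups(model))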

  • WithinReason 16 days ago

    The two methods seem to be independent; I wonder if you can combine them for even better performance.

    Interestingly both seem to indirectly modify the optimisation process, in my opinion effectively trying to fix a bad optimiser. Seems like we still have a long way to go after Adam...

    • neodypsis 15 days ago

      > Seems like we still have a long way to go after Adam...

      A preprint on arXiv suggests that Adam works better than SGD for training LLMs due to class imbalance [0]. It appears that scaling the gradient step helps with training; for example, see another approach suggested in [1].

      [0] https://arxiv.org/pdf/2402.19449
      [1] https://arxiv.org/pdf/2402.02347
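
      As a toy illustration of the "scaling the gradient step" point (my own example, not from either paper): SGD applies the raw gradient, while Adam rescales each coordinate by a running estimate of its gradient magnitude, so rarely-updated (imbalanced) directions still take reasonably sized steps.

        import numpy as np

        def sgd_step(w, g, lr=1e-3):
            return w - lr * g

        def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
            m = b1 * m + (1 - b1) * g        # first-moment (mean) estimate
            v = b2 * v + (1 - b2) * g ** 2   # second-moment (magnitude) estimate
            m_hat = m / (1 - b1 ** t)        # bias correction
            v_hat = v / (1 - b2 ** t)
            return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v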

  • Ger_Onimo 16 days ago

    I've just started playing with DoRAs for fine-tuning TTS models towards particular styles of speech, and they're working extremely well!

    • allpaca 15 days ago

      Can you tell us more about it? Have you reported the results of your experiments in a post?

      • mysfi 15 days ago

        Count me interested here as well, especially if it is about the style of speech. I had a fun project in mind that involved the style of speech.

cuuupid 16 days ago

I’m struggling to understand from this paper whether the approach is better in the general sense (all cases, with wider models seeing greater benefits) or purely for wider models (with narrower models seeing detriment)?

If it’s the former, this could effectively halve fine-tuning cost overnight, which would go a significant way towards enabling a wider array of use cases for LoRA.

ironbound 16 days ago

I've had success with GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection https://arxiv.org/abs/2403.03507
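
The gist, as I understand it: instead of factoring the weights the way LoRA does, GaLore projects the full-rank gradient into a low-rank subspace spanned by its top singular vectors, keeps the optimizer state in that small subspace, and projects the update back. A very rough sketch of a single step (the projector refresh schedule and the Adam state handling from the paper are simplified away here):

  import torch

  def galore_step(weight, grad, P=None, rank=4, lr=1e-3, refresh=False):
      # P: (out, rank) projector, periodically refreshed from the gradient's SVD
      if P is None or refresh:
          U, _, _ = torch.linalg.svd(grad, full_matrices=False)
          P = U[:, :rank]             # top-r left singular vectors
      low_rank_grad = P.T @ grad      # project down: (rank, in)
      # a real implementation runs Adam on low_rank_grad; plain SGD shown here
      update = P @ low_rank_grad      # project back up: (out, in)
      return weight - lr * update, P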

  • Scipio_Afri 15 days ago

    This uses less memory, so you can do fine-tuning on hardware with less VRAM, but at the cost of longer training - there is a throughput penalty; the paper detailing the technique shows something like a 15% decrease in throughput.

yau8edq12i 16 days ago

What an unfortunate name... I initially thought this was about wireless communication. https://en.wikipedia.org/wiki/LoRa

  • sorenjan 16 days ago

    This gets mentioned here every time an article about LoRA is posted. Sometimes acronyms mean multiple things; they're not in the same field, so the risk of confusion beyond short headlines is negligible.

    It's a bit like someone reading a bicycling article and getting annoyed that FTP means Functional Threshold Power instead of File Transfer Protocol, or reading about machine learning and getting confused that MLP doesn't mean My Little Pony.

    • rakoo 16 days ago

      "computer science" and "bicycles" aren't the same domain, it's fine to have the same acronym.

      "computer science" and "tv shows" aren't the same domain, it's fine to have the same acronym.

      "computer science" and "computer science" are the same domain, it's not a good idea to use the same acronym.

      • dragonwriter 16 days ago

        > "computer science" and "computer science" are the same domain, it's not a good idea to use the same acronym.

        But “radio communication" is not “computer science”, even though people sometimes plug radio transceivers into computers, just like “tv shows” aren't “computer science” just because people sometimes view or store their shows on a computer, and “bicycles” aren’t “computer science” because sometimes people mount computers on their bikes.

      • WithinReason 16 days ago

        "Large Models" is in the title, so it's obviously not about radio.

        • GuB-42 16 days ago

          The acronym is also spelled out in the title: LoRA = Low Rank Adaptation.

        • rakoo 15 days ago

          "Large models" is not spelled out in full, and doesn't explicitly say it's not about the communication protocol.

        • squigz 16 days ago

          Context is hard!

          • rakoo 15 days ago

            So instead of LoRa and anything else, everyone now has to say LoRa (the communication protocol) or LoRA (the large model thing). Having to add context all the time makes everything so much simpler!

            • squigz 15 days ago

              Or potentially include the necessary context in the title of the post.

              • rakoo 14 days ago

                Or just pick another name

                • squigz 14 days ago

                  That does seem to be more reasonable than expecting people to pick up on context

            • WithinReason 15 days ago

              Low rank adaptation is abbreviated LoRA

      • nostrademons 16 days ago

        "Computer science" isn't really one domain anymore - the field split into several subdomains in the 2010s. Just try to get a job as a "computer scientist" now - the recruiter would be like "No, are you a web developer? Mobile developer? Backend developer? Data scientist? Data engineer? Cloud engineer? AI engineer? Machine learning developer?"

    • the__alchemist 16 days ago

      I think the reason this keeps coming up is encoded in your second sentence, in conjunction with the HN medium: LoRa and LoRA are both, unfortunately, things that the target audience is likely to be interested in and/or knowledgeable about, but a general audience is not.

      Also, both use a non-standard case mix.

    • yau8edq12i 15 days ago

      > This gets mentioned here everytime an article about LoRA is posted.

      I wonder why!

    • IshKebab 16 days ago

      Yes but radio protocols and AI methods are a lot closer than most overlapping acronyms. This is obvious from the fact that it gets mentioned every time an article about LoRA is posted.

    • mattlondon 16 days ago

      But these are clearly both in the same field, as everyone keeps mentioning here! So clearly there is confusion. It certainly tricked me on first reading - "ah cool - efficient LoRa+, that sounds cool... ah wait, no, it's just some machine learning spam"

  • kcorbitt 16 days ago

    This specific variant "LoRA+" described in this paper is even harder to search for. I was doing some research on this technique recently and it turns out that "Lora+" matches with "Lora" in Discord search, which is quite unhelpful. :)

    • SquareWheel 16 days ago

      Discord search is one of the worst I've ever used. They remap words like "localization" to "local", which makes it impossible to search for more specific terms.

  • rytill 16 days ago

    That’s LoRa. This is LoRA.

  • bee_rider 16 days ago

    The idea of low-rank approximations is not new; truncated SVDs, for example, have been used for many decades.
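
    Quick illustration for anyone who wants to play with the classic version: keep the top-k singular values of a matrix and you get its best rank-k approximation (Eckart-Young).

      import numpy as np

      rng = np.random.default_rng(0)
      W = rng.standard_normal((512, 512))

      U, s, Vt = np.linalg.svd(W, full_matrices=False)
      k = 16
      W_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]  # best rank-k approximation

      print(np.linalg.norm(W - W_k) / np.linalg.norm(W))  # relative error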

    • yau8edq12i 16 days ago

      The acronym LoRA used in the context of deep learning (2021) is about seven years younger than the radio communication protocol called LoRa (2014). Type "lora" in a search engine and see what you get.

      • kleiba 16 days ago

        In the first 10 search results on Google, I now get a mix of the two technologies.

  • blamestross 16 days ago

    Yeah, I would prefer they didn't obfuscate the actually useful search term.

allpaca 15 days ago

This is old, having been released in February... Why is it being discussed now?

axpy906 16 days ago

In 2024 are folks still swapping out LoRA adapters? Is this still relevant?

  • ac2u 16 days ago

    Can’t tell if your tone is inquisitive or incredulous :)

    If the latter, please point out the alternatives.

  • bckr 16 days ago

    Why would it not be?