phildini 6 months ago

I got very briefly excited that this might be a new application layer on top of meshtastic.

  • robertlagrant 6 months ago

    Yes! I don't know what LoRA is, but I know what it isn't.

smcleod 6 months ago

Out of interest, why does it depend on or at least recommend such an old version of Python? (3.10)

  • porridgeraisin 6 months ago

    Mostly whatever the earliest version pytorch supports. While 3.9 is supported until the end of this year, torch wheels and other wheels in the ecosystem were always troublesome in 3.9. So 3.10 it is.

    3.9 would have been the preferred version if not for those issues, simply because it is the default on macOS.

    • smcleod 6 months ago

      Yikes, those are both very old. Python pre-3.12 had some serious performance issues. You should be aiming to run the current stable version, which will contain any number of stability and interoperability fixes. The bundled OS Python versions are often far behind and better suited to running basic system tools than to every application or script you run; ideally you'd use a Python version manager and an isolated virtual environment.
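      For what it's worth, the stdlib already covers the isolation part. A minimal sketch (the `python3` on your PATH is whatever you've installed; a version manager like pyenv would let you pick, say, 3.12 explicitly):

      ```shell
      # Create an isolated environment tied to one interpreter,
      # instead of installing into the OS-bundled Python
      python3 -m venv .venv
      . .venv/bin/activate
      python --version
      ```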

      • porridgeraisin 6 months ago

        ML folks don't care (yes yes, I'm generalizing...); they will upgrade whenever torch or one of their other favourite libraries tells them to.

  • electroglyph 6 months ago

    from pyproject.toml: requires-python = ">= 3.10"

    I still see quite a few people in the ML world using 3.10 as their default...probably just habit, but a closer look at the dependencies might answer your question better.

    • smcleod 6 months ago

      Ah well, that's not as bad as I thought. I saw in their readme that they recommend 3.10, which is often a bit of a red flag that the project in question may not be well maintained, but I agree that I do see quite a few ML repos still noting the use of 3.10 to this day.

kixiQu 6 months ago

Can someone explain why this would be more effective than a system prompt? (Or just point me to it being tested against that, I suppose.)

gdiamos 6 months ago

An alternative to prefix caching?

npollock 6 months ago

LoRA adapters modify the model's internal weights

  • make3 6 months ago

    not unless they're explicitly merged; merging isn't a requirement, just a small speed optimization

  • jsight 6 months ago

    Yeah, I honestly think some of the language used with LoRA gets in the way of people understanding them. It becomes much easier to understand when looking at an actual implementation, as well as how they can be merged or kept separate.
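    The "look at an actual implementation" point can be made concrete. A minimal numpy sketch of the idea (illustrative only, not the PEFT library's code): the adapter is a low-rank update `B @ A` that can either sit beside the frozen base weight or be folded into it, with identical outputs either way.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d_out, d_in, r = 8, 8, 2          # layer dims and LoRA rank, r << d

    # Frozen base weight of a linear layer
    W = rng.normal(size=(d_out, d_in))

    # Trained low-rank factors (B starts at zero in real LoRA training;
    # here we just pretend training has produced some values)
    A = rng.normal(size=(r, d_in))
    B = rng.normal(size=(d_out, r))

    x = rng.normal(size=(d_in,))

    # Adapter kept separate: the base weights are never touched
    y_separate = W @ x + B @ (A @ x)

    # Adapter merged: fold B @ A into W once, purely an inference-speed choice
    W_merged = W + B @ A
    y_merged = W_merged @ x

    assert np.allclose(y_separate, y_merged)
    ```

    Same math either way, which is why merging is optional: separate adapters let you swap or stack them per request, merging just saves the extra matmul.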

dvrp 6 months ago

[flagged]

vessenes 6 months ago

Sounds like a good candidate for an mcp tool!