Ask HN: Why is mining power useless for LLM training?
I don't know anything about crypto, but why is it technically so difficult to repurpose hash-function compute for (at least partially) decentralized LLM training? Have there been any attempts to make it at least somewhat useful?
It is in principle a good idea. There are plenty of ongoing attempts to do something similar but less cool than this with GPU networks like Render etc. From what I've heard they all suck at AI workloads so far. For years I've heard of networks specifically designed for model training in the works, but I haven't noticed anything serious actually ship. The details are tricky: you hit a wall of interesting problems as soon as you start thinking about doing this.
Hashing is extremely easy to verify and therefore extremely easy to pay out for. More complicated computations are more expensive and harder to verify. You can in principle run verifiable compute (SNARKs, STARKs) on training procedures to be sure distributed trainers are doing work that deserves a payout. But how do you break this down so that it is incremental? How do you sequence updates? How do you ensure availability of updated weights so that trainers aren't withholding them to gain an advantage? i.e. where does the data go? You're probably not keeping 100 billion parameters on chain. How do you keep the costs of all this from ballooning like crazy (verifiable compute is kind of expensive)?
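To make that verification asymmetry concrete, here's a toy Python sketch (made-up header and difficulty values, Bitcoin-style double SHA-256): checking a submitted proof-of-work share costs exactly one hash, no matter how much grinding went into finding it. Nothing about a gradient update is anywhere near this cheap to check.

    import hashlib

    def check_share(header: bytes, nonce: int, target: int) -> bool:
        """Verify a proof-of-work share: recompute one double SHA-256
        and compare against the difficulty target."""
        digest = hashlib.sha256(
            hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
        ).digest()
        return int.from_bytes(digest, "big") < target

    # Toy demo: the miner grinds nonces, the verifier does one hash.
    header = b"example block header"   # made-up header bytes
    target = 2 ** 244                  # very easy difficulty, demo only
    nonce = 0
    while not check_share(header, nonce, target):
        nonce += 1
    print(nonce, check_share(header, nonce, target))

That asymmetry (prover grinds, verifier hashes once) is exactly what SNARKs/STARKs try to recover for arbitrary programs, just at a much higher constant cost today.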
From the little I know about these models, the data is pretty important. How do you keep data quality high? How do you verify that data quality is good? Probably you are committing to public datasets and verifying against them. Is that good enough in this crazy world where the state of the art is training on the entire public web? How do you get a commitment to "the internet" to prove against? How do you make sure trainers aren't redoing the same datapoint over and over?
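One standard building block for the dataset-commitment part (a minimal sketch, not any particular network's protocol) is a Merkle tree: publish a single root hash for the dataset, and a trainer proves that each datapoint it claims to have trained on is a leaf under that root. Assigning leaf indices deterministically (e.g. from a round seed) is one way to stop trainers from redoing the same datapoint over and over.

    import hashlib

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def merkle_root(leaves: list[bytes]) -> bytes:
        """Commit to a whole dataset as one 32-byte root."""
        level = [h(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])  # duplicate last node on odd levels
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def merkle_proof(leaves: list[bytes], index: int):
        """Sibling hashes from one leaf up to the root."""
        level = [h(leaf) for leaf in leaves]
        proof = []
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            proof.append((level[index ^ 1], index % 2))  # (sibling, leaf-is-right-child)
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            index //= 2
        return proof

    def verify(leaf: bytes, proof, root: bytes) -> bool:
        node = h(leaf)
        for sibling, is_right in proof:
            node = h(sibling + node) if is_right else h(node + sibling)
        return node == root

    data = [f"datapoint {i}".encode() for i in range(8)]
    root = merkle_root(data)
    print(verify(data[3], merkle_proof(data, 3), root))  # True

This only proves "this datapoint is in the committed set", though; it says nothing about whether the set itself is any good, which is the harder question above.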
I think you can solve all this with enough work. Especially as the cost curve for verifiable compute keeps coming down, you will probably find doors opening. Or, if you bite the bullet and trust hardware vendors, you can maybe have something with a decent security model that is practical today using trusted enclaves. But you've got to solve a lot of problems.
> Have there been any attempts to make it at least somewhat useful?
To answer this part: there are a few networks out there doing useful work. They're getting called DePIN these days (decentralized physical infrastructure networks). A lot of them have broken/silly security models, but some of them actually work.
Two main reasons:

1. Different silicon: mining ASICs are purpose-built to grind hashes and physically can't do the matrix math that training needs.

2. Cryptomining tends to be massively parallelizable (network friendly), while most AI training ends up being bandwidth limited, so there are bigger benefits to larger single nodes (see the back-of-envelope sketch below).
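On point 2, the numbers are worth sketching (all figures below are illustrative assumptions, not measurements). A mining node exchanges almost nothing with the network per unit of work, while a naive data-parallel trainer has to synchronize a full gradient every step:

    # Back-of-envelope: why training is bandwidth-bound in a way hashing isn't.
    params = 7e9             # hypothetical 7B-parameter model
    bytes_per_grad = 2       # fp16/bf16 gradients
    grad_bytes = params * bytes_per_grad   # ~14 GB to sync per step

    home_link = 100e6 / 8    # 100 Mbit/s consumer uplink, in bytes/s
    dc_link = 400e9 / 8      # 400 Gbit/s datacenter interconnect, in bytes/s

    print(f"gradient sync over home internet: {grad_bytes / home_link / 60:.0f} minutes/step")
    print(f"gradient sync over datacenter fabric: {grad_bytes / dc_link:.2f} seconds/step")

Roughly 19 minutes versus a quarter of a second per step. Gradient compression and infrequent-sync schemes attack exactly this gap, but it's the core reason a swarm of consumer GPUs is a poor substitute for one big node.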
That said, a lot of the general-purpose graphics cards that work for crypto are also decent at AI!
I hope someone makes a lmgtfy but with llm instead of g.