So, if I understood correctly, the article seems to say 'learning with errors' problems are now found to be efficiently solvable by quantum algorithms.
However, I seem to remember that many NIST post-quantum crypto algorithms are based on variants of LWE problems.
Is this the same sort of LWE problems, or does the new speedup not apply for some other reason?
I think the article is saying that the new class of problem was inspired by LWE, but adds some additional restrictions.
Edit: quotes showing this:
> Problems like this came to be called “learning with errors,” because the shove and the wind act like a source of random error on the original direction. There is evidence that it is hard to solve for both classical and quantum algorithms.
> Yamakawa and Zhandry tweaked the setup. They modified the strength of those starting shoves, making them more predictable. They also caused the wind to be determined by a random oracle so that it was even more random in certain cases but completely dormant in others.
> It involves calculating the inputs to a complicated mathematical process, based solely on its jumbled outputs.
Without going any deeper, this description of the problem is interesting because quantum algorithms are (by design) fully reversible, which isn't the case for classical computers (you lose information about the input when going through an OR gate, for example). Knowing that, it almost seems like a no-brainer that quantum computers would be better at this kind of problem than classical ones.
That's incorrect. It's a proof that probabilistic computers are unable to be faster than quantum computers for certain cryptographic problems (BPP < BQP), and it has nothing to do with reversibility. The reason quantum computers must be reversible is that quantum systems must be reversible.
Will quantum computing destroy blockchain and classical encryption algorithms as we know them? If so, there will be a need for quantum blockchain and quantum encryption.
Post-quantum cryptography is a thing:
https://en.wikipedia.org/wiki/Post-quantum_cryptography
Until it’s not. Nobody has proven any of these schemes to be secure against quantum attack. Post-quantum (today) just means that nobody has proven that the problem is equivalent to a problem that quantum computing can solve efficiently. This article is about LWE being proven breakable. Others will follow. The security of these schemes rests mostly upon the lack of researchers that understand them and quantum computing well enough to approach an equivalence class proof. Example: elliptic curve isogenies.
I'm wondering whether it avoids this new type of problem that QCs should be able to solve.
Going by their description of the problem, it looks like it's distinct from how all the mainstream post-quantum algos (LWE, NTRU, SIDH, etc.) work.
Edit: I've finished the article lol. Now I'm not so certain that this is 100% distinct from something like LWE.
Wow, this is pretty interesting stuff, thanks for the link. I'm not sure why I got such a negative response to my question, since I'm seriously asking, and I'm not sure whether I should be concerned if this is an Alan Turing/Enigma type of situation we're dealing with.
IMO this is kind of like the P≠NP situation. Even though the statement is true, blockchain and classical cryptography keep existing. Maybe some minor improvements, like new hash algos or larger key sizes, will be needed soon.
Classical asymmetric encryption algorithms like RSA and ECDSA (which Bitcoin uses) can be easily broken by a quantum computer. Brute-forcing symmetric algorithms like AES gets a speedup on a quantum computer, but not enough to consider the algorithms broken.
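The symmetric-key speedup here is Grover's algorithm, which finds an n-bit key in roughly 2^(n/2) queries instead of 2^n, so it only halves the effective security level. A minimal sketch of that arithmetic (the "effective bits" framing is a common simplification that ignores large constant factors and quantum-hardware overhead):

```python
# Grover's algorithm needs ~sqrt(2^n) = 2^(n/2) search steps to find an
# n-bit key, so quantum brute force halves the *effective* security level.
def grover_effective_bits(key_bits: int) -> int:
    return key_bits // 2

for key_bits in (128, 192, 256):
    print(f"AES-{key_bits}: ~2^{key_bits} classical vs "
          f"~2^{grover_effective_bits(key_bits)} quantum brute-force steps")
```

This is why the usual mitigation is just moving to AES-256: 128 bits of security survive even under a quadratic speedup, whereas RSA/ECDSA face an exponential speedup from Shor's algorithm and can't be rescued by bigger keys.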
Bitcoin doesn't use them in a way that'd let you completely break it; the asymmetric keys aren't broadcast until someone does a new transaction with them, so you can't fake one for a wallet you've never seen the public key to.
How is the transaction signature verified without the public key? This link seems to indicate spender public keys are inside of the transaction: https://bitcoin.stackexchange.com/a/102667
Ah, I had the details wrong. If there’s a signed transaction from a wallet, then you have the compressed public key and it’s not quantum safe.
But if the funds are sent to a new wallet address and there’s no transactions signed by that wallet yet, it can’t be forged without also reversing the hash that created the address.
Yes, it was a good idea to do that. I didn't realize that addresses were essentially a hash of the public key, but it makes sense.
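A toy illustration of why an unused address hides its public key. Note this is a deliberate simplification: real Bitcoin P2PKH addresses use RIPEMD-160 over SHA-256 plus a version byte, checksum, and Base58 encoding; this sketch uses plain hex-encoded SHA-256 to stay self-contained, and the key bytes are made up.

```python
import hashlib

def toy_address(pubkey: bytes) -> str:
    # Simplified stand-in for Bitcoin's RIPEMD-160(SHA-256(pubkey))
    # + Base58Check: here just hex-encoded SHA-256, for illustration.
    return hashlib.sha256(pubkey).hexdigest()

fake_pubkey = bytes.fromhex("02" + "11" * 32)  # made-up compressed-format key
addr = toy_address(fake_pubkey)
print(addr)
# Funds sent to `addr` reveal only this hash; the public key itself stays
# hidden until the owner broadcasts a signed spending transaction, so a
# quantum attacker targeting an unused address would also have to invert
# the hash, which Shor's algorithm does not help with.
```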
Oh cool, can you tell me who has successfully and easily achieved this result?
Peter Shor; though whether easily, I can't say for sure. https://en.wikipedia.org/wiki/Shor's_algorithm
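For context: the only quantum part of Shor's algorithm is period finding; the reduction from factoring to period finding is classical. Here's a sketch that does the period finding by brute force (exponentially slow in the bit length, which is exactly the step a quantum computer accelerates), using the textbook example N = 15:

```python
from math import gcd

def factor_via_period(N: int, a: int):
    # Find the multiplicative order r of a mod N by brute force --
    # this is the step Shor's algorithm replaces with a quantum subroutine.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    if r % 2:
        return None          # odd period: retry with a different a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None          # trivial square root: retry with a different a
    # Otherwise y^2 = 1 (mod N) nontrivially, so gcd gives proper factors.
    return gcd(y - 1, N), gcd(y + 1, N)

print(factor_via_period(15, 7))  # -> (3, 5)
```

On a quantum computer the period is found in polynomial time, which is why RSA (and, via a related order-finding argument, ECDSA) falls.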
The previous commenter was being sarcastic about the word "easily" (no implementation of Shor's algorithm yet comes anywhere near outpacing the classical version).
> Will quantum computing destroy blockchain and classical encryption algorithms as we know?
Short answer: the currently popular ones? Yes. All that we know? Probably not. See e.g. https://ieeexplore.ieee.org/document/8967098
I'm surprised there aren't quantum blockchains already.
> I'm surprised there aren't quantum blockchains already.
There are:
> https://www.quantumblockchains.io/
> https://quantumblockchaintechnologies.co.uk/
:-)
Oh sweet. Looks like quantum PoS is just .5 + .5i months out
I think you're missing a couple of sqrt()s ;)
TL;DR: This is showing that a regular probabilistic computer can't do certain cryptographic problems faster than a quantum computer (BPP < BQP).
Others have commented here that this is due to the fact that quantum computers must use fully reversible operations, which is incorrect.
I think you mistyped, but just to be clear: it's showing that a regular probabilistic classical computer can't do certain cryptographic problems faster than a quantum computer.
I'm going to admit, I only skimmed the introduction of the paper, but this quote from the article appears very misleading:
> They also caused the wind to be determined by a random oracle so that it was even more random in certain cases but completely dormant in others.
Assuming "random oracle" means what it typically does, that appears to be a pretty poor characterization of what a random oracle does to a problem.
In particular, a (non-random) oracle is a public string O such that both the algorithm and the function one is trying to compute can depend on O. One valid oracle (probably the most famous interesting one) is the halting oracle: the string that at position with index x tells you whether or not the TM with binary encoding <x> ever halts on an empty input. While access to this string would give you an avenue to solving the standard halting problem in finite time (the time taken to do a single lookup), the halting problem relative to this oracle is still uncomputable because you're now additionally tasked with solving the halting problems for machines that also know what O is, and therefore can pull off the same self-simulation trick to fool you as in the standard proof.
An algorithm existing relative to a random oracle means that if each digit of O were sampled from independent fair coin flips, then with probability 1 the algorithm could solve the problem. This is different from the standard definition of randomized complexity classes in that the correct solution to a problem can still directly depend on O itself.
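To make the definition concrete, here's a hedged sketch (illustrative only, not from the paper) of how a random oracle is usually modeled in proofs: an infinite string O whose bits are independent fair coin flips, sampled lazily and memoized so that repeated queries are consistent within a run.

```python
import secrets

class RandomOracle:
    """Lazily-sampled random oracle: each position of the infinite
    string O is a fair coin flip, fixed forever after its first query."""
    def __init__(self):
        self._bits = {}

    def query(self, index: int) -> int:
        # Sample the bit on first access, then always return the same value.
        if index not in self._bits:
            self._bits[index] = secrets.randbits(1)
        return self._bits[index]

oracle = RandomOracle()
assert oracle.query(42) == oracle.query(42)  # consistent within a run
```

The key point matching the paragraph above: both the algorithm and the problem definition may consult the same O, which is what separates relativized results from ordinary randomized computation, where the randomness is private to the algorithm.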
Relative to a random oracle, it is an exercise to prove some results of otherwise immense magnitude such as P≠NP, #P≠PSPACE, IP≠PSPACE, and so on. But the problems underlying these separations tend to be contrived, e.g. "does the string O contain a substring from some set A", where A is chosen specifically so that (i) it contains a string in O with an (a priori) 50% chance, (ii) one class information-theoretically cannot identify this string without enumerating beyond its available resource constraints, and (iii) the other class can exploit its added power to easily solve the problem. For example, perhaps a "P algorithm" would need to brute force 2^N positions for such a string, but an "NP algorithm" can nondeterministically guess where in the oracle to look to find it.
Random oracles are great for proving separations, but their applicability to the underlying classes is limited. Concretely, the result I mentioned above of IP≠PSPACE is straight-up wrong in the unrelativized world. The oracle isn't just playing with the randomness involved, it's fundamentally asking which questions one is allowed to ask, and opens an avenue toward posing a whole bunch of "junk" problems that become uninteresting as soon as the pure noise oracle string O is discarded.
In light of this, I'm not sure how to view this result. Is it to be read as Yet Another Random Oracle Separation, or is there something deeper here that I should take away from the result?
the article goes on to say
> they then adapted their algorithm to solve a real-world version of the problem, which replaces the oracle with an actual mathematical equation.
which probably negates what you said, I think
> researchers invented a fundamentally new kind of problem that a quantum computer should be able to solve exponentially faster
Telling of the status of the field.
I think the idea is that you might be able to reduce other problems to this problem space.
Is that a dig? I think it sounds promising and exciting
Ironically enough, the research itself is “quantum”