What does quantum *actually* break?

·7 min read

This post was originally an X thread, so it's more a train of thought than a well-formed blog post.

On March 31st Google Quantum AI published updated resource estimates showing that breaking 256-bit elliptic curve cryptography requires roughly 20 times fewer physical qubits than previously thought, and hours later a team from Oratomic and Caltech demonstrated that Shor’s algorithm can run at cryptographically relevant scales on as few as 10,000 neutral atom qubits. Both papers have been covered extensively. However, there has been a lot of confidently wrong analysis of what this means for the cryptography that underpins digital assets.

A large fault-tolerant quantum computer running Shor’s algorithm solves the elliptic curve discrete logarithm problem, which is the mathematical problem on which ECDSA and EdDSA digital signatures depend. It also solves integer factorisation (breaking RSA) and the finite field discrete logarithm problem (breaking classical Diffie-Hellman). These are all public-key primitives, and they are the only class of cryptographic construction that a quantum computer meaningfully threatens.

Hash functions do not break. SHA-256, RIPEMD-160, Keccak, BLAKE2, and BLAKE3 all remain secure under quantum computation. Grover’s algorithm provides a quadratic speedup against brute-force preimage search, which means a 256-bit hash offers 128-bit equivalent security against a quantum adversary, still well above any practical attack threshold. Symmetric key cryptography similarly requires at most a parameter doubling. Bitcoin’s proof of work is SHA-256 and is not meaningfully threatened because Grover’s quadratic speedup is negligible relative to the parallelism advantage of classical ASICs.
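The Grover arithmetic above is simple enough to sanity-check directly. A toy illustration, not an attack model:

```python
# Classical brute-force preimage search on an n-bit hash takes ~2^n evaluations.
# Grover's algorithm needs ~2^(n/2) quantum evaluations: a quadratic speedup,
# which halves the effective security level in bits.
def quantum_security_bits(classical_bits: int) -> int:
    """Effective security against a Grover-equipped adversary."""
    return classical_bits // 2

assert quantum_security_bits(256) == 128  # SHA-256 preimage resistance
assert quantum_security_bits(160) == 80   # RIPEMD-160: a thinner margin

# Symmetric keys follow the same arithmetic, which is why a parameter
# doubling suffices: AES-256 retains 128-bit quantum security.
```

The same halving is why Grover is not a practical threat to proof of work: a square-root speedup on a search that classical ASICs already parallelise at enormous scale does not change the economics.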

The most common claim since March 31st has been some version of “quantum breaks everything” or “this is fake, quantum isn’t real”, and both are wrong. It breaks a specific and well-understood class of problems that share a common algebraic structure: everything built on the hardness of discrete logarithms or factoring. Nothing built on hash functions or symmetric primitives is at risk.

The second most common claim is the deflection: if quantum breaks Bitcoin it also breaks banks, SWIFT, TLS, military communications, and every HTTPS connection on the internet, so blockchains are not uniquely exposed. This is true in the narrow sense that the same mathematical primitive is used everywhere, but it’s practically irrelevant. A bank can push a software update and layer sensitive information behind defence in depth, a certificate authority can rotate its keys, and browsers can turn on post-quantum TLS. The best way to think about this: in the last two weeks, both Google and Cloudflare brought their PQC migration deadlines forward to 2029 with a blog post announcement. It’s obviously hard work, but they can just decide to do that. Centralised systems have a single decision-maker, a deployment pipeline, and very often the ability to force-migrate their entire user base on a schedule.

Blockchains have none of these things. Protocol changes require forks that take years of debate and community consensus, and individual users must migrate their own keys. The centralised internet will migrate quietly and on time (this is already happening - 62% of Cloudflare's TLS traffic is post-quantum). Whether decentralised systems can do the same is a coordination question that is genuinely open.

A third camp has declared the problem already solved, pointing to prototype proposals and early research: BIP-360, the QSB hash-to-signature puzzle, and zk-STARK recovery prototypes are all interesting ideas at various stages of maturity, but none of them are deployed on mainnet and none of them have been through the multi-year consensus process that Bitcoin requires for any protocol change. The engineering distance between a working concept and a network-wide migration is enormous, and treating early research as a finished defence creates complacency that makes the problem harder to solve.

Adam Back argued recently that the prudent approach is to give users roughly a decade to migrate their keys to quantum-resistant formats, and I think that is directionally right. However, if migration takes a decade at the tail end, and the most optimistic credible estimates for a cryptographically relevant quantum computer are now in the late 2020s to early 2030s, then the migration timeline and the threat timeline have already begun to overlap (Mosca’s inequality). The implication is that migration planning needs to be decoupled from Q-Day predictions entirely. Waiting for consensus on when quantum computers will be capable enough is itself the risk, because the migration is the long pole regardless of when the threat materialises.
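Mosca’s inequality makes the overlap concrete: if X is how long your keys must remain secure, Y is how long migration takes, and Z is the time until a cryptographically relevant quantum computer, you are exposed whenever X + Y > Z. A toy check with illustrative numbers (the year figures are assumptions for the example, not predictions):

```python
def mosca_exposed(shelf_life_years: float,
                  migration_years: float,
                  years_to_crqc: float) -> bool:
    """Mosca's inequality: risk exists when X + Y > Z."""
    return shelf_life_years + migration_years > years_to_crqc

# Illustrative: a decade-long migration (Y=10) against an optimistic
# CRQC estimate of ~5 years out is already in violation, even if keys
# need zero residual shelf life (X=0).
assert mosca_exposed(0, 10, 5) is True

# Even a generous 15-year CRQC horizon leaves no slack once keys must
# also stay secure for a few years after migration completes.
assert mosca_exposed(0, 10, 15) is False
assert mosca_exposed(6, 10, 15) is True
```

The point of the inequality is exactly the one made above: Y is fixed by governance and coordination, not by physics, so debating Z is not a reason to delay starting Y.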

The question of migration ordering is also more complicated since the Google paper. The conventional wisdom had been to prioritise wallets with public keys already exposed on-chain: P2PK outputs, reused P2PKH addresses, and Taproot (P2TR) outputs where the public key is visible by default. This makes intuitive sense because those keys are standing targets that an attacker can work on offline with no time constraint. But the Google paper’s resource estimates show that on-spend attacks are feasible. Their low-gate circuit variant can execute in roughly 9 minutes from a primed state, which is within Bitcoin’s 10-minute block interval and well within slower confirmation windows on other chains. If on-spend attacks are feasible, the distinction between exposed and unexposed keys somewhat collapses: every key becomes vulnerable at the moment of transaction, not just those already sitting on-chain. This assumes we get fast-clock architectures before slow-clock architectures, which is still unknown. The consequence is that the migration priority probably needs to be reframed around economic exposure (how much value is at risk) rather than purely around address-type vulnerability. Large holders need to migrate first, regardless of what address format they are using. Basically, we need to migrate the highest-value wallets first, which is not typically how we approach migrations.
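The reframing is easiest to see side by side. A hypothetical sketch (all wallet data below is made up for illustration):

```python
# Made-up wallet set: one small exposed key, one large exposed key,
# one very large wallet whose public key is not yet on-chain.
wallets = [
    {"addr_type": "P2PK",   "pubkey_exposed": True,  "btc": 50},
    {"addr_type": "P2TR",   "pubkey_exposed": True,  "btc": 12_000},
    {"addr_type": "P2WPKH", "pubkey_exposed": False, "btc": 80_000},
]

# Conventional ordering: exposed public keys first (standing targets).
by_exposure = sorted(wallets, key=lambda w: not w["pubkey_exposed"])

# Reframed ordering once on-spend attacks are feasible: largest economic
# exposure first, regardless of address format.
by_value = sorted(wallets, key=lambda w: -w["btc"])

# The unexposed-but-largest wallet jumps to the front of the queue.
assert by_value[0]["addr_type"] == "P2WPKH"
```

Under the old model the P2WPKH wallet could safely migrate last; under the on-spend model it is the single biggest liability on the list.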

Ethereum is in the strongest position of the major ecosystems. The Ethereum Foundation has a dedicated post-quantum team, a published roadmap targeting L1 protocol changes by 2029, and more than ten client teams running weekly PQ interop devnets. The general direction is to replace BLS validator signatures with hash-based alternatives under leanSig, use STARK-based aggregation for scalability, and leverage account abstraction for incremental user migration.

Bitcoin has no coordinated programme, no funded team, no fork schedule, and a governance model that makes imposing deadlines on its user base very difficult. This isn’t to imply that Bitcoin is screwed and can’t migrate; these are just the facts. I personally believe we will get the migration done in time, but we need to start ASAP.

Solana public keys are used directly as addresses, which means 100% of accounts are quantum-vulnerable rather than a subset. We’ve done some work with Solana on options for PQ signature selection and PQ testnet creation. I’m hoping to release this soon.

The NIST post-quantum standards are finalised. ML-DSA, SLH-DSA, and ML-KEM are ready. The signature schemes and parameter sets are well understood and implementations are shipping across the non-blockchain internet. The remaining work is integration, migration tooling, and deployment.
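Part of why the remaining work is integration rather than research is size: the standardised schemes trade the tiny keys and signatures of elliptic curves for much larger ones, which matters on chains where every byte is paid for. Approximate sizes from the FIPS 204/205 parameter sets, alongside the classical curves they would replace:

```python
# Approximate public-key / signature sizes in bytes. PQ figures are from
# the FIPS 204 (ML-DSA) and FIPS 205 (SLH-DSA) standards; classical
# figures use compressed keys and raw (non-DER) signatures.
sizes = {
    "Ed25519":           {"pk": 32,   "sig": 64},
    "ECDSA/secp256k1":   {"pk": 33,   "sig": 64},
    "ML-DSA-44":         {"pk": 1312, "sig": 2420},
    "ML-DSA-65":         {"pk": 1952, "sig": 3309},
    "SLH-DSA-128s":      {"pk": 32,   "sig": 7856},
}

for name, s in sizes.items():
    blowup = s["sig"] / sizes["Ed25519"]["sig"]
    print(f"{name:15s} pk={s['pk']:5d}B sig={s['sig']:5d}B ({blowup:.0f}x Ed25519 sig)")
```

A roughly 38x signature blowup for ML-DSA-44, or 123x for the hash-based SLH-DSA-128s, is exactly why the Ethereum roadmap leans on STARK aggregation and why block space economics, not scheme security, dominates the blockchain integration debate.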

Finally, at this point we should assume that the resource estimates will keep falling. Google’s paper notes that ECDLP has received relatively little optimisation attention compared to RSA, and the Chevignard group at INRIA published a further reduction at EUROCRYPT 2026. The efficient frontier is being pushed from both the algorithmic side and the architectural side simultaneously. No machine exists today that can execute either paper’s construction, and we do not know the timeline with much certainty, but the engineering specification for a cryptographically relevant quantum computer now sits within the published roadmaps of multiple hardware labs.

For anyone deploying new cryptographic infrastructure on ECDLP curves without a concrete migration plan, I think it’s increasingly difficult to construct a reasonable justification. For existing systems, the only question is whether the work starts now or later.