Albert Anastasia (@AlbertAnaBoss)
Stabilizer codes have dominated the quantum error correction conversation for a decade. Surface codes, LDPC codes, the Google and IBM roadmaps: all stabilizer-based. The family traces back to the mid-1990s, when Peter Shor's 9-qubit code and Andrew Steane's 7-qubit code proved for the first time that quantum information _could_ be protected from noise at all. That was genuinely non-obvious, because the no-cloning theorem rules out the classical trick of simply copying the data.

Stabilizer codes work, they're well understood, and everyone knows how to benchmark them. But they have real weaknesses. They struggle with:

1. energy gradually leaking out of the system (amplitude damping), and
2. qubits that get lost mid-computation without the system knowing which one went missing (deletion errors).

And the logical gates you can build cheaply on top of stabilizer codes are limited (transversal gates in the Clifford hierarchy). The workarounds add complexity and cost, which eats into the efficiency gains of having the code at all.

A new paper from Yingkai Ouyang (Sheffield) and Gavin Brennen (Macquarie and BTQ), "A theory of quantum error correction for permutation-invariant codes" ([arxiv.org/abs/2602.13638](arxiv.org/abs/2602.13638)), opens a new theory for an alternative. It complements earlier work from the same team showing that permutation-invariant (PI) codes, used together with more conventional stabilizer codes, can activate the notoriously challenging non-Clifford logical gates needed for universal quantum computing, and with lower overhead than using stabilizer codes alone.
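As a rough intuition for what "permutation-invariant" means here (this is an illustration, not the paper's construction): PI codewords live in the symmetric subspace of n qubits, which is spanned by Dicke states, the equal-weight superpositions of all basis states with a fixed number of ones. A few lines of NumPy can verify that such states are unchanged by swapping any pair of qubits:

```python
import itertools
import numpy as np

def dicke_state(n, k):
    """Equal superposition of all n-qubit basis states with exactly k ones."""
    psi = np.zeros(2 ** n)
    for ones in itertools.combinations(range(n), k):
        psi[sum(1 << q for q in ones)] = 1.0
    return psi / np.linalg.norm(psi)

def swap_qubits(psi, i, j):
    """Apply the SWAP gate on qubits i and j to a state vector."""
    out = np.zeros_like(psi)
    for idx in range(len(psi)):
        bi, bj = (idx >> i) & 1, (idx >> j) & 1
        # exchange bits i and j in the computational-basis index
        new = idx & ~((1 << i) | (1 << j))
        out[new | (bi << j) | (bj << i)] = psi[idx]
    return out

n = 4
for k in range(n + 1):
    psi = dicke_state(n, k)
    for i, j in itertools.combinations(range(n), 2):
        assert np.allclose(swap_qubits(psi, i, j), psi)
print("every Dicke state survives every qubit swap")
```

Because no individual qubit is distinguished, a deletion error (losing a qubit without knowing which one) looks the same no matter which qubit vanished, which is exactly the failure mode where stabilizer codes struggle.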