A couple of days ago, a draft of Google's article about achieving quantum supremacy on a superconducting quantum computer leaked onto the net. The text was quickly removed, and rumors and speculation, including erroneous claims, multiplied around it. The author of this post, Professor Scott Aaronson, is one of the leading experts on quantum algorithms and maintains an excellent blog. In his latest post he answers questions about quantum supremacy.

From the translator: I'm not an expert in algorithmic complexity, or in quantum algorithms in particular, but the post seemed too interesting not to discuss on Habr. Please PM me about any errors in terminology or unusual usage.

You've probably seen stories in *the Financial Times*, *Technology Review*, *CNET*, on Facebook, Reddit, Twitter, or elsewhere about a Google group that achieved quantum supremacy with a 53-qubit superconducting computer. These stories are easy to find, and I won't link them here, since none of them should exist yet.

As the world now knows, Google is indeed preparing a major announcement about quantum supremacy, to appear together with a scientific article in a high-profile journal (which one? the choice is between two). Hopefully this will happen within a month.

Meanwhile NASA, which employs some of the article's authors, inadvertently posted an older version of the article on a public website. It wasn't there for long, but long enough to make it into *the Financial Times*, my inbox, and a million other places. Fact-free speculation about its likely meaning is multiplying.

It seems the world has been deprived of a clean "moon landing" moment, in which the Extended Church-Turing Thesis is refuted by experimental evidence at a press conference. It will be more like the Wright brothers' flight, rumors and half-truths of which leaked into public consciousness between 1903 and 1908, when Wilbur and Orville finally agreed to give public demonstrations. (This time, fortunately, everything got cleared up much faster!)

This post is not an official announcement or confirmation of anything. Although the lightning is already visible, the thunder belongs to the Google group, at a time and place of its choosing.

Rather, I want to stem the flood of misinformation and offer here, in my role as blogger and "public intellectual", my Scott's Supreme Quantum Supremacy FAQ. You know, in case you were suddenly curious about quantum supremacy, or always wondered what would happen if some search company from Mountain View or wherever were hypothetically to claim to have achieved quantum supremacy.

Without further ado, here it is:

**Q1. What is quantum computational supremacy?**

Often shortened to just "quantum supremacy", the term refers to using a quantum computer to solve some well-defined set of problems that would take orders of magnitude longer on a classical computer using any known algorithm, and not for incidental reasons, but because of asymptotic quantum complexity. The emphasis here is on being as confident as possible that the problem really was solved quantumly and really is classically hard, and ideally on achieving the speedup soon (with the noisy, non-universal quantum computers we already have or will have soon). If the problem is also useful for something, so much the better, but that's not at all necessary. The Wright flyer and Fermi's pile weren't useful in themselves either.

**Q2. If Google really has achieved quantum supremacy, does that mean that now no code is uncrackable, as Democratic presidential candidate Andrew Yang recently claimed?**

No, it doesn't (although I continue to like Yang's candidacy).

There are two issues. First, the devices being built by Google, IBM, and others have 50-100 qubits and no error correction. Breaking RSA with Shor's algorithm would require several thousand logical qubits. With known error-correction methods, that could easily translate into millions of physical qubits, and of higher quality than any we have now. I don't think anyone is close to that, and we have no idea how soon anyone will be.

The second problem is that even in a hypothetical future with error-corrected QCs we will, as far as we know today, only be able to break some codes, not all of them. By an unhappy coincidence, the public-key codes that can be broken include most of those we use to secure the Internet: RSA, Diffie-Hellman, elliptic-curve cryptography, and so on. But private-key cryptography should be mostly unaffected. And there are public-key cryptosystems (for example, lattice-based ones) that nobody knows how to break even with a QC after 20+ years of trying, and there are ongoing efforts to migrate to those systems. For example, see my letter to Rebecca.

**Q3. What calculation does Google intend to perform (or has already performed) that is believed to be classically hard?**

I can tell you, although I feel silly doing so. The calculation is this: the experimenter generates a random quantum circuit C (i.e., a random sequence of 1-qubit and nearest-neighbor 2-qubit gates, of depth, say, 20, acting on a 2D grid of n = 50-60 qubits). The experimenter then sends C to the quantum computer and asks it to apply C to the all-0 initial state, measure in the {0,1} basis, send back the observed n-bit string, and repeat several thousand or million times. Finally, using its knowledge of C, the experimenter runs a statistical test of whether the outputs are consistent with what the quantum computer was supposed to do.

This is not a problem with a single right answer, unlike factoring a number. The circuit C gives rise to a probability distribution (call it D_{C}) over n-bit strings, and the task is to output samples from that distribution. In fact, D_{C} will typically have support on all 2^{n} strings, so many that if the QC works as expected, it will never output the same string twice. Crucially, the distribution D_{C} is not uniform. Some strings enjoy constructive interference of amplitudes and are therefore more likely, while others suffer destructive interference and are less likely. And even though we see only a tiny fraction of the 2^{n} possible samples, we can check statistically whether those samples lean toward the expected non-uniform distribution, and thereby confirm that we got something hard to reproduce classically.

TL;DR: the quantum computer is simply asked to apply a random (but known) sequence of quantum operations, not because we care about the result, but because we're trying to prove that the QC can clearly perform this well-defined task faster than a classical computer.
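To make the task above concrete, here is a minimal sketch in Python of the whole pipeline for a toy device of n = 5 qubits, simulated by brute force. The gate set (Haar-random 1-qubit gates plus CZs on a 1D line) is a simplification of my own for illustration; Google's actual chip uses a 2D layout and a different gate set:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d):
    """Sample a Haar-random d x d unitary via the QR trick."""
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    ph = np.diag(r) / np.abs(np.diag(r))
    return q * ph  # fix column phases so the distribution is Haar

def apply_1q(state, u, q, n):
    """Apply the 2x2 unitary u to qubit q of an n-qubit statevector."""
    s = state.reshape(2 ** q, 2, 2 ** (n - q - 1))
    return np.einsum('ab,ibj->iaj', u, s).reshape(-1)

def apply_cz(state, q, n):
    """Apply a controlled-Z gate to adjacent qubits q and q+1."""
    s = state.reshape(2 ** q, 2, 2, 2 ** (n - q - 2)).copy()
    s[:, 1, 1, :] *= -1.0  # phase-flip the |11> branch
    return s.reshape(-1)

def random_circuit_state(n, depth):
    """Statevector after a random circuit C: layers of random 1-qubit
    gates followed by CZs on alternating nearest-neighbor pairs."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0  # start in |00...0>
    for layer in range(depth):
        for q in range(n):
            state = apply_1q(state, haar_unitary(2), q, n)
        for q in range(layer % 2, n - 1, 2):
            state = apply_cz(state, q, n)
    return state

n, depth, shots = 5, 10, 2000
psi = random_circuit_state(n, depth)
probs = np.abs(psi) ** 2          # the distribution D_C over n-bit strings
probs /= probs.sum()              # renormalize away floating-point drift
samples = rng.choice(2 ** n, size=shots, p=probs)  # what the QC sends back
```

Note that the simulation cost grows as 2^n, which is exactly why this only works as a demo at n = 5 and becomes a supremacy experiment at n = 50-60.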

**Q4. But if a quantum computer merely executes some garbage code whose only purpose is to be hard to simulate classically, who cares? Isn't this just an overhyped nothing-burger?**

No. It's certainly not a burger with everything, but it's at least a something-burger.

Have at least a modicum of respect for the immensity of what we're talking about, and the engineering genius it took to make it happen. Before quantum supremacy (QS), the QS-skeptics could scoff that, even after billions of dollars and 20+ years of work, quantum computers still couldn't do anything faster than your laptop (at least, nothing quantum). In a post-quantum-supremacy world, they can no longer say that. A superposition of 2^{50} or 2^{60} complex numbers will have been manipulated using vastly less time and memory than 2^{50} or 2^{60}.

I keep bringing up the Wright flyer only because the gap between what we're talking about and the dismissals I see in various corners of the Internet is so vast. It's as if you believed that flight was fundamentally impossible, and then saw a rickety wooden airplane: the sight might not shake your faith, but it certainly shouldn't strengthen it.

Was I right, all those years ago, to worry that the constant hype around much lesser QC achievements would wear people out, so that they'd shrug when something actually worth caring about finally happened?

**Q5. Years ago, you berated the masses for their enthusiasm about D-Wave and its claims of quantum speedups for optimization problems via quantum annealing. Now you berate them for their lack of enthusiasm about quantum supremacy. Why are you so fickle?**

Because my goal is not to push the level of enthusiasm in some fixed direction, but to be right. Wasn't I, in hindsight, mostly right about D-Wave, even when my statements made me very unpopular in certain circles? Well, now I'm trying to be right about quantum supremacy too.

**Q6. If quantum supremacy calculations just amount to sampling from a probability distribution, how do you check that the sampling was done correctly?**

Glad you asked! That's precisely the subject on which I and others have been developing theory for the past decade. I already gave you the short version in my answer to Q3: you check by running statistical tests on the samples the quantum computer returns, verifying that they are concentrated around the peaks of the chaotic probability distribution D_{C}. One convenient way of doing this, which Google calls the "linear cross-entropy test", is simply to sum up Pr[C outputs s_{i}] over all the samples s_{1}, …, s_{k} that the QC returned, and then declare the test a success if and only if the sum exceeds some threshold (say, bk/2^{n}, for some constant b strictly between 1 and 2).

Admittedly, to apply this test you need to calculate the probabilities Pr[C outputs s_{i}] on a classical computer, and the only known way to do that takes time ~2^{n}. Does that bother anyone? No, not if n = 50 and you're Google, able to handle numbers like 2^{50} (though not 2^{1000}, which exceeds a googol, cough, cough). By running a huge cluster of classical cores for, say, a month, you can eventually confirm the outputs your QC produced in a couple of seconds, and verify along the way that the QC was several orders of magnitude faster. This does mean, though, that sampling-based experiments are almost tailor-made for devices with ~50 qubits: even with 100 qubits, we'd have no idea how to verify the result using all the computing power available on Earth.
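The linear cross-entropy test can be sketched in a few lines of Python. Here I cheat by drawing the ideal probabilities directly from Porter-Thomas (exponential) statistics rather than simulating an actual circuit C; that statistical assumption, and the choice b = 1.5, are mine for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 10, 100_000   # qubits, number of returned samples
N = 2 ** n

# Stand-in for the ideal output distribution D_C of a random circuit:
# Porter-Thomas statistics, i.e. probabilities distributed as Exp(1)/N
# (an assumption used here instead of computing a real circuit's amplitudes).
probs = rng.exponential(size=N)
probs /= probs.sum()

def linear_xeb_passes(samples, b=1.5):
    """Linear cross-entropy test: sum Pr[C outputs s_i] over the returned
    samples; succeed iff the sum exceeds b*k/2^n, for a constant 1 < b < 2."""
    return probs[samples].sum() > b * len(samples) / N

quantum_samples = rng.choice(N, size=k, p=probs)  # an ideal QC samples from D_C
spoofed_samples = rng.integers(N, size=k)         # cheap classical spoof: uniform

print(linear_xeb_passes(quantum_samples))  # True: concentrates on D_C's peaks
print(linear_xeb_passes(spoofed_samples))  # False: uniform guessing scores ~k/2^n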

(Note that this issue is specific to sampling experiments of the kind being run now. If Shor's algorithm were used to factor a 2000-digit number, you could easily check the result by multiplying the claimed factors and testing their primality. Likewise, if a QC were used to simulate a complex biomolecule, you could check the result by performing an experiment on the molecule.)

**Q7. Wait. If classical computers can verify the results of a quantum supremacy experiment only in a regime where classical computers can still simulate the experiment (albeit very slowly), how can you speak of "quantum supremacy"?**

Come on. With a 53-qubit chip you see a speedup of several million times, in a regime where you can still directly verify the results, and you also check that the speedup grows exponentially with the number of qubits, exactly as asymptotic analysis predicts. That is not nothing.

**Q8. Is there a mathematical proof that no fast classical algorithm can match the results of a sampling-based quantum supremacy experiment?**

At the moment, no. But that's not the quantum supremacy researchers' fault! As long as theorists can't prove even basic conjectures like P≠NP or P≠PSPACE, there's no hope of unconditionally ruling out a fast classical simulation. The best we can hope for is conditional hardness, and we do have some results of that kind: for example, the paper on BosonSampling, or the Bouland et al. paper on the average-case #P-hardness of computing amplitudes of random circuits, or my paper with Lijie Chen ("Complexity-Theoretic Foundations of Quantum Supremacy Experiments"). The main open problem in this area, in my view, is to prove better conditional hardness results.

**Q9. Are there any applications of sampling-based quantum supremacy?**

Until recently, the answer was obviously "no"! (And I know, because I was one of the people saying so.) Recently, however, the situation has changed: for example, because of my certified randomness protocol, which shows how a sampling-based QS experiment can be repurposed to generate bits that are provably random, even to a skeptical third party (under computational assumptions). This, in turn, has applications to cryptocurrencies and other cryptographic protocols. I hope more such applications will follow.

**Q10. If quantum supremacy experiments just generate random bits, what's interesting about that? Isn't it trivial to turn qubits into random bits simply by measuring them?**

The point is that a QC doesn't generate uniformly distributed random bits. Rather, it samples from some complicated, correlated probability distribution over 50- to 60-bit strings. In my protocol, these deviations from uniformity play a central role in how the QC convinces a skeptic that it really did choose the bits randomly, rather than secretly using some pseudorandom generator.

**Q11. Haven't decades of quantum-mechanical experiments, for example Bell inequality violations, already demonstrated quantum supremacy?**

This is a purely semantic question. Those experiments demonstrated other kinds of quantum supremacy: in the case of Bell inequality violations, you could call it "quantum correlational supremacy". They did not demonstrate quantum computational supremacy, i.e., the ability to compute something that is classically intractable (where the classical simulation is not restricted to spatial locality, etc.). Today, when people use the phrase "quantum supremacy", they mean quantum computational supremacy.

**Q12. Even so, aren't there countless examples of materials and chemical reactions that are hard to simulate classically, as well as special-purpose quantum simulators (like those of the Lukin group at Harvard)? Why don't those already count as quantum computational supremacy?**

Under some people's definitions, they do! The key difference with Google's machine is that it is fully programmable: you can load an arbitrary sequence of nearest-neighbor 2-qubit gates simply by sending commands from a classical computer.

In other words, QS-skeptics can no longer sneer that, sure, quantum systems are hard to simulate classically, but only because nature in general is hard to simulate, and you don't get to redefine a random molecule found in a field as a computer simulating itself. Under any reasonable definition, the superconducting devices being built by Google, IBM, and others are computers in the fullest sense of the word.

**Q13. Did you (Scott Aaronson) invent the concept of quantum supremacy?**

**(Translator's note: this answer seemed rather dry to me, so I left it untranslated. On request, I'll happily translate it.)**

…*par excellence*—albeit, one that's still (in 2019) much too hard to perform.

The key idea of instead demonstrating quantum supremacy using a *sampling problem* was, as far as I know, first suggested by Barbara Terhal and David DiVincenzo, in a farsighted paper from 2002. The modern push for sampling-based supremacy experiments started around 2011, when Alex Arkhipov and I published our paper on BosonSampling, and (independently of us) Bremner, Jozsa, and Shepherd published their paper on the commuting Hamiltonians model. These papers showed, not only that "simple," non-universal quantum systems can solve apparently-hard sampling problems, but also that an efficient classical algorithm for the same sampling problems would imply a collapse of the polynomial hierarchy. Arkhipov and I also made a start toward arguing that even the *approximate* versions of quantum sampling problems can be classically hard.

As far as I know, the idea of “Random Circuit Sampling”—that is, generating your hard sampling problem by just picking a random sequence of 2-qubit gates in (say) a superconducting architecture—originated in an email thread that I started in December 2015, which also included John Martinis, Hartmut Neven, Sergio Boixo, Ashley Montanaro, Michael Bremner, Richard Jozsa, Aram Harrow, Greg Kuperberg, and others. The thread was entitled “Hard sampling problems with 40 qubits,” and my email began “Sorry for the spam.” I then discussed some advantages and disadvantages of three options for demonstrating sampling-based quantum supremacy: (1) random circuits, (2) commuting Hamiltonians, and (3) BosonSampling. After Greg Kuperberg chimed in to support option (1), a consensus quickly formed among the participants that (1) was indeed the best option from an engineering standpoint—and that, if the theoretical analysis wasn’t yet satisfactory for (1), then that was something we could remedy.

After that, the Google group did a huge amount of analysis of random circuit sampling, both theoretical and numerical, while Lijie Chen and I, and Bouland et al., supplied different forms of complexity-theoretic evidence for the problem's classical hardness.

**Q14. If quantum supremacy has been achieved, what does it mean for the skeptics?**

Wow, I wouldn't want to be them right now! They could retreat to the position that of course quantum supremacy is possible (who ever said otherwise? certainly not them!), and that the whole difficulty has always been quantum error correction. Indeed, some of them have consistently said exactly that all along. But others, my good friend Gil Kalai among them, are on record, right here on this blog, predicting that quantum supremacy could never be achieved for fundamental reasons. Now they can't wriggle out of it.

**Q15. What's next?**

If quantum supremacy really has been achieved, then I think the Google group already has the hardware needed to demonstrate my protocol for generating certified random bits. And that is indeed one of the next things they plan to do.

Beyond that, one obvious next step is to use a programmable QC with 50-100 qubits for useful quantum simulations (for example, of condensed-matter systems) much faster than any known classical method. Another obvious step is to demonstrate the use of error correction to keep a logical qubit alive longer than the physical qubits it is built from. There is no doubt that IBM, Google, and the other players will race each other toward both of these milestones.