The Case Against Quantum Computing (ieee.org)
138 points by niccl on Nov 15, 2018 | 85 comments


The core argument here, concerning the number of variables that need to be manipulated, isn't very clear to me - I can't tell if that's because it's an unclear argument in general, or if the author tried to simplify it for laypeople to the extent that it became unconvincingly vague for people who know a thing or two about the subject area. Given the status of the author, I'm willing to give him the benefit of the doubt (and also assume that some clarity was lost in translation).

Here's an easier-to-follow argument I've heard before (also simplified for laypeople) that might be close enough to the one the author intends to make to be useful:

Quantum algorithms are supposed to be much more efficient than conventional algorithms at certain kinds of tasks. That efficiency is calculated with the assumption that each system starts in a state where it's prepared to execute the algorithm. For conventional computers, that makes sense for a number of reasons (e.g. data/program input is linear). For quantum computers, this assumption doesn't necessarily make sense because it takes a significant amount of work (in terms of measurements, etc.) to get the system (the machinery and physical representation of the initial data) into the right quantum state. In fact, the amount of work required scales horribly with the size of the input (in terms of qubits). If you think of those state preparation steps as part of the cost of the algorithm, the quantum algorithms won't, in general, outperform the classical ones.

I've been out of the quantum computation loop for nearly a decade, so I have no idea whether that's a fair way of presenting the problem, but hopefully it helps someone understand the claims being made in the article.


The argument here is nonsensical.

For example, "With what exactitude must, say, the square root of 2 (an irrational number that enters into many of the relevant quantum operations) be experimentally realized? Should it be approximated as 1.41 or as 1.41421356237? Or is even more precision needed? Amazingly, not only are there no clear answers to these crucial questions, but they were never even discussed!" This is completely wrong. Of course the field has studied this problem; people aren't idiots.

But your concern is also wrong. Preparing the initial state is easy; quantum algorithms generally start in the all-0s state. Even quantum devices like the D-Wave machine, for which there is considerable skepticism about its workings and power, can easily be initialized in the all-0s state.

(With error correction, one needs to prepare the encoded all-0s state, which is indeed difficult to prepare. But in principle it is doable.)


Doable in principle? My impression was that the author doesn't argue against this. He claims it's unlikely to be doable in practice. That's the whole point of the article.


His argument is pretty poor. He refers to the threshold theorem, which does rely on certain assumptions, and then claims that these assumptions cannot be satisfied in practice. This is apparently because "continuous quantities can be neither measured nor manipulated exactly"; however, this ISN'T an assumption of the threshold theorem. In fact, the whole premise of the threshold theorem is that they can't. It's a straw man argument, the same one he's been making since the 1990s.


> unconvincingly vague for people who know a thing or two about the subject area.

I feel like this phrase could be applied to many things I come across... unconvincingly vague


Gate-based quantum computers are similar to classical computers in that you can decompose any operation into a combination of a small set of universal gates (like single-bit NOT and two-bit NAND for classical computers). Typically you combine arbitrary single-qubit rotations along the X, Y, Z axes with a two-qubit gate like CNOT or iSWAP to obtain a full set of universal gates. You can then decompose your algorithm into these gates to run it, which is often doable efficiently (e.g. the Grover algorithm can be implemented easily using the gate set described above). You can also efficiently implement any classical gate operations on a quantum computer using a set of universal, reversible gates (e.g. single qubit NOT and classical CNOT). Finally, you need to read out qubit states, so you have to provide a readout of individual qubits, typically by implementing the projection operator along the Z axis.
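As a rough illustration (not tied to any particular hardware), the gate set above can be played with numerically: single-qubit rotations and CNOT are just small matrices, and the tensor product lifts them to the full register. A minimal numpy sketch, using the standard matrix conventions:

```python
import numpy as np

# Single-qubit rotation about the X axis by angle theta (standard convention).
def rx(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

# Two-qubit CNOT in the computational basis |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

I = np.eye(2)

# One circuit step: rotate qubit 0, leave qubit 1 alone, then CNOT.
step = CNOT @ np.kron(rx(np.pi), I)

# rx(pi) maps |0> to -i|1|>, after which CNOT flips the target:
# |00> -> -i|10> -> -i|11>.
state = np.zeros(4, dtype=complex)
state[0] = 1.0                     # start in |00>
state = step @ state
```

Any deeper circuit is just more of these matrix products, which is also why classical simulation blows up: the state vector has 2^N entries.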

Now, for each of these operations you can characterize your error rate. For the processor I built during my PhD those error rates were around 9 % for qubit readout, 10 % for the universal two-qubit gate and around 1 % for single qubit gates. These errors stemmed from qubit decoherence (dephasing and relaxation), limitations of the readout and limitations of the signal precision for the manipulation. Errors are often different for each qubit and depend on the operation and the state the qubits are in (there’s crosstalk and different decoherence depending on the qubit frequencies as well), but you can usually give an upper bound for the error. Systems today are much better already btw.

Given the different error rates you can very easily see how much error you will accumulate during a given gate sequence and what algorithms you will be able to run. We were for example able to perform the 2-qubit Grover algorithm with more than 50 % fidelity, which demonstrated quantum speed up for that very simple case.

For a small subset of your qubits you can also perform quantum state tomography and quantum process tomography to statistically characterize their state and the quantum evolution they went through (including errors and decoherence), which allows you to characterize the errors in your system experimentally in a more precise way. Again this technique is limited by the fidelity of single qubit operations and readout, but it can nevertheless provide very good data for error analysis. For larger qubit systems tomography becomes intractable but there are other techniques that can allow you to characterize errors for these systems as well (e.g. randomized benchmarking).
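Using the illustrative error rates quoted above, and assuming independent errors (an approximation; the comment notes crosstalk and state dependence), the accumulated-error estimate is just a product of per-operation success probabilities:

```python
# Rough fidelity estimate for a gate sequence, assuming independent errors:
# multiply per-operation success probabilities (1 - error rate).
# The rates below are the illustrative numbers from the comment above.
ERR = {"single": 0.01, "two": 0.10, "readout": 0.09}

def circuit_fidelity(counts):
    """counts: dict mapping operation type -> how many times it occurs."""
    f = 1.0
    for op, n in counts.items():
        f *= (1.0 - ERR[op]) ** n
    return f

# A toy 2-qubit Grover-like sequence: 8 single-qubit gates,
# 2 two-qubit gates, 2 readouts.
f = circuit_fidelity({"single": 8, "two": 2, "readout": 2})
# f comes out around 0.62, i.e. above the 50 % mark the comment mentions.
```

The operation counts here are hypothetical; the point is only that a back-of-the-envelope product of fidelities already tells you which circuit depths are feasible.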

So please stop telling people that it’s not possible to quantify errors; it is simply not true. Quantum computers are like classical computers in the sense that they obey the laws of physics and have various error sources that can be measured and characterized. The precision that’s required is higher than for classical computers, but there’s no fundamental reason why quantum computers can’t work; it’s just a tough engineering problem.


Here is a way to see the fallacy of the “OMG, it's 10^300 variables, that's crazy” style of argument.

Consider a probabilistic classical algorithm on 500 bits. Perhaps a Monte Carlo simulation of an Ising model for example.

Note first that the most general probability distribution over the 500 classical bits takes 2^500 real numbers to specify. (You have to specify P(000…0) and P(000…1) and… P(111…1)). [You should compare this to the 2^501 real parameters it takes to specify the quantum state of 500 qubits.]

To generate the most general such distribution perhaps you are restricted to using circuits where the gates act on at most n bits at a time. Each gate can be described by a 2^n x 2^n bistochastic matrix with (2^n - 1)^2 real parameters. [You should compare this to the (2^n)^2 real parameters it takes to specify a 2^n x 2^n unitary matrix for a quantum gate acting on n qubits.]
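The parameter counting above can be sanity-checked for small n. This is just standard dimension counting (and, following the comment, it ignores normalization constraints), nothing quantum-specific:

```python
# Free real parameters for the objects in the comparison above,
# for a system of n bits / qubits (dimension m = 2**n).

def classical_dist_params(n):
    # P(00...0) ... P(11...1): one real number per bit string.
    return 2 ** n

def quantum_state_params(n):
    # One complex amplitude per bit string = two reals each.
    return 2 * 2 ** n

def bistochastic_params(n):
    # An m x m doubly stochastic matrix has (m - 1)**2 free entries
    # (row and column sums are all fixed to 1).
    m = 2 ** n
    return (m - 1) ** 2

def unitary_params(n):
    # The unitary group U(m) has m**2 real dimensions.
    return (2 ** n) ** 2
```

For 500 bits this gives the 2^500 and 2^501 figures from the comment; the classical and quantum counts differ only by a factor of two, which is the whole point.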

Obviously it's nuts to imagine you can generate all classical probability distributions, with their 2^500 real parameters, particularly if you’re so mad as to think you are going to do it using a circuit comprised only of these n-bit gates!

Therefore useful classical monte carlo computing is obviously decades away.

(Oh, and please trust me, I'm well known for stuff that isn't quantum computing.)


Let me present it a different way.

Someone comes to you with two formal models of computing. Both models involve representing the state of the computer as a vector of real numbers, they both involve finite dimensional subsystems combined with a tensor product, both involve gates defined over the reals also combined via the tensor product and so on. That is, both models are just about evolution of a vector in some (very high) dimensional real vector space according to gates acting on a small number of subsystems. In fact these two models are identical, except for the fact that in model A the readout procedure involves computing a property of the output vector with the 1-norm, while in model B the 2-norm is used.
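A minimal sketch of the two models, assuming the simplest possible one-subsystem gates (a 2x2 stochastic matrix for model A, a 2x2 unitary for model B). The readout rule is the only difference: entries of the vector (1-norm) versus squared magnitudes (2-norm). Applying the same gate twice also shows the one behavioral difference the 2-norm buys you: interference.

```python
import numpy as np

# Model A: state is a probability vector; gates are stochastic matrices;
# readout probabilities are the entries themselves (1-norm normalized).
flip = np.array([[0.5, 0.5],
                 [0.5, 0.5]])          # a fair coin flip
p = flip @ np.array([1.0, 0.0])        # start deterministically in state 0

# Model B: state is an amplitude vector; gates are unitary matrices;
# readout probabilities are squared magnitudes (2-norm normalized).
hadamard = np.array([[1, 1],
                     [1, -1]]) / np.sqrt(2)
a = hadamard @ np.array([1.0, 0.0])
probs = np.abs(a) ** 2                 # same 50/50 statistics as the coin

# The crucial difference: apply the gate twice. The coin stays random,
# but the amplitudes interfere and the qubit returns to state 0.
p2 = flip @ p
a2 = hadamard @ a
```

After one gate the two models are statistically indistinguishable; after two, the 2-norm model has undone itself via destructive interference, which is exactly the resource quantum algorithms exploit.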

This is not an analogy, these are valid mathematical formulations of classical and quantum computing, the correspondences (and differences!) are well understood and rigorous.

Now you read an IEEE article that vociferously objects to the feasibility of building a computer based on model B, but all the objections are to do with properties of model B that it fully shares with model A. And model A you know can be very well approximated already in the physical world, which means reality was somehow not inhibited by those objections. To try and refute the physicality of model B with an argument based on premises already satisfied by model A is silly.

(Note that even if it were the case that complex numbers were necessary for quantum computing, which they are not - see, e.g., my book Q is for Quantum - you can map the quantum density matrix on n qubits to a real vector over the basis of Hermitian matrices.)


Hey, I'm not familiar with this way of comparing classical and quantum computation. Can you point me to some more details? I have Nielsen's book but don't remember seeing this analogy before!


I presume it's explained in Scott Aaronson's book; it's implicitly there in Nielsen and Chuang. But the best way to understand it is by example - try to write out how you would describe classical probabilistic computation on two classical bits to mimic the quantum circuit type of picture, and if you succeed the generalization will be obvious.
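Here is that exercise worked out in numpy for two classical bits, as a sketch: the state is a probability vector over the four bit strings, and single-bit gates are lifted to the pair with the tensor product, exactly as in the quantum circuit picture.

```python
import numpy as np

# Two classical bits: the state is a probability vector over
# (00, 01, 10, 11). Deterministic start: both bits 0.
state = np.array([1.0, 0.0, 0.0, 0.0])

# A one-bit "randomize" gate as a 2x2 stochastic matrix ...
rand = np.array([[0.5, 0.5],
                 [0.5, 0.5]])
I = np.eye(2)

# ... lifted to act on bit 0 of the pair via the tensor product,
# exactly as quantum circuits lift single-qubit gates.
state = np.kron(rand, I) @ state

# A classical CNOT (bit 0 controls bit 1) as a 4x4 permutation matrix.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
state = cnot @ state
# Result: 50/50 over (00, 11) - perfectly correlated bits, the
# classical cousin of the Bell-state circuit.
```

Swap the probability vector for an amplitude vector and the stochastic matrices for unitaries and you have the quantum model; that is the whole correspondence.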


> ... as a vector of real numbers...

Using IEEE754?


Likewise, you don't ever explore the entire state space when you perform quantum computation. As it happens, most states are inaccessible. As shown by Poulin et al. in Physical Review Letters, 106, 170501 (2011), you can only ever explore a very small fraction of quantum states in polynomial time.

Edit: tezthenerd has said pretty much the same thing below. I just included the reference in case you'd like a formal proof.


Monte Carlo simulation doesn't generate all classical probability distributions though. Attempting to do that would be nuts. Monte Carlo simulation only samples from a single, fixed distribution (or maybe a small number of distributions).

I'm not super convinced by the argument based on number of parameters either, but your analogy doesn't refute it at all.


It's not meant to be an analogy.

We also will not attempt to generate all quantum states on a quantum computer. As with classical monte carlo, we will only ever generate a tiny fraction of the possible quantum states/distributions, and will also sample from a fixed distribution (whatever the quantum circuit outputs, we measure it in a fixed basis and always draw samples from that single, fixed distribution).

We will also achieve robustness against the tyranny of the real numbers in our gate parameters in a very similar way that a classical computer does when it approximates some idealized Monte Carlo algorithm.


I mean, heck, you don't even need computers, just play a game of Go. Suddenly you're juggling "variables" in a humongous parameter space.


"While a conventional computer with N bits at any given moment must be in one of its 2^N possible states, the state of a quantum computer with N qubits is described by the values of the 2^N quantum amplitudes, which are continuous parameters (ones that can take on any value, not just a 0 or a 1)."

If nothing else, that seems to me to be the clearest description of quantum computing I've seen.


[Typo note: it's 2^N quantum amplitudes.]

A clear description but potentially very misleading and one that will certainly screw up your intuition about what to expect from quantum computers.

As pointed out below, a classical probability distribution of N bits is a 2^N dimensional vector of real numbers. There are many very good reasons to think of the quantum state as "more like" this classical vector than the physical state of the "N classical bits" themselves.

Here is one:

- If I send you the physical systems which encode N classical bits then sure enough, I can communicate to you only N classical bits of information despite it taking exponentially many real parameters to specify the distribution/state I prepared them in.

- If I send you the physical systems which encode N quantum bits then sure enough, I can communicate to you only N classical bits of information despite it taking exponentially many real parameters to specify the distribution/state I prepared them in.

There are many other reasons to think of quantum states as not "inherently real" and more like the classical probability distribution. A key one is that both "instantaneously collapse" when you get information about the outcome of an observation.

The issue, of course, is that while we know what the "real states of the world" are for a conventional computer, nobody agrees on what (if any) they are for quantum systems (though many constraints on such purported real states are known; I write about some of them in _Q is for Quantum_).


Actually, if you send a physical system which encodes N quantum bits, and you share entanglement in advance, you can send up to 2N classical bits.

(check out https://en.wikipedia.org/wiki/Superdense_coding)
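Superdense coding is easy to verify in a few lines of numpy: the four states Alice can produce by acting only on her half of a Bell pair are mutually orthogonal (they form the Bell basis), so a single joint measurement by Bob recovers both classical bits. A sketch:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Shared Bell pair (|00> + |11>)/sqrt(2); Alice holds the first qubit.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Alice encodes two classical bits by acting only on her own qubit.
encodings = {(0, 0): I, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}

# The four encoded states are mutually orthogonal (the Bell basis),
# so Bob can read out both bits with a single joint measurement.
states = {bits: np.kron(op, I) @ bell for bits, op in encodings.items()}
```

One qubit crosses the channel, but because of the pre-shared entanglement it selects one of four perfectly distinguishable global states, hence two bits.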


Shame on this article. We should be researching everything within moral limits. Everything. Research all of it so hard that there's no longer any debate about whether this point or that has any standing, because it's so well understood that we can just fucking move on. Regardless of the topic, if we aren't sure whether it's of value, we learn as much as we can until we can allow ourselves as a species to move on. That's the only way to prove any point in the realm of curiosity that satisfies me. Articles that argue against the validity of investigating an unknown for any reason other than "it would cause significant harm" don't hold any weight and should be disregarded as being based in fear, self-interest, or simple ignorance.


He said that the experimental research was useful, so shame on you. What kind of silly strawman are you trying to build?

I can see from the responses here that the field is obviously quite volatile and full of big egos, with practitioners taking criticisms so personally :)


Completely agree. Imagine people had given up on the transistor. It took many attempts until there was a workable transistor. And then manufacturing still wasn't clear.


The article isn't entirely worthless, the illustrations are really nice.


I still need to gain the intuition for why a quantum computer can operate on some kinds of things "faster". I've read some of the math, but that did little to satisfy me (I need to study it more carefully).

But all of this seems in a tragic state at the moment. Allowing rampant misinformation and hype as to what these machines are actually capable of.


I'm going to butcher a lot of the physics here, but here's the gist of it that should avoid some quantum myths:

The quantum state of a system is not described by a simple real probability but by a complex number. This means that there is an additional degree of freedom that you can adjust in a quantum state that doesn't change its probability of occurring. The nature of entanglement means that there is constructive and destructive interference when you go through certain quantum operations.

Traditionally, a quantum word can be thought of as being a probability distribution over the various possible values of the word. You can build quantum gates that act like classical gates on the binary values, and it's this property that often causes people to describe a quantum computer as executing every possibility in parallel. The difficulty is that the result you get is sampled from the probability distribution, which means you need some sort of interference between the results to allow you to boost the probability of picking the answer you want. A quantum speedup is only possible when you can find the interference. In the case of Shor's algorithm (for factoring), the interference is basically a quantum variant of the FFT; in the case of Grover's algorithm (finding x such that f(x) = y), the interference is something that redistributes the probabilities among a plane of the sphere.
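The interference-boosting idea can be simulated directly for Grover's algorithm on two qubits (a search space of four items): after a single oracle-plus-diffusion iteration, the target's probability goes to 1. A numpy sketch, simulating the state vector classically:

```python
import numpy as np

n = 2                                   # qubits; search space of size 4
N = 2 ** n
target = 3                              # searching for bit string 11

# Uniform superposition: equal amplitude on every candidate.
state = np.ones(N) / np.sqrt(N)

# Oracle: flip the sign of the target's amplitude (a phase query).
oracle = np.eye(N)
oracle[target, target] = -1

# Diffusion: reflect all amplitudes about their mean. This is where the
# interference happens, boosting the target and suppressing the rest.
diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)

state = diffusion @ (oracle @ state)    # one Grover iteration
probs = np.abs(state) ** 2
```

For larger search spaces the target probability rises gradually over ~sqrt(N) iterations rather than in one step; N = 4 is the special case where one iteration suffices.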


OK, yes. This builds on the parts of my intuition that already exist. It's about building a system (from combinations of quantum operators) that maximizes the probability distribution of the result you want. I'm still unsure of the timeline of any quantum operator, and the "finding the interference" seems equivalent to saying something like "find the function" or something more classical.


With Shor's algorithm, the quantum part of the algorithm comes down to finding the period of a^x mod N, where N is the number you want to factor and a is a number less than N that shares no factors with N. Here, the period is the smallest value of x where 'a^x mod N = 1'.

This is done by applying the Quantum Fourier Transform on each 'a^x mod N' using x+1 complex roots of unity for that value of x. You can visualise this as x+1 arrows from the centre of a unit circle, separated by an angle of 2pi/(x+1). So, for x=3 you have 4 arrows pointing up, down, left, and right.

When you stack the arrows end to end, they will loop back to their starting point (3 will form a triangle, 4 a square, etc.). The key is that when multiplying this quantum Fourier transform with the values of 'a^x mod N', the 1s (at the period) will occur at the same point around the unit circle only for the cases where the number of complex roots is a multiple of the period.

This has the effect that for the correct answer, the arrows are lined up together in a single direction, amplifying the number and thus the probability of selecting that number. For the others, the arrows cycle and loop back on each other, so stay fairly small, decreasing the probability they will be selected.
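To see the period/Fourier connection concretely, here is a purely classical sketch for N = 15 and a = 7: brute-force the period (the step a quantum computer does with the QFT), check that the Fourier spectrum of the periodic sequence peaks only where the "arrows line up" (multiples of len/r), and recover the factors classically:

```python
import numpy as np

N, a = 15, 7                      # factor N = 15 using a = 7

# The classically hard part: the period r of a**x mod N.
# (Shor's algorithm finds r via the QFT; here we just brute-force it.)
val = a % N
powers = [1]
while val != 1:
    powers.append(val)
    val = (val * a) % N
r = len(powers)                   # 7^x mod 15 cycles 1, 7, 4, 13 -> r = 4

# The DFT of the periodic sequence peaks only at multiples of len/r:
# the "arrows" line up only at those frequencies.
seq = np.array([a ** x % N for x in range(16)], dtype=float)
spectrum = np.abs(np.fft.fft(seq))
peaks = set(np.argsort(spectrum)[-4:])   # the four largest components

# Classical post-processing: r even and a**(r//2) != -1 mod N
# yields the factors via two gcds.
f1 = np.gcd(a ** (r // 2) - 1, N)
f2 = np.gcd(a ** (r // 2) + 1, N)
```

The quantum speedup lives entirely in finding r; everything before and after the period-finding step is ordinary classical number theory.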


Yes, I would really like to understand if this is different than curve fitting. How isn't that classical? What about causality? Quantum mechanics to me is just an effective model for dealing with uncertainty, a.k.a. probabilistic/meta-probabilistic reasoning. But it is about statistically predicting the state of a particle/particles (position, spin, mass, momentum, etc.). How this can model uncertainty in algorithms is something I don't understand at all. My brain is very tempted to claim that quantum computing is bullshit, and that is what it seems to a layman like me, because I've never read a great explanation. Even if quantum computing worked, would it be soft computing? By having a set of probabilities as results, that would be an approximation, thus effectively allowing it to be less than exponential for NP-hard problems and less correct. How does that differ from classical heuristic algorithms that already have massive speed, like SAT solvers?


Firstly, you're going to run into walls if you try to fully understand why the mathematics of quantum mechanics are how they are. Physicists have been going at it for decades and from what I can tell the picture is not so clear yet. What has been done though is many thousands of experiments showing that the predictions the mathematical theory makes (even the wild, unintuitive ones) are correct.

Secondly, I think you're going too deep on this whole 'prediction' thing. The algorithms in quantum computing aren't trying to model something any more than a classical algorithm does. Currently, the notion of a quantum algorithm is just a series of special logic gates that act on qubits instead of bits. What the previous poster said above is correct. Quantum computers are like classical computers with randomization, except instead of real probabilities they have what are called complex amplitudes. The way the complex amplitudes behave allows the states to interact slightly differently, since they can now 'interfere' with each other to cancel each other out in very specific situations. This gives quantum computers only slightly more power than classical ones (with randomization). (Most) experts don't think quantum computers can solve NP-hard problems. The power from the interference only helps in very specific problems.


> This means that there is an additional degree of freedom that you can adjust in a quantum state that doesn't change its probability of occurring.

This gave me a real 'aha' moment - I've been trying to get an intuition for complex numbers for years, thanks.


It's not that they can execute a computation faster in the traditional sense. It is that you can exploit the quantum behaviour in some types of computation (e.g. factoring integers) to make the answer more probable, and thus more likely to be "observed", when you read the state of the quantum computer, and thus make it easy to locate the correct answer compared to traditional algorithms.

PBS Infinite Series has several videos on quantum computing and Shor's algorithm: 1. https://www.youtube.com/watch?v=IrbJYsep45E "The Mathematics of Quantum Computers" 2. https://www.youtube.com/watch?v=12Q3Mrh03Gk "How to Break Cryptography" 3. https://www.youtube.com/watch?v=wUwZZaI5u0c "Hacking at Quantum Speed with Shor's Algorithm"


Scott's book referenced is very good, but you might also want to see if his layman's blog explanation of Shor's Algorithm is useful to you: https://www.scottaaronson.com/blog/?p=208


This was quite good at filling in some intuition.


If you are specifically interested in building up the mathematical intuition, check out "Quantum Computing since Democritus" by Aaronson - it excels at presenting the intuition. For less comp-sci and more physics look at the standard references: Nielsen and Chuang's book and Preskill's lecture notes.


Thanks for the links. I'll start here: http://www.theory.caltech.edu/%7Epreskill/ph219/index.html#l... and probably get the books too (hey, they're cheap on amazon... god I hate myself).


The Aaronson book is mostly available on his academic website as lecture notes.


There's a lot of misinformation in this space, so be careful who you listen to. I hope to say as few false things as possible, but I can offer no guarantees.

One source of confusion is that there are different types of proposed quantum computers. Some, like the D-wave machines, use stochastic measurement processes to arrive at probably-correct solutions. This is called "annealing", and it's a lot like what many machine learning algorithms do to try and find global minima in error function spaces, only the quantum mechanics gives it an edge in not getting stuck in local minima for various reasons.

These aren't universal computers - they only implement very specialized algorithms in a kind of abstract sense. It's sort of like how if you take a box of different sized rocks and shake it for a while, you'll eventually find that the small rocks have made their way to the bottom and the bigger rocks have drifted to the top (this is called granular convection). There's a sense in which this is like a search algorithm that helps you find the biggest rocks. The quantum version of this (very loosely speaking) would be if the rocks sorted themselves faster because the small rocks would sometimes just tunnel through the big rocks.

Using quantum effects to speed up annealing lets you do annealing faster than a classical simulation - but most problems aren't annealing problems. So present day quantum computers are specialized machines that do a kind of computation using quantum effects, but they can't run general purpose quantum algorithms. Think of them like co-processor chips that aren't quite CPUs.

That said, it's sort of hard to define what a general purpose quantum computer would even be (for various reasons), so maybe that's unfair. Anyway, D-wave machines can't run Shor's algorithm, which is one of the examples of the "exponential speedup" over classical computation.

The (claimed) algorithmic efficiency of quantum computing as it applies to quantum algorithms has to do with how much computation is done in each step.

We represent information in physical systems and process that information by making those physical systems interact in a way that effects a state space transformation. The amount of processing we can do in "one step" depends on the size of the state space of our data representation and the kinds of operations we can effect via physical interactions. Conventional computation involves a bunch of parallel systems that have 2 distinguishable states interacting pairwise through networks of logic gates that perform a limited number of transformations.

Systems of qubits have many more than 2^N possible states (technically, an infinite number), and operations on them can be arbitrary unitary transformations. So each quantum logic gate can do a lot more computation than a classical binary gate. Quantum algorithms basically translate information in the limited classical state space to the bigger quantum state space, do fancy operations, then project back into the classical space. It's sort of like data parallelism in the sense that if you can do a series of small operations or one big SIMD operation, the latter will generally be faster. Operations on qubits make use of physical parallelism that's only possible in the much bigger non-classical state space.


Had to read to the bottom to get to the (really weak) "case against Quantum Computing":

> I believe that, appearances to the contrary, the quantum computing fervor is nearing its end. That’s because a few decades is the maximum lifetime of any big bubble in technology or science. After a certain period, too many unfulfilled promises have been made, and anyone who has been following the topic starts to get annoyed by further announcements of impending breakthroughs. What’s more, by that time all the tenured faculty positions in the field are already occupied. The proponents have grown older and less zealous, while the younger generation seeks something completely new and more likely to succeed.

I was really hoping for a real technical argument, not speculation about bubbles or politics in academia. (Especially given that most of the real research is done in industry.)


The technical argument is absolutely clear: quantum computing cannot work because it relies on manipulating and measuring an absolutely astronomical number of continuous variables with near-infinite precision.

The argument may or may not be correct, but it deserves something more thoughtful than a dismissive response that doesn't even recognise the basic point the author is making.


This is one of the standard complaints (the gist of it being that quantum computing is some form of analog computing, i.e. requiring near-infinite precision). For researchers in the field it becomes rather frustrating to have to repeat the same response without being heard, so I can understand the annoyance expressed in the parent comment. For what it's worth, here is a good explanation of how this complaint misrepresents the whole premise of quantum computation (in particular, point 6): https://www.scottaaronson.com/democritus/lec14.html


> the gist of it being that quantum computing is some form of analog computing

It absolutely is - at least, in the only practical, real-today, working instantiation of it which is in the form of quantum annealing.


This is an extremely misleading statement. Quantum annealing is most certainly not what quantum computing is about (it is not particularly "quantum" either). At most, you can argue that it is an important first step (which is also doubtful). Most of the interesting hardware currently being developed has nothing to do with quantum annealing and most researchers are fairly annoyed at D-wave for originally pushing this nonsense (D-wave is the company that started selling quantum annealing machines, which are little more than a classical analog computer).

See the ion traps developed in the UK/Maryland or the transmon qubits at Google/IBM/Yale for some well known prototypes of quantum computing hardware (which are admittedly quite far from being practical or usable).


> Quantum annealing is most certainly not what quantum computing is about

Near as I can tell, it's the only form of practical quantum computation that's available to end-users in any meaningful way. Everything else so far is vaporware with ridiculously high error rates.

> (it is not particularly "quantum" either)

The qubits - Josephson junctions and superconducting loops - are indeed in quantum superposition with each other, if their claims are true. This is a macroscopic quantum-mechanical effect; to call it "not particularly quantum" betrays a lack of understanding of what they're doing.

> most researchers are fairly annoyed at D-wave for originally pushing this nonsense

D-Wave is making their system publicly available for free. Have you signed up and taken a look at their demos? Admittedly it's a bit beyond my mathematical ability but it's clear the basic seed is there for something that works; they just need a bigger graph and more connections between qubits. There's no reason to assume they won't have their own equivalent of Moore's Law at some point where these things grow year over year.

> transmon qubits at Google/IBM/Yale for some well known prototypes of quantum computing hardware (which are admittedly quite far from being practical or usable).

Do they have _any_ results worth caring about yet? I don't think they're anywhere near something working on a practical scale. You can buy a D-Wave machine, or time on it - it's not vaporware, even if most people don't find it useful. For the first few years of gate model machines people aren't really going to find them all that useful for day to day things either - just like computers in the 1940s and 50s, it was very specific, high-dollar applications first, and things for the common man decades later.


You mentioned that some of the details are beyond your math skills. The reason I am so vocal against claims that Dwave have a quantum computer is because I work in this field (Yale Quantum Institute) and these details are what my work is about.

The "qubits" Dwave has most certainly are not entangled and even they have never claimed it. Dwave themselves have stopped claiming they have a quantum computer, and have introduced this term "quantum annealing" which is something that you can more efficiently do on a classical computer.

Yes, you can buy something from Dwave, so it is not vaporware, it is just snake oil. This is why it is frustrating for me to respond to your comments: you are dismissing the very measurable progress my colleagues have done over the last two decades (just look at qubit lifetimes, "break even" error correction procedures, generation of entangled pairs, etc) while claiming that a device that is incapable of sustaining any particularly interesting quantum state is to be held in high regard.

And while Dwave's engineering efforts should be praised, their misleading PR has led to misleading claims like yours, and for that at least their PR department deserves to be shunned.


> that's available to end-users in any meaningful way. Everything else so far is vaporware

You can't just call all research "vaporware" because it happens to not exist today at this moment...

> ridiculously high error rates.

You know that D-Wave's qubits have very high error rates, right? They have always espoused the "more qubits, more error" strategy.

> but it's clear the basic seed is there for something that works; they just need a bigger graph and more connections between qubits.

D-Wave has been trying for quite some time to increase the connectivity of the Chimera graphs with "pegasus" and so forth, it's a Hard Problem.

> There's no reason to assume they won't have their own equivalent of Moore's Law at some point where these things grow year over year.

Yes there is: adding more qubits is not a simple task like adding more transistors. Plus, more qubits don't necessarily help you unless you have the necessary connectivity as well, which, even superficially speaking, is at least quadratically difficult.

> Do they have _any_ results worth caring about yet?

You... you do realize that nothing D-Wave has shown so far is even faster than a laptop, right?

And btw, I don't mean to be hating on D-Wave either, I think they get a lot of unnecessary hate.


That also invokes negative connotations of analog computing that exist because of its problems in the old days.

On another note: There is also a sense of analog quantum computing when you use bosonic modes instead of qubits.


Calling bosonic modes analog (in the context of computing) seems misleading. Most of the big proponents of bosonic modes (like my institution, Yale) use them to implement digital logic on top of them.

And the problems with analog computers are not just from the old days. Analog computers fundamentally cannot scale because they do not permit error correction procedures.


The argument that quantum computing relies on manipulating continuous variables with "near-infinite precision" is flatly incorrect, if my understanding of the threshold theorem is correct. So no, quantum computers do not require extreme amounts of precision or extremely low error rates because it is possible to correct errors by making the computer larger.

https://en.wikipedia.org/wiki/Quantum_threshold_theorem
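For intuition, the threshold theorem's payoff can be sketched with the commonly quoted sub-threshold scaling for a distance-d code, roughly p_L ≈ A·(p/p_th)^⌈d/2⌉. The constants below (A = 0.1, p_th = 1e-2) are illustrative ballpark numbers, not figures from any specific hardware:

```python
# Illustrative logical error rate for a distance-d error-correcting code.
# p    = physical error rate per operation
# p_th = threshold error rate (ballpark surface-code figure; an assumption)
# A    = fudge-factor constant (made up for illustration)

def logical_error_rate(p, d, p_th=1e-2, A=0.1):
    """Rough sub-threshold scaling: errors shrink exponentially in d."""
    return A * (p / p_th) ** ((d + 1) // 2)

# Below threshold, growing the code suppresses logical errors exponentially:
for d in (3, 5, 7):
    print(d, logical_error_rate(1e-3, d))
```

The point of the theorem is visible in the exponent: as long as the physical error rate is below threshold, each increase in code distance multiplies the logical error rate down, so "make the computer larger" really does buy accuracy.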


Amusing historical note: the same arguments were made about classical computers, and a similar theorem (by von Neumann) exists there.


Do you have a more precise citation of this theorem? I'm curious.


Here is a PDF of a lecture he gave on the topic: http://www.sns.ias.edu/pitp2/2012files/Probabilistic_Logics....


> it relies on manipulating and measuring an absolutely astronomical number of

"Manipulating" yes, that is the whole point. But you don't measure all those astronomical number of variables. The measurements stay sane, the state does not.

> it deserves something more thoughtful than a dismissive response

To me this article is itself a dismissive response to quantum computing.


> I was really hoping for a real technical argument

Please read the body of the article, not only the bottom. There's plenty of technical arguments in there.


There's certainly a bubble here, trivially obviously, but there are also very good technical arguments in that article. FWIW, Dyakonov has been saying the same thing since the bubble got started in 2000 or so.


Sounds like Fusion. That’s still going on after 60 odd years.


We know that fusion is possible. We have H-bombs. We have various things that do small scale fusion. Getting out more power than is put in remains way out of reach.


I take the point, while noting H-bombs put out more power than is put in. Perhaps "getting more useful power out..."?


With one major difference. We know that fusion in some form is possible (for example, in the sun, or in a hydrogen-bomb). The difficulty is doing it at a reasonable scale. It's less clear that quantum computing is possible.


I mean we already have quantum computers too. https://quantumexperience.ng.bluemix.net/qx/experience The trouble here is also scaling, but in the upward direction.


The "quantum computers" we have are classical computers with very weird hardware. The thing everybody is trying to achieve is quantum supremacy; this hasn't been achieved or demonstrated yet, and might never be.


For at least a very, very long time, I see Quantum Computers as simply being "accelerators" that do very niche tasks. Like GPUs or FPGAs today. I'm sure one day we'll have a whole other paradigm, but I doubt Quantum Computing as we envision it today will be the thing that replaces x86-compatible silicon.


They are not classical computers. We've had actual quantum logic gates since 1995. https://en.wikipedia.org/wiki/Timeline_of_quantum_computing And as linked above, IBM is renting out time on real quantum computers up to 17 qubits now. Intel, Google, and IBM had all announced plans for 49 or 50 qubit computers, which would have achieved quantum supremacy at the time, but IBM improved their simulator to 56 qubits last year. https://en.wikipedia.org/wiki/Quantum_supremacy So we're only one or two hardware generations away now.


They are classical computers in the sense that they are no stronger than classical computers: they provide a different physical implementation, but computationally they can't achieve anything a classical computer can't. Everything those "quantum computers" do can be simulated with a classical computer. The entire motivation behind the development of QC is supremacy, and the existence of those "quantum computers" alone doesn't provide any proof that supremacy is possible, so it is definitely not just a question of scale.

Google already has a 72 qubit processor (Bristlecone), but whether they or anybody will be able to demonstrate supremacy in the current, next, or some future generation is entirely conjectural. The main limit is error correction, and there are theoretical arguments (Gil Kalai) for why achieving supremacy might be impossible.
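The simulation cost that makes supremacy claims interesting is easy to make concrete: a full state vector over n qubits holds 2^n complex amplitudes, so memory doubles per added qubit. A minimal sketch (16 bytes per complex amplitude is an assumption about the representation):

```python
# Memory needed to store a full n-qubit state vector on a classical machine,
# assuming one double-precision complex number (16 bytes) per amplitude.

def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Bytes required for a dense 2**n_qubits amplitude vector."""
    return (2 ** n_qubits) * bytes_per_amplitude

print(statevector_bytes(30) / 2**30)  # 16.0 -> ~16 GiB: a beefy workstation
print(statevector_bytes(50) / 2**50)  # 16.0 -> ~16 PiB: beyond any machine
```

This is why the simulation frontier sat around 50-odd qubits: every extra qubit doubles the bill, and clever circuit-specific tricks only push the wall back a little.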


Well I don't want to get into a full-on argument in the comments here, but I 100% disagree with your definition of a quantum computer. If a computer uses quantum logic gates then it's a quantum computer.


It’s not particularly important how you define the term “quantum computer”, nor have I presented any definition of a "quantum computer" that you can disagree with. The point is that, as I wrote before, regardless of your definition - currently there is no discernible advantage of quantum computers over classical computers in the only metric that matters: computational power. This is what people refer to when they talk about the impossibility of QC. Nobody has demonstrated that physically realizable quantum computers can solve BQP problems in polynomial time, or provide any superpolynomial speedups over classical computers, and there is certainly no proof that this is simply a matter of scale.


Grover's algorithm runs faster than any classical algorithm can https://en.wikipedia.org/wiki/Grover%27s_algorithm and it was first run on quantum hardware in 1998 according to the timeline I posted above. Literally the only difference between the state of the art and quantum supremacy is scale.
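For concreteness, here's a toy classical simulation of Grover's amplitude dynamics. It simulates the math of the algorithm only, so it says nothing about hardware performance; after about (π/4)·√N iterations the marked item's probability peaks:

```python
import numpy as np

# Toy simulation of Grover search over N = 2**n_qubits items:
# start in the uniform superposition, then alternate the "oracle"
# (phase-flip the marked amplitude) and "diffusion" (reflect about the mean).

def grover(n_qubits, marked, iterations):
    N = 2 ** n_qubits
    amps = np.full(N, 1 / np.sqrt(N))       # uniform superposition
    for _ in range(iterations):
        amps[marked] *= -1                  # oracle: flip sign of marked item
        amps = 2 * amps.mean() - amps       # diffusion: inversion about mean
    return amps

# N = 16, so ~(pi/4)*sqrt(16) = 3 iterations nearly maximize the probability:
amps = grover(4, marked=5, iterations=3)
print(abs(amps[5]) ** 2)  # probability of measuring the marked item (~0.96)
```

The quadratic speedup is that only ~√N oracle calls are needed, versus ~N classical queries; that asymptotic claim is a property of the algorithm, independent of whether current hardware can run it at useful sizes.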


Now you are simply posting incorrect or misleading information. There is no existing implementation of Grover's algorithm on quantum hardware that demonstrated any speedup over classical computers. In fact the largest implementation of Grover's algorithm to date, on ibmqx5, has an unusable error rate.

> Literally the only difference between the state of the art and quantum supremacy is scale.

This is simply incorrect. At the moment it's unclear that quantum supremacy can be achieved at any scale whatsoever, and it's certainly possible that it's unachievable even in principle.


You might say that's the case with Fusion as well :)


You can literally sign up for D-Wave Leap and run problems on a quantum computer immediately. It's not gate-model, but it's a real quantum annealer that functions and can compute useful things.


Quantum annealers are not quantum computers. They are not more powerful than classical computers. You either need the "gate model" or the equivalent "adiabatic quantum computing model" to achieve something infeasible on classical computers.

It is a common misconception to think that adiabatic quantum computing (which is equivalent to the gate model) and quantum annealing is the same thing. It is not, and quantum annealing is mostly a buzzword that D-Wave use to sell their devices.


If something sounds too good to be true, it probably is.


Sulfa. White swans do come along, they're just incredibly rare.


Are they rare, or do they just happen over longer timeframes than we want? Just about any mainstream technology today would appear to be too good to be true to someone ~100 years ago. Air travel, television, computers, phones.


The telephone was invented in 1876, practically old hat a hundred years ago. Air travel came in 1903, so a hundred years ago it had already been around longer than Facebook has in 2018. I guess it comes down to your definition of rare, but it's certainly rare enough.


Black swans are rare.


I'm wondering what will be first: practical fusion reactors or practical QC.

Also, will we at some point be able to linearly convert energy into compute power? In that case, would we still need QC?

(You might say that this conversion is already possible, but it requires human interaction)


Fusion reactor development was slowed down by the problem of superconductivity, which seems to be close to solved now (theoretically, of course).


Many of the comments have pointed out this flaw in the argument. I will try to post my own summary of this flaw:

The problem is that this case against quantum computing due to the sheer scale of the number of parameters is also a case against randomized computing, but randomized computing certainly exists in the real world (a computer with a coin flipper). A 'state' in randomized computing over N bits is specified by a probability being assigned to each of the 2^N possible states of the N bits.

For example, to describe the distribution after flipping 100 coins, you require 2^100 real numbers to specify the probabilities for each outcome, since there are 2^100 combinations of heads and tails you can get. But it is very clear that we can do this in real life. The point is, you don't need 2^100 parameters with full precision to flip a coin 100 times.

To be more precise, we can write the state as a 2^N-length vector, and our operations (think logic gates) on the vector are called stochastic matrices (this is the type of matrix required so that the operation maps valid states to valid states).

Quantum computing is no different, except that instead of using real numbers for probabilities, complex numbers are used and they are called amplitudes instead. Also, instead of stochastic matrices for operations, they are now unitary matrices. The point is, the scale of the number of parameters is the same, the only thing that's changed is that it is complex numbers instead of real numbers.

Now an argument might be that it is not possible to create gates with a very specific probability distribution with just a discrete and not very precise coin flipper, but it's actually possible to show that for randomized computing, you only need a constant factor more flips to get exponentially close to the distributions you want. A similar result was proved pretty early for the quantum case, and it is a foundational result. So this is not really a problem either.
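The parallel above can be sketched directly: an n-bit random state is a length-2^n probability vector evolved by stochastic matrices, and the quantum case swaps in complex amplitudes and unitaries. A minimal one-bit illustration (the specific matrices are just the standard fair coin and Hadamard examples):

```python
import numpy as np

# Classical randomized bit: probability vector evolved by a stochastic matrix
# (non-negative entries, columns summing to 1).
p = np.array([1.0, 0.0])                 # bit is 0 with certainty
coin_flip = np.array([[0.5, 0.5],
                      [0.5, 0.5]])       # fair coin flip
p = coin_flip @ p                        # -> [0.5, 0.5]

# Quantum bit: amplitude vector evolved by a unitary matrix
# (orthonormal columns; entries complex in general).
psi = np.array([1.0, 0.0])
hadamard = np.array([[1.0,  1.0],
                     [1.0, -1.0]]) / np.sqrt(2)
psi = hadamard @ psi
probs = np.abs(psi) ** 2                 # Born rule -> [0.5, 0.5]

# Same 2^n bookkeeping in both cases; only the arithmetic differs. But
# applying Hadamard twice undoes itself (interference), which a
# stochastic coin flip can never do:
print(hadamard @ hadamard @ np.array([1.0, 0.0]))  # back to [1, 0]
```

The exponential bookkeeping is identical in both settings, which is exactly why "2^N parameters" alone can't be an argument against quantum computing without also ruling out coin flips; the genuinely quantum ingredient is the interference shown in the last line.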


In addition to the technical arguments against this article, I say: "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."

https://en.wikipedia.org/wiki/Clarke%27s_three_laws


This read a lot like Clifford Stoll’s 1995 essay "Why web won't be nirvana" https://www.newsweek.com/clifford-stoll-why-web-wont-be-nirv...


Has Google said what the error rate is yet for Bristlecone?


This is a lame article, bordering on click-bait.

It is totally reasonable to predict a quantum computing winter, but all the supporting arguments here seem to be about how "hard-headed engineers" know best.

> So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be at least 2^1000, which is to say about 10^300. That’s a very big number indeed.

Yes this is the whole point: if it wasn't this big, we could just simulate it on a regular computer.

> To repeat: A useful quantum computer needs to process a set of continuous parameters that is larger than the number of subatomic particles in the observable universe.

Yep.

> it’s absolutely unimaginable how to keep errors under control for the 10^300 continuous parameters that must be processed by a useful quantum computer.

I do imagine it, every day. But I'm a professional, so maybe that doesn't count?

> How many physical qubits would be required for each logical qubit? No one really knows...

There are plenty of such estimates around. Obviously it depends on the noise rate of each qubit.

> all of the assumptions that theorists make about the preparation of qubits into a given state, the operation of the quantum gates, the reliability of the measurements, and so forth, cannot be fulfilled exactly. They can only be approached with some limited precision. So, the real question is: What precision is required? ... Amazingly, not only are there no clear answers to these crucial questions, but they were never even discussed!

There are boatloads of papers (and conferences) that discuss exactly this. Obviously it is a vital question to answer!


I think your reply is knee-jerk and "lame". Why don't you enlighten us instead of taking it so personally when somebody criticizes your field?

I thought the article was excellent and made good arguments.


How much working knowledge of quantum computing do you have? I can see why someone not connected to the field might think the article makes good arguments, but anyone connected at all to the literature can see the article is bullshit.

Many of the criticisms they’re levying here are either made up or misunderstood.


It looks like Dyakonov has been at this for a while:

https://www.scottaaronson.com/blog/?p=1211


>I do imagine it, every day. But I'm a professional, so maybe that doesn't count?

Can you enlighten us?



