In the preprint, it is argued that their device reached “quantum supremacy” and that “a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.” We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity.
IMO the whole IBM blog post is interesting; your quote alone isn't a full summary of it.
Basically, what IBM is claiming is that Google's circuit doesn't really do anything useful; it is just meant to be very complicated for traditional computing systems to reproduce.
And even in this idealized setting, Google's quantum computer doesn't transform an unsolvable problem into a solvable one.
According to IBM, "Quantum Supremacy" means accomplishing something useful to society that simply wouldn't be possible otherwise. Google's article doesn't show any sign of that, IBM claims.
"Quantum Supremacy" is a technical term, not a colloquial one. It refers to showing that there exists a problem that a real, physical quantum computer can solve quickly that a classical computer cannot.
Reading the IBM article, they are fully aware of what "quantum supremacy" means in a technical sense, and they are urging the media not to use that term, since it will be misunderstood by the public. Their claim that Google has failed to achieve supremacy rests solely on their claim that they can simulate the circuit far faster (and scale the simulation linearly) using better classical algorithms.
That's a strong claim, and I'm interested in seeing what Google responds with.
Disclosure: I work at Google, but hahaha, no, I'm not cool enough to work on this.
Physicist here. Note that they say it scales linearly in circuit depth (which is a trivial fact, and has always been true for classical simulations of quantum computers that are optimal in that regard; in fact, it's the case even when doing it in the most naive way), not in the number of qubits, which is the quantum speed-up referred to in "quantum supremacy".
Another thing: this is actually Martinis' decades-long work. Google started raining money down on his lab a few years back, helping with the classical aspects, design etc., and the media loves reporting it as Google's quantum computer, but the actual quantum computer, the nitty-gritty physics, isn't Google's work. Martinis already had a working setup with ~10 qubits when Google started supporting him ~5 years ago.
This IBM "rebuttal" sounds a bit like cheating on multiple aspects, and the timing of the announcement is interesting.
Note that they don't tell you how the memory requirements grow with the number of qubits either (which is exponential as well). I expect the response will be new toy computation proposals which will also be prohibitively expensive in classical memory (not just classical CPU with limited memory) in current supercomputers as well. If the experimentalists can roll out more qubits faster though (less likely), the "concern" will be addressed as well.
Thanks for the detailed response; I am not a physicist, so I didn’t catch the sleight of hand in the linear scaling claim. The timing of the “rebuttal” is almost certainly intentional, and possible because of the accidental pre-publication last month. I hope the rebuttal is indeed specious, because it’s an exciting advance; I’m sure time will tell.
It's pretty easy and requires no physics, actually.
Here's a simple, extended version.
A quantum gate is, mathematically speaking, a matrix. For a given physical system, of fixed number of qubits, obtaining that matrix on a classical computer takes (on average) a fixed amount of time, let's say T seconds. A quantum "circuit" is a sequence of quantum gates, applied consecutively in time, and you simply multiply them all to get their overall effect.
So if your circuit is made of 10 gates, the total CPU time is 10T, plus the time for 10-1 matrix multiplications. If it is 20 gates, then it is 20T plus time for 20-1 matrix multiplications.
Since multiplying two matrices of the same dimension also takes a fixed amount of time, on average, the simulation time grows linearly with circuit depth.
Quantum supremacy is about how T grows as you increase the number of qubits, n (which is exponential: each gate is a 2^n by 2^n matrix).
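To make the scaling concrete, here is a toy sketch in NumPy (my own illustration, not Google's or IBM's actual method, and the random QR-based "gates" are stand-ins for a real gate set): total cost is one matrix application per gate, so it is linear in depth, while the matrix dimension 2^n makes each application exponentially expensive in the number of qubits.

```python
import numpy as np

def simulate(n_qubits, depth, seed=0):
    """Brute-force simulation: apply `depth` random unitary 'gates' to a state."""
    rng = np.random.default_rng(seed)
    dim = 2 ** n_qubits                  # state space is exponential in n
    state = np.zeros(dim, dtype=complex)
    state[0] = 1.0                       # start in |00...0>
    for _ in range(depth):               # one matrix application per gate: linear in depth
        m = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
        u, _ = np.linalg.qr(m)           # QR of a random complex matrix gives a unitary
        state = u @ state                # unitaries preserve the norm
    return state

psi = simulate(n_qubits=3, depth=10)
print(len(psi), round(abs(np.vdot(psi, psi)), 6))  # dimension 8, norm 1.0
```

Doubling `depth` roughly doubles the runtime; adding one qubit doubles `dim` and thus quadruples the cost of each matrix application.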
No idea where exactly you got that idea (feel free to quote any part of the paper), but no, it isn't.
Even the brute-force "simulation" of a quantum computer is like UN...U2.U1 where the Us are unitary matrices. The hard part is obtaining those unitaries (whose dimensions grow exponentially with the number of qubits). For a fixed number of qubits though, once you have N unitaries, you do N matrix multiplications. If you double N, it'll take twice as long on a classical computer and roughly twice as long on a quantum computer (different gates take different amounts of time to implement). But on an actual quantum computer, there are tricks you can do (if the Hamiltonian allows) which may let you do it in fewer unitaries.
Circuit depth is still important because 1) it matters for modelling the noise in the device and extracting gate fidelities (that's basically how randomized benchmarking works; although they're doing something else for fidelity estimates here, it is still a function of circuit depth), and 2) it matters for doing anything meaningful with a given set of basic building-block gates.
They're using a "clever trick" to approximately evaluate the overall gate from this paper https://arxiv.org/abs/1807.10749 which is computationally cheaper than doing a "brute force" simulation (which scales linearly in the number of gates), but it quickly becomes worse as you increase the number of gates. That's basically what it says.
It looks like Martinis' group thought a "brute-force" simulation of 54 qubits was impossible, and this approximate "clever trick" was the only way to go at that number of qubits, but IBM says that with some different tricks, 54 qubits is still doable (I'm just guessing what they were thinking; this is the only plausible explanation I can think of).
Overall, a discussion which has nothing to do with quantum supremacy really...
Whether it is a factor of a million or a thousand though, the gap between a quantum computer and a classical computer will increase exponentially as the number of qubits is increased. This is a fact, assuming quantum mechanics is correct.
Actually, physicists have been trying to deal with this painful fact for quite a long time: it is also the reason why many body physics is so hard computationally and we spent almost a century to develop approximate methods to calculate even the simplest idealistic situations even with hundreds or thousands of atoms using density functional theory, quantum monte carlo etc etc.
The whole idea of quantum computation is to turn this difficulty upside down and try to use it into our advantage.
> The gap between a quantum computer and a classical computer will increase exponentially as the number of qubits is increased. This is a fact, assuming quantum mechanics is correct.
I agree, but then there is no need to prove quantum supremacy after all. This entire business is about whether quantum mechanics is correct or not.
Quantum mechanics is about 100 years old, and no violation has ever been observed in the laboratory, in particle accelerators or in outer space. Quantum theory is the most accurate theory we have ever had, tested to better than 1 in a billion precision. Even classical computers rely on it. Physicists don't have doubts about quantum theory; we know it is possible. The problem is an engineering problem of attaining precise control over quantum systems, which is a very, very hard engineering problem, but there is nothing in physics which says it can't be achieved.
There is still a need, but for an entirely different reason: not everyone (people with money and funding agencies, in particular) is a physicist.
General relativity is also 100 years old, no violation etc. Still, the discovery of gravitational waves was very welcome, because tests of general relativity in the strong-field regime were not very good.
Quantum computing is analogous: tests of quantum mechanics in the "strong computational regime" are scant. You seem knowledgeable, but your comments on the current claim of quantum supremacy are akin to, say, when the claim of the discovery of gravitational waves was made and then disputed, replying, "gravitational waves will be discovered, this is a fact assuming general relativity is correct, general relativity is 100 years old, no physicists doubt the theory" etc. All true, but rather pointless.
It's a very different thing. I'm not just talking about the age of the theory. I'm talking about how long and how many times it has been tested, to a level of precision that no other theory has been tested to and survived.
General relativity has never been tested to anywhere near that level of precision, nor that many times.
And in fact, we still have strong reasons to doubt general relativity, because there may or may not be deviations from it in observations of galaxies and the large-scale universe.
General relativity may be correct at that scale (with the ad-hoc addition of a cosmological constant), but to be consistent with those observations, one requires the existence of black holes, dark energy and dark matter, things we have never truly observed and don't know for sure exist (although they are our best explanation at this moment).
We don't really understand how gravity behaves at very small scales, at extremely large scales, or in the presence of very strong energy densities. One thing we know for sure is that general relativity is not the ultimate theory of gravity: it spectacularly fails at very small scales.
We would like to stress-test all aspects of general relativity to 1 in a billion precision as well, but we can't.
This is basically because gravity is very weak, so you can't design all sorts of controlled experiments to test it. The best you can do is make observations in the vicinity of readily available massive things like the Earth, the Sun or a black hole, which you have no control over. You can't make two black holes, smash them together and see what happens in the lab. A situation very different from that of quantum theory.
Physicists did expect to observe gravitational waves, and it wasn't a shocker to anyone. What makes it a very big deal for physicists is that we now have a whole new way of probing things we couldn't before, in particular things we don't yet understand, including the violations of general relativity we do expect to see.
We don't expect to see deviations in quantum theory (unless you bring a black hole nearby your quantum computer).
The theory of epicycles very accurately explained observed phenomena, and though the conditions for science at that time were very different, its popularity and accuracy were very much comparable to those of quantum theory.
In the Hipparchian and Ptolemaic systems of astronomy, the epicycle (from Ancient Greek: ἐπίκυκλος, literally upon the circle, meaning circle moving on another circle[1]) was a geometric model used to explain the variations in speed and direction of the apparent motion of the Moon, Sun, and planets. In particular it explained the apparent retrograde motion of the five planets known at the time. Secondarily, it also explained changes in the apparent distances of the planets from the Earth.
(...)
Epicycles worked very well and were highly accurate, because, as Fourier analysis later showed, any smooth curve can be approximated to arbitrary accuracy with a sufficient number of epicycles. However, they fell out of favour with the discovery that planetary motions were largely elliptical from a heliocentric frame of reference, which led to the discovery that gravity obeying a simple inverse square law could better explain all planetary motions.
So the epicycles worked very well to explain and predict observations, but for reasons irrelevant to what really caused the motion of the planets. It's still possible that quantum mechanics will fall in the same way.
Note I don't have a horse in this race. I have no opinion on whether quantum mechanics is right or wrong.
>Quantum mechanics is about 100 years old, and no violation has ever been observed in the laboratory, in particle accelerators or in outer space. Quantum theory is the most accurate theory we have ever had, tested to better than 1 in a billion precision.
Sorry, I think you're doing a sleight of hand here. Quantum computing relies not just on QM; it relies on the Copenhagen interpretation of it: superposition being a physical reality, not just a statistical description. That interpretation is tested by the Bell experiments, and granted, there have been a bunch of them which do look like they confirm Copenhagen.
>Even classical computers rely on it.
All that confirms is QM, not the Copenhagen interpretation.
Wrt. Google's supremacy demonstration: it would work the same in a statistical-ensemble interpretation too, thus not actually demonstrating anything about quantum computing.
No. An interpretation is just that. No matter what interpretation you use, the experimental measurement results are the same.
If they don't give the same results, it won't be interpretations: you'd have two competing theories and one of them will be wrong since it can be ruled out experimentally.
This is also why the majority of physicists don't care much about such philosophical aspects. You can argue that they should, and there are a few people working on foundations of quantum mechanics, but most physicists (including me) see it as semantics and choose to spend their time on practical physics. At least that's what my field (condensed matter physics) is about, which also encompasses the realization of these quantum computers. You can't change the conductivity of a material, or the measured charge state of a transmon qubit, by using a different interpretation.
>No matter what interpretation you use, the experimental measurement results are the same
Sorry, no. Bell experiments do produce different measurements for different interpretations, thus ruling some of them out. As it stands now, they seem to confirm Copenhagen for pretty much everyone.
Then I don't know which crackpot "interpretation" (that doesn't even agree with the experiments, unlike MWI etc) you are referring to, but you can rest assured that nothing in these experiments or condensed matter physics in general depend on it.
"Ensemble interpretations of quantum theory contend that the wave function describes an ensemble of identically prepared systems. They are thus in contrast to “orthodox” or “Copenhagen” interpretations, in which the wave function provides as complete a description as is possible of an individual system."
Bell experiments seem to almost everyone to rule it out. Unfortunately for me (and I really want to wholeheartedly jump on the magical bandwagon of superposition and quantum computing), I see gigantic holes in those experiments which let all those other, mutually compatible interpretations back in (local realism, pilot wave and ensemble) and actually pretty much rule Copenhagen out.
I don't think most critics are doubting quantum mechanics. The question is whether quantum computers can reasonably (as an engineering challenge, as a factor of cost, etc) be scaled and can be adapted to take on important real-world problems better than classical computer systems in practice.
We are getting more and more proof points and eliminating a lot of the doubt but this has not been shown yet.
Actually, we had people who claimed for about four decades that it is fundamentally impossible to have quantum speed up, basically equating it to a perpetual motion machine. We still have such famous people around (who aren't physicists, of course), now a loud minority, and "quantum supremacy" was coined because of/for them.
What you're describing is mainly the new generation of people who grew up hearing about quantum computers on the news about experimental realization of small-scale (a few qubits) quantum computers.
Eh, I haven't really heard them. I'm 40 and have followed this from near the beginning (starting with reading Science News in the late 80's-- even then the criticism was pretty muted).
I am not positive we are going to get quantum computers with error correction on boolean qubits that can do all the meaningful tasks we hope quantum computers can do. I think it's more likely than not, but it is not close to happening and may never happen. I am not even 100% certain (but it is very very likely) that it is physically realizable.
In my view, this current milestone is kind of contrived.
And even if we do, it's not clear what subset of tasks currently performed on classical computers will be superseded by quantum computation. That's perhaps one of the biggest problems: normal computing has had a whole lot of use cases to pay the research and capital costs.
Well, I'm envious that you didn't have to deal with those people.
I am involved on the theory side of the implementation of different kinds of solid-state qubits, so you may say I'm biased, but the question really isn't whether we will ever get there; the question is when. We have already had exponential growth in single-qubit coherence times in the past decade, we have very good entangling gates, and there isn't any fundamental reason why the number of qubits can't be increased. It's not like there is an invisible great barrier ahead of us, and nothing in the physics of these devices says we can't.
By the way, they aren't using quantum error correction methods right now, basically because it's not worth it: you need a lot of physical qubits to encode a high-quality logical qubit.
Right-- so if we need 10-1000 qubits per high-quality error corrected boolean-ish qubit and 20,000 error-corrected qubits to really tackle interesting real world problems, we need 3-5 orders of magnitude improvement.
The fact that we have exponential improvements to parts of the process is nice. But exponential improvement doesn't continue forever. Extrapolating from the current status 5 orders of magnitude out seems reckless.
I don't know what exactly your belief is rooted on (clearly not physics), but of course you're free to put your money wherever you want, just be aware that:
- Qubits are not boolean. Classical error correction does not work in quantum computers. You need quantum error correction.
- The exponential improvements are in single qubit coherence times, and a stagnation in those improvements doesn't prevent an increase in the number of qubits. (it'd be nice to have, though, because then eventually we wouldn't even need error correction)
- No, it doesn't take 1000 or even 100 physical qubits to have a high-fidelity qubit (it can actually be 1:1 with dynamical error correction), or 20000 error-corrected qubits to tackle real-world problems (Shor's algorithm isn't the only interesting thing out there; simulating some quantum systems requires far fewer qubits and is probably more interesting for people in the natural sciences)
- Even if we were 3-5 orders of magnitude away in # of qubits, it's still a matter of time, and there is still nothing fundamental in physics or materials science that prevents having millions of physical qubits.
- Now I feel like I'm dealing with one of those non-physicist people again who claim quantum computers are a pipe dream, only this time instead of a faulty physical argument, there's no real technical basis at all, so this is where I'd like to stop. Enjoy the rest of your day
Exact quote from the paper: "algorithm becomes exponentially more computationally expensive with increasing circuit depth". See also figure 4b, where circuit depth scaling is graphed.
That sentence actually reads "the Schrödinger–Feynman algorithm becomes exponentially more computationally expensive with increasing circuit depth", which is true: in a discrete setting, the number of paths in a path integral grows combinatorially. But you don't have to resort to path integrals to approximate the unitary in a "quick" way, which clearly doesn't scale well; in fact, if you avoid such "clever" tricks (which are only beneficial in a limited regime) and do it the naive way, it scales linearly. And it's not the only game you can play on a classical computer either, as IBM points out (though the upfront cost is much higher).
Figure 4b is about error estimation. They use XEB, which is exponentially faster than, say, doing full quantum process tomography, which is also true. That's the whole reasoning behind XEB: it gives far less information about the error channels, but you still get a fair estimate of the overall fidelity.
None of these have anything to do with the complexity of the actual computation done on the quantum computer though.
Indeed these don't have anything to do with quantum computers, but it does have something to do with quantum supremacy, because quantum supremacy is a claim about both quantum computers and classical computers.
Google chose an algorithm exponential in circuit depth as the best classical algorithm in order to establish quantum supremacy. IBM demonstrated (as you agree) it is in fact not the best classical algorithm. IBM is entirely correct to point this out.
It is a trivial fact that on a classical computer, you can simulate a quantum computer in time that grows linearly in circuit depth, in principle (as in the case of the "naive" way I mentioned above). No one doing quantum algorithms ever claimed otherwise, and claiming so is just a silly mistake.
Preskill coined the word "quantum supremacy", not Martinis. Even if someone from Martinis' lab misspoke, you can't pin it on Preskill.
Take Shor's algorithm, which has been the poster boy of quantum computing for decades. It gives exponential speed up in the number of qubits, not circuit depth.
See other examples here: https://quantumalgorithmzoo.org/ The complexity is in the number of qubits; "circuit depth" is a non-concept in quantum algorithms.
No one cares about circuit depth in quantum algorithms because, in principle, you can always reduce the circuit depth to 1 on a quantum computer: quantum computers aren't necessarily made of basic logic gates, and you can implement any unitary by, for example, smoothly pulsing the fields in the correct way in a single go.
"Depth" is a concept which usually comes into play when you try to measure the quality of the implementation of some given gate U. You repeat U many many times say M, and see how some measured quantity decays as a function of M, which gives a measure of the fidelity. But this has nothing to do with the quantum algorithm's complexity, it's just for "benchmarking" a physical implementation of the gate.
Thank you. I am not sure why people are talking about 2.5 days, since the key claim is "linearly scaling". If simulation is linearly scaling in this regime, the entire proof is indeed questionable.
I think IBM didn't elaborate because it is kind of obvious if you are into this, and if you are not, linearly scaling graph from 10 to 30 is more than good enough. But since we have niche audience here, let's elaborate.
Quantum supremacy is an asymptotic (big-O) claim. Then what is n? The answer is that there are two parameters, not one. In Google's paper, n is the number of qubits and m is the circuit depth. n is well known to be difficult to increase. m is not easy either, because if your qubits aren't stable enough, you can't run a deep circuit. What Google did is run n=53 and m=20.
Then, why do you need n=53 and m=20? After all, you could see whether it's exponential by, say, trying n and m from 10 to 15, it doesn't need to take days and years. The answer is that there is time-space tradeoff available and if your n is constant, exp(n) space (but still constant space, since n is constant) poly(m) time algorithm is available, and if your m is constant, exp(m) space (but still constant space, since m is constant) poly(n) time algorithm is available. So if you want to show exponential speedup, you need to be able to exclude these algorithms by increasing n or m enough such that exp(n) or exp(m) space is not realizable. Google chose n=53 such that exp(n) is not realizable, and ran scaling experiment on m.
This is what they mean when they say in the paper "Up to 43 qubits, we use a Schrödinger algorithm, which simulates the evolution of the full quantum state... Above this size, there is not enough random access memory (RAM) to store the quantum state". Now what IBM is saying becomes clear. There is not enough RAM, but there is enough disk, so exp(n) where n=53 is realizable, and the simulation runs linear in m. It's not a new algorithm; it's exactly the same algorithm Google ran up to n=43. So 43 qubits clearly can't demonstrate quantum supremacy. For the same reason, 53 qubits can't either.
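A quick back-of-the-envelope sketch of why 43 qubits was the RAM ceiling and why IBM reached for disk. The 16 bytes per amplitude is my assumption (double-precision complex; single precision would halve it):

```python
def state_vector_bytes(n_qubits):
    # A full Schrödinger-style state vector holds 2**n complex amplitudes,
    # at 16 bytes each for complex128.
    return (2 ** n_qubits) * 16

for n in (43, 53):
    print(f"{n} qubits: {state_vector_bytes(n) / 2**40:,.0f} TiB")
```

This gives roughly 128 TiB at 43 qubits, which fits in a large supercomputer's RAM, versus roughly 128 PiB at 53 qubits, which is why IBM proposes spilling the state to disk rather than changing the algorithm.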
Thanks for this clear explanation of the issue! This looks to me like a convincing rebuttal of the specific claim Google made about classical runtime at this particular size, but does it rule out Google claiming supremacy by using the problem constrained such that n == m? Or would Google's device have exponential runtime growth in that variant?
Thanks for the breakdown!
But so in other words, it’s still not all that far off that (realistic amounts of) disk storage can’t contain the state and scaling reverts to how they assert it.
But from the Google blog it sounds like the “problem” they chose boils down to “emulating a quantum computer”. Which doesn’t sound like it’s exactly in the spirit of the test, since the quantum computer can obviously emulate itself—and with zero wasted operations!
It's exactly the spirit of the test; the missing element is that the circuit they're emulating can scale in qubits. So the question is: if you add one qubit, what happens? Can classical computers keep up? The tests were run on a range of qubit counts, with 50+ being the largest, which is where the 10,000-year claim comes from. Anything beyond that just isn't feasible to compute on classical computers, because they don't scale like that.
So why is this in the spirit of the test? Because it means that there are some problems that can only be efficiently solved by quantum computers. So this establishes "supremacy" in the sense that while a quantum computer can efficiently solve any classical computing problem, a classical computer cannot efficiently solve every quantum computing problem.
The distance between having this proof-of-concept and having meaningful speedups on real problems that cannot be matched by classical computers is very large; all this tells you is that it's possible, and looking more might not be a waste.
> Disclosure: I work at Google, but hahaha, no, I'm not cool enough to work on this.
I often wonder why people feel compelled to write this. There's nothing in your profile or post history to prove unequivocally that you do or don't work anywhere in particular, so why even mention it?
Edit: responders are citing ethics and company policy; but does anyone really think vague, hand-wavey speculation on a public message board is relevant to anything?
Sure if you work in the ads group and you posted "wow, our division is in trouble, competitor Z is really killing it, and our quarter earnings are going to miss big time!" Yeah that matters, but GP? Really?
First, it’s a professional ethics requirement. If there is the possibility of a conflict of interest in our statement, we need to disclose it to avoid misleading anyone.
In addition, there are plenty of ways to figure out where an HN member is employed besides looking at their post history. It’s not necessarily found in publicly available data, but it can be found.
Also, you have to account for future acts. Disclosing early can help insure against accusations about misconduct if you were to out yourself — or someone were to out you — later.
Company policy. This prevents headlines like "Google had employees secretly astroturf to give traction to their quantum supremacy claims" if someone decides to truly dig. I theoretically could pretend I don't work here, but I find life is easier if I just follow reasonable rules.
(I did forget to mention I didn't speak for Google, but it should be pretty obvious from context)
I agree. Possibly to just give a bit more weight to what they say. So far, I've mostly seen this with Google employees here. Can't remember seeing anyone else mentioning this except when promoting a product.
1. Regarding policy, I think a simple disclosure like "this is my personal opinion, not that of my employer" would be good enough.
2. Indicate possible bias: nope, one should always understand any opinion is biased due to many different reasons. So, unless this is a rigorous analysis of something, this is quite uncalled for.
> According to IBM, "Quantum Supremacy" means accomplishing something useful to society that simply wouldn't be possible otherwise. Google's article doesn't show any sign of that, IBM claims.
I thought that the whole point was to show that what you'd done definitely wasn't just building a complex classical processor. The usefulness of the calculation is beside the point.
I'll just point out that it's 25 years since Deep Blue and 10 since Watson/Jeopardy. And boy, were a lot of claims made back then. Good ol' IBM has yet to deliver anything.
Regardless of what they say, the quantum computer solved it in 200 seconds, and IBM states that their supercomputer solves it in 2.5 days (Google says 10,000 years). Even if we take IBM's claim at face value, 200 seconds vs 2.5 DAYS is still a lot of improvement, semantics aside. Not to mention that quantum computers are just getting started.
They say linear in time for increasing circuit depth, not increasing qubits. They also don't address the classical algorithm being exponential in memory as well.
What if it is linear, but the slope is so steep that you'd need all the sand in the world to achieve parity with a classical computer? Does it still count?
Of course. Quantum supremacy is a claim about computational complexity. 200 seconds vs 2.5 days is 1000x slowdown. In practical terms, x^10 is even worse than 1000x (linear), but if quantum computer could be simulated in x^10, that would definitely disprove quantum supremacy.
I think in that case you would probably be able to formulate the problem in a different way to show that the angle arises through an exponential process (or more) and that the problem isn't truly linear.
Maybe but 1) that's hard to prove and 2) it's hard to achieve unless you're looking at some sub-problem where the quantum and classical algorithm actually do scale differently.
All of that is true, but according to IBM's own definition of "quantum advantage" I think Google at least achieved that.
All in all, it seems more of a battle of semantics, and the two companies talking past each other. It's kind of how everyone seems to interpret Moore's Law to mean "any progress in computational performance whatsoever" these days, as opposed to the original definition of "transistors doubling in the same space every 18-24 months," and then using that to "prove" that "Moore's Law is still alive." (it's not, and hasn't been for a long time).
I think in the end what's important here is that it was proven that a quantum computer can do "something" (yes, even anything at all is a big milestone) significantly faster than a classical system.
Because this is what matters in the end that quantum computers can do some tasks significantly faster than classical systems. They don't have to do only tasks that are "impossible" for classical systems to be useful. It's also why we use GPUs, FPGAs, and ASICs for some tasks, instead of just using CPUs for everything.
I do agree that it was in Google's interest not to work so hard on optimizing the classical system for the simulation, to make its quantum computer look that much more impressive. But if the quantum computer is still faster, then it doesn't really matter.
No, if your standard is "faster", even D-Wave achieved that. Quantum supremacy is about exponential speedup, and in light of IBM's claim, Google's claim of quantum supremacy is very much in doubt.
No, the classical simulation should give you the whole wavefunction, from which you can get the full probability distribution. One run should be enough.
Does it scale linearly? Is it 5 days on a half-as-good supercomputer? They just need to show that it can be done in any reasonable finite amount of time, not specifically 2.5 days.
The main bottleneck is memory/storage capacity, not speed. Beyond that (if you have the requisite storage capacity and bandwidth), yes: if you have half the FLOPS and double the time, you get the same result.
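As a rough illustration of that memory wall, a back-of-the-envelope sketch (assuming single-precision complex amplitudes, i.e. 8 bytes each; a double-precision simulator would need twice as much):

```python
# Back-of-the-envelope memory requirement for a full statevector
# simulation of an n-qubit circuit: 2**n complex amplitudes.
# Assumes 8 bytes per amplitude (single-precision complex).

def statevector_bytes(n_qubits, bytes_per_amplitude=8):
    """Bytes needed to store all 2**n_qubits amplitudes."""
    return (2 ** n_qubits) * bytes_per_amplitude

# Sycamore's 53 qubits: 2**53 * 8 bytes = 64 PiB of state alone,
# which is why storage capacity, not FLOPS, dominates.
print(statevector_bytes(53) / 2 ** 50, "PiB")  # 64.0 PiB
```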
Is this the case? I thought that parallel speedup is not linear, so it might actually not be 2x slower but maybe 1.5x or something, depending on IPC overhead and all that.
As far as I understand, the 2.5 days classical simulation gives you the total wavefunction, from which you can read the probability distribution. You only need to run it once. It's not clear to me whether the 200 seconds quantum computation is for getting (or rather sampling) the probability distribution, or for just one measurement. My point is, the classical simulation actually gives you much more data than the quantum computation -- you get the actual full probability distribution, not just an approximated sample of it (guess that's why IBM claims "higher fidelity").
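A minimal sketch of that difference, using a made-up 2-qubit statevector (the amplitudes are arbitrary, just chosen to be normalized):

```python
import random

# Made-up statevector for a 2-qubit circuit. A classical statevector
# simulation hands you all of these amplitudes at once, in one run.
amplitudes = {
    "00": 0.5 + 0.5j,
    "01": 0.5 + 0.0j,
    "10": 0.0 + 0.0j,
    "11": 0.0 - 0.5j,
}

# The exact probability distribution falls straight out: p(x) = |a(x)|^2.
probs = {bits: abs(a) ** 2 for bits, a in amplitudes.items()}
assert abs(sum(probs.values()) - 1.0) < 1e-9

# A quantum device, by contrast, yields one bitstring per run; you only
# ever see samples from this distribution, never the amplitudes. Once
# the full distribution is in hand classically, sampling is cheap:
samples = random.choices(list(probs), weights=list(probs.values()), k=1000)
```

So the classical run recovers the exact distribution once, while the device has to be run repeatedly just to estimate it.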
Quantum supremacy is about scaling (big-O notation) - taking a classically O(2^N) problem and making it polynomial. The empirical experiments are just to show that the scaling works, and the uncertainty comes from the fact that we still can't fit very large N problems into current quantum computers. Effectively, they don't have enough RAM to even load the problem (i.e. enough available quantum states).
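A toy cost model makes this explicit: naive statevector simulation does one pass over all 2^n amplitudes per gate layer, so it is linear in circuit depth but exponential in qubit count (which is exactly why linear-in-depth scaling is the trivial part):

```python
# Toy cost model for naive classical statevector simulation: each
# layer of gates makes one pass over all 2**n amplitudes, so total
# work is depth * 2**n (up to constants) -- linear in depth,
# exponential in qubit count.

def simulation_cost(n_qubits, depth):
    """Operation count for a naive classical simulation (up to constants)."""
    return depth * (2 ** n_qubits)

# Doubling the depth merely doubles the cost...
assert simulation_cost(50, 40) == 2 * simulation_cost(50, 20)
# ...but each extra qubit also doubles it, so +10 qubits = 1024x:
assert simulation_cost(60, 20) == 1024 * simulation_cost(50, 20)
```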
That's not quantum supremacy. It's about doing something on a quantum computer that would be infeasible on a standard computer. This is because in some scenarios a single calculation by a quantum computer could be equivalent to a vast number (many trillions) of calculations made by a standard computer.
They use different "algorithms", so solving a problem can be done in very different ways on a quantum and a classical computer. Comparing the number of operations per second does not make sense, because the advantage of quantum algorithms lies in the fact that they can achieve the same results with (exponentially) fewer operations.
We can keep moving the goalposts and I can rephrase my claim to:
> it's a state when a quantum computer can achieve results that would require more operations on a binary computer than a binary computer has proven to be able to do
but the point will still stand.
We can't say we achieved quantum supremacy for this one thing because binary computers still have supremacy over everything else.
I guess we can agree here that quantum supremacy was definitely not achieved since we are not clear on the definition of said quantum supremacy.
No, quantum supremacy is about quantum computers being better at one thing. Then there will be a reason to use them. It's like making a screwdriver when you already have a hammer. Sometimes a hammer and nails are better, and sometimes a screwdriver and screws are better. It's about picking the right tool for the job.
It isn't about them being faster than a classical computer in absolute terms, but rather about scaling. Quantum computing is attractive because we expect some operations to scale much better with problem size than they do classically, e.g. if for a task the quantum algorithm is O(n) while the classical version is O(n!).
If you use Windows, I suggest NVDA - it's a free screen reader that's similar to the most-popular-but-expensive one (JAWS):
https://www.nvaccess.org/download/
IMHO the best screen readers are on your phone / tablet. This might sound crazy (how can a touchscreen work when you can't see it?) but they're much better designed than their desktop equivalents. Mobile software tends to be simpler, resulting in a better experience:
It’s not a substitute for real-world testing, but it should help you appreciate the basics without having to learn a full screen reader.
We’re also working on some accessibility games right now (scores of challenges for you to complete in a browser with a simulated disability), which sound a lot like what you’re suggesting.
Pathological myopia can't be corrected with glasses.
TBH we call it "myopia" in the plugin because it's short and what we think most users would understand. Many visual disabilities cause a similar blurring effect, such as cataracts.
This is great. Until now I've been using ChromeVox, but found it quite difficult to use. Many esoteric keyboard shortcuts, some of which conflict with other browser functionality.
The toolbar seems a lot easier to use for sighted developers. Less of a learning curve.
Oddly, M doesn't seem to jump to the main content for me. I wonder if I've done something wrong. I'm going to play with it.
You never want laws to be ambiguously applied and to presume exemption because "it wouldn't happen to me". People will abuse laws: there will be spam scams aplenty selling snake-oil solutions, and ambulance chasers threatening to sue companies because they can. Excess legislation comes at a cost.
I am in favour of better privacy but I fail to see how counting unique visitors on a blog should become a crime. By setting the standard that everyone is violating the law, you encourage everyone to ignore your law.
Author here. What’s new is the ICO has changed their guidance: they had explicitly stated analytics was acceptable in the past. The law itself hasn’t changed, but what most people thought was compliant is now unambiguously not.
https://insites.com/ crawls your website in a cloud-based Chrome for both mobile and desktop, so you can check spelling, broken links, JS errors, layout etc.
It really sucks that I have to provide my domain name, click "test" expecting an actual test, and then get faced with a sign-up form. Too bad there are so many startups embracing this irritating "growth" hack. I'm immediately closing this site and moving on.
Things that are popular are incendiary to those who don't like them.
If an obscure artist makes some music you don't like, nobody cares. But if that artist sells a million records, you'll see an online explosion of rage.
It seems we're wired to react when our tastes don't match those of the groups we hang around in.
Author here. Originally I had a lot of blather in that section, but given:
1. It's a long post
2. Most of my readers are presumably >15 yrs old
3. I honestly think most people up to age 15 have limited control over their lives
I decided to trim it down. Obviously I am not literally saying that a 14-year-11-month-old has no influence over their destiny while a 15-year-old does. But superfluous precision is the enemy of effective writing; you end up with a legal document.
That's because you're playing it on Creative mode, though. It starts you out as a white dude born in the USA and also turns off the mobs. Things get a lot more interesting in Survival mode, where you could be born in a slum and start seeing mobs like "Birth Complications" right out of the spawn zone.
Thanks! I've not really posted much on HN before, although I've had a few of my posts turn up in here by themselves, and I thought it would be worth trying myself.
I think some crossing [of fingers] might be in order. HN has a weird hatred of Quora, so if anyone ever finds out how internet-famous you are there, there could be trouble!
Vibe coded in 3.5 hrs (content was older, natch); wrote a WYSIWYG, block-based CMS from scratch to power it.