
I am trying to do something similar but inside a GUI - https://lilicad.com/ This is still very basic. Also 100% vibe coded and therefore not as robust as I would like it to be.


Eagerly awaiting a demo video


The Nexus font is too big on the back of the phone.


Does anyone know, or can anyone explain simply, why we can get supercomputer-like power from 100 qubits but not desktop-computer-like power from 1 or 2 qubits? I assume that the computational capacity increases exponentially with the number of qubits, but how?


Well, we don't really know if it's exactly exponential; we also don't really know that a quantum computer is more powerful than a classical one (and we don't know whether P == NP, which is related).

That said, yes, the observed speedup on some [1] problems is exponential. It's easy to wave hands and say that when you place the qubits in a coherent state together, the operations you do on them apply to all of the values they can possibly carry, and those grow exponentially with the number of bits. But the long explanation makes things much clearer, and is worth reading if you want to really get it. Wikipedia is a good starting point [2][3].

[1] Not all by a wide margin. Quantum computers will be specialized machines for a long time, if not forever. But cryptography will take a huge hit.

[2] http://en.wikipedia.org/wiki/Quantum_computer

[3] http://en.wikipedia.org/wiki/Shor%27s_algorithm


A quantum computer isn't really much like a super-powered classical computer. There are certain specific things it's amazingly good at (most notably, factorizing large integers), some other things it's quite good at (e.g., some kinds of search operation), and lots of things for which no one knows how to make quantum computers any faster than classical ones.

So, if you have a quantum computer, you're probably going to want to use it for breaking RSA or similar cryptography by factorizing large numbers.

There are (ordinary, non-quantum) algorithms for factorizing large-ish numbers. They work pretty well for numbers of modest size, but they don't scale well as the numbers get very large: something like exp((log n)^(1/3) (log log n)^(2/3)) to factorize a number near n, or, in terms of the number of bits b, something like exp(b^(1/3) (log b)^(2/3)).

So this works fine for numbers up to, let's say, 200 bits on a single machine or 1000 bits for a team with a substantial network. (Note: all numbers here are kinda made up but of the right order of magnitude.)

With a quantum computer, you can factorize a b-bit number with something like 2b qubits, b^3 quantum gates, and a runtime that scales more manageably with b.
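
To make that contrast concrete, here's a rough back-of-the-envelope sketch in Python (mine, with a made-up constant in the classical formula, in keeping with the order-of-magnitude spirit of the numbers above) comparing the classical cost exp(b^(1/3) (log b)^(2/3)) against a quantum gate count of roughly b^3:

    import math

    def classical_cost(b, c=1.9):
        # GNFS-style scaling for a b-bit number; c is a fudge factor
        return math.exp(c * b ** (1 / 3) * math.log(b) ** (2 / 3))

    def quantum_gates(b):
        # Shor-style gate count, roughly b^3
        return b ** 3

    for b in (50, 200, 1000, 2048):
        print(b, f"{classical_cost(b):.2e}", f"{quantum_gates(b):.2e}")

The classical column blows up super-polynomially while the quantum column stays polynomial, which is the whole story behind the crossover described below.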

So there's not much point in using a quantum computer until the number of qubits it has gets large enough to use for factorizing numbers bigger than you can do on your PC. Which means somewhere on the order of 100 qubits. As the number goes up, the advantage of the quantum computer over the classical computer increases. The transition from "slower than your PC" to "faster than any classical computer that will ever be built" takes place over a fairly small range of qubit-count.

With much smaller numbers of qubits, though, the quantum computer can maybe manage to find the factors of, say, 15. Which is, shall we say, quite a lot less than the computer on your desktop can do.


I'm confused: you say you could factor a b-bit number with "something like 2b qubits", but then go on to say you'd only need 100 qubits to factor bigger numbers than a PC can.

Factoring a 50-bit number is not that big a deal. I used my Python command-line calculator program (all the hard stuff is done by mpmath, pyprimes and other open-source code I didn't write) to factor a 50-bit number just now:

    c:\>rpn -t 2 50 7 + factor
    [ 37, 30429727211963 ]
    29.569 seconds

I'm sure Mathematica is much faster.

It seems to me several hundred qubits would be necessary to outperform classical algorithms on commodity hardware.


Why on earth would it take 30 seconds to factor a random 50-bit number? You just check all numbers between 2 and 2^25 (about 32 million). This should take less than a second, maybe two, in Python.
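
For what it's worth, a minimal trial-division sketch (mine, not the parent's rpn tool) reproduces the factorization above more or less instantly, since the smallest factor here happens to be tiny:

    import math

    def smallest_factor(n):
        # Check candidate divisors up to sqrt(n): at most ~2^25 of them for a 50-bit n
        if n % 2 == 0:
            return 2
        for d in range(3, math.isqrt(n) + 1, 2):
            if n % d == 0:
                return d
        return n  # n is prime

    n = 2 ** 50 + 7
    p = smallest_factor(n)
    print(p, n // p)  # 37 30429727211963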

The parent poster was likely thinking of a 100-bit number (200 qubits), which, while technically tractable, takes quite a bit of computing power (assuming that we don't end up with huge constant factors on quantum computers).


As I said, all the concrete numbers in what I wrote are crude order-of-magnitude approximations. The original question was whether a quantum computer with just a few qubits would be comparable in power to a conventional desktop computer; the answer is a clear no whether the point at which quantum computers become genuinely useful is at (say) 100 or 1000 qubits.


With 1 or 2 qubits you wouldn't even be able to compute 2 + 2 because storing 4 in binary requires 3 bits. 100 qubits is also not particularly useful, not because they aren't fast but because most of the interesting stuff simply doesn't fit. RSA keys are thousands of bits long, for example.

You need enough qubits to store the problem, and the working state of the algorithm you're using, and the overhead introduced by error correcting codes, and the overhead from your gates not being ideal, and so on. Austin Fowler gave a talk a few months ago where he goes through the overheads in a particular problem (finding molecular ground state energies) and ends up optimistically requiring tens of millions of qubits [1].

1: https://www.youtube.com/watch?v=XbFoXT73xVQ&feature=youtu.be...


So, the deal is that quantum computers don't do everything for you. They can do some things faster than any classical algorithm, but they're not known to, say, do NP-complete problems in polynomial time. (Friendly reminder: we don't even know whether classical computers can do that or not.)

Supposing that you've got, say, three qubits which can each be 0 or 1, a classical computer can be in one of eight states 000, 001, ..., 110, 111, and we can easily store one of those numbers in memory. For a quantum computer, we instead have to store 16 doubles (8 complex numbers) in memory, because it is allowed to be in any state:

    |S> = a0 |000> + a1 |001> + a2 |010> + ... + a7 |111>
such that the complex numbers a0, a1, ..., a7 square-sum to 1. In other words, for n qubits you generally need to store 2^n complex numbers.
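
As a tiny illustration (my own numpy sketch, not anything from the thread), here is what a simulator has to store for n = 3:

    import numpy as np

    n = 3
    # 2^n complex amplitudes, normalized so the squared magnitudes sum to 1
    amps = np.random.randn(2 ** n) + 1j * np.random.randn(2 ** n)
    amps /= np.linalg.norm(amps)

    print(amps.shape)                                  # (8,)
    print(np.isclose(np.sum(np.abs(amps) ** 2), 1.0))  # True

    # Measuring in the computational basis gives outcome k with probability |a_k|^2
    probs = np.abs(amps) ** 2
    outcome = np.random.choice(2 ** n, p=probs)
    print(format(int(outcome), "03b"))                 # e.g. '010'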

There are rough-and-ready ways to get around some of this in simulation with a sort of 'lossy compression' approach. It goes something like this: you form the "state matrix" (basically, take your row vector and form an outer product with its conjugate transpose), diagonalize it, and keep only the eigenvectors for the largest eigenvalues. If your system is simple enough, you might be able to reduce the bulk of the calculation to "what's happening to these eigenvalues?" and leave out any explicit further representation of the vectors themselves.
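
Here's a rough sketch of that recipe (again mine, not the commenter's code). Note that for a single pure state the resulting matrix has rank one, so the truncation only starts paying off for mixed or reduced (subsystem) states:

    import numpy as np

    n = 3
    psi = np.random.randn(2 ** n) + 1j * np.random.randn(2 ** n)
    psi /= np.linalg.norm(psi)

    rho = np.outer(psi, psi.conj())      # "state matrix": outer product with the conjugate transpose
    evals, evecs = np.linalg.eigh(rho)   # diagonalize

    k = 2                                # keep only the k largest eigenvalues
    top = np.argsort(evals)[::-1][:k]
    rho_trunc = sum(evals[i] * np.outer(evecs[:, i], evecs[:, i].conj()) for i in top)

    print(np.allclose(rho, rho_trunc))   # True here, since a pure state has rank one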

Now, here's where this gets interesting: the world is quantum, especially when we're talking about molecular chemistry. So, we have these chemicals which obey quantum mechanics and they have this huge quantum state which is cumbersome to compute with. But with a quantum computer, we can just program those rules and that behavior into the computer and see what it does with the physics that our classical computers can only barely simulate. So that's why these quantum computers, when they're doing what they're good at, beat out our supercomputers.

But why can't they beat out our supercomputers across the board? Because that's just not what they do, and for the most part these 2^n complex numbers which tell us the quantum state are not directly observable.

Let me put it like this: at the end of the day, when you measure the |S> above in the computational basis, you will get 3 classical bits of information and the quantum state will be destroyed: you measured, you found |010>, and that is the state of the system now. Prepare millions of these and you will find some statistics for |000>, |001>, |010>, and so on, and those statistics will give you a few decimal places for |a0|, |a1|, |a2|, ..., with no information about the phase factors of the complex numbers involved. It's not terribly enlightening in itself.
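
A quick illustration of that point (my sketch, not the commenter's): sampling a million measurements estimates the magnitudes |a_k| reasonably well, but the phases are simply gone.

    import numpy as np

    # a1 carries a phase (it's 0.8j rather than 0.8); measurement statistics can't see that
    amps = np.array([0.6, 0.8j, 0, 0, 0, 0, 0, 0])
    probs = np.abs(amps) ** 2
    probs /= probs.sum()                  # guard against floating-point drift

    shots = 1_000_000
    samples = np.random.choice(len(probs), size=shots, p=probs)
    counts = np.bincount(samples, minlength=len(probs))
    print(np.sqrt(counts / shots))        # ~[0.6, 0.8, 0, ...]; the phase of a1 is unrecoverable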

But then there are some generic problems, like "what is the only n-bit input to this set of logic gates which makes the result bit come out 1?", where you can do better. The answer to that is only n bits, but our only classical algorithm searches through all 2^n settings of those bits to find it. It turns out that if you can make your logic gates act their normal way on a quantum state, you can pass one of these a-little-of-everything superpositions as input, then do a little magic to the input bits based on the resulting output bit, some 'yadda yadda' about a quantum Fourier transform, and poof: when you measure the bits you will, with high probability, get the bits which make the logic gates output 1. That's with only one query actually sent to the quantum versions of those logic gates. You peek inside it once, and the resulting state knows enough about how it was created that you can extract something interesting about the function.

So for some algorithms like factoring integers and searching for special inputs to a process, you really do get "exponential speedups" from quantum computers. However, those algorithms are really special-purpose stuff; quantum computers are not known to make arbitrary computations faster. They will make some specific computations, like chemistry calculations, faster.

With four qubits we have been able to factor the 4-bit number 15 into 3 * 5. That's the biggest that I know we've built, although some companies (like D-Wave) claim that they have built bigger systems under a different model of quantum computation (and hence can't do the factoring algorithm straightforwardly). The problem right now is that you need these big experimental apparatuses to cool, manipulate, and read out values from these quantum computers, and while you're doing those tests the clock is ticking -- states don't stay 'coherent' for more than a fraction of a second, after which they act pretty classical. There's just not much we can do with these systems until we get them to be always-online and stuff.

Finally, 1 and 2 qubits don't really show off all of the diversity of quantum mechanics; for one thought-experiment that I like to use to show people the strangeness of quantum mechanics ("Betrayal"), I determined that you needed three people playing the game together, no less, to really get the paradoxical result.


Around 6 months back, when there were hardly any books on AngularJS and related JavaScript technologies (Jasmine, Karma, etc.), I got a running start on my project after going through several courses on AngularJS and JavaScript on Pluralsight. Writing a book takes far more time than creating a video tutorial, and I think that in this rapidly developing programming landscape Pluralsight provides the right edge when it comes to learning new libraries, frameworks and techniques.


We know next to nothing about the historical architecture and culture of tropical countries because they made houses out of bricks and straw, and wrote on clay tablets. Historians lament this fact, and descendants like us have little to lean on and build upon culturally.


High-DPI support in Ubuntu 14.04 is pretty good and it works great on the retina MacBook Pros.


I installed 14.04 yesterday and was unable to get retina support. How do you enable it?


There's a slider in the display settings.


OT, but I think there is a place for a new programming language designed around making testing and TDD easy. Maybe the test code could be written in a higher-level DSL or dialect of the same language.


I love how the rounded window corners are now (finally!) properly anti-aliased [1].

[1] http://www.webupd8.org/2014/01/unity-7-to-get-new-window-dec...


Now if only they had a better way to design UIs than using clunky XAML. WinJS + CSS I think is a little better, but still not comparable to what iOS or even Android has.


Can you elaborate on what you find ugly in XAML? I'm using WinJS right now, but in my opinion XAML lets you be really fast at UI design and data binding. Used in combination with Blend (and I know too many people who are totally unaware that that software even exists), it's a win-win.


After building UIs for the two major platforms, I love using the layout and binding engine that MS brought to devices. I have a background in ASP.NET, which is only so far removed from XAML, so it was a very small step to get up to speed; it helps me be more productive than the model Android uses and is definitely less annoying than the tooling available for iOS.


What does Blend offer over the tooling in Visual Studio?


Similar workflows and designer support to the Flash and Director tooling for designing UIs, coupled with code-editing support.


It has a very polished UI designed to focus on UI/UX design, with some advanced tools specific to that.

It is a fantastic experience for a designer compared to what you get using only Visual Studio.


It might be verbose, but at least it is logical and well suited for GUIs, as opposed to HTML+CSS, which were designed to produce linked documents that may occasionally contain dancing monkeys (JS) and now need to be patched with frameworks such as Angular and Bootstrap.

PS: Apple and Google don't care about the developer experience as much as Microsoft does; they just care about the customer experience.


Android? I don't know what iOS uses for UI building, but you must be joking if you're saying that Android, with its XML layout files, is any good.


At least they are way better than using a document format (HTML) to write applications.


You can't be serious. Android's layout XML somehow manages to combine the general awkwardness of HTML+CSS with the overbearing verbosity of enterprise XML formats.

It's probably because Android's format was originally designed for simple keypad-operated mobile layouts, and then was painfully extended for touchscreen applications.


Loads of HTML5 applications around the web say otherwise. While the HTML approach is indeed rather awkward for building application UIs, its flexibility more than makes up for it. Compared to Android XML layout files, I still prefer HTML.


And what do iOS and even Android have?


I am a big fan of Dropbox. I have tried other cloud storage/sync providers at various points in time and had sour experiences - Box: couldn't figure out how to sync; SkyDrive/OneDrive: refused to sync; Google Drive: was so slow that I tried to speed it up by copying over existing data from another machine, and it created two copies of everything! In over 3 years I haven't faced a single problem with Dropbox.

Selling to the enterprise is a very different thing, though. Quality comes way down the priority list. The bureaucratic strongholds in enterprises require compromised quality at every level in order to maintain their fiefdoms. I know Dropbox needs to make more money, but maybe this is not the right direction in which to probe?

