Hacker News: chronolitus's comments

10 years? These days, ten years feels like an absurdly long time into the future.


“Attention Is All You Need” was published 9 years ago, and GPT-3.5 burst onto the scene 4 years ago.

Progress is slower than people seem to think. Of course, AI as a field is half a century old.

But on the other hand, AI had a period of rapid acceleration 40-ish years ago and was then hit by an AI winter. We might hit that winter again in a year and all predictions made today are off the table.


I had been working on a 4th-dimension renderer project when, yesterday, I saw the Hacker News post where Gemini predicts the front page 10 years from now [1], and one title was "Visualizing the 5th dimension with WebGPU 2.0".

So I figured, what the heck, might as well make the implied previous article real, and fulfill our collective destinies.

[1] https://news.ycombinator.com/item?id=46205632


"Visualizing the 5th dimension with WebGPU 2.0"

I feel so seen (my last post to HN was literally about visualizing the 4th dimension with three.js, and now I'm working on the WebGPU version).

https://dugas.ch/funderstanding/4d_camera.html


More out of genuine curiosity than as a gotcha: a lot of comments are saying this was the wrong choice. I'd find it really interesting to hear who, in your opinion, the nomination should have gone to instead.


They are not under an obligation to award the prize. They could have just said "sorry, we can't think of anyone."


For those like me who are curious what it looks like: https://www.designboom.com/technology/exolung-underwater-air...


Using a transparent container may be a good idea: it gives you some indication when you are sucking only water.

Because I am a fool, I tried to make my own, but gave up because breakages were indicated only by a lung full of water, which tends to be fatal.


Adding one data point here: I made several projects with Shotcut and am glad it exists, but I had to learn to expect a crash and work defensively; that's how often it happened. The most common crash was while moving clips around: the application died, the window closed, and the work was lost.


I recently used Shotcut and it did crash a few times, but each time it still had my changes after reopening.

Despite not being used to the concepts of video editing, I found it easy to settle on a workflow that worked for me (cutting out mistakes and stitching the remaining ends together).


Have you tried Kdenlive recently? It used to get a lot of flak back in the day, but over the last year it has improved, with lots and lots of bug fixes.


But since it is somewhat like a random walk, isn't it guaranteed that if you go on for infinitely many tosses, at some point you will be very rich? [1]

[1] https://math.stackexchange.com/a/493446


It's exactly like a random walk... with downward drift. If you take the log (base 10) of the wealth, the steps are log(1.5) ≈ 0.18 and log(0.6) ≈ -0.22. So at each step, there's a 50% chance you go up 0.18, and a 50% chance you go down 0.22.

Random walks with downward drift don't inevitably climb arbitrarily high the way driftless ones do.
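The drift is easy to check empirically. Here's a quick Monte Carlo sketch of the 1.5x/0.6x coin game (the number of runs and tosses are arbitrary choices) that estimates the per-toss drift of log10(wealth):

```python
import math
import random

random.seed(0)

def play(tosses, wealth=1.0):
    """One run of the game: each toss multiplies wealth by 1.5 or 0.6."""
    for _ in range(tosses):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

# Average per-toss drift of log10(wealth), estimated over 200 runs.
tosses = 1000
drift = sum(math.log10(play(tosses)) for _ in range(200)) / (200 * tosses)
print(drift)  # ≈ 0.5*0.18 + 0.5*(-0.22) ≈ -0.02, i.e. downward
```

The estimate lands near the theoretical value 0.5·log10(1.5) + 0.5·log10(0.6) ≈ -0.023 per toss, so the typical path shrinks even though each toss raises the *expected* wealth by 5%.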


If so, an interesting question is: on average, how many tosses are needed for you to reach some arbitrary value (e.g. $1M)?
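One caveat: with downward drift, most paths never reach a high target at all, so the expected number of tosses to get there is infinite. The more meaningful quantity is the probability of ever reaching the target, which a rough Monte Carlo sketch can estimate (the target, path count, and cutoffs below are arbitrary choices; I use $1,000 instead of $1M just to keep the simulation fast):

```python
import random

random.seed(0)

def ever_reaches(target, max_tosses=5000, floor=1e-9):
    """Does one path starting from $1 ever hit `target`? Stop early once
    wealth falls below `floor`, where a comeback is effectively impossible."""
    wealth = 1.0
    for _ in range(max_tosses):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
        if wealth >= target:
            return True
        if wealth < floor:
            return False
    return False

paths = 10_000
hits = sum(ever_reaches(1_000) for _ in range(paths))
# Only a small fraction of paths ever reach $1,000; the rest drift
# downward forever, which is why the average hitting time diverges.
print(hits / paths)
```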


You put your finger on exactly what I find incredible about the recent progress in ML. The reason I wrote this post was to see how much I could de-mystify these state-of-the-art models for myself, and the conclusion is that (after the model is trained) it all really boils down to a couple of matrix multiplications! All the impressive results we see are not coming from an extremely complicated system ('complicated' the way a fighter jet is, with many different subsystems you'd need to read many books to memorize).

Of course, there's all the secret sauce to actually getting the models to learn anything, and all the empirical tricks that make training more efficient (ReLUs, etc.). But how many of those are fundamental, versus mere efficiency shortcuts? And if you'd asked me 10 years ago what I thought it would take to get the kind of output these large models produce today, I would not have guessed anything nearly as simple as what these models actually are.
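To make the "just matrix multiplications" point concrete, here is a toy single-head self-attention forward pass in NumPy (a sketch, not any particular model's implementation; the sizes and weights are random toy values):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy single-head self-attention: once trained, the core of a transformer
# block really is a handful of matrix multiplications plus a softmax.
seq_len, d_model = 4, 8
x = rng.normal(size=(seq_len, d_model))             # token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

Q, K, V = x @ Wq, x @ Wk, x @ Wv                    # three matmuls
weights = softmax(Q @ K.T / np.sqrt(d_model))       # one more, then softmax
out = weights @ V                                   # and a final matmul

print(out.shape)  # (4, 8): one updated vector per token
```

Everything else in a full model (layer norms, MLP blocks, stacking) follows the same pattern of simple tensor ops applied many times.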


Good point, will do!


Hi! I'm the author (Daniel). I used OneNote on an old Surface tablet I had lying around, but these days I'm not sure I would use it again (for example, because it doesn't support exporting parts of a page to .svg).


Thanks, yes, that does sound like an unfortunate limitation, but I wanted to say your graphics look really fantastic! I really enjoyed reading this.

