Hacker News | new | past | comments | ask | show | jobs | submit | causticcup's comments | login

So many top-level comments and replies here are so laughably wrong.

Pro tip: if you want technical info on research-related topics, don't ask HN. Tech bros can't admit to themselves that they don't know something, so everyone will give their "take" on the subject at hand.


>that basically means people that get attention from jumping on someone else's creative bandwagon.

That isn't what the term "Andy" means in the Twitch world. It's just a way to refer to the style of content someone makes on Twitch. For example, a "React Andy" is someone who makes reaction content, a "1k Andy" is someone stuck at 1k average viewership, and a "Crypto Andy" is someone obsessed with crypto.


Almost everything stated here is simply wrong or misinformed.

>For example, I went to an AI talk about 5 years ago where the guy said that any of a dozen algorithms like K-Nearest Neighbor, K-Means Clustering, Simulated Annealing, Neural Nets, Genetic Algorithms, etc can all be adapted to any use case. They just have different strengths and weaknesses. At that time, all that really mattered was how the data was prepared.

How do you suppose KNN is going to generate photorealistic images? I don't understand the question here.

>I guess fundamentally my question is, when will AGI start to become prevalent, rather than these special-purpose tools like GPT-3 and Dall-E 2?

Actual AGI research is basically non-existent, and GPT-3/Dall-E 2 are not AGI-level tools.

>Personally I give it less than 10 years of actual work, maybe less

Lol...

>I just mean that to me, Dall-E 2 is already orders of magnitude more complex than what's required to run a basic automaton to free humans from labor.

Categorically incorrect


I appreciate your sentiment but can't agree with it. What I mean is, if I had the resources to not have to work for 10 years, I give myself greater than a 50% chance of building an AGI. So I don't understand why the world is taking so long to do it.

The flip side is that these narrow use cases progressed so quickly that we have to worry about stuff like deep fakes now.

Something's not right here.

As a programmer, I feel that what went wrong is that we invested too much in profit-driven endeavors, basically whatever is mainstream. To be blunt, the academic side of me doesn't care about use cases. I care about theory, formalism, abstraction, reproducibility: basically the scientific method. From that perspective, all AI is equivalent. It takes input, searches a giant solution space using its learned context as clues, and returns the closest solution it can find in the time given. It's an executable piping data around. The rest is hand waving.

And given that, the stuff that AI is doing now is orders of magnitude more complex than running a Roomba. But a robot vacuum actually helps people.

To answer your question, a KNN could solve this if the user reshapes the image data into a different coordinate system where the data can be partitioned (all inference comes down to partitioning):

https://en.wikipedia.org/wiki/Change_of_basis
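A minimal numpy sketch of that idea (the toy dataset and helper names here are mine, purely for illustration): two concentric rings are awkward for a nearest-neighbor classifier given only a handful of Cartesian training points, but after changing coordinates to the radius, the classes occupy disjoint intervals and even 1-NN partitions them cleanly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two concentric rings (class 0 = inner, class 1 = outer).
def make_rings(n):
    theta = rng.uniform(0, 2 * np.pi, n)
    r = np.concatenate([rng.normal(1.0, 0.1, n // 2),       # inner ring
                        rng.normal(3.0, 0.1, n - n // 2)])  # outer ring
    X = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
    y = np.concatenate([np.zeros(n // 2), np.ones(n - n // 2)])
    return X, y

def knn_predict(X_train, y_train, X_test, k=1):
    # Brute-force k-nearest-neighbor majority vote.
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return (y_train[idx].mean(axis=1) > 0.5).astype(float)

X_train, y_train = make_rings(8)     # deliberately tiny training set
X_test, y_test = make_rings(200)

# Change of basis: Cartesian (x, y) -> radius. In this coordinate the
# two classes occupy disjoint intervals, so 1-NN partitions them.
to_radius = lambda X: np.linalg.norm(X, axis=1, keepdims=True)

acc_cartesian = (knn_predict(X_train, y_train, X_test) == y_test).mean()
acc_radial = (knn_predict(to_radius(X_train), y_train,
                          to_radius(X_test)) == y_test).mean()
print(f"1-NN accuracy, Cartesian: {acc_cartesian:.2f}, radial: {acc_radial:.2f}")
```

The transform is the whole trick: KNN itself never changes, only the coordinate system it measures distances in.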

Tensors are about reshaping data into a coordinate system where relationships become obvious, like going from rectangular to polar coordinates, or using a Fourier transform:

https://en.wikipedia.org/wiki/Tensor
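To illustrate the Fourier case with a hedged sketch (the signal parameters are my own invention): a 50 Hz sinusoid buried in noise is hard to read off sample-by-sample, but in the Fourier basis the same relationship collapses to a single dominant coefficient.

```python
import numpy as np

# A noisy 50 Hz sinusoid sampled at 1 kHz for one second. In the time
# domain the periodicity is obscured by noise; in the Fourier basis it
# shows up as one dominant spectral bin.
fs = 1000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(fs)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(fs, d=1 / fs)

peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f"dominant frequency: {peak:.0f} Hz")
```

Same data, different basis: the structure was always there, the coordinates just made it obvious.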

My frustration with all of this is the same one I have with physics or any other evolving discipline. The lingo obfuscates the fundamental abstractions, creating artificial barriers to entry.

Edit: I should add a disclaimer here that my friend and I worked on a video game for like 11 years. I'm no expert in AI, I'm just acutely sensitive to how the realities of the workaday world waste immeasurable potential at scale.

