Hacker News | NateEag's comments

There are two things you can take with you into a classroom:

* laptop

* smartphone


None of that answers the question:

How will this technology be good for humanity as a whole?


I answered the more important question of a seemingly lost youngster: how to deal with the stress of inheriting a world in a bit of turmoil.

That said, trivially, we already see it advancing math and science research as an assistive tool, software development, and more. Extrapolate it out a few more generations and it helps us unlock a whole bunch of things on the skill tree of life, so to speak.


Like everything genAI, it was amazing yet surprisingly crappy.

Yes, the bear is definitely dancing.

But a few feet away there's a world-class step dancer doing intricate rhythms they've perfected over twenty years of hard work.

The bear's kind of shuffling along to the beat like a stoner in a club.

It's amazing it can do it at all... but the resulting compiler is not actually good enough to be worth using.


>It's amazing it can do it at all... but the resulting compiler is not actually good enough to be worth using.

No one has made that assertion; however, the fact that it can create a functioning C compiler with minimal oversight is the impressive part, and it shows a path to autonomous GenAI use in software development.


OK, but don't you see where this is going? The trajectory that we're on?

Please don't post LLM output and pretend it's your writing.

I never thought I'd long for the days when people posted "$LLM says" comments, but at least those were honest.


No, you should make it your goal to teach AndrewKemendo to appreciate his existence as the inscrutable gift it is, and to spend his brief time in this universe helping others appreciate the great gift they've been given and using it to the fullest.

See how it works?


AndrewKemendo (based on his personal website) looks to be older than me. If he hasn't figured out the miracle of getting to exist yet, unfortunately I don't think he's going to.


Only looked at his website because it was mentioned and wow. This is not quite at timecube levels but it’s closer to timecube than it is to coherence.

The man seems unwell if, based on his other comments, he has kids and is still talking about “civilization suicide” and “obviating humans.”


So why are you wasting your time being a miracle on anything other than building the successor to us?


Because I don't believe humans need to be succeeded by machines? You're obviously a Curtis Yarvin / Nick Land megafan. I'm of the opinion that these people are psychopaths, and I think most people would agree with my sentiment.


I don’t know either of those people

I know Ray Kurzweil and Ben Goertzel though so maybe start there


I'm familiar with Ray Kurzweil. He's a Luciferian and transhumanist. You're obviously also a Luciferian, since you are so gung-ho about transhumanism, but I suppose you're probably in good company on HN. There are lots of deranged people on this website.


Man I sure am learning a lot about myself from you

Thank you Internet person for all of this wisdom about my life that I didn’t have


Lucifer is an archetype. Transhumanists are all about one-upping God, which is exactly what Lucifer was all about. If you're a Kurzweil devotee, then you're a Luciferian whether you know it / want to admit it or not.

I think the person you're trying to berate is having a good chuckle at your expense.

Cheers :)


Ignorance is bliss. Cheers :)

I’m a father of three. I already know all about that; there’s nothing you’re gonna teach me there. I’m fully integrated.


Somebody probably ought to take your kids away from you if they haven’t already


Well go ahead and try


I think they meant "Nobody knows why LLMs work."


Same thing? The how is not explainable. This is just pedantic. Nobody understands LLMs.


Because they encode statistical properties of the training corpus. You might not know why they work but plenty of people know why they work & understand the mechanics of approximating probability distributions w/ parametrized functions to sell it as a panacea for stupidity & the path to an automated & luxurious communist utopia.
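The "encode statistical properties of the training corpus" claim is easy to see at toy scale. Here's a minimal sketch (all names illustrative, not from any library): a character bigram model, which is the degenerate one-token-of-context case of next-token prediction.

```python
from collections import Counter, defaultdict

# Toy illustration of "encoding statistical properties of the corpus":
# count which character follows which, then normalize into probabilities.
corpus = "the cat sat on the mat"
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def next_char_probs(c):
    """Empirical distribution over the character that follows c."""
    total = sum(counts[c].values())
    return {b: n / total for b, n in counts[c].items()}

# In this corpus, 't' is followed by 'h' twice and by ' ' twice.
print(next_char_probs("t"))  # → {'h': 0.5, ' ': 0.5}
```

An LLM does the same kind of distribution-fitting with vastly more context and a parametrized function instead of a lookup table, which is a separate question from whether anyone can interpret the fitted parameters.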


My goodness. Please introduce me to this "plenty of people". I'm in the field, and none of them work with me.

But I can tell you that statistics and parametrized functions have absolutely nothing to do with it. You're way out of your depth my friend.


Yes, yes, no one understands how anything works. Calculus is magic, derivatives are pixie dust, gradient descent is some kind of alien technology. It's amazing hairless apes have managed to get this far w/ automated boolean algebra handed to us from our long forgotten godly ancestors, so on & so forth.

No, this is false. No one understands. Using big words doesn't change the fact that you cannot explain, for any given input-output pair, how the LLM arrived at the answer.

Every single academic expert who knows what they are talking about can confirm that we do not understand LLMs. We understand atoms, and we know the human brain is made 100 percent out of atoms. We may know how atoms interact and bond and how a neuron works, but none of this allows us to understand the brain. In the same way, we do not understand LLMs.

Characterizing ML as some statistical approximation or best fit curve is just using an analogy to cover up something we don’t understand. Heck the human brain can practically be characterized by the same analogies. We. Do. Not. Understand. LLMs. Stop pretending that you do.


I'm not pretending. Unlike you I do not have any issues making sense of function approximation w/ gradient descent. I learned this stuff when I was an undergrad so I understand exactly what's going on. You might be confused but that's a personal problem you should work to rectify by learning the basics.


omfg, the hard part of ML is proving back-propagation from first principles, and that's not even that hard. Basic calculus and application of the chain rule, that's it. Anyone can understand ML; not everyone can understand something like quantum physics.

Anyone can understand the "learning algorithm," but the sheer complexity of its output is far too high for us to characterize how an LLM arrived at the answer to even the most basic query.
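For what it's worth, the "basic calculus and the chain rule" part really is that small. A minimal sketch (function and names illustrative) that hand-derives the gradient of a one-hidden-unit net and checks it against finite differences:

```python
import math

# f(x) = w2 * tanh(w1 * x): the smallest possible "network".
def forward(w1, w2, x):
    return w2 * math.tanh(w1 * x)

def grads(w1, w2, x):
    """Backprop by hand: one application of the chain rule per weight."""
    h = math.tanh(w1 * x)
    dh = 1 - h * h          # d tanh(u)/du
    df_dw2 = h              # outer weight: f = w2 * h
    df_dw1 = w2 * dh * x    # inner weight: chain rule through tanh
    return df_dw1, df_dw2

# Numerical check: central finite differences agree with the derivation.
w1, w2, x, eps = 0.7, -1.3, 2.0, 1e-6
g1, g2 = grads(w1, w2, x)
n1 = (forward(w1 + eps, w2, x) - forward(w1 - eps, w2, x)) / (2 * eps)
n2 = (forward(w1, w2 + eps, x) - forward(w1, w2 - eps, x)) / (2 * eps)
print(abs(g1 - n1) < 1e-6, abs(g2 - n2) < 1e-6)  # → True True
```

That the mechanics check out this easily is exactly the distinction being argued over: the algorithm is transparent, while the behavior of billions of such units after training is not.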

This isn't just me saying this. ANYONE who knows what they are talking about knows we don't understand LLMs. Geoffrey Hinton: https://www.youtube.com/shorts/zKM-msksXq0. Geoffrey, if you are unaware, is the person who started the whole machine learning craze over a decade ago. The godfather of ML.

Understand?

There's no confusion. Just people who don't know what they are talking about (you).


I don't see how telling me I don't understand anything is going to fix your confusion. If you're confused then take it up w/ the people who keep telling you they don't know how anything works. I have no such problem so I recommend you stop projecting your confusion onto strangers in online forums.


The only thing that needs to be fixed here is your ignorance. Why so hostile? I'm helping you. You don't know what you're talking about and I have rectified that problem by passing the relevant information to you so next time you won't say things like that. You should thank me.


I didn't ask for your help so it's probably better for everyone if you spend your time & efforts elsewhere. Good luck.


Well don't ask me to help you then. I read your profile and it has this snippet in there:

"Address the substance of my arguments or just save yourself the keystrokes."

The substance of your argument was complete ignorance about the topic, so I addressed it as you requested.

Please remove that sentence from your profile if that is not what you want. Thank you.


I don't see how you interpreted it that way so I recommend you make fewer assumptions about online content instead of asserting your interpretation as the one & only truth. It's generally better to assume as little as possible & ask for clarifications when uncertain.


There is no interpretation of that other than what I said. If you disagree, then that’s a misinterpretation of the English language.

I am addressing the substance of your argument, and that substance is a lack of knowledge; there is zero other angle to interpret it.


As I said previously, I don't think this is a productive use of time or effort for anyone involved so I'm dropping out of this thread.


You come across as ungrateful to someone who was just trying to help.


I've developed a new hobby lately, which I call "spot the bullshit."

When I notice a genAI image, I force myself to stop and inspect it closely to find what nonsensical thing it did.

I've found something every time I looked, since starting this routine.


Yep - it honestly reads like an LLM's summary, which often misses critical nuances.


I know, especially with the bullet points.

The meat there is when not to use an LLM. The author seems to mostly agree with Masley on what's important.


> You get to start by dumping your raw unfiltered emotions into the text box and have the AI clean it up for you.

Anyone semi-literate can write down what they're feeling.

It's sometimes called "journaling".

Thinking through what they've written, why they've written it, and whether they should do anything about it is often called "processing emotions."

The AI can't do that for you. The only way it could would be by taking over your brain, but then you wouldn't be you any more.

I think using the AI to skip these activities would be very bad for the people doing it.

It took me decades to realize there was value in doing it, and my life changed drastically for the better once I did.


I think they do, and you missed some biting, insightful commentary on using LLMs for scientific research.

