Almondsetat's comments

>This barely mentions Windows Forms

Apparently, you do too, since what you said is basically the same as what the article said (.NET wrapper for C#, fastest prototyping to date)


Can you build a computer at home?

There is absolutely nothing self-sufficient about computer hardware


Or generate electricity? Or grow enough food to survive? Medicines?

"Self-sufficiency" arguments coming from tech nerds are so tiring.


No, and that's the reason we're now paying twice what we paid a couple years ago. But I can write software at home.

We're already vulnerable to enshittification in so many areas, why increase the list? How does that work in my favor at all?


When people talk about software or computers being "fun" in the past, it reminds me of how advertisements for children's food talk about how their cereal brings "fun" to breakfast.

What does that even mean? It seems like empty words to me, from people too accustomed to TV commercials.


"fun" and "play" are ambiguous words in English.

There is the trivial meaning, where the subject of the sentence is apparently whiling away time, achieving nothing of note except pretending perhaps to be in an imaginary land.

Then there is another sense, one that includes the thrill of experimentation, the disappointment of failure, the doggedness of persistence, and the satisfaction of victory and success when the puzzle is complete, understood, and the whole thing is working as desired or expected. This is why we call programming "fun" and if you are having fun doing it for yourself, you should perhaps be very careful where you end up doing it for work, if you do.

You could do that on computers of the 1990s, and still have the feeling of a broad system, but one which was not unfathomably deep. That's because those systems could be completely understood by one human brain, and striving to be able to do that was enormously engaging. But people who waxed lyrical about such things were often seen as weirdos, and humans generally don't like that, so instead they reach for a word with universal appeal: "fun". Of course, words that everyone uses but interprets in their own way (often without realizing it) ironically tend to lose all shared meaning in the strictest sense.

What's sometimes overlooked in the Smalltalk story is that Alan Kay was leading the "Learning Research Group", which is why he refers to educational theorists like Jean Piaget. In some of Alan's talks he goes into some detail showing how children can learn about calculus by watching and visualizing the acceleration of a ball as it falls and bounces. This sort of thing is a serious kind of fun because it actually has a positive benefit, much like sport does for many people.

On the other hand, the use of the word in "making breakfast fun for children" in the advertising sense is a disgusting perversion, and is in no way reasonably comparable to the idea of "computers being fun in the past".

Now, if you'll excuse me, I'm going to have my breakfast consisting of dippy eggs and soldiers, and marvel at the viscosity.


Because the true goal is AGI, not just nice little tools to solve subsets of problems. The first company which can achieve human level intelligence will just be able to self-improve at such a rate as to create a gigantic moat.

There’s no evidence that the current architectures will reach AGI levels.

Of course OpenAI wants you to think they will rule the world, but if we've reached the plateau of LLM capabilities regardless of the amount of compute we throw at them, then local models will soon be good enough.


> The first company which can achieve human level intelligence will just be able to...

They say prostitution is the oldest industry of all. We know how to achieve human-level intelligence quite well. The outstanding challenge is figuring out how to produce an energy efficient human-level intelligence.


There's no particular reason to assume a human level AI would be able to improve itself any better than the thousands of human level humans that designed it.

Sure, but: that single human-level AI with the intelligence of a top tier engineer or scientist will have immediate access to all human knowledge. Plus, what do you think happens the moment it optimizes itself to run in 2, 4, 8, 16, etc. parallel instances?

Well, A) a "top tier engineer/scientist" is a significant step above a generic human; B) the human engineers/scientists also have immediate access to the same database; C) the humans have been optimizing it for even longer, so what makes us think the AI can optimize itself by even a couple of percent?

For example, if the number of AIs you can run per petaflop started to scale with the cube root of researcher-years, then even if your researcher AIs are quite fast and you can double your density in a couple years, hitting 5x will take a decade and hitting 10x will approach half a century.
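To make the diminishing-returns intuition concrete, here is a toy sketch of cube-root scaling (the function name and the specific numbers are illustrative assumptions, not from the comment): if density grows with the cube root of accumulated researcher-years, then each further multiple costs the cube of that multiple.

```python
# Toy model: AI density per petaflop ∝ (researcher-years)^(1/3).
# Under that assumption, reaching a speedup S requires roughly S**3
# researcher-years, so each extra multiple gets rapidly more expensive.

def researcher_years_needed(speedup: float) -> float:
    """Researcher-years required to reach a given density multiple,
    assuming density scales with the cube root of researcher-years."""
    return speedup ** 3

for s in (2, 5, 10):
    print(f"{s}x density needs ~{researcher_years_needed(s):.0f} researcher-years")
```

Under this (hypothetical) scaling law, 5x costs 125 researcher-years and 10x costs 1000, which is the shape of the argument above: even fast researcher AIs hit steeply rising costs per additional multiple.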


At the same time, processing is much cheaper than memory

Without memory you have no data to compute on. Scaling memory and compute only makes sense in tandem.

If Apple doesn't offer a Linux product, they cannot be taken seriously for headless computing tasks. They are adamant about controlling the whole stack, so unless they remake some server version of macOS (and wait years for the community to accustom themselves to it), they will remain a consumer/professional-oriented company.

AI does not have a physical body to make experiments in the real world and build and use equipment


Designing a good fitness function, a tale as old as time...


Not royalty free, unfortunately.


That's not even remotely satisfactory if we're talking about understanding what we're doing

