
Sure, but on the other hand, what happens if the Gen AI bubble bursts after all of this infra development?

- price of electricity goes wayyyyy down

- stigma around using nuclear power goes down

- price of excess available compute goes wayyy down (or the second-hand market drops into side-project budget range)

With excess electricity we can do a lot of things: carbon capture, water purification, powering hadron colliders, replacing gas stoves, etc. And with the price of compute going down, it makes more sense for certain industries to double down on tech. Essentially we’d be able to utilise cheaper electricity + compute to patch over problems. On the other hand, if the AI tech bros’ magic conch shell can lead to innovations, then I wouldn’t mind that either.


I run a basement compute server[^1]; what’s Nvidia gonna do? Not let me buy their hella expensive H100s? At least now I get to learn ML skills without my failed experiments running up an exponentially scaling cloud bill.

[^1]: https://prayag.bhakar.org/apollo-ai-compute-cluster-for-the-...



lol I just remembered this last night [1] while pushing some code to GitHub. Idk what the long-term goal of this project is. Will they update the catalogue as time goes on, or is my bad college code stuck there?

[1] https://x.com/prayagbhakar/status/1750440884250792445?s=46


The goal was to put data in the cold for fun! It's already done!


I look forward to having Cydia as an app store on my phone again.


If we’re talking about model distillation[0], I don’t think the student can ever be better than the teacher: optimising for speed and smaller model size inherently means some precision loss. Even if the student is as big as the teacher, some information is still lost in the transfer (a rough sketch of the training objective is below).

[0] https://arxiv.org/pdf/2210.17332.pdf
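
For anyone curious, the core of the technique is just training the student to match the teacher’s softened output distribution. A minimal sketch of the standard distillation loss, assuming PyTorch (the function name, temperature, and mixing weight here are illustrative placeholders, not taken from the linked paper):

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          T=2.0, alpha=0.5):
        # Soften both distributions with temperature T.
        soft_targets = F.softmax(teacher_logits / T, dim=-1)
        soft_student = F.log_softmax(student_logits / T, dim=-1)
        # KL term pulls the student toward the teacher's distribution;
        # the T*T factor rescales gradients to match the hard-label term.
        kd = F.kl_div(soft_student, soft_targets,
                      reduction="batchmean") * (T * T)
        # Ordinary cross-entropy against the ground-truth labels.
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kd + (1 - alpha) * ce

Since the KL term is minimised exactly when the student reproduces the teacher’s distribution, matching the teacher is the ceiling for that part of the objective, which is the point above.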


> The language is called Go. The "golang" moniker arose because the web site was originally golang.org. (There was no .dev domain then.) Many use the golang name, though, and it is handy as a label. For instance, the Twitter tag for the language is "#golang". The language's name is just plain Go, regardless.

https://go.dev/doc/faq#go_or_golang



Aren’t there storage limitations for mobile PWAs?


But isn’t this also a step towards AGI? The model being able to find issues and self-correct?


As described in the essay "optimality is the tiger, and agents are its teeth".

https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...


From the linked LW post:

> It is doing this for no greater reason than that an optimiser was brought into reach, and this is what optimisers do.

> All the agent piece has to do is pump the optimality machine.

It’s like putting the mic too close to the amplifier.


lol posted the same thing above. I’m glad I’m not the only one who thought that was an extremely powerful read! As someone currently trying to imbue silicon with an eternal soul, it’s very sobering.


What does AGI mean to you? Is it something they define, or something you think is good enough? And if the latter, when?


AGI feels like a marketing term to me now. I mainly see it as research into how we can improve and scale the current model architectures to be better than the last generation.


Eh, regardless of all that, the answer is just “yes” imo: self-improvement is a necessary step to meaningful superintelligence. Perhaps not to AGI, but it seems like a massive help, if not strictly necessary.

In a way, that’s what modern multi-stage-trained foundation models already do: improve their weights intelligently. Having the same results in human-readable code (which this is a first step towards) would be a lot more powerful…


Yes.

