largbae's comments | Hacker News

I think you're right that heavier elements can be made; it's just energy-negative to do so. But without a nova they would never leave the inside of the star to find their way into a new planet.

But they do leave. Stars not large enough to go supernova still form planetary nebulas as they more gradually lose their outer layers to space. Only the core is left behind to form a white dwarf. This will be the Sun's eventual fate.

Isn't the conclusion just that the context window doesn't include the current date?

Since the initial response contains the (correct) current year, it must have entered the context at some point, most likely before the first (wrong) output token was generated.
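For illustration only, since real system prompts aren't public: the usual assumption is that the provider injects the current date at the top of the context, roughly like this hypothetical sketch (not any vendor's actual prompt or API):

    from datetime import date

    # The date enters the context before any user turn, so the model
    # "knows" it before the first output token is ever sampled.
    messages = [
        {"role": "system",
         "content": f"You are a helpful assistant. Current date: {date.today()}"},
        {"role": "user", "content": "What year is it?"},
    ]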

The GFW does indeed have man-in-the-middle capabilities, per the recent leaks of the Geedge tech used in it. Your laptop might throw a warning for the forged cert, but devices in China that trust Chinese root CAs would not.
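Rough Python sketch of why (the CA file name is made up; the rest is the standard library): certificate validation succeeds or fails purely on which roots the client trusts, so a device shipped with the interception CA sees no warning at all.

    import socket, ssl

    ctx = ssl.create_default_context()
    # A device that trusts the interception CA effectively does this,
    # and the forged certificate then validates cleanly:
    # ctx.load_verify_locations(cafile="interception_root_ca.pem")
    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.getpeercert()["issuer"])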


I think this is true, but I don't feel like atrophied assembler skills are a detriment to software development; it's just that almost everyone has moved to a higher level of abstraction, leaving a small but prosperous niche for those willing to specialize in that particular bit of plumbing.

As LLM-style prose becomes the new Esperanto, we all transcend the language barriers (human and code) that have unnecessarily reduced collaboration between people and projects.

Won't you be able to understand a greater amount of code, and do something bigger than you would have if your time were going into comprehension and parsing?


I broadly agree, in the sense of providing the vision, direction, and design choices for the LLM to do a lot of the grunt work of implementation.

The comprehension problem isn't really about software per se, though it can apply there too. LLMs do not think; they compute statistically likely tokens from their training corpus and context window. So if I can't understand the thing any more and I'm just asking the LLM to figure it out, produce a solution, and tell me I did a good job while I sat there doomscrolling, I'm adding zero value to the situation and may as well not even be there.
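A toy sketch of that mechanism (tiny made-up vocabulary and logits, nothing like a real model's scale), just to show that the step is sampling from a distribution, not reasoning:

    import numpy as np

    vocab = ["the", "cat", "sat", "mat"]
    logits = np.array([2.0, 1.0, 0.5, 0.1])        # made-up scores
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax
    print(np.random.choice(vocab, p=probs))        # one "likely" next token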

If I lose the ability to comprehend a project, I lose the ability to contribute to it.

Is it harmful to me if I ask an LLM to explain a function whose workings are a bit opaque to me? Maybe not. It doesn't really feel harmful. But that's the parallel to the ChatGPT social thing: it doesn't really feel harmful at each small step; it's only harmful when you look back and realise you lost something important.

I think comprehension might just be that something important I don't want to lose.

I don't think, by the way, that LLM-style prose is the new Esperanto. Having one AI write some slop that another AI reads and coarsely translates back into something closer to the original prompt, like some kind of telephone game, feels like a step backwards in collaboration to me.


I love that this is just above the post on https://rein.pk/over-regulation-is-doubling-the-cost

It's almost as if the world is complicated and every issue has multiple perspectives.

Good luck "aligning", AGI.


Wouldn't this be a great use case for an agentic AI coding solution? A background agent scanning the code/repo/etc. and making suggestions both for and against dependency updates?

Copilot seems well placed here with its GitHub integration; it could review dependency suggestions, CVEs, etc. and make pull requests.
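A rough sketch of the CVE-checking half in Python (the OSV.dev API is real; the package and version are just an example of a pin such an agent might audit):

    import requests

    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": "requests", "ecosystem": "PyPI"},
              "version": "2.19.1"},
        timeout=10,
    )
    # Known vulnerabilities for this exact pinned version, if any.
    for vuln in resp.json().get("vulns", []):
        print(vuln["id"], vuln.get("summary", ""))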


They can still use it to learn your preferences and tighten their profile of you for all the searching and other ad-enabled activities you engage in.


How do they hold back questions in practice though? These are hosted models. To ask the question is to reveal it to the model team.


They pinky swear not to store and use the prompts and data lol


A legally binding pinky swear LOL


With fine print somewhere on page #67 that says there are exceptions.


Who needs fine print when there is an SRE with access to the servers who is friends with a research director who gets paid more if the score goes up?


If it doesn't have a browser, how will you visit radiant.computer on your Radiant Computer?


You wouldn't, I don't think (assuming this thing ever got off the ground, which is a huge assumption), but is that really a problem? I think the web page is more to make normalish people aware that this hypothetical ecosystem would be out there. From within that ecosystem they could have a different page.


Would this not defeat the purpose of responsible disclosure? As a bad actor I could learn of secret vulnerabilities from this channel.


You have Google to blame. GrapheneOS tried very hard to make sure they have those security patches, but Google delays publishing the source tree and it's only available to OEMs.


These patches are available to all vendors, who have so far chosen not to protect their users.

Releasing binary patches is allowed, which is why GOS has added the security preview channel.

