Hacker News

How does this apply to, say, Signal?

You know, it's funny. A while back people would've been building cURL alternatives/wrappers, or collecting client header stacks, designed to sidestep paywalls on web content.

With purl, the web gets just a little less punk. Which is nothing new, unfortunately. I miss the times when people would put in stupid amounts of effort to stick to their principles in hobby tech.


Hm, speculating a bit, but it feels like NTSYNC is essentially the beginning of an NT subsystem for Linux, or maybe ntoskrnl as a kernel module. It feels like the cleanest and fastest way to port Windows, since the rest of the interfaces live in user space in real Windows. It should run almost without overhead: user: [gdi32.dll, user32.dll, kernel32.dll -> ntdll.dll] -> kernel: [ntoskrnl.ko]

This is simple but cool! I am definitely finding it useful. Thanks for building this.

My girlfriend keeps sending me AI-generated TikToks, despite my complaining about them. To be fair, I've seen literally nothing on TikTok that isn't garbage, so the bar is pretty low. Your point "It looks cool for a while" might have some merit - I think I've seen less and less interest in these things over the last year, which fits the news articles I've seen mentioning that people got bored of Sora pretty quickly.

https://finance.yahoo.com/news/openai-sora-app-struggling-st...


What drove the choice to use Rust for an official frontend?

probably the latter imo, it's not like they are going to delete all their Sora work

Try again. Anyone can write a browser that uses the existing web engine and connect it to any search engine they want. Make it popular enough, and they can make a deal with Google.

Having Safari as the default browser is another (valid) issue, but that is a separate concern from the web engine.


Amusingly, one of the ads on the page for me is a very obviously AI-generated image of a man with sciatica. I say very obviously because his hands are on backwards.

I don't expect one. This kind of attack is pretty common nowadays. The xz attack was special for how long the attacker worked at it and how severe it could have been.

Does anyone know if this is mainly going to affect people in Cary, NC?

I rarely come across people who flat out say "it's not useful". They exist, but IME they're the minority.

Rather, I hear a lot of nuanced opinions of how the tech is useful in some scenarios, but that the net benefit is not clear. I.e. the tech has many drawbacks that make it require a lot of effort to extract actual value from. This is an opinion I personally share.

In most cases, those "big productivity gains" are vastly blown out of proportion. In the context of programming specifically, sure, you can now generate thousands of lines of code in an instant, but writing code was never the bottleneck. It was always the effort to carefully design and implement correct solutions to real-world problems. These new tools can approximate this to an extent, when given relevant context and expert guidance, but the output is always unreliable, and very difficult to verify.

So anyone who claims "big productivity gains" is likely not bothering to verify the output, which in most cases will eventually come back to haunt them and/or anyone who depends on their work. And this should concern everyone.


You're being purposefully obtuse.

People have been hearing for the last three years about how a specific acronym, "AGI", is the final frontier of artificial intelligence and how it's going to change the entire economy around it. They've been hearing about this quasi-theoretical, very specific thing, and a lot of them don't even know what the "G" stands for.

People haven't been hearing for years about a mythical "copilot", and as such I think people are much more likely to think it's not anything more than a cute nickname.

Are you suggesting that this is just a coincidence? The acronym AGI doesn't even make sense for Agentic AI Infrastructure, which should be AAII; they're clearly calling it AGI to mislead people. I refuse to think that the people running Arm are so stupid that they didn't even Google the acronym before releasing the chip.

You think it's a "comical misinterpretation", but I don't think it is. When I saw the article, I thought "shit; did they manage to crack AGI?", and I clicked the article and was disappointed. I suspect a lot of people aren't even going to read the press release.


> not be able to quit

Source? Because that isn't true; they can quit like any other civilian government job.


We’re just replaying the CGI debate from the 2010s. It was popular to hate on CGI because it was obvious and bad and low quality and practical effects were better because of…

We learned two things from this debate:

1. What most people hated was actually just “bad CGI”. Good CGI went entirely unnoticed.

2. A generation of people were raised with CGI present in almost every form of professional media (i.e. not social media). They didn’t have a preference for practical effects because the content they consumed didn’t really use them.

I expect the same thing to happen here. I don't think many people want to consume AI-generated content exclusively (like Sora's app attempted). However, I expect AI-generated content to continue to improve in quality until it's used as a component in most media we consume. You and I will eventually stop noticing it, kids will be raised with it as normal, and the anti-AI millennial/Gen X crowd will age out of relevance.


Wasn't video generation one of their big stepping stones towards AGI? "Simulating worlds", reasoning about physics and real world interactions and all that?

Or are they still doing that behind the scenes and just decided that offering it to the public isn't profitable?


I might as well answer my own question, because I do think there are some coherent arguments for fundamental LLM limitations:

1. LLMs are trained on human-quality data, so they will naturally learn to mimic our limitations. Their capabilities should saturate at human or maybe above-average human performance.

2. LLMs do not learn from experience. They might perform as well as most humans on certain tasks, but a human who works in a certain field/code base etc. for long enough will internalize the relevant information more deeply than an LLM.

However I'm increasingly doubtful that these arguments are actually correct. Here are some counterarguments:

1. It may be more efficient to just learn correct logical reasoning, rather than to mimic every human foible. I stopped believing this argument when LLMs got a gold medal at the Math Olympiad.

2. LLMs alone may suffer from this limitation, but RL could change the story. People may find ways to add memory. Finally, it can't be ruled out that a very large, well-trained LLM could internalize new information as deeply as a human can. Maybe this is what's happening here:

https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...


Yeah exactly. Apple has a history of quietly changing undocumented behavior between point releases. Building on top of that is asking for a fun debugging session six months from now. The GATT handshake adds a couple seconds of latency on reconnect but at least I know it won't break with iOS 19.

Ten hours a day, six days a week, and forced resignation at fifty-six. I doubt it pays well enough given the amount of burnout a job like that brings.

Zswap is enabled by default on Arch. It won't do anything without a backing disk swap, though.

Is it still accessible in any of their apps, though? I don’t see it in ChatGPT.

True, I did try to make some useful one-minute videos, and found it really difficult to arrive at a finished product.

Sure, I can write the screenplay and Veo will generate it for me. But I don't have experience in video creation/production, so it is difficult for me to write good prompts that generate engaging video.


I agree. I am not naive! I would not be doing it as a lifestyle choice, though; I'd do it because I need to. I have worked in a factory before, so at least the culture shock won't be there. I get that my pay would halve (luckily I am not on US West Coast monster TC, so it would merely halve).

Fair point. The article doesn't mention LinuxToaster's products — the curiosity here is about the future of open source, not promotion. For what it's worth, toastd does what LiteLLM does in C with no Python supply chain, which is part of what got me thinking about this topic in the first place. But that's not in the post.

I gave Claude a spin, given all my likings ;) https://github.com/ssp-data/neomd

> Sorry, HOW?!?

It's me. I have accumulated several dozen free games over the years through the Epic Store. Sorry Tim Sweeney!


Maybe I am just slow.

Bank 1 has the CYA clause and a cartel uses them for a decade for illegal purposes.

Bank 2 does not have the clause and a cartel uses them for a decade for illegal purposes.

In neither case does the clause prevent the illegal activity or make the bank any more or less aware of what customers are doing. They have to do KYC regardless of what the TOS says.


> I hope

I sincerely hope so too, but the man is a lunatic.


Property-based testing is nice, but making it coverage-driven is a game changer: it explores code paths that naive random inputs would not trigger in a thousand years. In Rust this works very well with libFuzzer and the Arbitrary crate to derive the generators.
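As a toy illustration of why coverage feedback matters (in Python rather than the Rust/libFuzzer setup described above, with a deliberately crude hand-written "coverage" signal standing in for compiler instrumentation), here's a sketch of the core loop: inputs that reach new coverage are kept in a corpus and mutated further, so the fuzzer climbs toward deep branches one step at a time.

```python
import random

def buggy(s: str) -> None:
    # Target under test: the failing branch needs an exact 4-char prefix,
    # which blind random strings essentially never produce.
    if s[:4] == "bug!":
        raise AssertionError("bug found")

def coverage(s: str) -> int:
    # Crude coverage signal: how many leading characters of the magic
    # prefix this input matched before diverging.
    n = 0
    for got, want in zip(s, "bug!"):
        if got != want:
            break
        n += 1
    return n

def mutate(s: str) -> str:
    # Replace one random character with a random printable ASCII character.
    pos = random.randrange(len(s))
    return s[:pos] + chr(random.randrange(33, 127)) + s[pos + 1:]

def fuzz(max_iters: int = 200_000):
    random.seed(0)  # deterministic for the example
    corpus = ["aaaa"]
    seen = {coverage(c) for c in corpus}
    for _ in range(max_iters):
        cand = mutate(random.choice(corpus))
        try:
            buggy(cand)
        except AssertionError:
            return cand  # crashing input found
        cov = coverage(cand)
        if cov not in seen:
            # New coverage: keep this input for further mutation.
            seen.add(cov)
            corpus.append(cand)
    return None
```

A purely random search over 4 printable characters has roughly a 1-in-78-million chance per attempt, while the corpus-guided loop only ever has to guess one character at a time. Real coverage-guided fuzzers (libFuzzer, AFL, cargo-fuzz) get the coverage signal from compiler instrumentation instead of a hand-written function, and deriving Arbitrary lets them build structured typed inputs from the raw byte stream.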

i guess the Disney deal falling through was the impetus rather than vice versa
