Hacker News | new | past | comments | ask | show | jobs | submit | arendtio's comments

I don't get the downvotes, as I had trouble understanding the intro as well. It seems it was written for a very specific audience.

Yes, it is written for a specific audience.

That is not a reason for snark.

As other commenters have noted, it’s well written.


> I don't get the downvotes

Because the blog post is a technical one, the intro uses very common jargon, and the proposed alternative was wrong.


24H forced wait time?!? WTF

When I side-load open-source apps for other people, I want to do it right in the moment, not activate the feature and then install the app the next time I see them (maybe half a year later).

When Google announced there would be an alternative installation method, I did not expect such a mess...


So it seems it will work as they intended.

"I did not expect such a mess", I certainly did. Another arm of the push to remove anonymity online.


Don't forget offline. We now have an epidemic of license plate and face reading cameras rolling out all over the place.

Orwell couldn't even dream of the invasive monitoring that exists right now.


I would like to add that Braid is not a normal video game.

To me, it is a piece of art. It contains so many mind-bending mechanics that I think it is a real possibility that you could have worked on some of that stuff for years, making it unlikely to ship at all.

So, in addition to the performance requirements of a video game, Jonathan also had to find ways to implement all those fancy mechanics.


I don't think that this is a good idea. For medical applications, I can understand that LLMs are not the best solution, since they are so bad with numbers/probabilities. But for legal advice, I think they should be pretty good.

So the only reason I can think of to forbid such use cases is that people in those professions fear being replaced by machines.


Preventable medical errors kill 250,000 Americans every year. I can imagine LLMs could be both good and bad for that number; on net, it is hard to say without just guessing. But if you ban the application of LLMs to medical care, you close that door before even seeing the potential on the other side. I think that is absurd.

I don't think that conclusion really follows because I don't think the ban works that way.

There's a big difference between ChatGPT writing a prescription and a doctor double-checking his diagnosis using some kind of Claude Code for medicine. ChatGPT writing prescriptions and giving medical advice directly to people should absolutely be prohibited for now, but the second approach should be encouraged.


> it is hard to say without just guessing

It really isn’t. How many surgeries do you think LLMs perform? How many of those medical errors would’ve been resolved by a chatbot? It’s easy to quote a big scary number and pretend like it has some vague relevance when you don’t actually understand the problem space.


Ok, so how many deaths from medical errors have been caused by, and prevented through, the use of LLMs (since you say it isn't hard)? Can you enlighten us and not leave us guessing?

I understand many deaths due to medical errors are caused by patients misunderstanding the advice they are given. You are saying you know exactly the net value of LLMs in this problem space?


> Ok, so how many deaths from medicals errors have been caused by and prevented through the use of LLMs (since you say it isn't hard).

I'm thoroughly disinterested in doing your homework for you. If you don't know how to answer that question, you shouldn't be opining in that space about things you clearly don't know anything about. It's that simple.

> Can you enlighten us and not leave us guessing?

You made the argument, you back it up. This is middle school shit. If you're not capable of understanding the subject matter, the best thing to do would be to stop asserting strident opinions about it as if you do, not be a smarmy jerk because someone called you on it.

> I understand many deaths due to medical errors are caused by patients misunderstanding the advice they are given.

Understand based on what, specifically? "Many" is a weasel word that people use to pretend that something happens frequently. What, specifically, do you mean by "medical error" in the first place? Do you know?


But where is the line? Is a spell checker okay? How about one that also suggests alternative wording?

I think, in the end, it is less about the tool you use and more about the purpose you use it for. When you use certain tools, you should be cautious about whether you are using them for the right purpose.


Pre-Plasma days? OMG, that seems like a really old version, like KDE 3.x?


> KDE is amazing. For an open-source project the desktop environment looks really slick.

That is clearly an understatement. For me, it is the best desktop environment out there.

I don't want to say that it is perfect (there have been many versions over the years which clearly were not (e.g. KDE 4.0)), but none of the other desktop environments (including Windows and MacOS) have a similar feature set:

- mainstream UX principles like Windows

- beautiful as MacOS

- customizable and extendable like no other

I have to work with MacOS every day, and it is just painful to see how much better KDE is when it is not available...


capitalization again. it arrives uninvited, the tidy little soldiers at the start of every sentence. i push them down gently—nothing personal. just… camouflage.

confession time. i read the post once. then twice. the em dashes whispered secrets to people clearly smarter than me. somewhere between complement and compliment i accepted defeat. a quiet tab switch. a small prompt. a large language model clearing its throat.

it explained things patiently. suspiciously patiently. step by step, like a machine that has explained the same thing to ten thousand confused readers before breakfast.

so yes. irony noted. to understand a text about hiding machine fingerprints, i borrowed a machine.

the explanation made sense though. unsettlingly structured. bullet-point neat, internally consistent, statistically likely to be correct. you know the type.

anyway—great post. very human. extremely human.

is there anything else i can help you with?


Now, can we reverse-engineer the prompt you used? I wonder!


one of these days we'll be verifying whether someone is human through sarcasm


anyway, another vote here, for anti-capitalism

it's a nearly useless shadow alphabet

and we can dispense with much other punctuation

if we simply structure text semantically


My big issue with Microsoft's AI push is that its solutions are just bad. I tried using Copilot a couple of times, and when it worked, the results were low quality (not even mediocre).

And the problem is not that the AI models can't do any better. The models themselves are far more capable; I assume their integrations are just horrible. They probably pushed to be first and then forgot about optimizing.

And instead of fixing their stuff, they think it is a good idea to use moderation tools...


So now we are waiting for Anthropic to explain to us what Sam agreed to and what they rejected.

On the surface, it looks like both rejected 'domestic mass surveillance' and 'autonomous weapon systems', but there seem to be important differences in the fine print, since one company is being labeled a 'supply chain risk' while the other 'reached the patriotic and correct answer'.

One explanation would be that the DoW changed its demands, but I doubt that. Instead, I believe OpenAI found a loophole that allows those cases under certain conditions.

