Hacker News | xiphias2's comments

Sure, and actually the open models are already good enough to do that, it's not like any company could stop any organization that can collect the data from doing this.

They can just improve on it a lot.


I don't really understand this reasoning actually:

if OpenClaw usage goes up, and a service (OpenAI, it looks like) gets lots of usage data for personal-assistant use, they can optimize to make it better for people who get a $200 subscription just for that use case.


For anybody who thinks it's about Trump vs. some other administration: it's not. Both AI surveillance of the whole population and using AI for automated warfare were bound to happen.

The only question is whether the safety work on the models was really done well enough to protect people and be a net positive force in the world.

I guess if they were safely trained to do more good than harm (as Dario and SamA said), there wouldn't even be a need for the contract terms.


It would/will be extremely irresponsible to put non-deterministic and fallible models in charge of weapons. We are not close to having solved the problem of ensuring AI pursues good outcomes

I agree completely. Anybody who uses the models extensively knows they can do something amazing for one prompt and something awful for another. But I also know that wars are unfortunately real, there are real enemies between countries, and they don't want a limited model.

How exactly does the "limitation" affect any war the US may be in with another country?

Probably drones automatically targeting and killing people, with a thinking model guessing whether someone is a Russian or a Ukrainian, is a red line.

Elon Musk already refused to let Starlink be used for remote killing, but at some point all these technologies will be nationalized, as they are too important not to be.


I understand that there's a precedent here, but isn't the precedent in contract law normally for the opposite?

And if UK law is precedent-based, how come the previous precedents don't apply here?

I agree that allowing no toilet breaks is cruel, but is the legal problem here that they knew about their supplier doing it?

There was not much about the legal basis in the article.


The U.K. Supreme Court case [1]:

> This appeal is not about the merits of the workers’ claims, but rather whether England or Malaysia is the appropriate forum (ie. the proper place) in which the claims can and/or should be determined. The first and second Appellants, Dyson Technology Limited and Dyson Limited, are English companies. The Respondents commenced proceedings against the English companies in England. However, the English companies sought a stay of proceedings on the grounds that England was not the appropriate forum to determine the claims. The third Appellant, Dyson Manufacturing Sdn Bhd, a Malaysian company, was joined to the proceedings on the basis that it is a necessary and proper party to the claims. The Respondents have also indicated their intent to join the Malaysian employer, ATA/J, to proceedings.

The BBC article didn’t say, but this is presumably a civil (not criminal) case and, should the plaintiffs have prevailed, would have resulted in a financial award. The settlement basically gets to the same outcome, just faster.

I’m not certain that allowing the plaintiffs to sue the parent company directly is really that big of a logical leap. The court should be an accessible venue for dispute settlement in general. Supposedly the plaintiffs would have had a chance to argue that the parent company had insufficient oversight of labor practices at their suppliers. We didn’t get a ruling on that.

[1] “Limbu and others (Respondents) v Dyson Technology Limited and others (Appellants)” https://supremecourt.uk/cases/uksc-2025-0019


Congrats! Great try!

I have a different viewpoint on what to automate and I'm working with agents differently, but I much prefer seeing projects like this on HN to just product announcements.


> Needless to say, I support Anthropic here. I'm a sensible moderate on the killbot issue (we'll probably get them eventually, and I doubt they'll make things much worse compared to AI "only" having unfettered access to every Internet-enabled computer in the world). But AI-enabled mass surveillance of US citizens seems like the sort of thing we should at least have a chance to think over, rather than demanding it from the get-go.

Why would killbots be a sensible moderate position, given the number of hallucinations LLMs have right now?

They just need to have one rm -rf bug somewhere to do something disastrous, and at least Anthropic's CEO understands the limitations of the software.


If the killbots are ok for the periphery, surveillance will surely be arriving for the metropole's inhabitants.

> AI has limited real world experience or grasp of the consequences.

People in the world have limited experience about war.

We're living in a world where terrible things done to 1,000 people with photo/video documentation can get more attention than a million people dying, and the response is still not to do whatever it takes so that people don't die.

And now we are at a situation where nuclear escalation has already started (New START was not extended).

It would have been the biggest and most concerning news 80 years ago, but not anymore.


> People in the world have limited experience about war.

Right, but realistically, how many people today would carelessly choose "Nuke 'em"? I know history knowledge isn't at its all-time high, and most of the population is, well, not great at reasoning, but I still think most people would try their best to avoid firing nukes.


The basic game theory of nukes is that the world is either escalating or deescalating; there is no other long-term stable equilibrium.

Maybe people don't agree with "nuke them", but they're OK with the USA starting nuclear tests again (which the USA is preparing for right now), which is a clear escalation.

Russia is waiting for the USA to start nuclear tests so it can start its own, in order to defend itself and be able to do a counterstrike if needed.

After that there will be no stopping of Japan, South Korea and Iran rightfully wanting to have their own nukes.

You don't have to have the "nuke them" mindset; even one step of escalation is enough to get to a disastrous position.


> After that there will be no stopping of Japan, South Korea and Iran rightfully wanting to have their own nukes.

And I'm afraid they'll be far from the only ones...


I don't really buy the nuclear deterrence thing. Say a country just invested in conventional military and went to war with a nuclear one, maybe even full-on invasion trying to capture it. They really gonna get nuked?

Arab nations did try to capture Israel multiple times, but maybe you don't count this because the war never swayed much in their favor.


> but I still think most people would try to do their best to avoid firing nukes.

"most people" are not in the positions that matter. A significant portion of the people who are in a position to advocate for such a decision believe that:

- killing people sends em to heaven/hell where they were going anyway; and that this is also true for any of your own citizens that get killed by a counterstrike.

- the end of the world will be the best day ever


> "most people" are not in the positions that matter

If polling were to reveal a majority of either party were more open to nuclear strikes than their predecessors, that gives policy makers a signal and an opening.


The current administration does not seem to be considering the majority within their own party, considering how unpopular the current approach to immigration enforcement is. Or, for another example, the glyphosate/MAHA situation.

There were lots of administrations that could have said to other countries "let's get rid of the nukes together" while the USA was the only strong power.

Deescalation stopped because people in general didn't care enough (and made money off being the biggest power), not because of administrations that come and go.

As to the immigration situation: we know that governments in general are not executing as they should, but people are able to enforce some policies if they fight together, united and in agreement. Right now they are not in agreement.


> There were lots of administrations that could have said to other countries "let's get rid of the nukes together" while the USA was the only strong power.

There was only one administration with that opportunity, really; Truman.

Every other administration has had a nuclear armed Russia in play.

Attempts to do what you describe were still quite common, starting as early as the 1950s. https://en.wikipedia.org/wiki/Nuclear_arms_race#Treaties


> current administration does not seem to be considering the majority within their own party considering how unpopular the current approach to immigration enforcement is

55% of Republicans say ICE's efforts are about right; 23% think they don't go far enough [1]. There is limited evidence Trump has lost touch with his supporters on this issue. The question is whether this is the GOP's pronoun issue: popular with the base but toxic more broadly.

[1] https://www.ipsos.com/en-us/where-americans-stand-immigratio...


There have always been a handful of Internet Tough Guys saying things on forums like "LOL Nuke them! hur hur hur hur!" Totally disregardable vibes and memes. Now, we have an actual US government administration that is run on the same Tough Guy vibes and memes. I don't think it matters what most people think. The people in power might just do it for the lulz.

And yet the people in positions that matter have not fired a nuke since the end of WW2. Even the craziest-sounding regimes, like Russia and NK.

I think it's a higher number than you would expect. Which, in the context of nukes, is too high a number as long as it's greater than 1.

On social media, there are many, and this feeds back into training data. Unfortunately.

Carelessly, probably not many. Carefully: way more than you imagine.

Deploying nukes and "carefully" are opposite ends of the spectrum.

Not quite. The people who agree that turning X from an urbanized into a rural society is a good idea, provided X can't strike back, are not few and far between. Everyone has a different view of who X is.

> And now we are at a situation where nuclear escalation has already started (New START was not extended).

This is a massive understatement. Russia has announced, and probably tested, https://en.wikipedia.org/wiki/9M730_Burevestnik . This is basically Project Pluto reloaded, but now as a Russian instead of a US missile.

I remember reading about Project Pluto some 25 years ago or so. It was terrifying to read about. And now Russia has realized it.


> People in the world have limited experience about war.

Most (but not all) people have empathy, which allows them to understand the harm of their actions even without direct experience.

I don't think I will ever trust that any AI has empathy even if it gives off signals that it does.

I only trust that it exists in people because of my shared experience with their biology.


Am I the only one here who was amazed by the speed of improvement between 5.2-codex and 5.3-codex?

I feel that Sam is saying what investors want to hear, but the coding work it is capable of, and how much its terminal use (TerminalBench) improved in such a short time, is something that I'm sure can't be captured by short-term revenue projections. I'm sure the other AI companies are seeing the same speedups, but it's real.

The usual limit is of course the poorly modularized slop output that makes it hard to do bigger things, and codex is terrible at refactoring in the right direction (it has no taste).

3x YoY revenue growth is just not hard to imagine with these kinds of models. I think they have to come out with more expensive parallel-working agents and higher-than-Pro subscriptions, but I'm sure that's coming.


This looks like Symbolica, except the great thing about what they are doing is that they are setting new ARC-AGI records.

https://www.symbolica.ai/blog/arcgentica


This case will make settlement amounts higher, which is the main thing car companies care about when making decisions about driving features/marketing.

With Robotaxi it will get even higher, as it will clearly be 100% the company's fault.


Fight Club 2.0: You pay to retrain it only if the AI will kill more people than our settlement fund can pay out.

You're already downvoted, but this quote from Fight Club always annoyed me, as it misunderstands how recalls work.

1. Insurance companies price in the risk, and insurance pricing absolutely influences manufacturers (see the absolute crap that the Big 3 sold in the '70s).

2. The government can force a recall based on a flaw whether or not the manufacturer agrees.
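The formula the Fight Club quote describes (vehicles in the field A, times failure rate B, times average settlement C, compared against the recall cost) can be sketched as a quick expected-cost comparison. The numbers below are entirely made up for illustration:

```python
# Sketch of the A * B * C recall calculus from the Fight Club quote.
# All figures here are hypothetical, purely for illustration.

def recall_decision(vehicles_in_field: int, failure_rate: float,
                    avg_settlement: float, recall_cost: float) -> bool:
    """Return True if the expected settlement payout exceeds the recall cost,
    i.e. the naive calculus says a recall is worth doing."""
    expected_payout = vehicles_in_field * failure_rate * avg_settlement  # X = A * B * C
    return expected_payout > recall_cost

# Hypothetical example: 1M vehicles, 0.01% failure rate, $2M average settlement
# gives an expected payout of $200M, so a $150M recall would go ahead.
print(recall_decision(1_000_000, 0.0001, 2_000_000, recall_cost=150_000_000))
```

As the comment above points out, this naive calculus is incomplete in practice: insurers price the risk back into the manufacturer, and regulators can force a recall regardless of what X comes out to.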


> this quote from Fight Club always annoyed me as it misunderstands how recalls work.

How excusable is it if it's actually the narrator misunderstanding it, or making stuff up while talking to the lady?


v2.0- Tesla drivers insure with Tesla and the recalls are all OTA software fixes.
