China is certainly lax, but the US doesn't allow autonomous attack systems. For attack systems, it is always required that a human make the judgement call on when to attack.
Or at least it didn't until the current regime.
The US does have autonomous defensive systems.
I could be wrong though, can you post your evidence? The closest I could find is loitering munitions.
Even so, a company shouldn't be forced to go against its ethics if those ethics help humans.
Drone pilots don't get any info about their target, certainly not enough to make a judgement call. If they object (or burn out) someone else is put in the chair.
People are conscripted, they put on the uniform, and they become legitimate targets? It might as well be a robot doing the shooting. Same difference.
The pilot becomes responsible for those outcomes. Indiscriminately killing civilians, for example, is a war crime. It's easier to get an AI to commit war crimes than humans.
Perhaps, but I don't know if the difference is significant. Everything changes when we try to stretch rhetoric from stabbing someone with a sword to hypersonic missiles. We might hold the pilot responsible if they erase a building, but I'm far less comfortable blaming them. We know the targets are actually picked by computers using metadata. The difference gets increasingly vague.
They want Anthropic to enable mass surveillance and autonomous attack systems with no human in the loop.
Hardly compares to a kid downloading a model to experiment with.