Agreed. You don’t have to be an LLM maximalist or a doomer to see the opportunity for real, practical danger from ubiquitous surveillance and autonomous weapons. It would have been extremely easy for Dario to demonstrate the same level of backbone as Sam Altman or Sundar Pichai.
There is no moral leg to stand on here; he says in plain English that if they wanted to use Claude to perform mass surveillance on Canada, Mexico, the UK, or Germany, that would be perfectly fine.
This is a public note, but directed at the current administration, so reading it as a description of what is or is not moral is completely missing the point. This note is saying (1) we refuse to be used in this way, and (2) we are going to use "mass surveillance of US citizens" as our defensive line because it is at least backed by Constitutional arguments. Those same arguments ought to apply more broadly, but attempts to use them that way have already been trampled on and so would only weaken the arguments as a defense.
If it helps: refusing to tune Claude for domestic surveillance will also enable refusing to do the same for other surveillance, because they can make the honest argument that most things you'd do to improve Claude for any mass surveillance will also assist in domestic mass surveillance.
Perhaps you just have different moral values? I suspect each of the countries you mentioned spy on us. I also suspect we spy on them. I’m glad an American company wouldn’t be so foolish as to pretend otherwise.
Are we God's chosen people or something, that we are the only ones undeserving of mass surveillance? Are you implying that morality depends on citizenship in a particular state?
A moral stand? ... What? Did we read the same statement? It opens right out the gate with:
>I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
>Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
But when was the last time our "democratic values" were under attack by a foreign country and actually needed defending?
9/11? Pearl Harbor?
Maybe I'm missing something. We have a giant military and a tendency to use it. On occasion, against democratically elected leaders in other countries.
You're right; freedom isn't free. But foreign countries aren't exactly the biggest threats to American democracy at the moment.
You have the causality at least partially backwards. Why has it been so long and infrequent that the US has been in direct conflict with authoritarian adversaries? Because we have a giant military and a willingness to use it. Pacifism and isolationism do not work as defensive strategies.
Korea, Vietnam, Panama, Grenada, Libya, Lebanon, Iraq War I, Somalia, Haiti, Bosnia, Kosovo, Afghanistan, and Iraq War II were all fought for or over democratic ideals & the defense of democratic institutions.
All were driven by multiple competing and sometimes conflicting goals, and many look questionable in hindsight. It is fair to critique.
But it is absolutely not the case that the last time the US defended freedom through military means was WWII.
Not a single one of those wars was in defense of freedom and democracy.
I'm not going to go through all of those wars one-by-one, but are you joking with Iraq War II? That war was sold on the lie that Saddam Hussein had weapons of mass destruction and was somehow behind 9/11, by a president who himself had stolen the 2000 election by getting his brother to halt the counting of votes in Florida.
They are undeniably taking a moral stand. Among other things, the statement explains that there are two use cases that they refuse to do. This is a moral stand. It might not align with your morals, but it's still a moral stand.
Words are cheap. Actions aren't. Dario Amodei is putting his company on the line for what he believes in. That's courage, character and... yes, morality.
I am convinced that Amodei's "morality" is purely performative, and cynically employed as a marketing tactic. Time will tell, but most people will forget his lies.
“Dario is saying the right thing and doing the right thing and not ever acting otherwise, but I think it’s just performative so I’m still disappointed in him.”
We don't know how the military intended to use Claude, and neither do we know nor does the military know whether Claude without RLHF-imposed safety would have been more useful to them.
Ergo, this is a very convenient PR opportunity. The public assumes the worst, and Anthropic eggs this on with the implication that Claude is being used in autonomous weapons, which I find almost amusing.
He can now say goodbye to $200 million, and make up for it in positive publicity. Also, people will leave thinking that Claude is the best model, AND Anthropic are the heroes that staved off superintelligent killer robots for a while.
Even setting this aside, Dario is the silly guy who's "not sure whether Claude is sentient or not", who keeps using the UBI narrative to promote his product with the silent implication that LLMs actually ARE a path to AGI... Look, if you believe that, then that is where we differ, and I suppose that then the notion that Amodei is a moral man is comprehensible.
Oh, also the stealing. All the stealing. But he is not alone there by any means.
edit: to actually answer your question, this act in itself is not what prompted me to say that he is an immoral man. Your comment did.
> to promote his product with the silent implication that LLMs actually ARE a path to AGI
That isn't implied. The thought process is a) if we invent AGI through some other method, we should still treat LLMs nicely because it's a credible commitment we'll treat the AGI well and b) having evidence in the pretraining data and on the internet that we treat LLMs well makes it easier to align new ones when training them.
Anyway, your argument seems to be that it's unfair that he has the opportunity to do something moral in public because it makes him look moral?
His actions seem pretty consistent with a belief that AI will be significant and societally-changing in the future. You can disagree with that belief but it's different to him being a liar.
The $200m is not the risk here. They threatened labelling Anthropic as a supply chain risk, which would be genuinely damaging.
> The DoW is the largest employer in America, and a staggering number of companies have random subsidiaries that do work for it.
> All of those companies would now have faced this compliance nightmare. [to not use Anthropic in any of their business or suppliers]
... which would impact Anthropic's primary customer base (businesses). Even for those not directly affected, it adds uncertainty in the brand.
It’s possible Dario is a bad person pretending to be good and Sundar is a good person only pretending to be bad. People argue whether true selflessness exists at all or whether it’s all a charade.
But if the “performance” involves doing good things, at the end of the day that’s good enough for me.
Standing up to the US government has real and serious consequences. Peter Hegseth threatened to designate Anthropic a supply chain risk, meaning not only would Anthropic likely be dropped as a Pentagon supplier, but it would also risk losing customers that do business with the military, such as Boeing or Lockheed Martin. Whatever tactic you think he is employing, that's potentially massive revenue lost, at a time when they need every customer they can get.
These are literally words. The DoW could still easily exploit these platforms, and nothing Anthropic has done can prevent it, other than saying (publicly), "we disagree."
The dispute seems to be specifically about safeguards that Anthropic has in its models and/or harnesses, that the DoD wants removed, which Anthropic refuses to do, and won’t sign a contract requiring their removal. Having implemented the safeguards and refusing their removal are actions, not “literally words”.
It’s a contract dispute. Contracts are more than just talk.
While it is true that DoW could try to bypass the contract and do whatever they want, if it were that easy they wouldn’t be asking for a contract in the first place.
Should probably look up how many private companies are suing the government at any one time because of a breach of contract. And that's publicly breaching.
NSA and other three-letter agencies happily do it under cloak and dagger.
I agree with you that the govt can and does violate contracts. So the fact that they need Anthropic to agree signals that it’s more than just lawyers preventing the DoW from doing whatever they want.
What's the US history around nationalization? Would "confiscation" ever be a likelihood on escalation?
On a quick search I came up with an article that, at least thematically, proposes such ideas about the current administration: "Nationalization by Stealth: Trump's New Industrial Playbook"
I disagree. There is a class of leaders in this country that is complicit with the administration's use of violence on the tacit understanding that the violence not be directed at them. Arresting one of those people would be an act of desperation that would likely cause the rats to flee the sinking ship. And it isn't even clear whether Trump could actually manufacture any charges here. Look at the dropped charges against Mark Kelly and those other politicians as an example. The administration might be able to make up stories to arrest random immigrants and college kids, but it clearly hasn't been able to indiscriminately jail powerful political opponents.
Meanwhile, Dario knows his product can't be trusted to actually decide who should live and who should die, so what happens the first time his hypothetical AI killing machines make the wrong decision? Who gets the blame for that? Would the American government be willing to throw him under the bus in the face of international outrage? It's certainly a possibility.
In all seriousness, The Hague has no jurisdiction over Americans, and Congress has already authorized the use of military force against The Hague should it ever attempt to prosecute Americans.
It's not so clear the company is actually on the line. Maybe they can compel Anthropic to do what it is not willing to do, but this is not the final act. The government needs to respond, Anthropic will then need to respond in turn, and courts may become involved at that point, depending on whether Anthropic acquiesces. Make a prominent statement while in the news cycle, and let the rest unfold under less media attention.
I don't trust the ranges put into job listings. It may give you a ballpark idea of the range, but anything that looks even slightly attractive or above market is usually put there to attract resumes. The real number is worse.
The problem is specifically with unregulated sites that don't verify the identities of players. The bots aren't so much the problem as the fact that they can collude and share hole cards. But FYI, bots aren't actually good at playing poker outside of specific scenarios against bad players, or scenarios where the decision tree is small (i.e., short-stack tournaments where the decisions are precomputed). You can imagine how massive your edge is when you have a pair of 9s, you know there are already three dead aces, and your only decision is all-in or fold.
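To make the "precomputed decisions" point concrete, a push/fold bot in a short-stack spot can reduce to a simple table lookup. This is a toy sketch; the hands and thresholds below are made up for illustration and are not a real solver-derived chart:

```python
# Toy sketch of a precomputed push/fold chart for short-stack play.
# Values are the maximum stack (in big blinds) at which to shove a hand;
# these specific numbers are illustrative, not from an actual Nash chart.
PUSH_THRESHOLD_BB = {
    "AA": 100, "KK": 100, "99": 20, "A5s": 12, "72o": 0,
}

def decide(hand: str, stack_bb: float) -> str:
    """Return 'push' or 'fold' by table lookup; unknown hands fold."""
    return "push" if stack_bb <= PUSH_THRESHOLD_BB.get(hand, 0) else "fold"

print(decide("99", 15))   # shallow enough to shove with 99
print(decide("A5s", 30))  # too deep for this chart: fold
```

Since the whole strategy is a lookup, the bot plays this narrow game essentially perfectly and instantly, which is exactly why short-stack formats are where bots shine.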
Many academics that I've talked with say that their departments are cutting back to 50% of last year's PhD enrollment due to budget cuts and uncertainty. These were in biology and chemistry.
This needs to be stressed. It's not just uncertainty about grants; that's a given, even when federal funding is stable.
Schools are questioning basic financial aid now. If Trump follows through on eliminating ED, no one is confident that there's a plan for any of the essential services and payments... because there's never really been a plan for anything else.
In this context ED could mean the Education Department or Erectile Dysfunction. If he eliminates ED, he may be the most popular politician of all time (in a certain demographic).