Anthropic’s feud with the Department of Defense is more than a contract dispute. It’s a preview of a looming power struggle between elected governments and increasingly powerful AI companies.
CEO Dario Amodei refused the Pentagon's request to use Anthropic's AI for "any lawful purpose," worrying that so broad a license could enable domestic surveillance or autonomous weapons. That stance could have barred certain military uses even if the government deemed them legal, according to Jessica Tillipman, a government contracts expert at George Washington University Law School.
The dispute offers a glimpse of a potential future in which AI CEOs, not democratically elected leaders, decide what's acceptable.
“It does not sound crazy to a Silicon Valley executive that maybe they could be in charge instead of you,” AI alignment researcher Eliezer Yudkowsky warned politicians during the dispute. “If they actually could control superintelligence, they’d discard you like used toilet paper.”
Here’s the concern: In a democracy, voters can remove an elected official who makes bad decisions. But if a CEO makes a harmful choice, especially one that boosts revenue or profit, the public has little recourse. Shareholders, not voters, determine whether CEOs keep their jobs.
Whatever Amodei or other AI leaders say, corporations exist to generate returns. Investors pour billions into Anthropic, expecting enormous future profits. Lofty rhetoric about safety and ethics may be sincere, but it also helps win customers and recruit talent. As former product manager Aakash Gupta quipped, “Dario is a better marketer than people give him credit for.”
Anthropic’s own decisions reflect these tensions. In late February, it scrapped a prior commitment to pause scaling or delay deploying models if safety measures lagged behind capabilities — a striking shift for a company whose CEO frequently warns about runaway AI risks.
Amodei also reversed course on taking Middle Eastern investment when the company needed a massive funding round. And this year, Bloomberg reported that Anthropic applied to compete for a $100 million Pentagon prize to design autonomous drone swarm technology.
The pattern isn’t unique. When Google acquired DeepMind, cofounder Demis Hassabis insisted its AI couldn’t be used for military purposes. Google agreed, and the deal closed. But in early 2025, Google updated its AI Principles, dropping its pledge not to pursue weapons or surveillance applications.
Early AI startup Clarifai followed a similar arc, according to former employee Michal Wolski: “We had the same stance at Clarifai in 2014, then it was time to pay back all the money that we raised.”
The result of this inevitable backsliding: In July, Google, OpenAI, Anthropic, and xAI each signed Defense Department contracts worth up to $200 million. So, if you’ve been using Gemini, ChatGPT, Claude, or Grok as your chatbot, you’ve already been supporting AI companies that help the US wage war.
Tech blogger Ben Thompson argues the right answer is new laws and accountable oversight — not cheering for unelected executives to decide how AI should be used. That’s “the road to an even more despotic future,” he wrote this week.
Democracy is messy. But it’s preferable to outsourcing the levers of power to CEOs whose primary duty is to shareholders — and whose tools may soon be the most consequential ever built.
Anthropic spokespeople didn’t respond to a request for comment on Wednesday morning.