Summary
According to a source familiar with the negotiations, on Friday morning, Anthropic received word that Hegseth’s team would make a major concession. The Pentagon had kept trying to leave itself little escape hatches in the agreements it proposed to Anthropic. It would pledge not to use Anthropic’s AI for mass domestic surveillance or for fully autonomous killing machines, but then qualify those pledges with loophole-y phrases like “as appropriate,” suggesting that the terms were subject to change based on the administration’s interpretation of a given situation.
[...]
Anthropic’s team was relieved to hear that the government would be willing to remove those words, but one big problem remained: On Friday afternoon, Anthropic learned that the Pentagon still wanted to use the company’s AI to analyze bulk data collected from Americans. That could include information such as the questions you ask your favorite chatbot, your Google search history, your GPS-tracked movements, and your credit-card transactions, all of which could be cross-referenced with other details about your life. Anthropic’s leadership told Hegseth’s team that was a bridge too far, and the deal fell apart.

Soon after, Hegseth directed the U.S. military’s contractors, suppliers, and partners to stop doing business with Anthropic. The list of companies that contract with the military is extensive and includes Amazon, which supplies much of Anthropic’s computing infrastructure.

The Department of Defense did not respond to a request for comment. A spokesperson for Anthropic referred me to the company’s statement addressing Hegseth’s remarks.
[...]
According to my source, at one point during the negotiations, it was suggested that this impasse over autonomous weapons could be resolved if the Pentagon would simply promise to keep the company’s AI in the cloud and out of the weapons themselves. The argument was that the models could be kept outside so-called edge systems, be they drones or other kinds of autonomous weapons. They might synthesize intelligence before an operation, but they wouldn’t actually be making kill decisions. The AI’s hands would be clean of any deadly errors that the drones made.
But Anthropic wasn’t satisfied with this solution. The company reasoned that in modern military AI architectures, the distinction between the cloud and the edge is no longer all that well defined. It’s less a wall and more of a gradient. Drones on the battlefield can now be orchestrated through mesh networks that include cloud data centers. And although the drones are designed to survive on their own, the military’s impulse will always be to maintain as much connectivity as possible between them and the most powerful models in the cloud; the better the connection, the more intelligent the machine.