Defense Secretary Pete Hegseth has personally summoned Dario Amodei, the chief executive of artificial intelligence company Anthropic, to discuss the military’s use of the company’s flagship AI model, Claude. The unusual meeting, first reported by TechCrunch, signals an escalating confrontation between the Pentagon and one of Silicon Valley’s most prominent AI firms over the boundaries of artificial intelligence deployment in national defense.
The summons comes at a moment of acute friction between the Department of Defense and the AI industry. While companies like Palantir, Anduril, and even OpenAI have moved aggressively to court military contracts, Anthropic has maintained a more cautious posture — one that has increasingly drawn the ire of defense officials who view the company’s restrictions on military applications as an obstacle to national security objectives. The meeting between Hegseth and Amodei is expected to address Anthropic’s acceptable use policy, which currently places significant limitations on how Claude can be deployed in military and intelligence contexts.
A Company Built on Safety, Now Under Political Pressure
Anthropic was founded in 2021 by Amodei and his sister Daniela Amodei, both former executives at OpenAI, with a stated mission of building AI systems that are safe, interpretable, and aligned with human values. The company has positioned itself as the responsible actor in a field often criticized for moving too fast. Its acceptable use policy explicitly restricts certain military applications of Claude, including those involving weapons targeting, surveillance of specific individuals, and lethal autonomous decision-making.
That safety-first philosophy, which once earned Anthropic praise from regulators and ethicists, has become a source of tension with the current administration. According to TechCrunch, the Defense Department has grown frustrated with what officials perceive as Anthropic’s unwillingness to fully support military operations. The summons to Amodei is widely interpreted as an attempt to pressure the company into softening or removing its self-imposed restrictions on defense-related use cases.
The Pentagon’s AI Appetite Grows
The Department of Defense has been rapidly expanding its use of AI across virtually every domain of military operations. From logistics and supply chain optimization to intelligence analysis and battlefield decision support, AI tools have become embedded in Pentagon workflows at a pace that would have been unthinkable five years ago. The department’s Chief Digital and Artificial Intelligence Office, known as CDAO, has been tasked with accelerating AI adoption across all branches of the military.
In this context, Claude — widely regarded as one of the most capable large language models available — represents a prize asset. Military planners and intelligence analysts have expressed interest in using Claude for tasks ranging from processing vast quantities of open-source intelligence to drafting operational plans and summarizing classified briefings. Some of these applications fall within Anthropic’s current acceptable use guidelines, but others push against the boundaries the company has drawn. The Pentagon’s position, according to officials familiar with the discussions cited by TechCrunch, is that Anthropic’s restrictions are overly broad and prevent legitimate, non-lethal military uses of the technology.
Hegseth’s Approach: Confrontation Over Collaboration
Pete Hegseth, who took over as Defense Secretary after a contentious confirmation process, has adopted an aggressive posture toward technology companies that resist full cooperation with the military. His approach marks a departure from the more diplomatic tone of his predecessors, who generally sought to build partnerships with Silicon Valley through incentives and collaborative frameworks like the Defense Innovation Unit.
Hegseth has publicly stated that American technology companies have a patriotic obligation to support the nation’s defense. In remarks earlier this year, he suggested that companies restricting military access to their AI tools were effectively aiding adversaries like China, which faces no such private-sector resistance to military AI deployment. The summons to Amodei fits squarely within this confrontational strategy. Rather than negotiating through intermediaries or procurement channels, Hegseth has chosen to bring the issue directly to the CEO — a move designed to maximize pressure and public attention.
The Broader Industry Watches Closely
The meeting carries implications far beyond Anthropic. Every major AI company in the United States is watching to see how Amodei responds and what, if any, concessions he makes. OpenAI, once similarly cautious about military partnerships, has in recent months softened its own acceptable use policies and entered into contracts with defense and intelligence agencies. Google, despite the employee backlash that forced it to abandon Project Maven in 2018, has since re-engaged with the Pentagon through its Google Public Sector division. Microsoft has long maintained deep defense ties through its Azure Government cloud platform and its ownership of Nuance Communications.
Anthropic’s competitors see the company’s reluctance as both a principled stand and a potential market opportunity. If Anthropic continues to resist full military engagement, rivals stand to capture billions of dollars in defense AI contracts. But if Amodei capitulates under pressure, it could undermine the company’s brand identity and alienate a significant portion of its workforce and investor base — many of whom were drawn to Anthropic precisely because of its commitment to safety and ethical boundaries.
The Legal and Ethical Fault Lines
The confrontation raises profound legal and ethical questions about the relationship between the federal government and private technology companies. Unlike traditional defense contractors such as Lockheed Martin or Raytheon, which were built specifically to serve military customers, AI companies like Anthropic develop general-purpose technologies with both civilian and military applications. The question of whether and how the government can compel a private company to make its technology available for military purposes is legally murky.
There is no existing statute that directly requires a commercial AI company to provide its products for defense use, though the Defense Production Act gives the president broad authority to direct private industry to prioritize government contracts during national emergencies. Whether the current administration would invoke such authority over AI technology remains an open question, but the mere possibility adds weight to the Pentagon’s negotiating position. Legal scholars have noted that the government’s leverage is significant: Anthropic relies on cloud computing infrastructure from Amazon Web Services, which itself holds major defense contracts, creating a web of dependencies that could be used as pressure points.
Amodei’s Tightrope Walk
For Dario Amodei personally, the meeting with Hegseth represents one of the most consequential moments of his tenure as CEO. Amodei has been vocal about his belief that advanced AI systems pose existential risks if deployed carelessly, and he has argued that voluntary safety commitments by AI companies are preferable to government regulation. But that argument loses force if the government itself is the entity pushing for fewer restrictions.
According to the TechCrunch report, people close to Amodei say he is preparing to draw a distinction between military applications that are purely administrative or analytical — such as document summarization, translation, and logistics planning — and those that involve direct support for lethal operations. This middle-ground approach would allow Anthropic to expand its defense business without abandoning its core safety principles. Whether the Pentagon will accept such a compromise remains to be seen.
What Comes Next for AI and National Defense
The outcome of this confrontation will set a precedent that shapes the relationship between the AI industry and the U.S. military for years to come. If the Pentagon succeeds in pressuring Anthropic to loosen its restrictions, it will send a clear signal to every AI company that resistance to defense cooperation carries real costs. If Amodei holds firm and maintains meaningful limits on military use, it could embolden other companies to do the same — though at the risk of regulatory or contractual retaliation.
The stakes extend well beyond American borders. China’s military-civil fusion strategy has effectively eliminated any barrier between commercial AI development and military application, giving the People’s Liberation Army access to the full output of the country’s technology sector. U.S. defense officials argue that maintaining similar restrictions on American AI companies amounts to unilateral disarmament in the global AI competition. Amodei and his allies counter that the strength of American AI lies precisely in the trust and ethical frameworks that distinguish it from authoritarian alternatives — and that eroding those frameworks in pursuit of short-term military advantage would be a strategic mistake of the highest order.
The meeting between Hegseth and Amodei has not yet been publicly scheduled, but according to TechCrunch, it is expected to take place within the coming weeks. When it does, it will be far more than a conversation between two men. It will be a defining moment in the debate over who controls America’s most powerful technology — and what it will be used for.