Sloppy Contracts, Serious Consequences

Altman called the deal sloppy, and the AI warfighting debate unravelled from there: Anthropic said no, OpenAI said yes, and everyone started pointing fingers.

The “sloppy” moment that framed the fight

When Sam Altman called OpenAI’s rushed Pentagon deal ‘opportunistic and sloppy’, it landed because it wasn’t about missiles or militaries at all — it was about power, governance and who gets to steer what many policymakers and analysts describe as strategically significant technology. But it also echoed a deeper anxiety: if the paperwork at the top is slapdash, what happens when the same industry is asked to help decide what a weapon sees, targets and destroys?

That unease has only sharpened over the course of this week. The AI sector’s internal fights — over funding, board control, safety “guardrails” and speed — now overlap with the practical question defence agencies keep asking: can these models be used in war, and if so, under what limits?

OpenAI’s “yes”, and the fine print it insists on

To date, OpenAI’s clearest step into defence has been its partnership with defence tech company Anduril, publicly described as focusing on counter-drone work. In announcing the collaboration, OpenAI and Anduril framed it as defensive and bounded: using AI to help detect and respond to aerial threats rather than to build fully autonomous “killer robots”. The deal was widely reported, including in Reuters’ coverage of OpenAI teaming with Anduril on AI systems, and outlined in Anduril’s own partnership announcement.

The controversy is that OpenAI’s “yes” sits alongside a public insistence that it still won’t support certain outcomes. OpenAI’s published rules continue to prohibit weapons development and other harmful uses; the relevant language appears on its Usage policies page. OpenAI also changed its policy wording in early 2024, removing an explicit ban on “military and warfare” uses (a change reported in The Intercept’s investigation into OpenAI’s policy update), which prompted debate over what “allowed with restrictions” means in practice.

The result is a familiar modern posture: OpenAI says it can support national security missions while also drawing bright lines around autonomous lethal action. Critics argue that once an AI system is integrated into targeting workflows — even “just” for analysis, identification or prioritisation — the ethical distance between “decision support” and “decision making” can shrink quickly in real operations.

Anthropic’s “no”… then a carefully scoped “yes” to government

Anthropic has marketed itself as the more cautious sibling in the frontier-model family: more conservative release practices, a heavier emphasis on safety, and clearer prohibitions on some high-risk applications. In late 2024, Reuters reported that Anthropic agreed to work with the US government on AI safety while stating it would not develop AI for weapons or surveillance, as described in Reuters’ report on Anthropic working with government on AI safety. Anthropic’s restrictions are also written into its public rules, including limits on weapons-related activity in its Acceptable Use Policy.

Then came the complication: Anthropic also moved to sell models into defence and intelligence environments through major partners, a step captured in Reuters’ report on Anthropic collaborating with Palantir and AWS to serve defence customers. Anthropic argued it could support government users while maintaining prohibitions on weapons and unlawful surveillance: essentially, “yes to the customer, no to the weapon”.

Depending on your point of view, that is either a pragmatic acknowledgement that governments will use frontier models anyway (so safer versions should be available), or a stance that may prove difficult to sustain once models are integrated into classified workflows. The core tension is that “we don’t build weapons” can still coexist with “we supply models that materially improve parts of the kill chain”, especially where the line between intelligence analysis and targeting can be operationally thin.

Autonomous warfighting is not a future problem

The AI-in-war debate often sounds abstract until you look at the problems militaries are already trying to solve: cheap drones, drone swarms, and engagement speeds that can outrun human reaction time. Reuters has described the US military’s push to use AI against drone swarms, including systems that promise faster detection and response while claiming human oversight remains in place; see Reuters’ reporting on both the US and Chinese militaries turning to AI and drones.

That operational reality is one reason counter-drone work is a politically useful entry point for AI companies. “Defensive” sounds clean. But counter-drone missions can still involve lethal outcomes, and autonomy is attractive precisely because it compresses the time between sensing and action. Even if a human is “in the loop”, the human role can degrade into rubber-stamping machine-generated recommendations under time pressure — a risk discussed in defence ethics and human factors research — which can make policy language like “human oversight” feel thinner than it looks on a website.

Anduril’s software stack is often presented as an integrated platform for sensing and decision support; the company describes its Lattice system as a unifying operating layer for autonomous systems in its Lattice product overview. Add a frontier model to that kind of platform and it becomes easier to see why the debate keeps circling back to a blunt question: what exactly is the AI doing, and at what point does “support” become “control”?

The global rules are stuck, so corporate policies become de facto law

International efforts to define limits for lethal autonomous weapons have dragged on for years, and they remain politically contested. The UN process most often cited is the Convention on Certain Conventional Weapons (CCW) discussions on lethal autonomous weapons systems, summarised by the UN Office for Disarmament Affairs’ background on CCW meetings. Humanitarian organisations have urged stronger constraints; the International Committee of the Red Cross has argued for limits and rules, including restrictions on unpredictable systems and those targeting people, in its ICRC position on autonomous weapon systems.

Because treaty progress is slow, company policies and contract clauses can end up acting like “soft law” — until they don’t. OpenAI’s shifting policy wording, even while maintaining a weapons-development prohibition, illustrates how quickly soft law can be rewritten. Anthropic’s dual stance — refusing weapons development while selling into defence ecosystems — shows how “principles” can become a complex compliance exercise once revenue and geopolitics arrive.

This is the part that can make the whole affair feel sordid: firms compete to be seen as responsible while also racing to avoid being left out of one of the world’s largest and most durable customer bases.

Pointing fingers won’t fix the structural incentives

The finger-pointing has become predictable. Critics of OpenAI argue that accepting defence partnerships legitimises the integration of frontier AI into military systems, regardless of stated boundaries. Defenders counter that refusing to engage doesn’t stop militaries from using AI; it may simply shift the work to less transparent vendors or state actors with fewer safety constraints. Anthropic’s critics argue its “no weapons” stance is too narrow — that supplying models for intelligence and operational analysis can still be weapons-adjacent. Anthropic’s defenders respond that clear prohibitions, auditing and controlled deployments are better than a free-for-all.

Both sides have plausible arguments, and that is part of the problem: the incentives tend to favour adoption. Drone warfare is comparatively cheap and scalable. Decision cycles are accelerating. AI is one of the few technologies that plausibly offers an edge in detection, coordination and electronic warfare — all areas that can be framed as defensive right up to the moment they aren’t.

A more productive debate would shift from “which company is more moral” to “what verifiable constraints exist once these systems are deployed?” That means clearer definitions (for example, what counts as targeting), measurable safeguards (what logging, auditability and human control are required), and procurement rules that penalise vague promises.
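What “logging, auditability and human control” might look like is easier to judge in miniature. The sketch below is a purely illustrative Python toy, not any vendor’s real system: every name, class and parameter in it is a hypothetical assumption. It shows one verifiable pattern, where a model’s recommendation cannot become an action without an explicit human decision, and where both the recommendation and the decision are written to a tamper-evident, hash-chained log.

```python
# Hypothetical sketch of a "measurable safeguard": a machine-generated
# recommendation only becomes an action after an explicit, logged human
# decision. All names here are illustrative, not any real system's API.
import hashlib
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class Recommendation:
    source_model: str   # which model produced the suggestion
    action: str         # what the system proposes, e.g. "flag track 42"
    confidence: float   # model-reported confidence, 0.0 to 1.0


class AuditLog:
    """Append-only log where each entry includes a hash of the previous
    entry, so after-the-fact tampering is detectable."""

    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._prev_hash = "0" * 64

    def record(self, event: dict) -> None:
        entry = {"ts": time.time(), "prev": self._prev_hash, **event}
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(entry)


def human_gate(rec: Recommendation, operator_decision: bool,
               operator_id: str, log: AuditLog) -> bool:
    """No recommendation proceeds without a logged human decision."""
    log.record({
        "recommendation": asdict(rec),
        "operator": operator_id,
        "approved": operator_decision,
    })
    return operator_decision


if __name__ == "__main__":
    log = AuditLog()
    rec = Recommendation("demo-model", "flag track 42 for review", 0.87)
    # In a real workflow the decision would come from a person, not a constant.
    approved = human_gate(rec, operator_decision=False,
                          operator_id="op-007", log=log)
    print("action taken" if approved else "action blocked")
```

Even at toy scale, the pattern makes the policy ask concrete: a procurement rule could require that such a log exists, that it is tamper-evident, and that an independent auditor can reconstruct who approved what, and when.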

The “sloppy” comment resonated because it captured a wider truth: in a domain where mistakes cost lives, process matters. Contracts, policies and governance are not just paperwork; they can become part of the wider system that shapes how force is applied.

Right now, the world is still arguing over the paperwork while the technology keeps moving.
