The Brief
  • Frontier AI is becoming a capital-intensive infrastructure race, not a lightweight software business.
  • State demand becomes the natural market once labs need revenue large enough to justify rising compute, data-centre and chip costs.
  • Military AI is not just autonomous weapons; it includes perception, prediction, planning and targeting-adjacent systems.
  • The deeper risk is structural: the market may reward pliability over principle as lawmakers fall behind.

The incentive machine

For years, the frontier-AI story sold the world two ideas at once. The first was technological: bigger models would unlock new scientific, commercial, and cognitive capability. The second was moral: these same systems could be built responsibly, governed carefully, and steered toward net social benefit.

What that story downplayed was the industrial reality underneath it. In the AI “arms race”, and in the pursuit of AGI (artificial general intelligence), frontier AI should not be mistaken for a lightweight software business. It is a capital-intensive race for compute, data centres, chips, electricity, water, and engineering talent.

Anthropic announced a $30 billion (R500 billion) Series G in February 2026 after raising $13 billion (R220 billion) in 2025; OpenAI’s SoftBank-led financing was reported at up to $40 billion (R675 billion); and Alphabet told investors it expects to spend $175 billion to $185 billion (R2.9 trillion to R3.1 trillion) on capital expenditure in 2026 alone. The economics of scale are now so extreme that AI is starting to look less like consumer software and more like infrastructure.

The natural market

The scale of infrastructure required matters because the current business model eventually needs infrastructure-scale customers. There are only so many markets large enough, sticky enough, and politically strategic enough to justify this level of spending. The state is one of them.

Not the state as imagined in glossy responsible-AI manifestos, but the state as it actually exists: a buyer concerned with intelligence, cyber defence, surveillance, deterrence, logistics, and war. Once frontier labs need durable revenue, strategic legitimacy, and support for ever-larger compute footprints, government demand stops looking like a side channel and starts looking like a natural destination. The ethical dilemma comes later. The incentive arrives first.

The battle lines are drawn

The American case is where that tension is now playing out most visibly. Anthropic and the Pentagon reportedly fell into a standoff after the company refused to remove restrictions related to fully autonomous weapons and mass domestic surveillance. The dispute escalated into a “supply chain risk” designation, litigation, and a scramble over whether Anthropic tools would have to be removed from defence systems.

OpenAI, by contrast, has leaned more openly into government business. Under its “OpenAI for Government” initiative, the company says a Defense Department contract with a ceiling of $200 million (R3.3 billion) will be used to prototype frontier AI for administrative work, healthcare-related functions, acquisition analysis, and proactive cyber defence.

This is where the AI-industry debate gets too moralistic and not economic enough. Anthropic, OpenAI, Google, and Palantir are not simply making abstract ethical choices in a vacuum. They are occupying different positions in the same emerging stack. Anthropic has tried to preserve a narrow set of non-negotiable guardrails. OpenAI appears more willing to structure cooperation with the state through contractual and policy language. Palantir already lives on the operational side of the divide, where AI is not a laboratory abstraction but something embedded into defence workflows, decision systems, and real-time action.

Kill-tech

The real picture of what we call kill-tech is broader and more troubling than many imagine. Military AI includes perception systems that classify objects in satellite or drone imagery; prediction systems that rank threats or prioritise targets; language systems that summarise intelligence or assist planning; and autonomy-adjacent systems that narrow the range of human options before a decision is ever formally made.

The ethical problem is not just the machine that fires. It is the entire algorithmic chain that shapes what gets seen, flagged, escalated, or ignored.

Moral hazard: invested for progress, deployed for war

AI systems are not created in a vacuum, which is why employees matter more than they are often given credit for. Engineers and researchers are not merely expressing lifestyle politics when they push back against military use of their IP. They are reacting to a recurring pattern in technological history: inventions built for progress are routinely absorbed into systems of coercion.

Google learned that during the Project Maven revolt in 2018, when thousands of employees objected to the company’s AI being used to analyse drone footage for the Pentagon. More recently, Google faced internal backlash over Project Nimbus, the company’s cloud contract with the Israeli government. These episodes are not identical, but they reveal something important. In frontier AI, labour dissent has become one of the few places where moral resistance can still surface before deployment hardens into institutional routine.

In frontier AI, labour dissent may be one of the last real kill switches left.

Who finances kill-tech?

There is also a deeper moral hazard here. A frontier lab can tell itself it is building general intelligence for medicine, education, science, or productivity. Investors can tell themselves they are backing a platform for human flourishing. Boards can tell themselves they are funding tools that will augment society. But general-purpose systems do not stay in the moral category in which they were financed.

Once they become capable and strategic enough, they get repurposed. That is the dark side of dual-use technology: the upside is privatised early, while the downstream ethical burden is socialised later. Frontier AI firms do not need to intend war for their systems to become useful to war. They only need to build something powerful enough that states decide they cannot afford not to use it.

Lawmakers behind the curve

And yet lawmakers are badly behind the curve. The most revealing critique is that the United States is drifting toward regulation by contract in military AI. That means the effective boundaries are not being set primarily by statute, treaty, or democratic process, but by bilateral negotiations between vendors and the government.

One company tries to draw a line at autonomous weapons. Another references existing policy. Another accepts the operational logic and integrates anyway. The problem is not flexibility. The problem is legitimacy. When the rules of military AI are written through procurement, safety teams, and legal departments rather than public law, accountability becomes fragmented and opaque.

Citizens do not vote on usage clauses. Engineers do not control downstream interpretation. And governments, when pressed, can always argue that no private company should get to dictate the conduct of state power.

The selection effect

The structural risk is that this market may select for the wrong traits. If defence demand becomes a defining customer segment for frontier AI, the firms that win may not be those with the strongest principles, but those most able to adapt their principles to sovereign demand. In peacetime, ethical AI looks like a differentiator. In a militarising market, it can start to look like friction.

That is the selection effect hanging over this industry now. Not survival of the safest, but survival of the most pliable.

Once you pop, you can’t stop

This is what makes the current moment more serious than another Silicon Valley values debate. The people steering these companies are not cartoon villains. They are founders, investors, executives, board members, procurement officials, and military planners responding rationally to their own incentives. That is precisely the problem.

Pandora’s box, in other words, is already open. The question is no longer whether frontier AI can be kept pure of war. That argument has already been overtaken by events. The real question is whether democratic societies can still impose durable public boundaries on technologies being industrially pulled toward conflict, surveillance, and strategic coercion.

If they cannot, then the “kill switch” will not be a feature inside the model. It will be the absence of one outside it.

The Ledger View

  • Economic core: Frontier AI is becoming infrastructure, and infrastructure-scale industries eventually look for state-scale customers.
  • Data science core: Military AI is a system stack of classification, prediction, planning and operational support, not just a single autonomous weapon.
  • Governance gap: Law is lagging while procurement, contracts and internal policies are setting real-world boundaries.
  • Bigger question: If the market rewards the most adaptable firms in wartime, ethical restraint may become a competitive disadvantage.