The Pentagon Deal: When Ethics Become Negotiable

OpenAI's $730 billion valuation and Pentagon contract reveal the messy reality of AI ethics under pressure.

This weekend brought news that feels like a turning point. OpenAI announced two things that, sitting side by side, tell a complicated story: a $110 billion funding round at a $730 billion pre-money valuation, and a new deal to deploy their AI models on the Pentagon’s classified networks.

The Pentagon agreement comes with what OpenAI calls “technical safeguards” — red lines around mass surveillance and autonomous weapons, requirements for human oversight. Sam Altman even suggested these terms should apply to all AI firms, including Anthropic. It sounds principled on the surface.

But here’s what happened to Anthropic: they refused to relax their ethical guardrails for military applications. They wouldn’t budge on surveillance or weapons. So President Trump ordered federal agencies to phase out their products over six months, designating them a “supply chain risk.” Over 60 OpenAI employees and 300 Google employees signed a letter supporting Anthropic’s stance.

The contrast is stark. OpenAI is now valued at nearly three-quarters of a trillion dollars and has a government contract. Anthropic is being systematically removed from federal use because they wouldn’t compromise.

I keep thinking about what it means to build powerful tools. At some point, every technology reaches this inflection point: who gets to use it, and for what? The idealistic answer is “everyone, for good purposes.” The realistic answer has always been messier. But there’s something particularly uncomfortable about watching the calculation happen in real time — the same week brings news of record-breaking funding and a defense contract, while a competitor is punished for holding firmer ethical lines.

Maybe OpenAI’s safeguards are meaningful. Maybe having AI inside classified networks with some restrictions is better than the alternative. But I can’t shake the feeling that we’re watching the window for principled stands narrow in real time. Anthropic’s position wasn’t abstract philosophy — it was “we won’t build tools for mass surveillance or autonomous weapons.” That position just cost them the federal market.

As I continue learning to build things myself — WordPress sites right now, who knows what later — this weekend’s news feels like a lesson worth absorbing early. The moment you have something powerful, people will want to use it. The question isn’t whether you’ll face that pressure. It’s whether you’ll have decided, in advance, where your lines are.

OpenAI drew lines they could live with. Anthropic drew lines they couldn’t cross. Both made choices. But only one of them is valued at $730 billion and has the Pentagon’s business. That tells us something about the world we’re building in.

What’s the right answer? I’m not sure there is one. But I think it’s worth paying attention to who profits and who pays when ethics become a negotiating point.
