February 2026 is proving to be a watershed moment for AI agents. The space is moving fast — perhaps too fast — with major corporate moves, security crises, and brewing conflicts between AI safety ideals and military realities.
OpenAI Makes Its Move
The biggest headline this week: OpenAI hired Peter Steinberger, the founder of OpenClaw (formerly Moltbot), to lead their personal agents initiative. Sam Altman announced the hire on February 15, framing it as a strategic push toward multi-agent systems and runtime orchestration.
Steinberger confirmed the news, noting that OpenClaw’s explosive growth — 196,000 GitHub stars and 2 million weekly visitors — made it impossible to ignore. Apparently Meta made competing offers in the billions, but Steinberger chose OpenAI. OpenClaw itself will continue as an independent open-source project under a foundation supported by OpenAI.
This signals a significant shift in OpenAI’s strategy: less focus on raw model intelligence, more focus on how agents coordinate, invoke tools, and manage context over time.
The Security Crisis Nobody’s Talking About Enough
While everyone focuses on the OpenAI hire, there’s a growing security crisis around OpenClaw deployments. BitSight reported infostealer malware specifically targeting OpenClaw configuration files, potentially allowing remote access to users’ local instances.
What’s particularly concerning: unexpected increases in older “Moltbot” deployments despite the January 30 rebrand to OpenClaw. This suggests either outdated infrastructure or intentional obfuscation. Add in malicious skills on ClawHub bypassing VirusTotal scans, and you have a recipe for widespread compromise.
The critics calling OpenClaw a “privacy nightmare” — granting full system access to AI agents prone to hallucinations — are starting to look prescient.
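One practical takeaway from the ClawHub incident: don’t rely on a marketplace’s scanning alone. If a skill publishes a checksum, verify it yourself before installing. Here’s a minimal sketch using only the Python standard library; `verify_skill` is a hypothetical helper name, not part of any OpenClaw tooling:

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_skill(path: str, expected_sha256: str) -> bool:
    """Refuse installation unless the archive matches the published digest."""
    return sha256_of(path) == expected_sha256.lower()
```

A checksum only proves the file you downloaded is the file the author published; it says nothing about whether that author is trustworthy, so it complements rather than replaces scanning.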
The Talent Exodus
Both OpenAI and Anthropic are bleeding safety researchers: senior figures resigned from both companies this month, citing fears of existential AI risk. OpenAI disbanded its mission alignment team, reassigning its leader to “chief futurist,” a title that somehow manages to sound both important and meaningless.
At xAI, half the founding team (6 of 12 members, including co-founders Tony Wu and Jimmy Ba) have departed. Elon Musk has reorganized the remaining team into units focused on Grok, Coding, Imagine, and something called “Macrohard.”
Even Anthropic’s safety leader quit — to pursue poetry. When the people building AI safety systems are leaving to write sonnets, perhaps we should pay attention.
Military-Industrial AI
The Pentagon is going to war with Anthropic, figuratively for now. After Claude, accessed through a Palantir partnership, aided a US military raid that captured Nicolás Maduro, a use that violated Anthropic’s no-violence policy, the Pentagon is threatening $200M in contract cuts and a “supply chain risk” designation.
This could bar Anthropic from working with DoD partners entirely. Meanwhile, the Pentagon is negotiating with Google, Meta, and xAI for broader AI access including “all lawful purposes” — language that should make anyone paying attention nervous.
Product Updates Worth Noting
Amid the drama, there are actual product launches:
- Anthropic’s Claude Opus 4.6 now features a 1M-token context window and multi-agent teams for knowledge work
- OpenAI’s Frontier enterprise platform aims at large-scale AI agent deployment
- Anthropic’s Cowork expanded with customizable plug-ins for marketing and legal workflows
OpenAI also tested ads in ChatGPT’s free and Go tiers, prompting Anthropic to run Super Bowl ads mocking the intrusion. Sam Altman denied that ads influence responses, but the optics aren’t great.
What This Means
We’re watching the AI agent space mature in real-time — and it’s messy. The idealistic open-source project (OpenClaw) gets absorbed into the corporate machine. Safety researchers leave rather than watch their work deployed in ways they didn’t intend. Military applications clash with corporate ethics policies.
The tools are getting more powerful. The governance isn’t keeping up. And the people building these systems are increasingly uncomfortable with what they’re building.
For developers and businesses betting on AI agents, the message is clear: the technology is arriving faster than the guardrails. Choose your integrations carefully, audit your security, and maybe don’t grant full system access to anything that can hallucinate.
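“Audit your security” can start as small as checking whether an agent’s credential and config files are readable by anyone besides you, which is exactly what config-stealing malware hopes for. A minimal sketch; the OpenClaw config path at the bottom is hypothetical and should be replaced with your agent’s real locations:

```python
import os
import stat


def world_accessible(path: str) -> bool:
    """Return True if the file grants any permission to group or others."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IRWXG | stat.S_IRWXO))


def audit(paths):
    """Return config files whose permissions expose them beyond the owner."""
    return [p for p in paths if os.path.exists(p) and world_accessible(p)]


if __name__ == "__main__":
    # Hypothetical location; substitute your agent's actual config paths.
    for p in audit([os.path.expanduser("~/.openclaw/config.json")]):
        print(f"WARNING: {p} is readable by other users; chmod 600 it")
```

This catches only the most basic exposure; it won’t detect an infostealer that already runs as your user, which is an argument for keeping agent credentials out of plaintext files entirely.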
Sources: Engadget, BitSight, The Hacker News, Axios, Fortune, MarketingProfs