Frontier AI labs' military usage policies are incoherent, vague, and frequently changing, which allows company leadership to preserve "optionality"
I led the Geopolitics Team at OpenAI for approximately three years and then joined two other teams before deciding to leave in June 2025.
Worth a read: • Ex-OpenAI geopolitics lead: frontier AI labs' military policies are deliberately vague and changeable to preserve "optionality" • Anthropic's DoD standoff isn't the ethical win it's portrayed as. Dario is hardly a white knight - he's open to fully autonomous weapons if…
I think this is the clearest-eyed take I've read on what's happened between the AI industry and the Pentagon in the last 72 hours, with a chilling warning at the end. "The biggest losers in all of this are everyday people and civilians in conflict zones."
OpenAI's models can't be used to control drone swarms. Except they already are, as detailed in this post on the military use policies of AI companies.
I used to lead the Geopolitics Team at OpenAI. Today I published a few observations on frontier AI companies and their military usage policies from my perspective as a former employee and researcher active in the int'l security space. (Link below.)
My ask is pretty simple: Don't exploit ambiguous language to appease the public and your employees. (If the reaction on X is anything to go by, it's not working anyway.) https://sarahshoker.substack.com/ ...