Benjamin Sturgeon

14 April 2026

Characterising the Views on Safety from Frontier AI Labs: OpenAI

Day 14 of Inkhaven: 30 Days of Posts

In Part 1 I looked at Anthropic and Google DeepMind. Today: OpenAI.

Safety as Political Power

OpenAI has had a fascinating arc as a company, going from very public concern about safety to doing just enough safety work to keep its products viable.

This change has been primarily political in nature. Safety advocates demand political power because, to be meaningful in any sense, they need to be able to influence which products get released and whether they get released at all. Fundamentally, this puts them into conflict with Sam Altman, who has many positive qualities but also a very strong need to be in control and to hold power. As Paul Graham put it, "Sam is extremely good at becoming powerful."

It is no surprise, then, that the safety people eventually came into conflict with him and, finding themselves powerless against Sam, concluded that their best option was to leave. Examples include Dario and Daniela Amodei, as well as each successive head of the AI safety team.

Consolidation

Since the dramatic events of November 2023, when Sam was briefly ousted as CEO and then purged the board of resistance, he has continued to consolidate power within the organisation. In practical terms this has meant that the people who would actually resist Sam have slowly left, e.g. Mira Murati and Ilya Sutskever. In February 2026, OpenAI disbanded its mission alignment team entirely, reassigning its members throughout the organisation.

As a result, there has been a significant exodus of top researchers, with many of OpenAI's best leaving for Anthropic largely for ideological reasons. Engineers at OpenAI are eight times more likely to leave for Anthropic than the reverse. Most of OpenAI's founding team have either left to start other companies or were involved in founding Anthropic. Many of the researchers most critical to the development of the original ChatGPT are also now at Anthropic.

Lack of Focus

OpenAI seems increasingly unclear about what it should be doing as a company. This is demonstrated by a catastrophic lack of focus, which got so bad that the company declared an internal code red to try to rein in its side projects, most of which have quietly been abandoned over the last few months.

As another example, OpenAI aims to compete and keep improving its models by throwing $121 billion at training compute by 2028, whereas Anthropic has set out to spend around $30 billion over the same period while seemingly still improving its models at a faster rate. These represent fundamentally different problem-solving mindsets: one built on deals and additional revenue extraction, the other focused completely on the product.

OpenAI's focus on ads as an ever-growing source of revenue gestures at this too. It seems ridiculous that a company focused on building the most powerful technology of all time would need to peddle ads to survive.

Fundamentally, the sense all of this gives is that Sam has still not internalised exactly how powerful, or how dangerous, this technology is at its core. It is hard to imagine how he could have, and still be distracted by these short-term applications of it.

What This Means for Safety

Rather than making OpenAI less dangerous, all of this gives the company an even stronger incentive to rush out a potentially harmful model in the pursuit of not being overshadowed. OpenAI signed the contract with the Department of War that Anthropic refused. Anthropic was subsequently blacklisted after declining to agree to a renegotiated contract that would have granted the Pentagon unfettered access to Claude for uses like autonomous weapons and mass surveillance. We can reasonably expect OpenAI not to show the restraint Anthropic did with its recent choice not to release Claude Mythos.

Anthropic's key moves have all been about saying "no". No to the Pentagon's terms, no to releasing Mythos publicly, no to erotic chatbots, no to AI video slop feeds, no to ads. This has been effective in building a sense that the company is significantly more serious in its approach to building the most important technology ever developed (though of course, the bar should be extremely high). Sam Altman has said that his behaviour is often driven by an urge to please people, and that urge is reflected in the focus and choices OpenAI has made.