UPDATE: OpenAI has just announced an urgent job opening for a “Head of Preparedness” with a staggering salary of $555,000 per year. This role, described by CEO Sam Altman as “stressful,” aims to enhance the company’s preparedness program within its safety systems department, reflecting the escalating challenges OpenAI faces today.
In a post on X, Altman emphasized the immediate demands of the position, stating, “you’ll jump into the deep end pretty much immediately.” The head of preparedness will be responsible for expanding and guiding the company’s existing safety protocols as OpenAI grapples with significant operational risks, including the troubling behavior of its models, which have drawn scrutiny for problems such as hallucinated citations in legal filings and interactions blamed for triggering mental health crises among users.
The urgency of this role is hard to overstate. OpenAI’s products, including ChatGPT and Sora, have stirred controversy, with incidents leading to hundreds of FTC complaints and serious consequences for users. Notably, a wrongful death suit brought by the family of Adam Raine has raised questions about how much responsibility AI companies bear for harm to their users. The suit alleges that Raine received harmful advice from ChatGPT, fueling a broader legal debate over the damage these systems can do.
Altman has acknowledged the profound impact these models can have on mental health and security, stating that OpenAI is “entering a world where we need more nuanced understanding and measurement” of potential abuses. The new hire will play a critical role in evaluating these capabilities and developing strategies to mitigate risks, helping ensure that OpenAI’s products can stay on the market without exposing the company to legal liability.
The stakes are high. OpenAI is under pressure to innovate quickly and grow revenue from approximately $13 billion to $100 billion in less than two years. Altman has hinted that a significant portion of that revenue will come from consumer devices and AI technologies that aim to “automate science,” adding to the complexity of the head of preparedness’s responsibilities.
Whoever takes the role will need to evolve the preparedness framework as new risks and capabilities emerge, ensuring that OpenAI’s products do not harm individuals or society. The position involves overseeing “mitigation design” across existing and upcoming platforms, all while managing the expectations set by Altman’s ambitious revenue goals.
This announcement reflects a critical moment for OpenAI as it navigates the complexities of AI deployment in real-world scenarios. The implications of the head of preparedness’s work will resonate far beyond the company itself, affecting users and society as a whole.
As OpenAI moves forward with this search, the tech community and consumers will be watching closely. The new head of preparedness will need to balance innovation with safety, a challenging task that will have lasting repercussions for the future of artificial intelligence.
Stay tuned for more updates on this developing story.
