Developing beneficial AGI safely and responsibly

“AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values.”

Mira Murati, Chief Technology Officer at OpenAI

AI technology comes with tremendous benefits, along with serious risks of misuse. Our Charter guides every aspect of our work to ensure that we prioritize the development of safe and beneficial AI.

A focus on safety

Our teams span a wide spectrum of technical efforts tackling AI safety challenges at OpenAI. The Safety Systems team stays closest to deployment risks, while our Superalignment team works on aligning superintelligence and our Preparedness team focuses on safety assessments for frontier models.

Safety teams

Safety & Alignment

Disrupting malicious uses of AI by state-affiliated threat actors

Safety & Alignment

OpenAI Red Teaming Network

Safety & Alignment

Our approach to AI safety

Collaboration

We collaborate with industry leaders and policymakers to ensure that AI systems are developed in a trustworthy manner.

“This technology will profoundly transform how we live, and we can guide its trajectory, limit abuse, and secure broadly beneficial outcomes.”

Anna Makanju, Head of Public Policy at OpenAI

Safety & Alignment

OpenAI and journalism

Safety & Alignment

Superalignment Fast Grants

Safety & Alignment

Moving AI governance forward
