OpenAI White Paper on the European Union’s Artificial Intelligence Act
OpenAI is an AI research and deployment company with the mission of ensuring that artificial
general intelligence (AGI) is developed and used in a way that benefits all of humanity.1 Since
our founding in 2015, we have deployed numerous AI systems on the path towards that goal,
including GPT-3, a large language model that performs a variety of natural language tasks;
DALL·E, an image generation system that draws detailed pictures from text input; and Codex, a
code generation system that writes code based on text input.
OpenAI’s foundational charter revolves around the development of “safe and beneficial AGI.” In
addition to core AI research and development, we invest heavily in policy research and
formulation, risk analysis and mitigation, and technical and process infrastructure to maximize
safe use of our technologies. Our company is governed by a non-profit with independent
directors making up a majority of the board, and the board is required to put social benefit
ahead of all other considerations. OpenAI also has a unique “capped-profit” legal structure that
allows us to effectively increase our investments in computing power and talent while
maintaining the checks and balances needed to actualize our mission.
We believe that AGI has the potential to profoundly benefit society. We support thoughtful
regulatory and policy approaches designed to ensure that powerful AI tools benefit the largest
number of people, and we applaud the EU for tackling the immense challenge of comprehensive
AI legislation via the Artificial Intelligence Act (AIA).
OpenAI shares the EU’s goal of increasing public trust in AI tools by ensuring that they are built,
deployed, and used safely, and we believe the AIA will be a key mechanism in securing that
outcome. Many themes and requirements of the AIA are reflected in the tools and mechanisms
that OpenAI already employs to balance technological progress with safe and beneficial use.
For example, we currently require applications built with our tools to adhere to use-case
policies that exclude harmful or especially risky uses; monitor and audit applications to help
prevent misuse; and employ an iterative deployment process, through which we release
products with baseline capabilities and stringent restrictions, and slowly expand features and/or