About the Team
OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving this mission requires real-world deployment and iterative improvement based on what we learn.
The Intelligence and Investigations team supports this by identifying and investigating misuse of our products, especially new types of abuse. This enables our partner teams to develop data-backed product policies and build scaled safety mitigations. Precisely understanding abuse allows us to safely enable users to build useful things with our products.
About the Role
As a data scientist on the Intelligence and Investigations team, you will prototype strategies for finding misuse of our products, write and scale detections for abuse, and investigate abuse trends across OpenAI’s products. You will also respond to critical incidents, especially those not caught by our existing safety systems. This role focuses on all aspects of Detection & Response, including operations and escalations, and also contributes to the Investigations team as a strong generalist.
This role is based in our San Francisco office and includes participation in an on-call rotation that may involve resolving urgent incidents outside normal work hours. Some investigations may involve sensitive content, including sexual, violent, or otherwise disturbing material.
In this role, you will:
Detect, respond to, and escalate safety incidents
Develop new ways to scale our detection, especially using language models to improve and automate our coverage
Improve our response processes, especially alerting, triage, and incident management
Collaborate with policy, legal, and product teams to set data-backed policies that appropriately manage risk
Collaborate with engineering, enforcement, and security teams to build solutions that appropriately mitigate abuse at scale
You might thrive in this role if you:
Have at least 4 years of experience conducting technical analysis, especially in SQL and Python
Have at least 2 years of experience developing innovative detection solutions and conducting open-ended research to solve real-world problems
Have at least 2 years of experience scaling and automating processes, especially with language models
Have experience in trust and safety and/or have worked closely with policy, enforcement, and engineering teams
Have experience with basic data engineering, such as building core tables or writing data pipelines (not expected to build infrastructure or write production code)
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status.
For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.