OpenAI

European AI Safety Policy Lead - Technical

London, UK

About the Team

OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving our goal requires effective engagement with public policy stakeholders and the broader community impacted by AI. Accordingly, our Global Affairs team builds authentic, collaborative relationships with public officials and the broader AI policymaking community to inform and support our shared work in these domains. We ensure that insights from policymakers inform our work and, in collaboration with our colleagues and external stakeholders, help shape policy guardrails, industry standards, and the safe and beneficial development of AI tools.

About the Role

As the AI Safety Policy Lead, you will engage on the full spectrum of AI safety policy issues and support the Global Affairs team across the region with technical expertise and knowledge of LLMs and AI technologies.

OpenAI releases industry-leading research and tools. You will face new challenges as the impact of cutting edge generative AI technologies continues to be explored and as the needs of the organization evolve. Day-to-day work may encompass anything from helping to shape strategic initiatives and policy documents to preparing our leaders for engagements with government officials or representing OpenAI in private and public forums. 

We are looking for a self-directed and creative individual who combines a technical and research background in LLMs and AI technologies with experience engaging effectively with policy-makers, research institutes, academics, and civil society.

This strategic yet hands-on role will be part of the Policy Planning team and work closely with key internal and external partners. 

This role will be based out of London and will require frequent travel to meet with key stakeholders. We offer relocation assistance to new employees. 

We're looking for a blend of qualifications, including:

  • 3-5 years of experience in (technical or technically-grounded) research and policy work on AI Safety

  • Demonstrated interest in, and ability to engage with, policy-makers, regulators, civil society, and academics on nuanced discussions across the wide range of AI safety issues

  • Technical background (ideally a Master's degree in ML/AI or equivalent experience) with a deep understanding of LLMs/AI, how these systems are trained and deployed, and how they can be made safe in practice

  • Experience working on topics such as AI risk assessment, model safety, robustness, governance, and misinformation/disinformation, ideally including advising governments on policy actions in this space

  • Existing network and credibility within the AI Safety community in Europe

  • Ability to assess and understand the impact of legislative and regulatory proposals on OpenAI’s product and research roadmap

You’ll thrive in this role if: 

  • You like thinking carefully and deeply about the pragmatic politics of making AI safe and beneficial

  • You have an engineering-level understanding of AI technology and can get to answers on tricky technical questions yourself (think reading arXiv papers or codebases to find an answer to a question from a policy-maker)

  • You have an established network and credibility with EU Member States and international policymakers, regulators, civil society, and other stakeholders

  • You exercise sound judgment and outstanding personal integrity

  • You can execute in fast and flexible environments through rapid cycles of analysis, decision, and action

  • You have excellent communication, presentation, and interpersonal skills, with the ability to convey complex technical and policy concepts to diverse audiences

  • You bring strong strategic thinking, problem-solving, and project management skills

  • You have demonstrated knowledge of the European Union policy-making system, institutions, and processes, as well as the key policy issues and debates related to AI

  • You have a track record of working effectively with cross-functional teams, especially engineering and research teams, and of aligning a diverse range of internal and external partners

  • You care deeply about the impact of increasingly advanced AI technology on society

  • Previous work on AI governance issues and technical AI development expertise are a significant plus

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status. 

For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

