POSTED Nov 26

SEAL Research Scientist Intern (Summer 2025)

at Scale AI
San Francisco, CA; New York, NY


As the leading data and evaluation partner for frontier AI companies, Scale plays an integral role in understanding the capabilities of, and safeguarding, large language models (LLMs). The Safety, Evaluations and Analysis Lab (SEAL) is Scale’s new frontier research effort dedicated to building robust evaluation products and tackling challenging research problems in evaluation and red teaming. At SEAL, we are passionate about ensuring the transparency, trustworthiness, and reliability of language models while advancing model capabilities and pioneering novel skills. We aim to set the North Star for the AI community, where safety and innovation illuminate the path forward.

We are seeking talented research interns to join us in shaping the landscape of safety and transparency for the entire AI industry. We support collaborations across the industry and the publication of our research findings. This year, we are seeking top-tier candidates for multiple projects focused on frontier agent data, evaluation, and safety; scalable oversight and alignment of LLMs; the science of evaluation for LLMs; and exploring the frontier and potentially dangerous capabilities of LLMs under effective guardrails. Below is a list of SEAL’s representative projects:

Example Projects:

  • Adversarial robustness, jailbreaks and safety red teaming
  • Measuring the dangerous capabilities of frontier models and conducting preparedness research
  • Research on the science and creation of new benchmarks for frontier models
  • Building frontier evaluations for LLMs and agents, such as AI R&D evaluations
  • Developing scalable oversight protocols and red teaming oversight methods
  • Developing, evaluating, and improving the agentic use of frontier models, including tool-use, SWE coding, browser-use, OS-related, computer-use/GUI, and other related agents

Required to have:

  • Currently enrolled in a BS/MS/PhD program focused on Machine Learning, Deep Learning, Natural Language Processing, or Computer Vision, with a graduation date in Fall 2025 or Spring 2026
  • Prior experience or a track record of research publications in agents, safety, evaluation, alignment, or a related field
  • Experience with one or more general-purpose programming languages, such as Python, JavaScript, or similar
  • Ability to speak and write in English fluently
  • Availability for a Summer 2025 internship (May/June start)

Ideally you’d have:

  • A previous internship in Machine Learning, Deep Learning, Natural Language Processing, Adversarial Robustness, Alignment, Evaluation, or Agents
  • Experience as a researcher, whether through internships, full-time roles, or work at a research lab
  • Publications in top-tier ML conferences such as NeurIPS, ICLR, CVPR, ICML, or COLM, or contributions to open-source projects

PLEASE NOTE: Our policy requires a 90-day waiting period before reconsidering candidates for the same role. This allows us to ensure a fair and thorough evaluation of all applicants.

About Us:

At Scale, we believe that the transition from traditional software to AI is one of the most important shifts of our time. Our mission is to make that happen faster across every industry, and our team is transforming how organizations build and deploy AI.  Our products power the world's most advanced LLMs, generative models, and computer vision models. We are trusted by generative AI companies such as OpenAI, Meta, and Microsoft, government agencies like the U.S. Army and U.S. Air Force, and enterprises including GM and Accenture. We are expanding our team to accelerate the development of AI applications.

We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an affirmative action employer and inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity or Veteran status. 

We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com. Please see the United States Department of Labor's Know Your Rights poster for additional information.

We comply with the United States Department of Labor's Pay Transparency provision.

PLEASE NOTE: We collect, retain and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest and sharing with our affiliates. We limit the personal data we collect to that which we believe is appropriate and necessary to manage applicants’ needs, provide our services, and comply with applicable laws. Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.

Please mention that you found this job on Moaijobs; this helps us get more companies to post here. Thanks!
