At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
Snapshot
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
About Us
Conducting research into any transformative technology comes with the responsibility to build mechanisms for safe and reliable development and deployment at every step. Technical safety research at Google DeepMind investigates questions related to evaluations, reward learning, fairness, robustness, and generalisation in machine learning systems. Proactive research in these areas is essential to the fulfilment of the long-term goal of Google DeepMind: to build safe and socially beneficial AI systems.
Research Engineers work at the forefront of technical approaches to designing systems that reliably function as intended while discovering and mitigating risks, in close collaboration with other AI research groups within and outside of Google DeepMind.
We’re looking for a versatile Research Engineer, at ease both with figuring out how to approach new research questions and with the technical implementation of research ideas.
Our team focuses on improving the safety of Gemini pre-trained models and making them more powerful in support of downstream safety needs. This includes making them easier to align for safety purposes and more capable of complex safety reasoning.
Key responsibilities:
- Conduct research and experimentation in Gemini pre-training to make our models safer and more helpful to all users.
- Design and maintain high-quality evaluation protocols to assess gaps and headroom in model behaviour related to safety and fairness.
- Explore data, reasoning, and algorithmic solutions to ensure Gemini models are safe, maximally helpful, and work for everyone.
- Write high-quality code and infrastructure to enable fast experimentation on Gemini models.
- Drive innovation and deepen understanding of safety in pre-training at scale.
In order to set you up for success as a Research Engineer at Google DeepMind, we look for the following skills and experience:
- Master's-level experience in machine learning, or practical ML experience in an academic or industrial lab.
- You have at least a year of experience working with deep learning and/or foundation models (whether from industry, academia, coursework, or personal projects).
- You are adept at building codebases that support machine learning at scale. You are familiar with ML / scientific libraries (e.g. JAX, TensorFlow, PyTorch, Numpy, Pandas), distributed computation, and large scale system design.
- You are keen to address safety in foundation models and to have a direct impact on Gemini models.
- You are excited to work with strong contributors to make progress towards a shared ambitious goal.
In addition, the following would be an advantage:
- PhD in Computer Science or Machine Learning related field.
- Track record of publications at venues such as NeurIPS, ICLR, ICML, RL/DL, EMNLP, AAAI, or UAI.
- Experience in areas such as safety, fairness, and alignment.
- Experience with LLM training and inference.
- Experience collaborating on or leading an applied research project.