ElevenLabs

AI Safety Engineer

London

This role is remote, so it can be performed from anywhere. However, to facilitate working with the engineering team, we strongly prefer candidates based on the US East Coast or in Europe (New York, London, Warsaw, or Berlin).

We are looking for an experienced engineer with a background in trust & safety and machine learning/AI to lead safety engineering at ElevenLabs.

About ElevenLabs

We are a rapidly growing startup pioneering the development of cutting-edge AI voice models and products, including text-to-speech, speech-to-speech, audiobook creation, dubbing, and voice libraries.

We launched in January 2023 and have since reached over 1 million users. We are backed by many of the leading names in tech and AI, including Nat Friedman, Daniel Gross, Andreessen Horowitz, Instagram co-founder Mike Krieger, Oculus VR co-founder Brendan Iribe, and DeepMind & Inflection co-founder Mustafa Suleyman, among others.

With our latest Series B funding at a $1.1B valuation, we're entering an exciting phase of expansion and innovation. We're seeking passionate and talented individuals to join us at this pivotal moment. As we push boundaries and achieve new milestones, we invite those driven by high aspirations and a hunger for significant impact to join our team. This is a rare chance to be an early member of a company on the rise, aiming to achieve extraordinary things. If this excites you, we want to meet you.

Join us in shaping the future of voice technology!

About the role

As a founding member of our dedicated Safety Engineering function, you'll be at the forefront of our efforts to ensure that the immense potential of AI is harnessed in a responsible and sustainable manner.

You will lead the design and implementation of systems that detect and prevent abuse, promote user safety, and reduce risk across our platform. You will spearhead industry-wide innovation on the adoption of the latest AI and ML capabilities in content moderation, and will be primarily responsible for bringing automation and efficiency to our moderation infrastructure.

Specifically, you will:
  • Architect, build, and maintain our anti-abuse and content moderation infrastructure, designed to protect the company and end users from unwanted behavior

  • Lead the adoption of the latest generative AI methods to automate our abuse monitoring and content moderation workflows

  • Design, implement, and iterate on ML models using proprietary and industry tools to continuously improve our detection and enforcement capabilities

  • Collaborate with the broader engineering team to design and build safety mitigations across our product suite, ensuring moderation coverage across all of our deployments

  • Expand our internal safety tooling and infrastructure

  • Implement provenance solutions in partnership with internal and external partners

  • Collaborate with our data team to develop and maintain actionable safety metrics

Who you are

Each one of us is driven by the pursuit of excellence, supporting one another while taking ownership of our outcomes, and exploring uncharted territories. To thrive in this environment, you:

  • Are passionate about safe & broadly accessible AI in audio

  • Are a strong communicator who can explain technical concepts to non-technical partners and are interested in working with a wide range of cross-functional teams

  • Are highly motivated and driven, with a strong work ethic, including a willingness to work nights and weekends as needed

  • Strive for excellence in every aspect of work, consistently taking ownership of your outcomes and overdelivering on goals

  • Have a humble attitude and are eager to learn whatever it might take to help your team and our customers succeed

What you bring
  • 6+ years in progressively senior software engineering roles, including at least some time spent on trust and safety, integrity, or AI safety teams

  • Strong experience with Python, including asynchronous Python, and a proven track record of building production Python applications

  • A proven track record of building backend safety infrastructure and tooling; designing, implementing, and iterating on ML/AI models to detect, monitor, and enforce against abusive content; and working with machine learning frameworks such as PyTorch

  • Experience and/or interest in applying generative AI to increase moderation efficiency

  • Experience and/or interest in designing and implementing AI provenance tools

  • Experience with SQL and data analysis tools; familiarity with React would be useful

Strong candidates will also have a mix of experience in:

  • Setting up and maintaining production backend services and data pipelines

  • Designing and implementing trust and safety operational flows (i.e., flagging, actioning, and recording)

  • Mentoring and leading technical teams

What we offer

At ElevenLabs, our biggest reward is shaping the future of voice technology. In addition, we offer:

  • Competitive compensation, including stock options; we want early employees to have ownership in the company and share in the successes that lie ahead.

  • Remote-first; we look at who you are rather than where you live. We currently have offices in NYC, London, and Warsaw.

  • Huge impact; you will be responsible for designing industry-leading solutions in AI safety in one of the world’s most exciting AI startups.

  • Extraordinary team; we favor ambitious, smart people striving for outsized impact. You’ll work on the state of the art surrounded by people with extraordinary skills and positive attitudes.

  • Annual company off-sites; previous locations included Croatia, Portugal and Switzerland.
