Cohere

Member of Technical Staff, Inference & Model Serving

Who are we?
Our mission is to scale intelligence to serve humanity. We’re training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.

We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what’s best for our customers.

Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.

Join us on our mission and shape the future!

Why this role?
Are you energized by building high-performance, scalable, and reliable machine learning systems? Do you want to help define and build the next generation of AI platforms powering advanced NLP applications? We are looking for Members of Technical Staff to join the Model Serving team at Cohere. The team is responsible for developing, deploying, and operating the AI platform that delivers Cohere’s large language models through easy-to-use API endpoints. In this role, you will work closely with many teams to deploy optimized NLP models to production in low-latency, high-throughput, and high-availability environments. You will also have the opportunity to interface with customers and create customized deployments to meet their specific needs.

We are looking for candidates with a range of experiences for multiple roles.

Please note: We have offices in Toronto, Palo Alto, and London, but we embrace being remote-first! There are no restrictions on where you can be located for this role.


You may be a good fit if you have:

  • Experience with serving ML models.
  • Experience designing, implementing, and maintaining a production service at scale.
  • Familiarity with the inference characteristics of deep learning models, specifically Transformer-based architectures.
  • Familiarity with computational characteristics of accelerators (GPUs, TPUs, and/or Inferentia), especially how they influence latency and throughput of inference.
  • Strong understanding of, or working experience with, distributed systems.
  • Experience in performance benchmarking, profiling, and optimization.
  • Experience with cloud infrastructure (e.g. AWS, GCP).
  • Experience in Golang (or other languages designed for high-performance, scalable servers).


If some of the above doesn’t line up perfectly with your experience, we still encourage you to apply! If you consider yourself a thoughtful worker, a lifelong learner, and a kind and playful team member, Cohere is the place for you.

We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants of all kinds and are committed to providing an equal opportunity process. Cohere provides accessibility accommodations during the recruitment process. Should you require any accommodation, please let us know and we will work with you to meet your needs.

Our Perks:
🤝 An open and inclusive culture and work environment
🧑‍💻 Work closely with a team on the cutting edge of AI research
🍽 Weekly lunch stipend, in-office lunches & snacks
🦷 Full health and dental benefits, including a separate budget to take care of your mental health
🐣 100% Parental Leave top-up for 6 months for employees based in Canada, the US, and the UK
🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
🏙 Remote-flexible, with offices in Toronto, Palo Alto, San Francisco, and London, and a co-working stipend
✈️ 6 weeks of vacation

Note: This post is co-authored by both Cohere humans and Cohere technology.
