Cohere

Data Annotation - Safety tasks / Content Moderation

Canada / Toronto
331 days ago


Who are we?
Cohere is focused on building and deploying large language model (LLM) AI into enterprises in a safe and responsible way that drives human productivity, creates magical new ways to interact with technology, and delivers real business value. We’re a team of highly motivated and experienced engineers, innovators, and disruptors looking to change the face of technology.

Our goals are ambitious, but also concrete and practical. Cohere wants to fundamentally change how businesses operate, making everyone more productive and able to focus on doing better what they do best. Every day, our team breaks new ground, as we build transformational AI technology and products for enterprise and developers to harness the power of LLMs.

Cohere was founded by three global leaders in AI development, including our CEO, Aidan Gomez, who co-created the Transformer, which makes LLMs possible. Collectively, we're driven by the belief that our technology has the potential to revolutionize the way enterprises, their employees, and customers engage with technology through language.

Cohere’s broader research team is world-renowned, having contributed to the development of sentence transformers for semantic search, dynamic adversarial data collection and red teaming, and retrieval augmented generation, often referred to as “RAG,” among other technological breakthroughs.

We have been deliberate in assembling a team of operational leaders with industry-leading experience and backgrounds at the most sophisticated, demanding, and respected enterprises in the world. Cohere’s operational leaders have built, scaled, and led multi-billion dollar product lines and businesses at Google, Apple, Rakuten, YouTube, AWS, and Cisco.

The Cohere team is a collective from all walks of life, from people who left college to start businesses, to some of the most experienced people from globally renowned companies. We believe a diverse team is the key to a safer, more responsible technology, and that different experiences and backgrounds enable us to tackle problems from all angles and avoid blindspots.

There’s no better time to play a role in defining the future of AI, and its impact on the world.

Why this role?
We are on a mission to build machines that understand the world and make them safely accessible to all. Data quality is foundational to this process. Machines (or Large Language Models, to be exact) learn in ways similar to humans: by way of feedback. By labeling, ranking, auditing, and correcting text output, you will improve Large Language Models' performance for iterations to come, having a lasting impact on Cohere’s tech. Cohere is looking for dynamic and dedicated Data Annotators with backgrounds and skills in Safety or Content Moderation.

IMPORTANT CONTEXT ON THIS ROLE: In this position, you will be asked to engage with human-generated and model-generated tasks, which will sometimes mean intentional exposure to explicit content. Your annotations on these explicit tasks will be used to prevent the Large Language Model from generating toxic or unsafe outputs, whether unintentional or adversarially induced. The types of explicit content you may be exposed to include, but are not limited to, content of a sexual, violent, or psychologically disturbing nature.

As a Data Quality Specialist on safety tasks, you will:

    • Improve Model Safety: Label, proofread, and improve machine-written and human-written generations, ensuring data integrity and quality. This will include work with content of a sexual, violent, or psychologically disturbing nature.
    • Reading and Text-Based Tasks: Efficiently complete reading and text-based assignments, with high attention to detail.
    • Preference-Based Tasks: Evaluate and complete tasks, assessing which responses best conform to our style guide.
    • Provide Feedback: Collaborate and communicate effectively, providing feedback to cross-functional team members.
    • Detail-Oriented Execution: Maintain meticulous attention to detail while performing repetitive and precise tasks.

You may be a good fit if you have:

    • 1+ years of experience in Content Moderation and/or Trust and Safety 
    • Emotional resilience: An understanding that this role requires annotating texts that contain unsafe, explicit, and/or toxic content, including content of a sexual, violent, or psychologically disturbing nature
    • Excellent command of written English. Expert reading and writing skills, which you are ready to prove on our written assessment. Bonus points if you are fluent in another language!
    • Strong attention to detail and commitment to accuracy; you’re the type to proofread all of your emails!
    • High tolerance for repetitive and monotonous work, plus a superb sense of urgency and strong time management

What the candidate journey looks like:

    • Initial Screening – Once you have submitted your application, our Talent Team will review your resume and writing samples.
    • Virtual Meet & Greet – If selected to move forward, you will have a short video call with a member of our Operations team!
    • Practical Assessment – This assignment will test your writing skills through various language-based tasks, such as a writing sample, interacting with a chatbot, and more.
    • Emotional Resilience Assessment – This assessment will evaluate your ability to handle stress and cope with difficult situations.
    • Offer – Independent Contractor Agreement.
We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants of all kinds and are committed to providing an equal opportunity process. Cohere provides accessibility accommodations during the recruitment process. Should you require any accommodation, please let us know and we will work with you to meet your needs.

Our Perks:
🤝 An open and inclusive culture and work environment 
🧑‍💻 Work with cutting-edge AI technology
🪴 A vibrant & central location
🥨 A great selection of office snacks
🏆 Performance-based incentives


