About Stability:
Stability AI is a community- and mission-driven, open artificial intelligence company that cares deeply about real-world implications and applications. Our greatest advances grow from our diversity, working across multiple teams and disciplines. We are unafraid to go against established norms and explore creativity. We are motivated to generate breakthrough ideas and convert them into tangible solutions. Our vibrant communities consist of experts, leaders, and partners across the globe who are developing cutting-edge open AI models for Image, Language, Audio, Video, 3D and Biology.
About the Role:
Our Product Integrity Operations team is responsible for ensuring we comply with responsible AI laws and regulations, enforcing our Acceptable Use Policy (AUP) and Terms of Service, and performing related red team testing of our models and safeguards. As an Integrity Operations Specialist, you will review user activity on our platform and the use of our various models to determine whether policy-violating actions are occurring. You will also perform red team testing of our models and safeguards and help address safety issues before those models are finalized and shipped. You will work closely with internal teams (product, engineering, legal, etc.) and external organizations globally to prevent bad actors from misusing our products and services. The ideal candidate is comfortable with technology, has experience working in fraud, abuse, or similar investigative roles, and is able to work at a fast pace on a variety of tasks. Please note that this role may involve exposure to explicit content and topics, including those of a sexual, violent, or psychologically disturbing nature.
Responsibilities:
- Perform reviews of user activity on our platform, ensuring consistent and accurate action is taken.
- Conduct Red Teaming exercises, developing and executing offensive tactics and techniques to identify and exploit new models’ vulnerabilities.
- Respond to concerns related to our products while also conducting root cause analysis to prevent recurrence of confirmed violative incidents.
- Adhere to metrics and KPIs related to workflow, volume, response times, accuracy, and overall operational effectiveness, providing data and content-driven insights.
- Stay current on industry best practices for trust and safety enforcement programs, and continually develop skills and understanding of new technologies.
- Partner with internal and external stakeholders in identifying best practices, new approaches, and providing thought leadership for AI safety operational practices.
Qualifications:
- 3+ years of experience working in Trust & Safety, Fraud, Abuse Prevention, or similar investigative roles, preferably in the technology industry.
- Solid understanding of bad actor/adversarial behavior and abuse prevention mechanisms.
- Basic SQL and Python programming skills.
- Ability to analyze large volumes of transactional and user data to identify suspicious patterns or behaviors, and to recommend controls when identifying exceptions or trends.
- Highly organized and able to manage competing priorities in time-sensitive situations.
- Experience working in fast-paced environments where you have had to adapt to rapid changes, navigate ambiguity, and take ownership of solving any problems that arise.
- Experience handling highly sensitive topics and customer concerns.
- Experience collaborating with cross-functional teams, including legal, compliance, product, and engineering.
- Excellent written and verbal communication skills.
- A true passion for AI.