Threat Investigator, Trust & Safety
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As a Threat Investigator, you will conduct investigations into adversarial actors, identify vulnerabilities, and develop novel detection techniques to identify and mitigate abuse of our products and services. This role involves conducting thorough investigations; creating and implementing processes, tools, and strategies to proactively detect adversarial actors; managing sensitive incidents; and working cross-functionally to strengthen our defenses against emerging risks in the rapidly evolving landscape of AI technology.
Your work will be essential in maintaining Anthropic's commitment to safe and beneficial AI as we continue to expand our product capabilities.
IMPORTANT CONTEXT ON THIS ROLE: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. This role may require responding to escalations during weekends and holidays.
Responsibilities:
- Analyze the deployment of our products and services to identify how these systems are being misused or abused, with a particular focus on influence operations
- Develop abuse signals and tracking strategies to proactively detect adversarial actors
- Study trends internally and in the broader ecosystem to anticipate how systems could be misused or manipulated for harm in the future, generating and publishing reports
- Create actionable intelligence reports on new attack vectors, vulnerabilities, and threat actor tactics, techniques, and procedures (TTPs) targeting LLM systems
- Use the results of deep-dive investigations to drive systematic changes to our safety approach and mitigate harm
- Keep abreast of the latest industry risks, vulnerabilities, and issues related to the use of language models and generative AI; identify opportunities for improvement to our policies, controls, and enforcement mechanisms
- Forecast how abuse actors will leverage new advances in AI technology and inform safety-by-design strategies
- Build and maintain relationships with external threat intelligence partners and information sharing communities
- Work with cross-functional team members to build out our threat intelligence program, establishing processes, tools, and best practices
You may be a good fit if you:
- Have experience in technical analysis and investigations, including skills in SQL and Python
- Have experience with large language models and a deep understanding of AI technology
- Have subject matter expertise in detecting abusive user behavior, such as influence operations, coordinated inauthentic behavior, and/or cyber threat intelligence
- Can derive insights from large amounts of data to make key decisions and recommendations
- Have experience conducting threat actor profiling and utilizing threat intelligence frameworks
- Have strong project management skills and the ability to build processes from the ground up
- Possess excellent communication skills to collaborate with cross-functional teams
The expected salary range for this position is:
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.