Do you want to join an innovative team of scientists who leverage new technologies like reinforcement learning (RL), large language models (LLMs), graph analytics, and machine learning to help Amazon provide the best customer experience by protecting its customers from hackers and bad actors? Do you want to build advanced algorithmic systems that integrate these state-of-the-art techniques to help manage the trust and safety of millions of customers every day? Are you excited by the prospect of analyzing and modeling terabytes of data and creating sophisticated algorithms that combine RL, LLMs, graph embeddings, and traditional machine learning methods to solve complex real-world problems? Do you like to innovate, simplify, and push the boundaries of what's possible? If so, you may be a great fit for the Amazon Account Integrity team, where you'll work at the forefront of AI and machine learning, tackling challenging problems with a direct impact on the security and trust of Amazon's customers.
The Amazon Account Integrity team works to ensure that customers are protected from bad actors trying to access their accounts. Our greatest challenge is protecting customer trust without unjustly harming good customers. To strike the right balance, we invest in mechanisms that allow us to accurately identify and mitigate risk, and to quickly correct and learn from our mistakes. This strategy includes continuously evolving enforcement policies, iterating on our machine learning risk models, and exercising high-judgement decision-making where we cannot apply automation.
Please visit https://www.amazon.science for more information.