We are looking for an experienced Software Development Engineer to join our AI team and support cutting-edge generative AI projects.
As a Software Engineer on the Amazon Web Crawling team, you will be responsible for designing, developing and maintaining the systems that power Amazon's ability to understand and interact with web content. This position requires creativity, passion and experience building innovative solutions to complex technical problems.
We are looking for someone with experience leading technical reviews with senior leadership and presenting to them on a monthly basis. This team has high visibility and requires a tech lead who is comfortable moving across teams in one-month sprints. Candidates with startup experience are encouraged to apply.
Key job responsibilities
- Design and develop scalable web crawling and data extraction systems to acquire structured data from websites
- Optimize web crawling architecture for performance, scale, resilience and cost efficiency
- Implement robust systems to process high volumes of web content and extract meaning
- Develop data pipelines and infrastructure to support petabyte-scale datasets
- Work closely with scientists and other engineers to rapidly prototype and deploy new algorithms
- Write high-quality, well-tested production code in languages such as Python, Java and Scala, using frameworks like Spark
We are open to hiring candidates to work out of one of the following locations:
Bellevue, WA, USA | Los Angeles, CA, USA | Sunnyvale, CA, USA