Amazon
Senior Software Engineer - AI, Artificial General Intelligence | Inference Engines
US, MA, Boston • 50 days ago
The Artificial General Intelligence team is responsibly advancing the company’s generative AI technologies, including Amazon’s most expansive multimodal Large Language Models. Our inference engines power these initiatives.
Key job responsibilities
You will develop, improve and release our cutting-edge inference engines.
You will leverage advanced hardware, innovative software architecture, and distributed computing techniques to enable breakthrough research and product development across the company.
You will drive state-of-the-art innovation in inference and help establish Amazon as the market leader in enterprise AI solutions, offering unbeatable price-performance.
You will lead our efforts to deliver the best inference performance on custom AWS Trainium and Inferentia silicon and the Trn1 and Inf1/Inf2 servers. Strong software development skills (Python and C++) and machine learning knowledge (text and multimodal) are both critical to this role.
You will understand current and future directions of ML framework development, with a focus on enabling the fastest and most price-performant inference.
You will collaborate with AWS Neuron, AWS Bedrock, and other teams within and outside Amazon to achieve the best outcome for AGI customers.
About the team
We are a science and engineering team working on the cutting edge of inference, interested in tackling the hardest and most impactful problems in AI inference. We explore inference-aware architectures as well as compiler, kernel, and runtime improvements to serve AI models of ever-increasing size and performance.