Amazon is among the top 10 most-visited websites worldwide. Amazon's search experience is central to how hundreds of millions of customers shop, issuing billions of queries to find products for sale — a scale and impact few systems match.
Within the Amazon Search organization, the MIDAS (Metrics, Insights, Data Annotation for Search) team delivers high quality labeled data at scale in order to improve the search experience for shopping on Amazon through AI model training and evaluation as well as to produce metrics that measure our customer experiences. We focus on agility, linguistic expertise, high standards for data integrity, enabling self-service, and frugality of resources in order to meet or exceed our customers' expectations. We collaborate closely with several machine learning (ML) applied science, engineering, and product teams that develop and test ML models to improve the quality of semantic matching, ranking, computer vision, image processing, and augmented reality.
The Language Engineer role in the MIDAS team owns the creation of data annotation workflows: writing intuitive, labeler-friendly annotation guidelines; data wrangling and analysis; specifying labeling UI templates; and reporting labeled-data quality metrics to deliver on internal customers' requirements and achieve the desired Amazon customer outcomes. To achieve high rates of accuracy and consistency in labeled data outputs, Language Engineers apply linguistic expertise (e.g., semantics, syntax, pragmatics) and scripting skills to overcome natural language processing and language understanding challenges.
Key job responsibilities
* Design and develop data annotation guidelines and workflows.
* Manage and process large amounts of structured and unstructured data.
* Adopt and design quality control metrics and methodology to evaluate the quality of data annotation.
* Maximize productivity, process efficiency, and quality through streamlined workflows, process standardization, documentation, and periodic audits and investigations.
* Handle annotation and data investigation requests from multiple stakeholders with high efficiency and quality in a fast-paced environment.
* Collaborate with scientists, engineers, and product managers in defining metrics, guidelines, and workflows.
* Initiate and contribute towards improvement projects, present solution proposals, and implement them.
* Establish processes and mechanisms to onboard and train junior data associates on an ongoing basis.
* Handle work prioritization and deliver based on business priorities.
* Adapt deployed annotation conventions in response to customers' requests and update workflows accordingly.