In early 2014, Srikanth Thirumalai met with Amazon CEO Jeff Bezos. Thirumalai, a computer scientist who’d left IBM in 2005 to head Amazon’s recommendations team, had come to propose a new plan for incorporating the latest advances in artificial intelligence into his division. Amazon’s product recommendations had been built around AI since the company’s early days. Amazon also applied AI to its shipping schedules and to the robots moving products around its warehouses.
Amazon began selling the Echo to Amazon Prime customers in the U.S. in 2014, and also brought machine learning to the tens of thousands of Amazon Web Services customers.
In 2016, AWS released new machine-learning services that more directly drew on the innovations from Alexa—a text-to-speech component called Polly and a natural language processing engine called Lex. These offerings allowed AWS customers to build their own mini Alexas.
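As a sketch of what building on such a service looks like, the snippet below assembles a Polly text-to-speech request. The operation name `synthesize_speech` and its parameters follow the AWS Polly API, but the request is built here as a plain dict so the sketch runs without an AWS account; the sample text is hypothetical.

```python
# Minimal sketch of a Polly text-to-speech request (no credentials needed).

def polly_request(text, voice="Joanna", fmt="mp3"):
    """Build the keyword arguments for Polly's synthesize_speech call."""
    return {
        "Text": text,
        "VoiceId": voice,       # one of Polly's built-in voices
        "OutputFormat": fmt,    # e.g. mp3, ogg_vorbis, or pcm
    }

# With boto3, the audio stream would come back from:
#   boto3.client("polly").synthesize_speech(**polly_request("Hello"))
req = polly_request("Hello from Polly")
```

Lex follows the same pattern on the language-understanding side: the developer supplies text or audio, and the service returns the recognized intent, which is what makes "mini Alexas" possible without any in-house speech research.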
A third service involving vision, Rekognition, drew on work that had been done in Prime Photos, a relatively obscure group at Amazon that was trying to perform the same deep-learning wizardry found in photo products from Google, Facebook, and Apple.
On April 5, 2018, Amazon Web Services announced a new way for machine learning developers to build and deploy models through its cloud services.
Amazon’s SageMaker service gained support for a local mode that lets developers start testing intelligent systems on their development computers before moving them to the cloud. With local mode, a developer can quickly try out different approaches on a laptop, then send the most promising one to SageMaker for more extensive training on Amazon’s cloud.
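The workflow above can be sketched as follows. The `"local"` versus `ml.*` instance-type convention mirrors the SageMaker Python SDK's `Estimator(instance_type=...)` parameter; the function, image name, and role name are hypothetical stand-ins, and settings are plain dicts so the sketch runs without AWS access.

```python
# Sketch of SageMaker's local-mode workflow: same training configuration,
# one flag decides whether it runs on the dev machine or in the cloud.

def estimator_config(image_uri, role, local=True):
    """Build training-job settings; local=True runs on the developer's computer."""
    return {
        "image_uri": image_uri,
        "role": role,
        "instance_count": 1,
        # "local" executes the training container in Docker on the laptop;
        # an ml.* instance type trains on Amazon's cloud instead.
        "instance_type": "local" if local else "ml.m5.xlarge",
    }

# Iterate quickly on a development machine first...
dev = estimator_config("my-training-image:latest", "MySageMakerRole")
# ...then flip one flag for full-scale training in the cloud.
cloud = estimator_config("my-training-image:latest", "MySageMakerRole", local=False)

# With the real SDK, the same settings would feed something like:
#   sagemaker.estimator.Estimator(**config).fit(training_inputs)
```

The appeal of the design is that nothing else about the job changes between the two modes, so debugging happens cheaply before any cloud instances are billed.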
AWS’ AI-powered transcription and translation services are now generally available for customers to use. Appropriately named Transcribe and Translate, these services enable companies to gain the benefits of AI without having to hire experts. This is at the core of Amazon’s approach to AI.
Both machine learning APIs received new functionality as part of these recent AI feature updates. Transcribe now allows customers to supply a custom vocabulary, while Translate will automatically detect the source language of the text customers input.
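The two new features can be sketched as request payloads. The parameter names (`SourceLanguageCode="auto"` and `Settings.VocabularyName`) follow the AWS Translate and Transcribe APIs; the job name, bucket path, and vocabulary name are hypothetical, and the requests are built as plain dicts so the sketch runs without credentials.

```python
# Sketch of Translate's auto-detection and Transcribe's custom vocabulary.

def translate_request(text, target="en"):
    """Build a Translate request; "auto" asks the service to detect the source language."""
    return {
        "Text": text,
        "SourceLanguageCode": "auto",
        "TargetLanguageCode": target,
    }

def transcribe_request(job_name, media_uri, vocabulary=None):
    """Build a Transcribe job request, optionally biased by a custom vocabulary."""
    req = {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        "LanguageCode": "en-US",
    }
    if vocabulary:
        # A custom vocabulary steers recognition toward domain-specific terms.
        req["Settings"] = {"VocabularyName": vocabulary}
    return req

# With boto3, these dicts would be passed as keyword arguments to:
#   boto3.client("translate").translate_text(**translate_request("Hola, mundo"))
#   boto3.client("transcribe").start_transcription_job(**transcribe_request(...))
t = translate_request("Hola, mundo")
j = transcribe_request("demo-job", "s3://my-bucket/call.wav", vocabulary="product-names")
```

This is the "AI without hiring experts" model in miniature: the customer names the domain terms or the target language, and the heavy lifting stays on Amazon's side of the API.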
As of late 2018, Amazon offers limited language support for both services. Translate can turn Arabic, Simplified Chinese, French, German, Spanish, and Portuguese into English and vice versa. In the coming months, AWS plans to support Japanese, Russian, Italian, Traditional Chinese, Turkish, and Czech. Transcribe, meanwhile, works with Spanish and English.
These new AI services join AWS’s existing lineup, including Lex for natural language understanding, Polly for speech generation, and Rekognition for image processing.
With Amazon's machine learning overhaul in place, the company’s AI expertise is now distributed across its many teams. While there is no central office of AI at Amazon, there is a unit dedicated to the spread and support of machine learning, as well as some applied research to push new science into the company’s projects.
The Core Machine Learning Group is led by Ralf Herbrich, who worked on the Bing team at Microsoft and was at Facebook for one year. Herbrich’s group continues to push machine learning into everything the company attempts.
For example, the fulfillment teams wanted to better predict which of the eight possible box sizes they should use for a customer order, so they turned to Herbrich’s team for help. After they applied AI to this business problem, error rates went down significantly.
Even when an Amazon service doesn’t yet use the company’s machine-learning platform, it can be an active participant in the process. Amazon’s Prime Air drone-delivery service, still in the prototype phase, has to build its AI separately because its autonomous drones can’t count on cloud connectivity. The lessons learned solving Prime Air’s problems are nonetheless of interest to other parts of the company; AI R&D spreads through Amazon organically.
At the core, demonstrated success with AI is what drives the innovation and momentum.