How Amazon’s Alexa, the Frontrunner in the AI Race, Fell Behind, According to the Scientists Behind It
July 25, 2024

The history of next-gen technology is littered with the remains of organisations that ‘got there first’ with ground-breaking, industry-shaking innovation, only to watch other companies’ stocks go to the moon. Think of Skype’s pioneering foray into professional telecommunication and how Microsoft’s service was left by the wayside as people picked up Zoom during the pandemic. According to a new Fortune report, this is also what happened to Amazon’s Alexa – one of the very first voice-based assistants to roll out worldwide. Despite being groundbreaking at the time, Amazon is now far behind in using large language models (LLMs) to deliver a personalised assistant that can handle real-world conversation – the next logical step most people would have expected from the company. OpenAI’s ChatGPT (now in its GPT-4o version), Google’s Gemini, and Meta’s Llama, for example, are far ahead of the game. The Fortune report delves into the issue in elaborate detail.

For one, according to the report, Amazon lacks the fundamental requirements for LLM technology. One of these is the vast amount of data that large language models depend on for training. Even if it had the data, the company does not possess – or even have access to – the computer chips needed to run LLMs at the scale that companies like OpenAI do. A spokesperson cited in the Fortune report, however, asserts that Amazon suffers from no such shortage. According to her, the information provided by the former research scientists on whom the report is based is dated, and the company in fact has access to “hundreds of thousands of GPUs” and other AI-specific chips.

Another reason given for Amazon’s failure to capitalise on generative AI is its continued prioritisation of building generative AI for AWS, Amazon’s cloud-computing unit. AWS’s business-oriented generative AI assistant, Amazon Q, will also be the first to use the LLM that Amazon is now belatedly developing – a detail revealed by a research scientist who left the project within the last few months. This model is said to be advanced enough for enterprise use cases and is based on a third-party model called Claude, a creation of the AI startup Anthropic, in which Amazon has invested $4 billion. Claude is considered one of the better models on the market, on par with OpenAI’s ChatGPT models.

However, Amazon’s project to build ChatGPT’s next competitor – a new and improved Alexa – does not benefit from Claude’s technology. According to the former employees the Fortune article cites, this is due to privacy concerns, and to the company’s largely ego-driven internal politics. The same Amazon spokesperson who commented on the supposed hardware shortage disputes the notion that the company is barring Alexa from benefiting from Claude’s technology, though she declined to say how exactly the Alexa project would use Anthropic’s technological know-how.

As the name suggests, large language models depend on vast amounts of data for training, and Amazon does not possess relevant data at the necessary scale. For one, the original Alexa was not built on a large language model; rather, it relied on a collection of smaller machine-learning models and thousands of hard-coded rules that turned what a user said into executable commands. That Amazon lacks the necessary data is surprising, however: since its launch, Alexa has sold over 500 million devices worldwide. As Mihail Eric, a former senior machine-learning scientist at Alexa AI, detailed in a post on X (formerly Twitter), these devices represent a “mind-boggling user data moat,” meaning the company had “all the resources, talent, and momentum to become the unequivocal market leader in conversational AI.” Eric suggests, however, that the company’s technical and bureaucratic problems prevented this technology ‘from ever seeing the light of day’.

Capping off the series of unfortunate events that have so far kept Amazon from emerging as a serious contender in the AI industry is the burden posed by Alexa itself. As described above, Alexa is not built on the LLM technology that underpins today’s AI. In addition to creating synthetic datasets to make up for the training data their LLMs lacked, Amazon’s scientists also had to work out how to integrate the new technology with the millions of third-party devices and services that feature Alexa. According to one former manager, delivering Alexa’s promised experience with the conversational capabilities of modern AI has so far meant the company maintaining two vastly different technology stacks – one to cater to the ‘old’ Alexa’s features and services, and another to facilitate its newer capabilities.

These issues finally culminated in a company-wide scramble to meet expectations when OpenAI rolled out ChatGPT for widespread use in November 2022. The launch was followed by a boom in generative AI that overshadowed Alexa and Echo, its hardware interface. Amazon struggled to come up with a cohesive vision for a new AI product it was entirely unprepared for. Employees quickly grew dissatisfied with long work hours that seemed to lead nowhere, as they were being asked to “get some magic” out of LLM technology without a concrete target to achieve. The fact that one of the company’s senior data scientists had consistently tried to warn his corporate superiors about next-gen AI only served to rub salt into already raw wounds.

Given this background, the report concludes by implying that the promise made by Amazon’s head of devices and services, David Limp, in September 2023 will inevitably go undelivered. Despite Limp’s demo of a fully conversational AI assistant, that version was never rolled out to the half a billion devices featuring Alexa worldwide. According to the employees who spoke to Fortune, it is difficult to imagine that a new Alexa will ever be ready, despite Amazon’s steadfast commitment to creating “the world’s best personal assistant.” Whether the company emerges as an overlooked underdog in the months and years to come, or passes silently into the shadows, is something that perhaps only time can truly tell.

(Theruni Liyanage)

© All content copyright The Hype Economy. Do not reproduce in any form without permission, even if you have a paid subscription.