The first thing many people think of when they think of Alexa is its vast library of more than 50,000 skills, ranging from role-playing games to guided workouts and virtual sommeliers. That may not be the case for much longer.
This morning at Bloomberg Live's The Value of Data event, Rohit Prasad, Alexa's vice president and head scientist, confirmed in an interview with Tom's Guide that Amazon intends to eliminate the need for Alexa users to enable and call upon individual skills.
Specifically, Prasad is working towards an Alexa that can parse its abilities to find the one that best addresses your request.
For example, Prasad noted, you won't need to say "Get me an Uber"; you'll say, "Get me a car to the airport." Amazon's assistant will use context clues, such as your location, your subscriptions and services you've used in the past, to determine whether to call an Uber, a Lyft or another ride-sharing service.
"We don't want Alexa to be like your smartphone, where you have fifty apps on your home screen," Prassad said. "The way we're solving that is that you'll just speak, and we will find the most relevant skill that can answer your query."
The number of Alexa skills that users can choose to call upon has exploded over the past year. This past March, Voicebot.ai reported that Alexa's U.S. skill count had surpassed 30,000, up from 25,000 in December 2017 and 20,000 in September 2017. Prasad announced at the Bloomberg Live panel this morning that the count has now passed 50,000.
Part of the decision to pull skill-enabling away from the user has to do with Amazon's continued efforts to make interaction with Alexa feel less like barking orders, and more like a natural conversation. For the past two years, the company has hosted the Alexa Prize, a competition that challenges universities to create coherent, social AIs. "Through the innovative work of students, Alexa customers will have novel, engaging conversations," the competition's website reads.
In March, Amazon also released Follow-Up Mode, which allows users to make multiple requests without having to say "Alexa" before each one.
So when can we expect to say goodbye to skills as we know them?
In all likelihood, not for a while. Prasad noted that the project is still in its early stages, and that Alexa's scientists have quite a challenge ahead of them. "It's hard to tell what exactly the customer needs," Prasad said. "If you say, 'Alexa, get me a car,' you don't want to buy a car on Amazon, you need a car to get to the airport. The ambiguity in that language, and the incredible number of actions Alexa can take, that's a super hard AI problem."