What is Amazon’s plan for Alexa to manage your whole life?

Five years after Amazon launched the first-generation Echo connected speaker, voice assistants are widespread but, in the end, relatively little used beyond setting the morning alarm, getting the day's weather or playing a piece of music. It must be said that the constant promotions from Amazon and Google, especially at Christmas, have a lot to do with that spread. Beyond developing ever more channels to sell products from its marketplace, even at the cost of stinging failures such as the launch of its Fire smartphone, might Amazon have other, less visible ambitions?

Voice assistants, a growing market

According to a July 2019 study by Roland Berger reported by LSA, the number of French users of voice assistants is estimated at 20 million. Although the French market is not as mature as the American one, smart speakers already count 1.7 million users in France, with 600,000 units sold in 2018. Globally, the trend is set to accelerate, with annual growth of 25% until 2023, reaching 8 billion voice assistants in circulation within five years. And Asia is expected to overtake the United States.

In the commercial battle under way between the largest American and Chinese technology companies, the winner is not who you might think. Still according to LSA, “In terms of installed systems, Apple remains the world leader for the moment with Siri, which is also the most international assistant, or rather the least US-centred, with only 24% of its assistants integrated into devices in North America at the end of 2016, against 95% for Alexa or its compatriot Google Assistant. However, Asian firms are doing far more than poking their noses in: Baidu has shipped its voice assistant on 200 million devices, compared with 100 million for Amazon's Alexa over the same period.”

Alexa is now everywhere, able to control more than 85,000 smart-home products, from televisions to doorbells to headphones. It can execute more than 100,000 “skills”, and counting. It processes billions of interactions per week, generating huge amounts of data about your schedule, your preferences and your comings and goings. Whether Alexa's abilities are better or worse than those of Google Assistant hardly matters any more: Alexa may not always be on par, but it is well suited to in-home use. Alexa has become an empire, and Amazon is only getting started.

Amazon wants to make Alexa an omnipresent companion that actively shapes and orchestrates your life.

A recent article in the MIT Technology Review provides very interesting details about Amazon's plans. Rohit Prasad, VP and chief scientist of Alexa, revealed more about Alexa's next direction. Since many of you do not speak the language of Shakespeare, I have partially translated that article here so that you can better understand these recent developments in Amazon's strategy. In essence, the bottom line is that the voice assistant is moving from passive to proactive interactions.

Rather than waiting for and responding to requests, Alexa will anticipate what the user might want. The idea is to make Alexa an omnipresent companion that actively shapes and orchestrates your life. This will require Alexa to know you better than ever.

Rohit Prasad, VP and chief scientist of Alexa

In fact, Prasad, who presented his vision of Alexa's future at the Web Summit in Lisbon, Portugal, in November 2019, gave the world a glimpse of what this change might look like. In June, at the re:MARS conference, he had already demonstrated a feature showing how Alexa could help you plan an evening out. Instead of manually launching a new request for each part of the evening, you simply start the conversation, for example by asking to book cinema tickets. Alexa would then follow up, asking whether you also want to make a restaurant reservation or call an Uber.

To accelerate this transition, Amazon needs both hardware and software. In September, the technology giant launched a series of Alexa “on the go” products, including the Echo Buds (wireless earbuds) and the Echo Loop (a smart ring).

All of these new products let Alexa listen to and record data about a considerably larger part of your life, the better to offer assistance informed by your whereabouts, actions and preferences.

Rohit Prasad, VP and chief scientist of Alexa

From a software perspective, these capabilities will require Alexa to use new methods to process and understand all of these disparate sources of information. Over the past five years, Prasad's team has focused on strengthening the assistant's grasp of AI fundamentals, such as basic speech and video recognition, and on expanding its natural-language understanding. On top of this foundation, they have now started to develop Alexa's intelligent prediction and decision-making capabilities and, increasingly, its higher-level reasoning skills. In other words, the goal is for Alexa's AI capabilities to become far more sophisticated within a few years.

Objective: to make Alexa smarter

Here is how Alexa's software updates could combine to handle that night-out scenario. To respond to a request to buy cinema tickets, followed by dinner with friends and booking an Uber, a neural network learns, thanks to billions of user interactions per week, to recognize which skills are commonly used with one another. This is where smart prediction comes into play: when enough users book a dinner after a film, Alexa will group the two skills together and recommend them jointly. As a reminder, skills are the voice applications designed by companies or brands that let users perform everyday tasks by interacting naturally with their content, by voice, through Alexa.
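
To make this prediction step concrete, here is a minimal sketch in Python, with entirely hypothetical skill names and data, and a simple co-occurrence count standing in for the neural network the article mentions, of how skills used together could be grouped and recommended jointly:

```python
from collections import Counter
from itertools import combinations

# Hypothetical interaction log: the skills each user invoked in one session.
sessions = [
    ["book_cinema", "book_restaurant", "request_ride"],
    ["book_cinema", "request_ride"],
    ["play_music"],
    ["book_cinema", "book_restaurant"],
]

# Count how often pairs of skills appear in the same session.
pair_counts = Counter()
for session in sessions:
    for a, b in combinations(sorted(set(session)), 2):
        pair_counts[(a, b)] += 1

def suggest_follow_ups(skill, min_support=2):
    """Return skills frequently used together with `skill`."""
    suggestions = []
    for (a, b), count in pair_counts.items():
        if count >= min_support and skill in (a, b):
            suggestions.append(b if a == skill else a)
    return suggestions

# After a user books cinema tickets, Alexa could proactively offer these:
print(suggest_follow_ups("book_cinema"))  # e.g. ['book_restaurant', 'request_ride']
```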

But reasoning is needed to know what time to book the Uber. Taking into account your location and that of the cinema, the start time of your film and the expected traffic, Alexa determines when the car should pick you up so that you arrive on time.
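
As a rough illustration of that reasoning step, here is a minimal sketch, with hypothetical times and values rather than anything Amazon actually does, of working backwards from the film's start time:

```python
from datetime import datetime, timedelta

def pickup_time(showtime, travel_minutes, buffer_minutes=15):
    """Work backwards from the film's start time:
    departure = showtime - estimated travel time - a safety buffer."""
    return showtime - timedelta(minutes=travel_minutes + buffer_minutes)

# Hypothetical inputs: a 20:30 film and 25 minutes of expected traffic.
showtime = datetime(2019, 11, 8, 20, 30)
print(pickup_time(showtime, travel_minutes=25))  # 2019-11-08 19:50:00
```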

Prasad imagines many other scenarios that may require more complex reasoning. You could imagine one, for example, in which you ask your Echo Buds where the tomatoes are while you are in a Whole Foods store. The Buds would need to register that you are in Whole Foods, access a map of its floor plan, and then tell you that the tomatoes are in aisle seven. This shows the business potential opened up by the strategy of integration, acquisitions or partnerships with other players, such as the Whole Foods chain (bought by Amazon) or Monoprix (a partnership with the Casino group).
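
A toy version of that floor-plan lookup might look like the sketch below; the store identifier, map data and function names are all invented for illustration:

```python
# Hypothetical store floor-plan data; a real system would query a store-map service.
floor_plans = {
    "whole_foods_downtown": {"tomatoes": "aisle 7", "olive oil": "aisle 3"},
}

def locate_item(store_id, item):
    """Answer 'where are the tomatoes?' from the store's floor plan."""
    aisle = floor_plans.get(store_id, {}).get(item.lower())
    return f"The {item} are in {aisle}." if aisle else f"Sorry, I can't find {item} here."

print(locate_item("whole_foods_downtown", "tomatoes"))  # The tomatoes are in aisle 7.
```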

In another scenario, you could ask Alexa from home, via your Echo, to send you a notification if your flight is delayed. By the time it does so, you may already be driving. Alexa must realise (by identifying your voice in the initial request) that you, not a roommate or a family member, need the notification and, based on the last Echo-compatible device you interacted with, that you are now in your car. The notification should therefore be delivered in your car rather than at home.
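
The routing logic described here could be sketched, very schematically and with hypothetical device names, user names and flight details, as follows:

```python
# Hypothetical state: the last Alexa-enabled device each recognized voice interacted with.
last_device_by_user = {
    "alice": "car_echo_auto",
    "bob": "kitchen_echo",
}

def route_notification(requesting_user, message):
    """Deliver the flight-delay alert to the device the requester used last,
    rather than to every device in the household."""
    device = last_device_by_user.get(requesting_user, "home_echo")
    print(f"[{device}] notification for {requesting_user}: {message}")

route_notification("alice", "Your flight AF123 is delayed by 45 minutes.")
# [car_echo_auto] notification for alice: Your flight AF123 is delayed by 45 minutes.
```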

This level of prediction and reasoning will also have to take video data into account, as more and more Alexa-compatible products include cameras. Let's say you are not at home, Prasad suggests, and a scout knocks on your door to sell cookies. Alexa, through your Amazon Ring, a doorbell equipped with a camera, should register (via video and audio input) who is at your door and why, and, knowing that you are not home, send a note to a nearby Alexa device asking how many boxes of cookies you want, then order them on your behalf. Admittedly, this scenario is far from the most likely, especially outside the United States.
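
Purely as an illustration of the decision flow described above (nothing here corresponds to Ring's or Alexa's real APIs), the scenario could be sketched like this:

```python
# Hypothetical decision flow for the doorbell scenario; names are invented.
def handle_doorbell_event(visitor_purpose, owner_is_home, nearby_device):
    if owner_is_home:
        return "Announce the visitor on the home Echo."
    if visitor_purpose == "selling cookies":
        # Owner is away: forward the question to whichever device is near them.
        return f"Ask via {nearby_device}: 'A scout is selling cookies. How many boxes?'"
    return f"Send a video snapshot of the visitor to {nearby_device}."

print(handle_doorbell_event("selling cookies", owner_is_home=False,
                            nearby_device="phone_alexa_app"))
```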

To make this possible, Prasad's team is currently testing a new software architecture for processing user commands. It involves filtering audio and visual information through many more layers. Alexa must first determine which skill the user is trying to reach among the roughly 100,000 available. Then it must interpret the command in the context of the user's identity, the device they are using and their location. Finally, it must refine the response on the basis of the preferences the user has previously expressed.
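
Here is a minimal, runnable sketch of that layered pipeline. The skills, devices and preference data are invented and each stage is reduced to a toy stand-in, but the order of the stages mirrors the description above:

```python
# Hypothetical stand-ins for the three stages: pick a skill, add context
# (speaker, device, location), then apply previously expressed preferences.
SKILLS = {"weather": lambda city: f"It is sunny in {city}.",
          "traffic": lambda city: f"Traffic is light in {city}."}
DEVICE_LOCATIONS = {"kitchen_echo": "Paris"}
PREFERENCES = {"alice": {"units": "celsius"}}

def select_skill(text):
    # Stage 1: route the utterance to one of the available skills.
    return next((name for name in SKILLS if name in text), None)

def handle(text, speaker, device):
    skill_name = select_skill(text)
    if skill_name is None:
        return "Sorry, I don't know that one."
    # Stage 2: contextualize with the speaker, the device and its location.
    city = DEVICE_LOCATIONS.get(device, "your area")
    # Stage 3: refine using the user's stored preferences (unused in this toy case).
    _prefs = PREFERENCES.get(speaker, {})
    return SKILLS[skill_name](city)

print(handle("what is the weather like", speaker="alice", device="kitchen_echo"))
# It is sunny in Paris.
```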

“Here is what I think will be at stake in the coming years: reasoning better, and making that reasoning more personalized, with more context,” says Prasad. “In short, it is about collecting all the possible data to make decisions en masse.”

One limit: acceptance of Alexa’s use of personal data

From a technical point of view, all of this would be an incredible achievement. What Prasad is talking about, combining various data sources and machine-learning methods to conduct high-level reasoning, has been a goal of artificial-intelligence researchers for decades.

From a consumer perspective, however, these changes have critical privacy implications. Prasad's vision as set out in the MIT Technology Review article assumes that Alexa will follow you everywhere, know what you are doing at any given moment, and be the main interface through which you coordinate your life. Fundamentally, it requires collecting huge amounts of intimate detail about your life. Some fear that Amazon will eventually go beyond this baseline and use your data to profile you and target you with advertising.

“It is ultimately about monetizing the daily lives of individuals and groups of people,” says Jeffrey Chester, executive director of the Center for Digital Democracy, a Washington, DC-based consumer-privacy organization.


When pressed on this point, Prasad stressed that his team had made it easier for users to delete their data automatically and to opt out of human review. However, neither option prevents the data from being used to train Alexa's myriad machine-learning models. In fact, Prasad alluded to ongoing research that would move Alexa towards a training process in which models are quickly updated whenever new user data arrives, more or less guaranteeing that the value of that data is captured before it is deleted. In other words, automatically deleting your data would only mean that it is no longer available to train future models once the training algorithms are updated; for the current models, it would already have been used in much the same way. (In follow-up questions, an Amazon spokesperson said that the company has not sold the data collected by Alexa to third-party advertisers or used it to target advertising, unless the user accesses a service through Alexa, such as Amazon.com.)
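
For readers curious about the underlying technique, this kind of continuous updating is generally called online or incremental learning. A minimal sketch using scikit-learn, which is obviously not Amazon's training pipeline, shows why deleting raw data after the fact changes little: the model weights already encode it.

```python
# Minimal sketch of incremental ("online") learning: each new batch of data is
# folded into the model as it arrives, so its contribution survives even if the
# raw batch is deleted afterwards. Toy data; not Amazon's pipeline.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])

def update_with_new_batch(X_batch, y_batch):
    """Fold a fresh batch of user data into the existing model; the raw batch
    can then be discarded, its value now lives in the model weights."""
    model.partial_fit(X_batch, y_batch, classes=classes)

# Hypothetical toy batch: two feature vectors and their labels.
update_with_new_batch(np.array([[0.1, 0.9], [0.8, 0.2]]), np.array([1, 0]))
```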

Jen King, director of privacy at the Center for Internet and Society at Stanford Law School, believes these kinds of data controls are far too superficial. “If you want to give people meaningful control, you must be able to respect their decision to withdraw completely, or give them more choice about how their data is used,” she says. “Providing functional assistance based on a local context could be done in an extremely privacy-friendly way. I do not think this scenario has to be inherently problematic.”

In practice, King says this means several things. First, at a minimum, Amazon should ask users to opt in to having their data used, rather than requiring them to opt out. Second, Amazon should be more transparent about how it uses that data. Currently, when you delete your data, it is unclear what the company may already have done with it.

“Imagine you have an AI surveillance camera in your house, you forgot it was on, and you walked around the house naked,” she says. “As a consumer, it would be useful to know, when you delete those files, whether the system has already used them to train the algorithm it relies on.”

Jen King, Director of Privacy at the Center for Internet and Society at Stanford Law School

Finally, Amazon should give users more flexibility over when and where their data can be used. Users may be happy, for example, to share their own data while wanting their children's to remain off-limits. “Technology companies tend to design these products with the idea that it is all or nothing,” she says. “I think that is a really misguided way to approach it. People may want some of the added value of these systems, but that doesn't mean they want them in every aspect of their lives.”

Prasad's ultimate vision is to make Alexa available and useful to everyone. Even in developing countries, he imagines cheaper versions that people can access on their smartphones. “For me, it is about removing the cognitive load of routine tasks,” he says.

“I want Alexa to be a productivity enhancer… to be truly ubiquitous so that it works for everyone.”

Rohit Prasad, VP and chief scientist of Alexa

