July 22, 2020 at 10:00PM
Amazon is taking Alexa, its digital assistant, and making it smarter, more ubiquitous, and easier to talk to. At its Alexa Live developer event held online Wednesday, the retailer laid out 31 announcements that broaden Alexa’s capabilities and attempt to fill the gap left by Amazon’s lack of a phone. It also announced a cheaper Alexa Connect Kit module that lets manufacturers easily add Alexa capabilities to their devices.
First up, Alexa will get smarter thanks to the addition of deep neural networks (DNNs) that improve its natural language understanding. So far, test skills have shown a 15% improvement in understanding thanks to the use of DNNs, although your mileage may vary. Developers don’t even have to make a change to their skills to reap this benefit.
Tied to the overall conversational improvement, developers can use Alexa Conversations, which is in beta, to do away with highly scripted interactions. The deep neural networks will allow Alexa to follow conversational paths so users can ask for things in a natural way. Developers applying the new technique provide examples of dialogs and map them to actions that the skill should take.
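To make that concrete, here’s a rough sketch of the idea in Python. The dialog structure and action names below are my own illustration, not Amazon’s actual Alexa Conversations authoring format; the point is that developers annotate sample dialogs with the actions each turn should trigger, rather than scripting every branch themselves.

```python
# Hypothetical illustration of the Alexa Conversations idea (not Amazon's real
# authoring format): developers supply annotated sample dialogs, and the
# DNN-based dialog model generalizes from them instead of following a rigid script.
sample_dialog = [
    {"user": "I'd like a large pepperoni pizza",
     "action": "StartOrder", "arguments": {"size": "large", "topping": "pepperoni"}},
    {"alexa": "Delivery or pickup?"},
    {"user": "Delivery, please",
     "action": "SetFulfillment", "arguments": {"method": "delivery"}},
    {"alexa": "A large pepperoni pizza for delivery. Should I place the order?"},
    {"user": "Yes", "action": "PlaceOrder", "arguments": {}},
]

# The skill's job shrinks to implementing the actions the model decides to call.
def place_order(size: str, topping: str, method: str) -> str:
    """Hypothetical back-end action; a real skill would call an ordering API here."""
    return f"Ordered a {size} {topping} pizza for {method}."
```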
There’s also an intriguing new feature in preview that may cause Amazon some consternation in the future, something Amazon calls name-free interactions. This feature lets users ask Alexa for help with a general task, such as “order a pizza” or “help me plan my trip,” and then Alexa will choose the most appropriate installed skill for the task.
So in the case of the pizza-order request, Alexa will see which pizza or food-ordering skills the user has on the device and pick one for the task. In the case of multiple skills that might serve the same function, it’s unclear how Alexa will choose between them. In my interview with Daniel Rausch, Amazon’s VP of smart home and Alexa mobile, he said he wasn’t sure of the weight that Alexa would put on certain signals.
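Amazon’s existing name-free interaction toolkit works by asking candidate skills whether they can handle an utterance via a CanFulfillIntentRequest, and signals like that presumably feed the new selection logic, though Amazon hasn’t spelled it out. The sketch below shows roughly what a skill’s answer looks like; the intent and slot names are hypothetical.

```python
# Rough sketch of how a skill signals that it can handle a name-free request.
# Alexa sends a CanFulfillIntentRequest and the skill answers YES / NO / MAYBE
# for the intent and for each slot. Intent and slot names here are hypothetical.
def can_fulfill_handler(request_envelope: dict) -> dict:
    intent = request_envelope["request"]["intent"]
    can_handle = intent["name"] == "OrderPizzaIntent"  # hypothetical intent name
    answer = "YES" if can_handle else "NO"
    return {
        "version": "1.0",
        "response": {
            "canFulfillIntent": {
                "canFulfill": answer,
                "slots": {
                    "pizzaType": {  # hypothetical slot
                        "canUnderstand": answer,
                        "canFulfill": answer,
                    }
                },
            }
        },
    }
```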
I love the idea of having what feels like a more general-intelligence assistant, but if Amazon isn’t clear about how Alexa chooses the app to use, regulators and app developers will likely cry foul. However, Amazon says that developers who have used this feature so far have seen a 15% increase in conversations.
These fall into the category of general improvements to Alexa, but there are also several changes designed to make Alexa easier to access and use, especially on a phone. The first is pretty simple but will reduce friction considerably. A beta program will let developers use links to open their Alexa skills directly, as opposed to sending users to the Alexa site to search for the skill, select it, authenticate, and then return to the original site to continue what they were doing.
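In practice, such a skill link is just a URL a developer can drop into a web page, email, or ad that launches the skill directly. The pattern below is only a placeholder to show the shape of the thing; the real format comes from the beta documentation.

```python
# Illustrative only: a skill link is a URL that launches a specific skill directly.
# The domain and path pattern below are placeholders, not Amazon's documented format.
SKILL_ID = "amzn1.ask.skill.00000000-0000-0000-0000-000000000000"  # placeholder ID
quick_link = f"https://alexa-skills.example.com/launch/{SKILL_ID}"  # hypothetical pattern

# Drop-in link for a web page or email campaign.
print(f'<a href="{quick_link}">Ask Alexa to order a pizza</a>')
```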
Another friction-reducing feature will be what Amazon calls skill resumption, which lets someone start a conversation with an Alexa skill and then resume it without having to call out the name of the skill again. The skill will also be able to surface new information without being actively invoked again. So in the case of asking Alexa to order an Uber, once that interaction is finished, the user can do something else on their phone or device and then come back and ask, “Alexa, how far away is my Uber?” or Alexa can pipe up to say that the Uber has arrived.
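Amazon hasn’t published much detail on how resumption works under the hood, but the gist is that the skill’s state survives the pause, so a later question (or a proactive update) doesn’t require relaunching the skill by name. A minimal sketch, assuming a hypothetical state store and helper names:

```python
# Minimal sketch of the idea behind skill resumption (helper names are hypothetical,
# not Amazon's API): the skill keeps per-user state so a later "how far away is my
# ride?" question, or a proactive "your ride has arrived" update, can be answered
# without the user re-invoking the skill by name.
ride_state = {}  # user_id -> details of the in-flight ride

def start_ride(user_id: str, pickup: str) -> str:
    ride_state[user_id] = {"pickup": pickup, "eta_minutes": 7}
    return "Your ride is on the way."

def handle_resumed_question(user_id: str) -> str:
    ride = ride_state.get(user_id)
    if ride is None:
        return "I don't see an active ride."
    return f"Your ride is about {ride['eta_minutes']} minutes away."
```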
But the biggest phone-related feature is the ability to put Alexa directly in an app. In this case, an app developer can embed Alexa into their app, and then a consumer can ask Alexa to handle tasks in that app for them. So if Spotify embeds Alexa in its mobile app, I could ask Alexa to skip a song while in Spotify. This seems really powerful and makes it possible to get Alexa onto the phone even though Amazon doesn’t make a phone. However, given the demand for data that Amazon includes with its Alexa integrations, app developers may be leery of sharing even more information about how consumers use their apps.
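Amazon didn’t share the in-app SDK details here, so the following is only a sketch of the shape such an integration could take: the host app receives a recognized intent from the assistant and decides what it means inside the app. Every name in it is hypothetical.

```python
# Hypothetical shape of an in-app Alexa integration (none of these names come from
# Amazon's actual SDK): the host app receives a recognized intent and maps it onto
# its own playback controls, so "skip this song" works inside the app.
class MusicApp:
    def __init__(self):
        self.track_index = 0

    def skip_track(self):
        self.track_index += 1
        print(f"Now playing track {self.track_index}")

def handle_assistant_intent(app: MusicApp, intent_name: str) -> None:
    # The app, not Alexa, decides what each intent means inside the app.
    if intent_name == "SkipSongIntent":  # hypothetical intent name
        app.skip_track()

# e.g., the user says "Alexa, skip this song" while the app is in the foreground
handle_assistant_intent(MusicApp(), "SkipSongIntent")
```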
Amazon put a lot of focus on displays as well with this news, making its Amazon Presentation Language (APL), which is used to build apps that combine voice and displays, easier to use. A better user interface is now out, as is a way to edit and mix audio and sound in APL. Developers can also allow on-display purchases and web-based purchases in skills. (Amazon takes its 30% cut.)
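For developers who haven’t touched it, an APL document is a JSON description of what the screen should show alongside the voice response, and the skill sends it to screen-equipped devices with an Alexa.Presentation.APL.RenderDocument directive. A minimal sketch, expressed here as a Python dict (the version number and text are illustrative):

```python
# Minimal APL document, expressed as a Python dict (APL itself is JSON).
apl_document = {
    "type": "APL",
    "version": "1.4",  # version current around mid-2020; illustrative
    "mainTemplate": {
        "items": [
            {
                "type": "Text",
                "text": "Your pizza is on the way",
                "fontSize": "40dp",
                "textAlign": "center",
            }
        ]
    },
}

# The skill attaches the document to its response for devices with screens.
render_directive = {
    "type": "Alexa.Presentation.APL.RenderDocument",
    "token": "orderStatus",  # arbitrary token the skill can reference later
    "document": apl_document,
}
```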
Finally, for hardware partners, Amazon has produced a cheaper Alexa Connect Kit board that Rausch says costs about $4, roughly half the cost of prior modules. These modules make it easy for companies to put Alexa on their devices and use Amazon’s cloud to host the data related to those devices. For the cost of the board and an upfront fee, the company making the device doesn’t have to worry about ongoing cloud costs and security tied to the cloud side of the business. Amazon takes care of that for them.
These announcements show that Amazon still has a clear vision of where it wants Alexa to go and how it plans to get there using the tools it has. The hardware that makes it easier to bring companies that want to build smart home products into its ecosystem is compelling.
The lack of a phone means that Amazon needed to find another way to reduce friction while piggybacking on competitors’ handsets, and it’s finding ways to do that. And its focus on creating a digital assistant that can respond to users naturally means people will continue to turn to Alexa even if they have to jump through a hoop or two.