The 2 Biggest Challenges in Speech UX

This article was published by UX Magazine on February 27th, 2013

Speech is still a relatively new interface. Technology is finally starting to catch up to the dreams that we've had since the invention of the computer itself: dreams of having natural conversations with computers—something portrayed in countless science fiction movies, TV shows, and books.

But from a UX standpoint, what does "natural" even mean?

The most popular conception of a natural conversation with a computer is one in which the user can say anything they want and the computer will understand and respond appropriately. In commercials, Apple advertises Siri as a personal assistant that you can seemingly say anything to. This, however, is far from reality.

It's a speech designer's job to make it seem as if users can say anything, when that's not actually the case. Designers must develop ways to shape and constrain a user's interactions with their device. Users must be trained to communicate in a way the device can understand, without feeling like they're tailoring their speech to the machine.

Users must also be made aware of what the device can do, both to prevent errors and to help them harness its full power. These are the two biggest challenges in designing user experiences for speech recognition technology.

Feature Discovery

This is by far the hardest part of speech interface design. Speech recognition is still very new, so systems simply cannot recognize and do everything. Even humans sometimes misunderstand or misinterpret what someone is saying. On top of that, people rarely read user manuals or research everything a device can do.

Designers need to find ways to educate users about what they can do as they are interacting with devices. With touch interfaces this can be achieved through well-named buttons and high-level categorization. Many speech interfaces do not have the luxury of these visual cues.

The most obvious way that people train one another is through explicit instruction. Teachers spend a lot of time lecturing their students. Parents explain to their kids that they should treat others the way they wish to be treated. This can be one way for devices to train users, but it is potentially time-consuming and frustrating for experienced users. As interface designers, we must find more ways to help users train themselves through self-discovery.

Another way that we teach one another is through leading by example. We don't teach a newborn to speak their first words by telling them how to shape their mouth and where to place their tongue. We speak in front of them and they experiment on their own to mimic the sounds they hear.

Nobody teaches someone to use a revolving door: we see someone else use it and copy them. Are there any opportunities for the device to lead the way in an interaction? Perhaps two virtual actors could carry on an exchange for the user to observe. This method could end up being verbose but, if done well, could also be very successful, considering our brains are wired to learn this way.

Bottom line: if the user can't figure out what they can do with the device, they may never unlock its power, negating all of the work designers put into it.

Phrasing

People have developed many ways to express ideas, and even many ways to express the same idea. Synonyms and ambiguities are incredibly challenging elements of speech recognition from a technical point of view, forcing developers to choose between accuracy and performance. If we can design a UX that reduces ambiguity and the number of ways to phrase an idea, the system can be tuned to perform much better. And if the device uses consistent phrasing, the user will tend toward the same phrasing in the future.

People frequently repeat what another person has said, with very slight variation, in order to clarify an idea. This can often be a mechanism for helping someone learn to express an idea better.

A mother teaching the difference between "can" and "may" might go like this:

"Mommy, can I have soda?"

"May you have soda?"

Designers standardizing terminology might go like this:

"When the user drags their finger quickly to the left the page should move to the next page on the right."

"Ok, so a swipe left transitions to the right page"

This means that if we have a device that can tell time and is listening for the following phrases:

  • "What time is it?"
  • "What hour is it?"
  • "How late is it?"

The device can always reply "The time is five thirty-two," cueing the user to use "time" instead of "hour" or "late." Developers can then concentrate on making the "What time is it?" phrase work better.
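
As a rough sketch of this idea (in Python, with the phrases and reply taken from the example above but everything else invented for illustration), all of the accepted phrasings can map to a single intent while the reply always uses the canonical word:

    from datetime import datetime

    # Hypothetical sketch: several accepted phrasings, one canonical reply.
    TIME_PHRASES = {
        "what time is it",
        "what hour is it",
        "how late is it",
    }

    def respond(utterance: str) -> str:
        """Answer a time request, always echoing the canonical word 'time'."""
        normalized = utterance.lower().strip().rstrip("?")
        if normalized in TIME_PHRASES:
            now = datetime.now().strftime("%I:%M").lstrip("0")
            # Replying with "time" nudges the user toward that word next time.
            return f"The time is {now}."
        return "Sorry, I didn't catch that."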

Lastly, another idea for training the user's phrasing is to use non-verbal positive or negative feedback. People use body language to indicate whether they understand what someone else is saying. They will often nod along if they understand, or wear a puzzled expression if they don't.

It would be great if we could develop a similar system for speech recognition devices. A positive tone of voice could indicate that the device understands the user very well, providing a subtle hint that they should continue to use similar phrasing. We may also flash the screen or vibrate the device to signify a positive or negative response.
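
As a minimal sketch of the idea, assuming some platform audio and haptic calls (the helpers below are stand-ins, not a real API), recognition confidence could drive which cue the user hears or feels:

    # Hypothetical sketch: map recognition confidence to non-verbal feedback.
    # play_tone and vibrate are stand-ins for a platform's audio/haptic API.

    def play_tone(kind: str) -> None:
        print(f"[tone: {kind}]")        # stub: a real device would play an earcon

    def vibrate(pattern: str) -> None:
        print(f"[vibrate: {pattern}]")  # stub: a real device would pulse haptics

    def give_feedback(confidence: float) -> None:
        """Give a non-verbal cue matching how well the device understood."""
        if confidence > 0.8:
            play_tone("rising")    # upbeat 'nod': keep phrasing it this way
        elif confidence > 0.5:
            vibrate("short")       # gentle nudge: understood, but only just
        else:
            play_tone("falling")   # 'puzzled look': invite a rephrase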

Constraining phrasing may be just an intermediate step until the technology improves, but training the user toward more predictable phrasing will always improve the experience.

The Even Bigger Problem

The key to solving these problems is feedback, and here is the true difficulty: how can the device provide unobtrusive and concise feedback that helps shape the new user without frustrating the experienced one? To make this even more difficult, speech interfaces are often used in circumstances where the user cannot be looking at the device, so we can’t rely on existing paradigms of visual feedback.
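
One approach, sketched loosely below in Python (the prompts and taper threshold are invented for illustration), is what voice designers sometimes call tapered prompts: guidance shortens as the user demonstrates success, so novices get coaching while experts get out of the way.

    # Hypothetical sketch of tapered prompts: verbose guidance for new users,
    # terse confirmation once the user has shown they know the phrasing.

    PROMPTS = [
        "You can ask me for the time by saying 'What time is it?'",  # novice
        "Ask me for the time whenever you're ready.",                # learning
        "Ready.",                                                    # expert
    ]

    def prompt_for(successful_requests: int) -> str:
        """Pick a prompt whose verbosity shrinks with demonstrated success."""
        level = min(successful_requests // 3, len(PROMPTS) - 1)  # taper every 3
        return PROMPTS[level]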

There is hope, however. People are expert auditory communicators: we all do it every day. There are many things we can learn by studying how we speak with one another. What other tools can we learn and utilize to make everyone an expert with speech interfaces?