Voice User Interfaces (VUIs) have rapidly become an integral part of our daily lives. Enabled by advances in natural language processing and artificial intelligence, VUIs allow users to interact with machines using spoken language, making technology more accessible and user-friendly.
From virtual assistants like Siri and Alexa to interactive phone systems and automotive infotainment, VUIs have found their way into diverse applications, promising a seamless bridge between humans and machines.
While the potential of VUIs is immense, their effectiveness hinges on the ability to establish a coherent and natural form of communication between users and machines. In this context, communication breakdowns emerge as a critical challenge.
Both frequent users of voice user interface systems and first-time users engage with them.
Voice makes it easier to perform mundane tasks, such as setting an alarm.
Users find it novel to interact by voice, since it is hands-free.
Voice systems offer a certain confidence in interactions, which triggers people's innate communication instincts.
Users believe voice might reduce their screen time and, eventually, their cognitive load.
People feel that voice systems will communicate with them through human-like interactions.
The study explores users' intrinsic communication behavior with voice user interfaces, and further examines the machine's perspective by inspecting the machine learning models used in voice systems.
A drop in task completion when users interact with a voice user interface.
A drop in user satisfaction when interacting with a voice user interface.
Sam, an avid music listener, was traveling to his office when he asked the VUI to play his travel playlist. Imagine yourself as Sam; here is your script for conversing with the voice system.
Follow the states in the dialogue script and interact with both the voice and text systems, note down your frustrations or the things you liked, and reach out to me to talk about voice systems and all things UX.