An introductory overview of DiscoverCal, covering its features and how to operate it.
DiscoverCal is a Voice User Interface (VUI) calendar manager designed to be displayed in a smart home or office setting. It accepts only voice commands and allows its user to add, edit, and delete events on their calendar. A GUI was designed for DiscoverCal to display the calendar and provide information on possible voice intents and utterances. DiscoverCal uses adaptive techniques, which adapt to the system’s context and provide users with contextually relevant information, to help users learn supported utterances and intents and discover new ones. For example, DiscoverCal’s menu adapts based on the user’s successful usage of intents, progressively revealing the system’s more complex intents.
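The progressive menu behavior described above could be sketched roughly as follows. This is an illustrative sketch only: the intent names, tier grouping, and success threshold are assumptions for the example, not DiscoverCal's actual implementation.

```python
from collections import defaultdict

# Hypothetical intent tiers, simplest first (illustrative, not DiscoverCal's real intent set).
INTENT_TIERS = [
    ["add_event", "delete_event"],            # tier 0: basics, always shown
    ["edit_event_time", "edit_event_title"],  # tier 1: editing
    ["add_recurring_event", "move_event"],    # tier 2: advanced
]
SUCCESSES_TO_UNLOCK = 2  # assumed threshold of successful uses per intent

class AdaptiveMenu:
    def __init__(self):
        self.success_counts = defaultdict(int)

    def record_success(self, intent):
        """Log a successfully recognized and executed intent."""
        self.success_counts[intent] += 1

    def _tier_mastered(self, tier):
        return all(self.success_counts[i] >= SUCCESSES_TO_UNLOCK
                   for i in INTENT_TIERS[tier])

    def visible_intents(self):
        """Show tier 0 plus each tier whose predecessor has been mastered."""
        visible = list(INTENT_TIERS[0])
        for t in range(1, len(INTENT_TIERS)):
            if self._tier_mastered(t - 1):
                visible.extend(INTENT_TIERS[t])
            else:
                break
        return visible

menu = AdaptiveMenu()
print(menu.visible_intents())  # only the basic tier at first
for _ in range(2):
    menu.record_success("add_event")
    menu.record_success("delete_event")
print(menu.visible_intents())  # editing tier now revealed
```

The key design point is that disclosure is driven by demonstrated success, not elapsed time, so the menu stays aligned with what the user has actually learned.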
Demonstration of DiscoverCal
Voice User Interfaces (VUIs), like Siri and Alexa, are growing in popularity. Even with this boost in adoption, however, VUIs are still plagued with problems, including misrecognizing what people say and the challenge users face in learning all supported features. Research with DiscoverCal explores how we can design VUIs that reduce these frustrations and help users learn the system. Our work focuses on better understanding how an individual user’s experience with a VUI may vary, and on designing adaptive techniques that adjust the VUI to support these diverse users.
Our most recent work examines patterns of interaction with a VUI by observing the obstacles people encounter when using an unfamiliar system and the tactics they employ to overcome them. We categorize the obstacles people face when using an unfamiliar VUI in order to understand their frustrations with the system, and we likewise categorize the tactics they use to attempt to overcome those obstacles. By doing this, we can better understand our users’ strategies and support them when these obstacles occur.
To analyze the obstacles and tactics, our team conducted a user study in which 12 participants interacted with DiscoverCal. Each participant completed three sessions of tasks with the system (e.g., adding events and modifying them). We used the Think Aloud protocol, encouraging participants to verbalize their thought processes so we could gain deeper insight into their interactions. Each session was transcribed and compared against the usage data DiscoverCal logged during the participant’s interaction. From this data we derived four obstacle categories and ten tactic categories, and then analyzed the patterns of tactics employed for each obstacle.

Our analysis uncovered two key contributions. First, while NLP Error obstacles were the most frequent, the other obstacles were equally if not more frustrating to our participants: fallback tactics made up a nearly equal or higher percentage of the total tactics applied in response to the non-NLP Error obstacles. This is an important contribution because of a common sentiment in VUI design that better NLP is all we need, and that once machines can hear us perfectly there will be no issues. However, even if we remove every obstacle caused by NLP errors, we are still left with obstacles that significantly harm our users’ experience.

Our second main contribution concerns our participants’ interaction patterns: exploration tactics were frequently used when overcoming obstacles. This exploratory pattern is important because it does not align with how current VUIs help people learn. Siri and Alexa ask you to memorize a list of commands or open a companion app, but that is not how we observed our participants learning DiscoverCal. We believe further research is needed on how to support this exploration pattern instead of forcing people to learn on the system’s terms.
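The pattern analysis described above, tallying which tactic categories participants applied to each obstacle category, can be sketched as below. The category labels and coded records here are illustrative placeholders, not our actual study data or category names.

```python
from collections import Counter, defaultdict

# Hypothetical (obstacle, tactic) pairs as they might appear in coded
# transcript data. These records are illustrative, not our study data.
coded_events = [
    ("NLP Error", "Simplify"),
    ("NLP Error", "Hyperarticulate"),
    ("Unfamiliar Intent", "Explore Menu"),
    ("Unfamiliar Intent", "Use Fallback"),
    ("System Error", "Use Fallback"),
    ("NLP Error", "Use Fallback"),
]

def tactic_rates(events):
    """For each obstacle, the percentage each tactic contributes to
    the total tactics applied against that obstacle."""
    per_obstacle = defaultdict(Counter)
    for obstacle, tactic in events:
        per_obstacle[obstacle][tactic] += 1
    rates = {}
    for obstacle, counts in per_obstacle.items():
        total = sum(counts.values())
        rates[obstacle] = {t: round(100 * n / total, 1)
                           for t, n in counts.items()}
    return rates

for obstacle, dist in tactic_rates(coded_events).items():
    print(obstacle, dist)
```

Comparing these per-obstacle distributions is what reveals patterns such as fallback tactics dominating the non-NLP-Error obstacles.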
Our future research will continue to uncover how people use VUIs and the individual differences in their use. Our goal is to design adaptive techniques for DiscoverCal that adjust the system to cater to these individual needs.
- C. Myers, A. Furqan, D. Grethlein, S. Ontañón, and J. Zhu, “Modeling Behavior Patterns with an Unfamiliar Voice User Interface,” in Proceedings of the 27th ACM Conference on User Modeling, Adaptation and Personalization (UMAP), Larnaca, Cyprus, 2019, forthcoming. (Acceptance rate: 23%)
- C. Myers, A. Furqan, and J. Zhu, “The Impact of User Characteristics and Preferences on Performance with an Unfamiliar Voice User Interface,” in Proceedings of the 2019 ACM Conference on Human Factors in Computing Systems (CHI ’19), Glasgow, UK, 2019, forthcoming. (Acceptance rate: 23.8%)
- C. Myers, A. Furqan, J. Nebolsky, K. Caro, and J. Zhu, “Patterns for How Users Overcome Obstacles in Voice User Interfaces,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), 2018. (Acceptance rate: 28%)
- A. Furqan, C. Myers, and J. Zhu, “Learnability through Adaptive Discovery Tools in Voice User Interfaces,” in Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’17), 2017, pp. 1617–1623.
- C. Myers, A. Furqan, and J. Zhu, “Adaptable Utterances in Voice User Interfaces to Increase Learnability,” in 6th Workshop on Interacting with Smart Objects (SmartObjects), 2018, no. 2082, pp. 44–49.