
Chapter 4. Justin Beaver Stories: A conversational and empathic virtual animal in mixed reality technology

Historias del Castor Justin. Un animal virtual conversacional y empático en tecnología de realidad mixta

Published on Nov 04, 2022

Abstract

This chapter describes a framework for creating a conversational character in a mixed reality empathetic experience. The framework synchronizes the virtual character’s emotional animations with its dialogue text, with the aim of improving the user’s empathetic experience. The dialogue is driven by a Natural Language Processing (NLP) pipeline comprising automatic speech recognition, chatbot, and text-to-speech micro-services. Within this framework, we present a holographic experience called “Justin Beaver Stories” that uses Magic Leap One, HoloLens, or Nreal mixed reality goggles to project the virtual character into the user’s field of vision. This setup can be used to evaluate the impact of bringing a beaver into the user’s environment instead of bringing the user to the beaver’s natural environment. Interaction occurs by humanizing the beaver through human communication abilities, resulting in a conversational virtual beaver. The storyline describes the beaver’s lifestyle and problems, represented in a situation of distress. Positive experiences show the practical usability of the framework in the area of HCI.
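The pipeline the abstract describes can be illustrated with a minimal sketch. All names below (`Reply`, `chatbot_reply`, `select_animation`, and the emotion tags) are hypothetical stand-ins for the chapter's actual micro-services: the key idea shown is that each chatbot reply carries an emotion tag, and that tag is what selects the animation played in sync with the spoken line.

```python
# Minimal sketch of the three-stage pipeline (speech recognition -> chatbot ->
# text-to-speech), assuming hypothetical stand-ins for the real micro-services.
from dataclasses import dataclass


@dataclass
class Reply:
    text: str     # dialogue line the virtual beaver will speak
    emotion: str  # tag used to pick a matching body animation


def chatbot_reply(utterance: str) -> Reply:
    """Hypothetical chatbot stage: maps a recognized utterance to an
    emotion-tagged reply (a real system would call an NLP service)."""
    if "dam" in utterance.lower():
        return Reply("I build dams to keep my home safe.", "happy")
    return Reply("The river is changing, and I am worried.", "distress")


def select_animation(reply: Reply) -> str:
    """Synchronize the character animation with the reply's emotion tag."""
    animations = {"happy": "wave_tail", "distress": "lower_head"}
    return animations.get(reply.emotion, "idle")


reply = chatbot_reply("Tell me about your dam")
print(reply.text)                 # text would be sent to the TTS stage
print(select_animation(reply))    # animation played while the line is spoken
```

In a deployed version of such a framework, each stage would be a separate service (e.g., an ASR model producing the utterance, and a TTS model voicing `reply.text`), with the emotion tag travelling alongside the text so animation and speech stay in step.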

Keywords: Conversational virtual character, mixed reality, emotional expressions, empathy, natural language processing, chatbot, animal appearance

Video

Justin Beaver Stories Demo

