Design of seamless multi-modal interaction framework for intelligent virtual agents in wearable mixed reality environment

Ghazanfar Ali, Hong Quan Le, Junho Kim, Seungwon Hwang, Jae In Hwang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we present the design of a multimodal interaction framework for intelligent virtual agents in wearable mixed reality environments, especially for interactive applications at museums, botanical gardens, and similar places. These places need engaging and non-repetitive digital content delivery to maximize user involvement, and an intelligent virtual agent is a promising medium for both purposes. The premise of the framework is wearable mixed reality provided by MR devices that support spatial mapping. We envisioned a seamless interaction framework that integrates spatial mapping, virtual character animation, speech recognition, gaze, a domain-specific chatbot, and object recognition to enhance virtual experiences and communication between users and virtual agents. By applying a modular approach and deploying computationally intensive modules on a cloud platform, we achieved a seamless virtual experience on a device with limited resources. Human-like gaze and speech interaction with a virtual agent made it more interactive, and automated mapping of body animations to the content of speech made it more engaging. In our tests, the virtual agents responded within 2-4 seconds of a user query. The strength of the framework is its flexibility and adaptability: it can be adapted to any wearable MR device that supports spatial mapping.
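The abstract describes a modular design in which each modality (speech recognition, chatbot, animation mapping) is a swappable module, with the computationally heavy ones deployed behind a cloud boundary. The following is a minimal hypothetical sketch of that composition pattern, not the authors' implementation; the `Agent` class, module names, and stub functions are all illustrative assumptions.

```python
# Hypothetical sketch of a modular multi-modal agent pipeline.
# Lightweight modules would run on-device; heavy ones (ASR, chatbot)
# could sit behind a cloud call with the same callable interface.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class Agent:
    # Each modality is a pluggable callable, so one module (e.g. a
    # different domain chatbot) can be replaced without touching the rest.
    modules: Dict[str, Callable[[str], str]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self.modules[name] = fn

    def handle_query(self, audio: str) -> str:
        # speech -> response -> animated delivery; each step may be remote
        text = self.modules["speech_to_text"](audio)
        reply = self.modules["chatbot"](text)
        return self.modules["animate"](reply)


agent = Agent()
agent.register("speech_to_text", lambda audio: audio.lower())   # stub ASR
agent.register("chatbot", lambda q: f"Answer to: {q}")          # stub domain chatbot
agent.register("animate", lambda r: f"[gesture mapped] {r}")    # stub animation mapping

print(agent.handle_query("WHAT IS THIS EXHIBIT"))
```

Swapping a stub for a network call changes nothing downstream, which is the property the paper attributes to its modular, cloud-offloaded design.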

Original language: English
Title of host publication: Proceedings of the 32nd International Conference on Computer Animation and Social Agents, CASA 2019
Publisher: Association for Computing Machinery
Pages: 47-52
Number of pages: 6
ISBN (Electronic): 9781450371599
DOIs: 10.1145/3328756.3328758
Publication status: Published - 2019 Jul 1
Event: 32nd International Conference on Computer Animation and Social Agents, CASA 2019 - Paris, France
Duration: 2019 Jul 1 - 2019 Jul 3

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 32nd International Conference on Computer Animation and Social Agents, CASA 2019
Country: France
City: Paris
Period: 19/7/1 - 19/7/3

Fingerprint

  • Intelligent virtual agents
  • Animation
  • Museums
  • Object recognition
  • Speech recognition
  • Communication

All Science Journal Classification (ASJC) codes

  • Software
  • Human-Computer Interaction
  • Computer Vision and Pattern Recognition
  • Computer Networks and Communications

Cite this

Ali, G., Le, H. Q., Kim, J., Hwang, S., & Hwang, J. I. (2019). Design of seamless multi-modal interaction framework for intelligent virtual agents in wearable mixed reality environment. In Proceedings of the 32nd International Conference on Computer Animation and Social Agents, CASA 2019 (pp. 47-52). (ACM International Conference Proceeding Series). Association for Computing Machinery. https://doi.org/10.1145/3328756.3328758
Ali, Ghazanfar ; Le, Hong Quan ; Kim, Junho ; Hwang, Seungwon ; Hwang, Jae In. / Design of seamless multi-modal interaction framework for intelligent virtual agents in wearable mixed reality environment. Proceedings of the 32nd International Conference on Computer Animation and Social Agents, CASA 2019. Association for Computing Machinery, 2019. pp. 47-52 (ACM International Conference Proceeding Series).
@inproceedings{6c9fe77ec67940d696ea48c99631a164,
title = "Design of seamless multi-modal interaction framework for intelligent virtual agents in wearable mixed reality environment",
abstract = "In this paper, we present the design of a multimodal interaction framework for intelligent virtual agents in wearable mixed reality environments, especially for interactive applications at museums, botanical gardens, and similar places. These places need engaging and non-repetitive digital content delivery to maximize user involvement, and an intelligent virtual agent is a promising medium for both purposes. The premise of the framework is wearable mixed reality provided by MR devices that support spatial mapping. We envisioned a seamless interaction framework that integrates spatial mapping, virtual character animation, speech recognition, gaze, a domain-specific chatbot, and object recognition to enhance virtual experiences and communication between users and virtual agents. By applying a modular approach and deploying computationally intensive modules on a cloud platform, we achieved a seamless virtual experience on a device with limited resources. Human-like gaze and speech interaction with a virtual agent made it more interactive, and automated mapping of body animations to the content of speech made it more engaging. In our tests, the virtual agents responded within 2-4 seconds of a user query. The strength of the framework is its flexibility and adaptability: it can be adapted to any wearable MR device that supports spatial mapping.",
author = "Ghazanfar Ali and Le, {Hong Quan} and Junho Kim and Seungwon Hwang and Hwang, {Jae In}",
year = "2019",
month = "7",
day = "1",
doi = "10.1145/3328756.3328758",
language = "English",
series = "ACM International Conference Proceeding Series",
publisher = "Association for Computing Machinery",
pages = "47--52",
booktitle = "Proceedings of the 32nd International Conference on Computer Animation and Social Agents, CASA 2019",

}

Ali, G, Le, HQ, Kim, J, Hwang, S & Hwang, JI 2019, Design of seamless multi-modal interaction framework for intelligent virtual agents in wearable mixed reality environment. in Proceedings of the 32nd International Conference on Computer Animation and Social Agents, CASA 2019. ACM International Conference Proceeding Series, Association for Computing Machinery, pp. 47-52, 32nd International Conference on Computer Animation and Social Agents, CASA 2019, Paris, France, 19/7/1. https://doi.org/10.1145/3328756.3328758

Design of seamless multi-modal interaction framework for intelligent virtual agents in wearable mixed reality environment. / Ali, Ghazanfar; Le, Hong Quan; Kim, Junho; Hwang, Seungwon; Hwang, Jae In.

Proceedings of the 32nd International Conference on Computer Animation and Social Agents, CASA 2019. Association for Computing Machinery, 2019. p. 47-52 (ACM International Conference Proceeding Series).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - Design of seamless multi-modal interaction framework for intelligent virtual agents in wearable mixed reality environment

AU - Ali, Ghazanfar

AU - Le, Hong Quan

AU - Kim, Junho

AU - Hwang, Seungwon

AU - Hwang, Jae In

PY - 2019/7/1

Y1 - 2019/7/1

N2 - In this paper, we present the design of a multimodal interaction framework for intelligent virtual agents in wearable mixed reality environments, especially for interactive applications at museums, botanical gardens, and similar places. These places need engaging and non-repetitive digital content delivery to maximize user involvement, and an intelligent virtual agent is a promising medium for both purposes. The premise of the framework is wearable mixed reality provided by MR devices that support spatial mapping. We envisioned a seamless interaction framework that integrates spatial mapping, virtual character animation, speech recognition, gaze, a domain-specific chatbot, and object recognition to enhance virtual experiences and communication between users and virtual agents. By applying a modular approach and deploying computationally intensive modules on a cloud platform, we achieved a seamless virtual experience on a device with limited resources. Human-like gaze and speech interaction with a virtual agent made it more interactive, and automated mapping of body animations to the content of speech made it more engaging. In our tests, the virtual agents responded within 2-4 seconds of a user query. The strength of the framework is its flexibility and adaptability: it can be adapted to any wearable MR device that supports spatial mapping.

AB - In this paper, we present the design of a multimodal interaction framework for intelligent virtual agents in wearable mixed reality environments, especially for interactive applications at museums, botanical gardens, and similar places. These places need engaging and non-repetitive digital content delivery to maximize user involvement, and an intelligent virtual agent is a promising medium for both purposes. The premise of the framework is wearable mixed reality provided by MR devices that support spatial mapping. We envisioned a seamless interaction framework that integrates spatial mapping, virtual character animation, speech recognition, gaze, a domain-specific chatbot, and object recognition to enhance virtual experiences and communication between users and virtual agents. By applying a modular approach and deploying computationally intensive modules on a cloud platform, we achieved a seamless virtual experience on a device with limited resources. Human-like gaze and speech interaction with a virtual agent made it more interactive, and automated mapping of body animations to the content of speech made it more engaging. In our tests, the virtual agents responded within 2-4 seconds of a user query. The strength of the framework is its flexibility and adaptability: it can be adapted to any wearable MR device that supports spatial mapping.

UR - http://www.scopus.com/inward/record.url?scp=85069173185&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85069173185&partnerID=8YFLogxK

U2 - 10.1145/3328756.3328758

DO - 10.1145/3328756.3328758

M3 - Conference contribution

AN - SCOPUS:85069173185

T3 - ACM International Conference Proceeding Series

SP - 47

EP - 52

BT - Proceedings of the 32nd International Conference on Computer Animation and Social Agents, CASA 2019

PB - Association for Computing Machinery

ER -

Ali G, Le HQ, Kim J, Hwang S, Hwang JI. Design of seamless multi-modal interaction framework for intelligent virtual agents in wearable mixed reality environment. In Proceedings of the 32nd International Conference on Computer Animation and Social Agents, CASA 2019. Association for Computing Machinery. 2019. p. 47-52. (ACM International Conference Proceeding Series). https://doi.org/10.1145/3328756.3328758