Observation of Human Response to a Robotic Guide Using a Variational Autoencoder

Hee Seung Moon, Jiwon Seo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

This paper proposes a robotic-guide system equipped with a haptic device that can deliver kinesthetic feedback to, and receive kinesthetic reactions from, a follower. In addition, a feature-extraction method based on a variational autoencoder (VAE) model is presented for depth images of a user following the robotic guide. One of the major roles of a sensory assistive robot is to help visually impaired people walk through unknown spaces while avoiding obstacles. Haptic sensory information can serve as a directional cue that helps these people recognize the correct direction. We focus on how people react to haptic guidance from the assistive robot, because accurately predicting the human response enables the robot to take a more active role without interfering with the human's movement. In an indoor experiment, we observed the reaction of a user following our robotic guide in terms of the kinesthetic force the user received and the depth images taken from the robot. Using the VAE model, the latent variables effectively represented features of the depth image, e.g., coarse position information of the user's torso. Furthermore, we tracked the precise trajectories of both the user and the robotic guide using a motion-capture system.
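The core idea of the feature-extraction step, as described in the abstract, is that a VAE encoder compresses a depth image into a low-dimensional latent variable that summarizes the user's pose. The sketch below illustrates only that encoding step with the standard reparameterization trick; the layer sizes, random weights, and two-dimensional latent space are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch of the VAE encoding step: a depth image is mapped to a
# latent mean and log-variance, from which a latent sample z is drawn.
# Weights here are random stand-ins (a real encoder would be trained by
# maximizing the evidence lower bound); sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def encode(depth_image, latent_dim=2):
    """Flatten a depth frame, apply a (hypothetical) linear encoder,
    and sample z via the reparameterization trick: z = mu + sigma * eps."""
    x = depth_image.ravel().astype(float)
    x = x / (x.max() + 1e-8)  # normalize depth values to [0, 1]
    # Hypothetical single linear layer producing mu and log-variance.
    w_mu = rng.standard_normal((latent_dim, x.size)) / np.sqrt(x.size)
    w_logvar = rng.standard_normal((latent_dim, x.size)) / np.sqrt(x.size)
    mu, logvar = w_mu @ x, w_logvar @ x
    eps = rng.standard_normal(latent_dim)
    z = mu + np.exp(0.5 * logvar) * eps  # z ~ N(mu, sigma^2)
    return z, mu, logvar

# A synthetic 64x64 depth frame standing in for the robot's camera image
# (depth values in meters).
frame = rng.uniform(0.5, 4.0, size=(64, 64))
z, mu, logvar = encode(frame)
print(z.shape)  # (2,) -- a compact feature, e.g., coarse torso position
```

In the trained model, each dimension of `z` would capture a salient factor of the depth image, which is how the latent variable can encode the follower's approximate torso position.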

Original language: English
Title of host publication: Proceedings - 3rd IEEE International Conference on Robotic Computing, IRC 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 258-261
Number of pages: 4
ISBN (Electronic): 9781538692455
DOIs: 10.1109/IRC.2019.00048
Publication status: Published - 2019 Mar 26
Event: 3rd IEEE International Conference on Robotic Computing, IRC 2019 - Naples, Italy
Duration: 2019 Feb 25 - 2019 Feb 27

Publication series

Name: Proceedings - 3rd IEEE International Conference on Robotic Computing, IRC 2019

Conference

Conference: 3rd IEEE International Conference on Robotic Computing, IRC 2019
Country: Italy
City: Naples
Period: 19/2/25 - 19/2/27


All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Control and Optimization

Cite this

Moon, H. S., & Seo, J. (2019). Observation of Human Response to a Robotic Guide Using a Variational Autoencoder. In Proceedings - 3rd IEEE International Conference on Robotic Computing, IRC 2019 (pp. 258-261). [8675594] (Proceedings - 3rd IEEE International Conference on Robotic Computing, IRC 2019). Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/IRC.2019.00048