Confidence-based Deep Multimodal Fusion for Activity Recognition

Jun Ho Choi, Jong-Seok Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Human activity recognition using multimodal sensors has been widely studied in recent years. In this paper, we propose an end-to-end deep learning model for activity recognition that fuses the features of multiple modalities based on automatically determined confidence scores. The confidence scores effectively regulate the contribution of each sensor. We conduct an experiment on the latest activity recognition dataset, and the results confirm that our model outperforms existing methods. We submitted the proposed model to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge [23] under the team name “Yonsei-MCML.”
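The record gives only the abstract, but the confidence-weighted fusion idea it describes can be illustrated concretely. Below is a minimal PyTorch sketch, not the authors' implementation: the class name ConfidenceFusion, the per-modality encoders, the softmax normalization of the learned scores, and all dimensions are assumptions for illustration.

import torch
import torch.nn as nn

class ConfidenceFusion(nn.Module):
    """Fuses per-modality features weighted by learned confidence scores (illustrative sketch)."""
    def __init__(self, feat_dims, fused_dim, num_classes):
        super().__init__()
        # One encoder per sensor modality, projecting into a shared feature space.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, fused_dim), nn.ReLU()) for d in feat_dims
        )
        # A scalar confidence score per modality, learned end-to-end with the rest.
        self.conf_heads = nn.ModuleList(nn.Linear(fused_dim, 1) for _ in feat_dims)
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, inputs):
        # inputs: one (batch, feat_dim_i) tensor per modality.
        feats = [enc(x) for enc, x in zip(self.encoders, inputs)]
        scores = torch.cat([head(f) for head, f in zip(self.conf_heads, feats)], dim=1)
        # Normalizing across modalities turns the scores into fusion weights,
        # so each sensor's contribution is regulated by its confidence.
        weights = torch.softmax(scores, dim=1)  # (batch, num_modalities)
        fused = sum(w.unsqueeze(1) * f
                    for w, f in zip(weights.unbind(dim=1), feats))
        return self.classifier(fused), weights

# Example with three hypothetical sensor streams (e.g., accelerometer,
# gyroscope, magnetometer feature vectors) and eight activity classes:
model = ConfidenceFusion(feat_dims=[64, 64, 32], fused_dim=128, num_classes=8)
logits, conf = model([torch.randn(4, 64), torch.randn(4, 64), torch.randn(4, 32)])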

Original language: English
Title of host publication: UbiComp/ISWC 2018 - Adjunct Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2018 ACM International Symposium on Wearable Computers
Publisher: Association for Computing Machinery, Inc
Pages: 1548-1556
Number of pages: 9
ISBN (Electronic): 9781450359665
DOIs: https://doi.org/10.1145/3267305.3267522
Publication status: Published - 2018 Oct 8
Event: 2018 Joint ACM International Conference on Pervasive and Ubiquitous Computing, UbiComp 2018 and 2018 ACM International Symposium on Wearable Computers, ISWC 2018 - Singapore, Singapore
Duration: 2018 Oct 8 - 2018 Oct 12

Other

Other: 2018 Joint ACM International Conference on Pervasive and Ubiquitous Computing, UbiComp 2018 and 2018 ACM International Symposium on Wearable Computers, ISWC 2018
Country: Singapore
City: Singapore
Period: 18/10/8 - 18/10/12

Fingerprint

  • Fusion reactions
  • Sensors
  • Electric fuses
  • Experiments
  • Deep learning

All Science Journal Classification (ASJC) codes

  • Software
  • Human-Computer Interaction
  • Information Systems

Cite this

Choi, J. H., & Lee, J-S. (2018). Confidence-based Deep Multimodal Fusion for Activity Recognition. In UbiComp/ISWC 2018 - Adjunct Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2018 ACM International Symposium on Wearable Computers (pp. 1548-1556). Association for Computing Machinery, Inc. https://doi.org/10.1145/3267305.3267522
Choi, Jun Ho; Lee, Jong-Seok. / Confidence-based Deep Multimodal Fusion for Activity Recognition. UbiComp/ISWC 2018 - Adjunct Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2018 ACM International Symposium on Wearable Computers. Association for Computing Machinery, Inc, 2018. pp. 1548-1556
@inproceedings{7836f3be4140421082f87c48909eaba3,
title = "Confidence-based Deep Multimodal Fusion for Activity Recognition",
abstract = "Human activity recognition using multimodal sensors is widely studied in recent days. In this paper, we propose an end-to-end deep learning model for activity recognition, which fuses features of multiple modalities based on their confidence scores that are automatically determined. The confidence scores efficiently regulate the level of contribution of each sensor. We conduct an experiment on the latest activity recognition dataset. The results confirm that our model outperforms existing methods. We submit the proposed model to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge [23] with the team name “Yonsei-MCML.”",
author = "Choi, {Jun Ho} and Jong-Seok Lee",
year = "2018",
month = "10",
day = "8",
doi = "10.1145/3267305.3267522",
language = "English",
pages = "1548--1556",
booktitle = "UbiComp/ISWC 2018 - Adjunct Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2018 ACM International Symposium on Wearable Computers",
publisher = "Association for Computing Machinery, Inc",

}

Choi, JH & Lee, J-S 2018, Confidence-based Deep Multimodal Fusion for Activity Recognition. in UbiComp/ISWC 2018 - Adjunct Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2018 ACM International Symposium on Wearable Computers. Association for Computing Machinery, Inc, pp. 1548-1556, 2018 Joint ACM International Conference on Pervasive and Ubiquitous Computing, UbiComp 2018 and 2018 ACM International Symposium on Wearable Computers, ISWC 2018, Singapore, Singapore, 18/10/8. https://doi.org/10.1145/3267305.3267522

Confidence-based Deep Multimodal Fusion for Activity Recognition. / Choi, Jun Ho; Lee, Jong-Seok.

UbiComp/ISWC 2018 - Adjunct Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2018 ACM International Symposium on Wearable Computers. Association for Computing Machinery, Inc, 2018. p. 1548-1556.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - Confidence-based Deep Multimodal Fusion for Activity Recognition

AU - Choi, Jun Ho

AU - Lee, Jong-Seok

PY - 2018/10/8

Y1 - 2018/10/8

N2 - Human activity recognition using multimodal sensors has been widely studied in recent years. In this paper, we propose an end-to-end deep learning model for activity recognition that fuses the features of multiple modalities based on automatically determined confidence scores. The confidence scores effectively regulate the contribution of each sensor. We conduct an experiment on the latest activity recognition dataset, and the results confirm that our model outperforms existing methods. We submitted the proposed model to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge [23] under the team name “Yonsei-MCML.”

AB - Human activity recognition using multimodal sensors has been widely studied in recent years. In this paper, we propose an end-to-end deep learning model for activity recognition that fuses the features of multiple modalities based on automatically determined confidence scores. The confidence scores effectively regulate the contribution of each sensor. We conduct an experiment on the latest activity recognition dataset, and the results confirm that our model outperforms existing methods. We submitted the proposed model to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge [23] under the team name “Yonsei-MCML.”

UR - http://www.scopus.com/inward/record.url?scp=85058271027&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85058271027&partnerID=8YFLogxK

U2 - 10.1145/3267305.3267522

DO - 10.1145/3267305.3267522

M3 - Conference contribution

AN - SCOPUS:85058271027

SP - 1548

EP - 1556

BT - UbiComp/ISWC 2018 - Adjunct Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2018 ACM International Symposium on Wearable Computers

PB - Association for Computing Machinery, Inc

ER -

Choi JH, Lee J-S. Confidence-based Deep Multimodal Fusion for Activity Recognition. In UbiComp/ISWC 2018 - Adjunct Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2018 ACM International Symposium on Wearable Computers. Association for Computing Machinery, Inc. 2018. p. 1548-1556 https://doi.org/10.1145/3267305.3267522