Wearable Sensor-Based Human Activity Recognition with Deep Recurrent Neural Networks

Author(s)
Abdulmajid Murad
Issued Date
2018
Abstract
Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNN) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data; however, the size of the convolutional kernels restricts the range of dependencies that can be captured between data samples. As a result, typical models are not adaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this thesis, we propose the use of deep recurrent neural networks (DRNN) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNN and evaluate their effectiveness on various benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machines (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBN) and CNN.
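For illustration, the bidirectional LSTM-based variant described in the abstract can be sketched in a few lines of PyTorch. This is a minimal sketch under assumed hyperparameters (9 input channels, 64 hidden units, 2 stacked layers, 6 activity classes); these values and the class/variable names are illustrative assumptions, not the thesis's actual implementation.

```python
# Minimal sketch of a bidirectional LSTM-based HAR classifier (PyTorch).
# Hyperparameters (channel, hidden, layer, and class counts) are
# illustrative assumptions, not the values used in the thesis.
import torch
import torch.nn as nn

class BiLSTMHAR(nn.Module):
    def __init__(self, n_channels=9, hidden=64, n_layers=2, n_classes=6):
        super().__init__()
        # Stacked bidirectional LSTM layers over the raw sensor sequence.
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=n_layers,
                            batch_first=True, bidirectional=True)
        # Forward and backward states are concatenated, hence 2 * hidden.
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, channels); variable-length sequences could be
        # handled with nn.utils.rnn.pack_padded_sequence before the LSTM.
        out, _ = self.lstm(x)
        # Classify from the representation at the last time step.
        return self.fc(out[:, -1, :])

model = BiLSTMHAR()
scores = model(torch.randn(8, 128, 9))  # e.g., 128 samples of 9-axis IMU data
print(scores.shape)  # torch.Size([8, 6])
```

Concatenating the forward and backward hidden states lets the classifier use both past and future context at each time step, which is what distinguishes the bidirectional model from the unidirectional one.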
Alternative Title
Wearable Sensor-Based Human Activity Recognition Using Deep Recurrent Neural Networks
Alternative Author(s)
Abdulmajid Murad
Affiliation
Department of Information and Communication Engineering
Department
Department of Information and Communication Engineering, General Graduate School
Advisor
변재영
Awarded Date
2018-08
Table Of Contents
Table of Contents i
List of Figures iii
List of Tables iv
Acronyms v
Abstract vi
Abstract [Korean] viii
Introduction 1
1.1 Motivation 1
1.2 Objectives 2
1.3 Contributions 2
1.4 Thesis Layout 3
Background 4
2.1 Human Activity Recognition System 4
2.2 Recurrent Neural Networks 6
2.2.1 Traditional RNN 6
2.2.2 LSTM-based RNN 7
2.3 Performance Metrics 9
Related Works 11
3.1 Traditional Approaches 11
3.2 Deep Learning Approaches 12
3.2.1 Artificial Neural Networks 12
3.2.2 Deep Belief Networks 12
3.2.3 Stacked Autoencoders 13
3.2.4 Convolutional Neural Networks 13
3.2.5 Hybrid Models 14
3.2.6 Recurrent Neural Networks 14
Proposed HAR System 15
4.1 System Architecture 15
4.2 Unidirectional LSTM-Based DRNN Model 18
4.3 Bidirectional LSTM-Based DRNN Model 19
4.4 Cascaded LSTM-based DRNN Model 21
Experimental Data 23
Experimental Results and Discussion 27
6.1 Training Proposed Models 27
6.2 Performance Results 31
6.3 Discussion 38
Conclusion 40
Degree
Master
Publisher
Graduate School of Chosun University
Citation
Abdulmajid Murad. (2018). Wearable Sensor-Based Human Activity Recognition with Deep Recurrent Neural Networks.
Type
Dissertation
URI
https://oak.chosun.ac.kr/handle/2020.oak/13595
http://chosun.dcollection.net/common/orgView/200000266863
Appears in Collections:
General Graduate School > 3. Theses (Master)
Authorize & License
  • Authorize: Open
  • Embargo: 2018-08-24