CUI Lahore Repository

Activity Recognition for Assisted Living based on Multimodal Features using Deep Learning


dc.contributor.author Yaseen, Hafsa
dc.date.accessioned 2022-08-22T07:30:39Z
dc.date.available 2022-08-22T07:30:39Z
dc.date.issued 2022-08-22
dc.identifier.uri http://repository.cuilahore.edu.pk/xmlui/handle/123456789/3434
dc.description.abstract Human activity recognition (HAR) is a prominent field in computer vision and signal processing that analyzes information obtained from numerous sensors, including vision sensors and wearable sensors. The purpose of HAR is to recognize actions from a sequence of observations of individuals' activities and environmental events. It supports a broad variety of applications, including ambient assisted living, robotics, intelligent surveillance, human-computer interaction, smart homes, transportation, and smart healthcare. Assisted living refers to technological services that help impaired people and senior citizens lead independent lives; HAR, which facilitates proactive gestures and interactions with the surroundings, has therefore become a significant prerequisite for assisted living applications. Tremendous efforts have been made to reliably capture human action and behavior from single-modality data, but the combined analysis of multimodal data has received less attention. Different modalities usually contain complementary information that should be combined for better learning of action recognition in ambient assisted living. In this research, a novel framework called "Activity Recognition for Assisted Living based on Multimodal Features using Deep Learning" is proposed to leverage intra-modality discriminative features as well as inter-modality connections in visual and inertial data using deep neural networks. Two separate unimodal models, one visual and one inertial, are proposed to learn action recognition classifiers for these modalities effectively. These models automatically acquire high-quality, discriminative action-related visual and inertial features. Finally, the heterogeneous models are combined into an end-to-end approach via decision-level fusion. Comprehensive experiments are conducted on the publicly accessible benchmark C-MHAD dataset. The outcomes show that the proposed methodology surpasses existing methods in action recognition by a significant margin, achieving an F1-score of 89%. en_US
dc.publisher Department of Computer Sciences, COMSATS University Lahore. en_US
dc.relation.ispartofseries FA19-RCS-022;7599
dc.subject Assisted Living; Multimodal Features; Deep Learning en_US
dc.title Activity Recognition for Assisted Living based on Multimodal Features using Deep Learning en_US
dc.type Thesis en_US
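
The abstract above describes combining separate visual and inertial classifiers via decision-level fusion. As a minimal illustrative sketch, and not the implementation from the thesis, the Python snippet below averages the per-class probability outputs of two hypothetical unimodal models and takes the argmax per sample; the names fuse_decisions, visual_probs, and inertial_probs, along with the equal 0.5/0.5 weighting, are assumptions made for illustration only.

import numpy as np

# Hypothetical decision-level fusion of two unimodal classifiers.
# visual_probs and inertial_probs stand in for the per-class softmax
# outputs of the visual and inertial models; their names, shapes, and
# the default equal weighting are illustrative assumptions, not details
# taken from the thesis.

def fuse_decisions(visual_probs: np.ndarray,
                   inertial_probs: np.ndarray,
                   w_visual: float = 0.5) -> np.ndarray:
    """Weighted average of per-class probabilities, then argmax."""
    fused = w_visual * visual_probs + (1.0 - w_visual) * inertial_probs
    return fused.argmax(axis=1)  # predicted class index per sample

# Toy example: 3 samples, 4 activity classes, rows sum to 1.
rng = np.random.default_rng(0)
visual = rng.dirichlet(np.ones(4), size=3)
inertial = rng.dirichlet(np.ones(4), size=3)
print(fuse_decisions(visual, inertial))

In practice the weighting between modalities would be tuned on validation data, and metrics such as the reported F1-score would be computed from the fused predictions against ground-truth labels.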



This item appears in the following Collection(s)

  • Thesis - MS / PhD
    This collection contains the MS/PhD theses of the students of the Department of Computer Science
