Abstract:
Human activity recognition (HAR) is a prominent field in computer vision and signal
processing that analyzes the information obtained from numerous sensors, including
vision sensors and wearable sensors. The purpose of HAR is to recognize actions from a
sequence of observations of individuals' activities and environmental events. HAR supports a broad variety of applications, including ambient assisted living, robotics, intelligent surveillance, human-computer interaction, smart homes, transportation, and smart healthcare. Assisted living refers to the technological services that help impaired people and senior citizens lead independent lives; HAR, which enables proactive interaction between individuals and their surroundings, has therefore become a key prerequisite for assisted living applications. To date, tremendous effort has been devoted to reliably capturing human actions and behavior from single-modality data, whereas the combined analysis of multimodal data has received less attention. Different modalities usually contain complementary information that should be combined to learn action recognition more effectively for ambient assisted living. In this research, a novel framework called “Activity Recognition for Assisted Living based on Multimodal Features using Deep Learning” is proposed to leverage intra-modality discriminative features as well as inter-modality relationships in visual and inertial data using deep neural networks. Two separate unimodal models, one visual and one inertial, are proposed to learn effective action recognition classifiers for their respective modalities. These models automatically extract high-quality, discriminative action-related visual and inertial features. Finally, these heterogeneous models are combined into an end-to-end approach via decision-level fusion. Comprehensive experiments are conducted on the publicly available benchmark C-MHAD dataset.
The results show that the proposed methodology surpasses existing action recognition methods by a significant margin, achieving an F1-score of 89%.
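
For illustration only, a minimal sketch of decision-level (late) fusion is given below. The fusion rule shown (weighted averaging of the softmax score vectors from the visual and inertial classifiers) and all names, weights, and values in it are assumptions for exposition, not the exact scheme used in the proposed framework.

```python
# Illustrative sketch of decision-level fusion: average the per-class
# probabilities produced by two unimodal classifiers, then pick the
# highest-scoring activity class. All names and values are hypothetical.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

def decision_level_fusion(visual_logits: np.ndarray,
                          inertial_logits: np.ndarray,
                          visual_weight: float = 0.5) -> int:
    """Fuse class scores from the visual and inertial models and
    return the index of the predicted activity class."""
    p_visual = softmax(visual_logits)
    p_inertial = softmax(inertial_logits)
    fused = visual_weight * p_visual + (1.0 - visual_weight) * p_inertial
    return int(np.argmax(fused))

# Dummy logits for a 7-class activity problem (illustrative values).
visual_logits = np.array([0.2, 1.5, 0.1, 0.0, 0.3, 0.4, 0.2])
inertial_logits = np.array([0.1, 0.9, 0.2, 0.1, 1.1, 0.3, 0.2])
print(decision_level_fusion(visual_logits, inertial_logits))
```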