Abstract:
Physically disabled individuals, such as deaf and mute persons and patients suffering from various disabilities, require an effective communication device to make them independent. Traditionally, flex-sensor-based gloves have been used to identify sign-language gestures. Current work is limited to a single glove for capturing the gesture, and gesture recognition in a 3D environment remains a challenging task. This problem can be addressed using machine learning techniques to separate true gestures from false ones. The aim of this project is therefore to develop a portable, universal communication device that assists patients with disabilities and provides them with a better standard of living.
One goal of this project is to implement and compare the performance of neural networks in identifying the true gesture. The system will use sign language (gestures identified from flex-sensor readings) to let users communicate with people around them. The data will then be analysed in a MATLAB-based machine learning environment to evaluate the performance of the machine learning algorithms. This proposal will also examine the feasibility of implementing machine learning algorithms on Android phones for gesture recognition.
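To illustrate the core idea of separating true gestures from false ones, the following is a minimal sketch (not the project's actual method, and in Python rather than MATLAB for brevity). It assumes hypothetical five-finger flex-sensor readings and uses a simple nearest-centroid rule: a reading is accepted as a known gesture only if it falls within a distance threshold of a learned template, otherwise it is rejected as a false gesture.

```python
import math

# Hypothetical setup: each gesture sample is a vector of five flex-sensor
# readings (one per finger), in arbitrary ADC units. The gesture names and
# values below are invented for illustration only.

def centroid(samples):
    """Component-wise mean of a list of equal-length reading vectors."""
    n = len(samples)
    return [sum(v[i] for v in samples) / n for i in range(len(samples[0]))]

def distance(a, b):
    """Euclidean distance between two reading vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(reading, templates, threshold):
    """Return the label of the nearest gesture template, or None
    (a 'false gesture') if no template is within the threshold."""
    best_label, best_dist = None, float("inf")
    for label, tpl in templates.items():
        d = distance(reading, tpl)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else None

# Learn one template per gesture from a few training samples.
train = {
    "hello": [[800, 810, 205, 210, 790], [805, 800, 200, 215, 795]],
    "yes":   [[300, 305, 800, 810, 295], [295, 310, 805, 800, 300]],
}
templates = {name: centroid(samples) for name, samples in train.items()}
```

A neural network, as proposed above, would replace the centroid-and-threshold rule with a learned decision boundary, but the input (a flex-sensor reading vector) and the output (a gesture label or a rejection) take the same shape.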