Sign Language Recognition using Depth Data and CNN

International Journal of Computer Science and Engineering
© 2019 by SSRG - IJCSE Journal
Volume 6 Issue 1
Year of Publication: 2019
Authors: Lakshman Karthik Ramkumar, Sudharsana Premchand, Gokul Karthi Vijayakumar

How to Cite?

Lakshman Karthik Ramkumar, Sudharsana Premchand, Gokul Karthi Vijayakumar, "Sign Language Recognition using Depth Data and CNN," SSRG International Journal of Computer Science and Engineering, vol. 6, no. 1, pp. 9-14, 2019. Crossref, https://doi.org/10.14445/23488387/IJCSE-V6I1P102

Abstract:

Sign Language Recognition is a project primarily focused on recognizing gestures (used by deaf and mute people) and processing them into sentences, thereby making communication between such people and the general public easier. The project uses Intel RealSense technology (the Intel RealSense Robotic Development Kit), which captures depth images of the gestures made by the user. Depth images are useful for making highly precise predictions about what the user is communicating. Early works used different kinds of image convolutions to form feature vectors from a single RGB image of a hand. Some authors used wavelet families, computed on edge images, as features to train a neural network to classify 24 signs. Haar-like features, computed on grey-scale images and on silhouettes, were used to classify 10 hand shapes. Principal Component Analysis (PCA) was applied directly to images to derive a subspace of hand poses, which was then used to classify them. A modification of HOG descriptors was employed to recognize static signs of British Sign Language, and SIFT-feature-based description was used to recognize signs of ASL. All of these methods depend heavily on lighting conditions, the subject's appearance, and the background. This project instead uses a Convolutional Neural Network to classify gestures from depth data and its features for greater accuracy.
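
As a rough illustration of the classification stage described above, the sketch below builds a small depth-image CNN in Python with Keras. The framework, the input size (64x64 single-channel depth maps), the layer widths, and the 24-class output (one per static ASL letter, matching the wavelet work cited above) are all illustrative assumptions, not the authors' published architecture.

```python
# A minimal sketch, not the authors' method: a small CNN that classifies
# single-channel depth images of static signs. All sizes are assumptions.
import numpy as np
from tensorflow.keras import layers, models

NUM_CLASSES = 24           # assumed: static ASL letters (J and Z excluded)
INPUT_SHAPE = (64, 64, 1)  # assumed: depth maps resized to 64x64, 1 channel

model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

def preprocess(depth_map, max_depth_mm=1000.0):
    """Clip a raw depth map (millimetres from the sensor) to the near
    range where the hand sits and scale it to [0, 1]. Depth values are
    metric distances rather than pixel intensities, which is why they
    are unaffected by the lighting conditions the abstract mentions."""
    clipped = np.clip(depth_map.astype(np.float32), 0.0, max_depth_mm)
    return (clipped / max_depth_mm)[..., np.newaxis]
```

In use, each depth frame from the RealSense camera would be cropped to the hand region, passed through preprocess, and fed to model.predict; training against labelled depth frames would use model.fit in the usual Keras way.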

Keywords:

ASL, Recognition, Depth Data, RealSense, Gesture Recognition.

References:

[1] Lopez, S. Rio, J.M. Benitez and F. Herrera, "Real-time sign language recognition using a consumer depth camera", IEEE International Conference on Computer Vision Workshops, 2013.
[2] Ankit Chaudhary, J.L. Raheja, "Light invariant real-time robust hand gesture recognition", IEEE, 2016.
[3] Fabio M. Caputo, Pietro Prebianca, Andrea Carcangiu, Lucio D. Spano, Andrea Giachetti, "Comparing 3D trajectories for simple mid-air gesture recognition", ScienceDirect, December 2017.
[4] Aashni Haria, Archanasri Subramanian, Nivedhitha Asokkumar, Shristi Poddar, Jyothi S Nayak, "Hand Gesture Recognition for Human Computer Interaction", 7th International Conference on Advances in Computing & Communications (ICACC-2017), Cochin, India, 2017.
[5] Chen, Stanford University, "Sign Language Recognition with Unsupervised Feature Learning", IEEE, 2016.
[6] Anup Kumar, Karun Thankachan and Mevin M. Dominic, "Sign Language Recognition", IEEE, 2016.
[7] Brandon Garcia, Sigberto Alarcon Viesca, "Real-time American Sign Language Recognition with Convolutional Neural Networks", IEEE, 2016.
[8] Lionel Pigou, Sander Dieleman, Pieter-Jan Kindermans, Benjamin Schrauwen, "Sign Language Recognition using Convolutional Neural Networks", IEEE, 2015.
[9] Purva C. Badhe, Vaishali Kulkarni, "Indian Sign Language Translator Using Gesture Recognition Algorithm", IEEE International Conference on Computer Graphics, Vision and Information Security (CGVIS), 2015.
[10] Nagendraswamy H S, Chethana Kumara B M, Lekha Chinmayi R, "Indian Sign Language Recognition: An Approach Based on Fuzzy-Symbolic Data", IEEE.
[11] Guillaume Plouffe and Ana-Maria Cretu, "Static and Dynamic Hand Gesture Recognition in Depth Data Using Dynamic Time Warping", International Conference on Advances in Computing, Communications and Informatics, 2016.
[12] Pichao Wang, Wanqing Li, Song Liu, Zhimin Gao, Chang Tang, Philip Ogunbona, "Large-scale Isolated Gesture Recognition Using Convolutional Neural Networks", 23rd International Conference on Pattern Recognition (ICPR), IEEE, 2016.
[13] Chenyang Zhang, Yingli Tian, Matt Huenerfauth, "Multi-Modality American Sign Language Recognition", IEEE International Conference on Image Processing (ICIP), 2016.
[14] Lihong Zheng, Bin Liang, "Sign Language Recognition using Depth Images", 14th International Conference on Control, Automation, Robotics and Vision (ICARCV), 2016.
[15] M. Mahadeva Prasad, "Gradient Feature based Static Sign Language Recognition", SSRG IJCSE, vol. 6, no. 12, December 2018.