Improved CNN Based Sign Language Recognition For Specially Disabled People

Authors

  • Sonali Rangdale, Nagesh Raykar, Prashant Kumbharkar, Santosh Borde

Abstract

Since the majority of people do not understand sign language, a bridge needs to be built so that the community can interact with the deaf. Image processing technology, which is constantly improving for the benefit of people, can be used as a translation tool to build this communication bridge between the community and deaf individuals. The goal of this study is to create a sign language recognition model based on Indian Sign Language. The model is implemented with a modified Convolutional Neural Network (CNN), a deep learning method that can serve as a classification tool thanks to its ability to learn multiple features. The model captures images of palm signs with a web camera, and the system then infers and displays the name of each captured sign. The CNN is used in this model to convert Indian Sign Language to text and voice. In this study, the CNN method was successfully applied to sign language recognition, raising the accuracy to 99.4%.
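The paper itself does not include code; as a rough illustration only, the sketch below shows a small Keras CNN image classifier of the kind the abstract describes. The input size (64x64 grayscale palm-sign images), the class count `NUM_CLASSES`, and the layer choices are assumptions for the example, not the authors' modified architecture.

```python
# Minimal sketch of a CNN sign-language classifier (illustrative only;
# not the authors' modified architecture). Assumes 64x64 grayscale
# palm-sign images and a hypothetical NUM_CLASSES label set.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26          # hypothetical: one class per sign
IMG_SHAPE = (64, 64, 1)   # assumed input size

def build_model():
    model = models.Sequential([
        layers.Input(shape=IMG_SHAPE),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_model()
    model.summary()
    # Training would use a labelled dataset of palm-sign images, e.g.:
    # model.fit(x_train, y_train, epochs=10, validation_split=0.1)
```

In a deployment like the one described, frames captured from the web camera would be preprocessed to the assumed input size, passed through the trained classifier, and the predicted label rendered as text and synthesized speech.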

Published

2024-06-07

Issue

Section

Articles