Publication Date

Spring 2021


Deaf and hearing-impaired persons learn American Sign Language (ASL) as their natural language. There is a need for a new, innovative technology that enables deaf and hearing-impaired persons to communicate easily, anytime and anywhere, with persons who do not know ASL. The proposed research project introduces a novel approach to the problem of automatic real-time conversion from ASL to speech using motion sensors, machine learning, and mobile technology. The goal of this project is to design a smart system that captures and analyzes hand movements and gestures using different types of sensors and machine learning algorithms. The new system will work adaptively, learning new signs and expanding and improving the sign-language dictionary. It will have a wide range of applications in healthcare, education, gamification, entertainment, and other areas. An optical hand-tracking module such as the Leap Motion Controller is used to capture and track hand movements. These movements are analyzed with several supervised machine learning algorithms to build predictive models that recognize different ASL gestures, with a focus on a set of words.
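The pipeline described above (sensor frames → feature vectors → supervised classifier → recognized sign) can be sketched as follows. This is a minimal illustration, not the project's implementation: the feature vectors are synthetic stand-ins for Leap Motion hand-tracking data, the two gesture labels and their feature centers are made up, and a simple nearest-centroid rule stands in for whichever supervised models (e.g. SVM or k-NN) the project would actually train.

```python
import math
import random

random.seed(0)

def make_samples(center, n=20, noise=0.05):
    """Generate noisy feature vectors around a class 'center'.

    In the real system, each vector would come from the optical
    tracker (e.g. fingertip positions per frame); here it is synthetic.
    """
    return [[c + random.uniform(-noise, noise) for c in center]
            for _ in range(n)]

# Two illustrative gesture classes with hypothetical feature centers.
TRAINING = {
    "HELLO": make_samples([0.1, 0.9, 0.4, 0.2]),
    "THANKS": make_samples([0.8, 0.2, 0.6, 0.7]),
}

def centroid(vectors):
    """Mean feature vector of a class ('training' the model)."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

CENTROIDS = {label: centroid(vs) for label, vs in TRAINING.items()}

def classify(features):
    """Predict the sign whose centroid is nearest to the input frame."""
    return min(CENTROIDS, key=lambda label: math.dist(features, CENTROIDS[label]))

# A frame close to the HELLO centroid is labeled HELLO.
print(classify([0.12, 0.88, 0.41, 0.19]))
```

The adaptive behavior the abstract mentions (learning new signs and growing the dictionary) corresponds here to adding a new label with its samples to `TRAINING` and recomputing its centroid, without retraining the other classes.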