Project Description: 

Our project classifies American Sign Language (ASL) gestures in real time. The goal is to help sign language users communicate with a wider audience, including people who do not understand sign language. We use an Intel RealSense SR305 coded-light depth camera to capture gestures, and we trained a convolutional neural network using a transfer learning approach built on the ResNet-18 model. The trained network classifies captured gestures in real time. Our interface lets a user capture gestures and view the resulting classifications, so that the user can construct words and sentences to communicate with anyone who can read the screen.
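To illustrate the transfer learning approach described above, the following is a minimal sketch of adapting ResNet-18 to gesture classification. It assumes PyTorch/torchvision; the framework, the number of gesture classes, and the frozen-backbone strategy are illustrative assumptions rather than details taken from the project.

```python
# Minimal transfer-learning sketch (assumes PyTorch/torchvision).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 26  # assumption: one class per ASL letter gesture

# Load ResNet-18 pretrained on ImageNet as the transfer-learning backbone.
model = models.resnet18(pretrained=True)

# Freeze the pretrained convolutional layers so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a head sized for the gesture classes.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Classify a single preprocessed frame (placeholder tensor stands in for a camera capture).
model.eval()
with torch.no_grad():
    frame = torch.randn(1, 3, 224, 224)  # hypothetical 224x224 gesture image
    logits = model(frame)
    prediction = logits.argmax(dim=1).item()
```

In a real-time pipeline, the placeholder tensor would be replaced by frames streamed from the depth camera, preprocessed to the input size the backbone expects, with the predicted class displayed in the user interface.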

Project Team Member(s): 
Shane Clancy
Ulises Zaragoza
Jonathan Hull
Nicholas Davies
Zhidong Zhang
Shihao Song
College of Engineering Unit(s): 
Electrical Engineering and Computer Science
Undergraduate Project
YouTube Video Link(s): 
Project Communication Piece(s): 
Industry Sponsor: 
Intel
Project ID: 
EECS14