
College of Engineering Unit(s): 
Electrical Engineering and Computer Science

Team: 
Pablo Moreno, Youngjoo Lee and Kira Jiroux

Project Description: 

Smartphones provide a fast and simple way for people to connect with each other around the world and to access a near-infinite wealth of knowledge. Much of that is possible because of one significant tool: the keyboard. But while the keyboard has made capabilities like global communication possible, it also depends on the fine motor skills of our hands and fingers. That requirement leaves some individuals frustrated by, or unable to use, their keyboards effectively and with ease. There is a solution: by reading the smartphone's motion sensors through Apple's Core Motion framework, it is possible to develop a new way of ‘typing’. By establishing a library of sensor-captured gestures that map to the letters of the English alphabet and to numbers, we can create a way to send messages that does not require fine motor skills.
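
To illustrate the idea of a gesture library, the sketch below shows one possible shape for a mapping from detected gesture sequences to characters. The specific mappings and names are invented for illustration; they are not the project's actual gesture set.

```swift
// Illustrative only: a gesture is represented here as a sequence of detected
// position indices, and the library maps each sequence to a character.
let gestureLibrary: [[Int]: Character] = [
    [0, 4]: "a",
    [0, 5]: "b",
    [1, 4]: "c",
    [4, 4]: " "   // e.g. a "hold center" gesture could map to space
]

func character(for gesture: [Int]) -> Character? {
    gestureLibrary[gesture]
}
```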

The current project, led by Scott Fairbanks, began with no previous material, code base, or existing libraries to work from. There are few alternatives to the traditional smartphone keyboard, such as swiping across the keyboard rather than tapping each letter, or voice-to-text. Neither is a practical solution: for someone without fine motor skills, swiping can produce an unreliable detected path, and voice-to-text raises privacy concerns when used in public. Although there is no existing codebase to pull from, Apple makes it straightforward to access data from the device's sensors through the Core Motion framework. In the current iteration of the iOS Gesture Alphabet, we have built functionality that transcribes motions the user makes along the device's X, Y, and Z axes, specifically using the device-motion (DeviceMotion) data to retrieve roll, pitch, and yaw, i.e., the device's orientation. With nine positions defined by the values detected from the sensors, we have created a mapped gesture set that corresponds to every letter, every digit, and a space.
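
As a rough sketch of this approach, the code below streams device-motion updates with Core Motion, reads roll, pitch, and yaw from the attitude, and buckets them into nine coarse positions. The class name, thresholds, and bucketing scheme are assumptions for illustration, not the project's actual implementation.

```swift
import CoreMotion

final class OrientationReader {
    private let motionManager = CMMotionManager()

    // Start streaming device-motion updates and report a coarse position index.
    func start(onPosition: @escaping (Int) -> Void) {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 30.0
        motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
            guard let attitude = motion?.attitude else { return }
            // Roll, pitch, and yaw (in radians) describe the device's orientation.
            onPosition(Self.bucket(roll: attitude.roll,
                                   pitch: attitude.pitch,
                                   yaw: attitude.yaw))
        }
    }

    // Example bucketing: split pitch and roll into three bands each,
    // yielding 9 coarse positions (the thresholds here are assumed).
    private static func bucket(roll: Double, pitch: Double, yaw: Double) -> Int {
        let band: (Double) -> Int = { value in
            if value < -0.5 { return 0 }
            if value > 0.5 { return 2 }
            return 1
        }
        return band(pitch) * 3 + band(roll)
    }

    func stop() {
        motionManager.stopDeviceMotionUpdates()
    }
}
```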

We expect this project to continue into the 2021-2022 Senior Software Development course, and we have envisioned, and begun work on, the next iteration of the iOS Gesture Alphabet to hand off to the next team. While the present mapped gesture set is functional, we see significant opportunity in transcribing gestures with no real limitations beyond being familiar and recognizable to both the app and the user. That is why we built a “Version 2” of our app, which captures accelerometer data over the period of time the user performs a gesture, and trained a machine learning model that interprets that accelerometer data and maps it to the corresponding letter from a provided dataset. Unfortunately, this model was not integrated into the project. Our goal for next year's team is to start developing from version 1 of the application while learning from our pitfalls in version 2.
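
The sketch below shows one way the “Version 2” capture step could look: record raw accelerometer samples for the duration of a gesture, then hand the buffered window to a classifier. Since the trained model was not integrated into the project, classification is left as a caller-supplied closure here; the class and method names are assumptions for illustration.

```swift
import CoreMotion

final class GestureRecorder {
    private let motionManager = CMMotionManager()
    private var samples: [(x: Double, y: Double, z: Double)] = []

    // Begin buffering accelerometer samples for the current gesture.
    func beginGesture() {
        guard motionManager.isAccelerometerAvailable else { return }
        samples.removeAll()
        motionManager.accelerometerUpdateInterval = 1.0 / 50.0
        motionManager.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
            guard let acceleration = data?.acceleration else { return }
            self?.samples.append((acceleration.x, acceleration.y, acceleration.z))
        }
    }

    // Stop recording and pass the captured window to a classifier supplied by the caller
    // (e.g. a wrapper around the trained model, once it is integrated).
    func endGesture(classify: ([(x: Double, y: Double, z: Double)]) -> Character?) -> Character? {
        motionManager.stopAccelerometerUpdates()
        return classify(samples)
    }
}
```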

Shark Tank/Beta Functionality Video