PROJECT TEAM MEMBERS

Tanya Sheckley

Wes Alvaro
PROJECT: Quick Speak


About the project:
Quick Speak is an app designed to work across a suite of devices, enabling a person who is non-verbal and has limited fine motor skills to talk in real time and to use smart technology.

The Problem:
Imagine your friends are all texting, but you don't know what they are saying. Imagine a friend asks you how your day was, and you don't have a way to express yourself. Current AAC (augmentative and alternative communication) devices require searching through multiple screens and pressing multiple choices to speak even simple sentences. This process is difficult to learn and time-consuming to use. What if we could create an app that is intuitive to your situation, listens to the conversation, and allows the user to speak in entire sentences with the push of a button (or the tap of a watch, or the nod of a head)? Additionally, current devices do not offer the ability to connect to smart technology, so users cannot make phone calls, text, use apps, or take advantage of other smartphone features.

The Solution:
We've created a predictive conversation engine that learns and becomes personalized as it is used. The app uses a microphone to listen to the conversation and presents the user with appropriate responses. The user then selects one choice, and the answer is spoken through a Bluetooth speaker. The app can be accessed on a phone, a tablet, or Google Glass. Choices can be customized, and the engine learns which responses you say most often, becoming easier to use and more personalized over time: it becomes your individual voice. Because the app runs on smart technology, it also provides access to texting, phone calls, the internet, and other apps, allowing the user to be included in everyday conversations and to keep up on social media and messaging with their social peer group.
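To make the personalization idea concrete, here is a minimal sketch in Python of how a response engine could rank candidate replies by how often the user has chosen them before, so frequent replies float to the top of the screen. All names here (ResponseRanker, suggest, record_choice) are illustrative assumptions, not the actual Quick Speak code.

```python
from collections import Counter

class ResponseRanker:
    """Ranks candidate responses by the user's past selections."""

    def __init__(self, candidates):
        self.candidates = list(candidates)
        self.usage = Counter()  # per-user selection history

    def suggest(self, top_n=3):
        # Most-used responses first; unused ones keep their original order.
        return sorted(self.candidates, key=lambda c: -self.usage[c])[:top_n]

    def record_choice(self, response):
        # Called when the user taps a response; the host device would
        # then speak the text through a Bluetooth speaker.
        self.usage[response] += 1

ranker = ResponseRanker(["I'm good", "Not great", "Tell me more"])
ranker.record_choice("Tell me more")
ranker.record_choice("Tell me more")
print(ranker.suggest(top_n=2))  # "Tell me more" now ranks first
```

In a full app, the candidate list would itself come from listening to the conversation; this sketch only shows the learning step that makes the interface faster with use.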




DISCUSSION