
Sign2Speech

tanquist, 17 Aug 2013
A multi-touch gesture language adapted from American Sign Language (ASL) that enables ASL users to communicate with hearing people.

Please note

This article is an entry in our AppInnovation Contest. Articles in this sub-section are not required to be full articles, so care should be taken when voting.

Introduction

Sign2Speech is a proposed multi-touch gesture app that enables people who use American Sign Language (ASL) to communicate with hearing people who do not know ASL. The gestures will be analogous to ASL signs, much as the Palm OS Graffiti gestures are analogous to the letters and symbols on a keyboard. One side of the screen will show a head-and-torso image so that signs involving body locations can be gestured naturally. Because the gestures mirror ASL signs, signers should find them easy to learn. The words and phrases the gestures represent will be spoken aloud by a text-to-speech (TTS) engine. For the other direction of the conversation, speech recognition will convert the hearing person's words into on-screen text or into signs performed by a signing avatar (such as the one developed by Signing Science).
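To make the recognition idea concrete, here is a minimal sketch of how a single-finger touch path might be matched against stored gesture templates using a nearest-neighbour comparison, in the spirit of Unistrokes- and Graffiti-style recognizers. It is written in Python for brevity; every name in it (resample, normalize, recognize, the toy templates) is a hypothetical illustration rather than part of any existing Sign2Speech code, and a real multi-touch recognizer would of course have to handle several simultaneous contact points.

# Minimal sketch: match a touch path against stored gesture templates.
# Hypothetical helper names; not part of any existing Sign2Speech code.
import math

def resample(path, n=32):
    """Resample a touch path (>= 2 points) to n points evenly spaced along its length."""
    cum = [0.0]
    for a, b in zip(path, path[1:]):
        cum.append(cum[-1] + math.dist(a, b))
    total = cum[-1] or 1e-9
    out, j = [], 0
    for k in range(n):
        target = total * k / (n - 1)
        while j < len(cum) - 2 and cum[j + 1] < target:
            j += 1
        seg = (cum[j + 1] - cum[j]) or 1e-9
        t = (target - cum[j]) / seg
        out.append((path[j][0] + t * (path[j + 1][0] - path[j][0]),
                    path[j][1] + t * (path[j + 1][1] - path[j][1])))
    return out

def normalize(path):
    """Translate to the centroid and scale to a unit bounding box."""
    xs, ys = zip(*path)
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1e-9
    return [((x - cx) / scale, (y - cy) / scale) for x, y in path]

def recognize(path, templates, top=3):
    """Return the closest candidate words, best match first."""
    probe = normalize(resample(path))
    scored = sorted(
        (sum(math.dist(p, q) for p, q in zip(probe, normalize(resample(tpl)))) / len(probe), word)
        for word, tpl in templates.items())
    return [word for _, word in scored[:top]]

if __name__ == "__main__":
    # Two toy single-finger templates: a horizontal line and an "L" shape.
    templates = {
        "hello": [(0, 0), (10, 0)],
        "thanks": [(0, 0), (0, 10), (10, 10)],
    }
    stroke = [(1, 1), (9, 1)]               # roughly horizontal swipe
    print(recognize(stroke, templates))     # ['hello', 'thanks']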

Background  

Anne Sullivan and Helen Keller pioneered tactile sign language interpretation in the late 19th century (see the memorable scene in The Miracle Worker (1962) in which Anne Bancroft spells W-A-T-E-R into Patty Duke's hand). Xerox Corporation pioneered touchscreen gestures with Unistrokes (U.S. Patent 5,596,656, granted in 1997), a simplified handwriting recognition application. Palm, Inc. expanded and popularized the idea with its Graffiti application for Palm OS. Advances in computing power and touchscreen design then enabled the recognition of more sophisticated single-touch and, eventually, multi-touch gestures, although interpreting multi-touch gestures requires considerably more complex algorithms. Computer languages have been developed to make multi-touch gestures easier to use in applications; one such language, Gesture Markup Language (GML), was developed by GestureWorks.

More sophisticated video-based sign language recognition systems are also being developed: Microsoft Research Asia has built a sign language recognition application for the Kinect system, and the European Union has funded similar technology through its SignSpeak project.


Using the code   

A dictionary of words and phrases and their corresponding gestures will be included to help the user learn the gesture language. After executing a gesture or group of gestures, the user will be shown a list of candidate words or phrases to choose from, and the selected translation will appear as text on the screen. The user can then tap a Speak button when ready; if recognition proves accurate enough, a real-time mode may be added. A New Gesture feature will let the user define a word or phrase that is not yet in the dictionary and, optionally, upload it as a candidate for inclusion in the next version of the Sign2Speech dictionary. Machine-learning techniques like those used in speech recognition will improve recognition accuracy for each individual user. In the other direction, the app will detect spoken words and display them as text or as signs performed by a signing avatar.
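Continuing the earlier sketch, the snippet below shows how the select-then-speak workflow and the simple per-user adaptation described above might fit together. The GestureSession class, its event-handler names, and the pick-count re-ranking are assumptions made purely for illustration; it reuses the hypothetical recognize() helper from the Introduction.

# Sketch of the proposed select-then-speak workflow; builds on the
# hypothetical recognize() helper above. Names and the pick-count
# re-ranking scheme are illustrative assumptions only.
class GestureSession:
    def __init__(self, templates):
        self.templates = templates     # word -> gesture template
        self.pick_counts = {}          # per-user learning: how often each word was chosen
        self.pending_text = []         # translated gestures waiting on the Speak button

    def on_gesture(self, path):
        """Return candidate words, re-ranked by this user's past choices."""
        candidates = recognize(path, self.templates, top=5)
        # sorted() is stable, so ties keep the recognizer's own ordering
        return sorted(candidates, key=lambda w: -self.pick_counts.get(w, 0))

    def on_candidate_chosen(self, word):
        """Record the choice (a crude form of learning) and add the word to the screen text."""
        self.pick_counts[word] = self.pick_counts.get(word, 0) + 1
        self.pending_text.append(word)

    def on_speak_button(self, tts):
        """Send the accumulated phrase to a text-to-speech callable and clear it."""
        tts(" ".join(self.pending_text))
        self.pending_text.clear()

    def on_new_gesture(self, word, path):
        """The proposed New Gesture feature: add a user-defined dictionary entry."""
        self.templates[word] = path

A session might then be driven by UI events roughly as session = GestureSession(templates); picks = session.on_gesture(stroke); session.on_candidate_chosen(picks[0]); session.on_speak_button(print), with print standing in for whatever TTS engine the finished app would call.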

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

About the Author

tanquist
Software Developer Minnesota Department of Transportation
United States
I develop computer applications that facilitate the design, construction and analysis of pavement structures.
