Hi,
For our final-year research project we hope to do research on sign-language detection using image processing and analysis. We also have to implement an application to demonstrate the output. Our idea is to capture a sign through a camera, split that action into a sequence of images, and analyze those images to identify the sign. Is this feasible? (This is how we thought to do it.) We need some suggestions from experts. Also, what is the best way to analyze the images, and which DBMS could we use to store them for this purpose? Thank you.

1 solution

I suppose you are not trying to "read" free gesture language as well; that one is currently for humans only.
But even the canonized sign gestures will not be easy to recognize in real-life situations. I suspect this is more an AI problem than an image-processing one. Signs are not static images, they are gestures. Only the letters are (mostly) static, and nobody communicates letter by letter. The gestures are not stand-alone either: they are linked together, and the linking carries meaning of its own.
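To make that point concrete, here is a minimal sketch of representing a sign as a timed sequence of body-part states rather than a single image. All field names and values below are invented for illustration; they are not any real sign-coding standard:

```python
# A sign modeled as a timed sequence of per-frame body-part states.
# Field names and values are invented placeholders for illustration.
from dataclasses import dataclass

@dataclass
class HandState:
    shape: str        # e.g. "fist", "flat", "index_extended"
    palm_facing: str  # e.g. "up", "down", "signer"
    location: str     # coarse position relative to the body

@dataclass
class GestureFrame:
    t_ms: int              # time offset within the sign, in ms
    right_hand: HandState
    left_hand: HandState

# A sign is the whole sequence of frames; a static letter is just
# the degenerate single-frame case.
letter_a = [
    GestureFrame(0,
                 HandState("fist", "signer", "chest"),
                 HandState("flat", "down", "rest")),
]
```

A full sign would contain many such frames, and two performances of the same sign would differ in timing, which is why raw frame-by-frame image comparison breaks down.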
The database engine will be the least interesting thing you will have to deal with.

This is an interesting initiative that uses Kinect to spare most of the image-processing work: http://www.kinecthacks.net/american-sign-language-recognition-using-kinect/
Comments
Dilan Shaminda 7-Oct-12 2:15am    
Hi sir, thanks a lot for the reply. This is exactly what we want to do: as in the video, we want to recognize a deaf person's signs and display the word or phrase they are trying to tell us. Your link is very helpful :) My main problem is how to analyze the action. For example, when a deaf person shows the "spider in box" sign, we want to display that he says "spider in box". Could you please tell me how to do that? Initially I thought to store each sign as a sequence of images, track the person's action with a camera, split it into a set of images, and then compare; now I realize that is not feasible :( So could you please tell us where to start? Thank you.
Zoltán Zörgő 7-Oct-12 6:02am    
Well, this is not a "quick answer" kind of task. Storing images won't help you; you need to find a way to encode the gestures. First, think about the inverse problem: take a body animator (find one) and try to make an avatar move according to a sign. It won't be easy, but you will discover which parts of the body move, and how. Then try to figure out how you could encode these movements; it will be a sequence of movements of many body parts. After that, figure out how to "see" the movements with your camera (this is where Kinect could help). Then you "just" have to train your system, and there you have it.
So: you will not be able to use the images directly for any kind of sign identification. First you have to build a processor that transcribes the movements seen by the camera into gesture codes. Then teach your system the signs you want, and find a way to match gesture-code sequences. I would also think about a neural-network layer so the system can be trained continuously.
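As an illustration of matching gesture-code sequences, here is a minimal sketch using dynamic time warping (DTW), so that the same sign performed at different speeds still matches. The gesture codes below are invented placeholders, not a real coding scheme:

```python
# DTW over two gesture-code sequences with a per-element cost function.
# Gesture codes here are invented placeholder symbols for illustration.

def dtw_distance(seq_a, seq_b, cost):
    """Classic dynamic-time-warping distance between two sequences."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = cost(seq_a[i - 1], seq_b[j - 1])
            d[i][j] = c + min(d[i - 1][j],      # extra frame in seq_a
                              d[i][j - 1],      # extra frame in seq_b
                              d[i - 1][j - 1])  # frames matched
    return d[n][m]

# 0/1 cost: two frames match only when their gesture codes are equal
symbol_cost = lambda a, b: 0.0 if a == b else 1.0

# Two performances of the "same" sign, one slower (a code repeats),
# versus an unrelated sign.
sign_fast  = ["raise_hand", "twist_wrist", "lower_hand"]
sign_slow  = ["raise_hand", "raise_hand", "twist_wrist", "lower_hand"]
sign_other = ["wave_left", "wave_right", "wave_left"]

print(dtw_distance(sign_fast, sign_slow, symbol_cost))   # 0.0 (same sign)
print(dtw_distance(sign_fast, sign_other, symbol_cost))  # 3.0 (different)
```

In a real system the cost function would compare richer per-frame features (joint angles, hand shape) instead of symbols, and a trained classifier would replace the hand-written code table.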

Oh, and you will have to do a lot of research. Try to find publications on this and related topics. Since there is no ready-to-use solution, you will have to find your own way, building on your own knowledge and work as well as that of others; but don't forget to reference them.
Dilan Shaminda 7-Oct-12 21:54pm    
Thanks a lot for the information and the advice, sir. It seems like a difficult task; we will find our own way to do this. Thank you very much :-)
Zoltán Zörgő 8-Oct-12 2:40am    
If you find my answers useful, feel free to accept them :)
Dilan Shaminda 8-Oct-12 9:47am    
:) thanks sir....

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
