I suppose you are not trying to "read" free-form gesturing as well. That one is currently for humans only.
But even the canonized sign-language gestures will not be easy to recognize in real-life situations. I suppose it will take more AI than image processing. Signs are not static images; they are gestures. Only the letters are (mostly) static, but nobody communicates by spelling out letters. And the gestures do not stand alone: they are linked to each other, and the linking carries meaning of its own.
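To make the "gestures, not static images" point concrete: one classic (if simplistic) way to match a recorded movement against known signs is dynamic time warping over the sequence of hand positions, rather than classifying any single frame. This is only a toy sketch of the idea — the gesture names, the 2D coordinates, and the templates below are all made up for illustration, and a real system would of course need far richer features and models:

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two gestures,
    each given as a list of (x, y) hand positions per frame."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two frames...
            d = ((a[i - 1][0] - b[j - 1][0]) ** 2
                 + (a[i - 1][1] - b[j - 1][1]) ** 2) ** 0.5
            # ...plus the cheapest way to have aligned the prefixes.
            cost[i][j] = d + min(cost[i - 1][j],
                                 cost[i][j - 1],
                                 cost[i - 1][j - 1])
    return cost[n][m]


def classify(gesture, templates):
    """Return the name of the template gesture closest to `gesture`."""
    return min(templates, key=lambda name: dtw_distance(gesture, templates[name]))


# Hypothetical templates: a side-to-side "wave" and a forward "push".
templates = {
    "wave": [(0, 0), (1, 0), (0, 0), (1, 0)],
    "push": [(0, 0), (0, 1), (0, 2), (0, 3)],
}

# A recorded gesture of a different length still matches the right template,
# because DTW aligns the sequences in time instead of comparing frame by frame.
recorded = [(0, 0), (1, 0), (0, 0), (1, 0), (0, 0)]
print(classify(recorded, templates))
```

Note that this only handles isolated gestures; the linking between signs that I mentioned above is exactly what a per-gesture matcher like this cannot capture.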
The database engine will be the least interesting thing you will have to deal with.
Here is an interesting initiative that uses Kinect to spare most of the image-processing work:
http://www.kinecthacks.net/american-sign-language-recognition-using-kinect/