This website describes our ongoing work at Boston University and the University of Texas at Arlington to develop a computer vision system that will identify a sign from video input for dictionary look-up. As part of that project, we are producing a large and expanding public dataset containing video sequences of thousands of distinct ASL signs (produced by native signers of ASL), along with annotations of those sequences, including start/end frames and class label (i.e., gloss-based identification) of every sign.
The data presented here have been collected as part of the NSF-funded effort NSF0705749. This project is a collaboration among:
If you use these data in published work, please cite the following paper discussing the current project:
Isolated sign videos
Videos of isolated signs from our capture sessions are available from the links below. We used videos contained in the Gallaudet Dictionary of American Sign Language as stimuli during the video capture sessions. Signs in this collection are identified only by the filename of the Gallaudet Dictionary video presented as a stimulus to the signer. Over the past two years we have devoted significant effort to preparing linguistically meaningful annotations for the videos, including unique gloss labels for each distinct sign. A discussion of the current status of our dataset and an analysis of the linguistic annotations carried out so far is presented in the updated version of this webpage. That page also includes an important caveat regarding the use of a single identifier to cover the set of items produced in response to any given stimulus video.
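The caveat above means that one stimulus filename may correspond to several distinct sign productions. A simple way to handle this when indexing the dataset is a one-to-many mapping from stimulus filename to gloss labels. The sketch below illustrates the idea; the filenames and gloss labels in it are hypothetical examples, not entries from the dataset.

```python
# Minimal sketch of a one-to-many mapping from a Gallaudet Dictionary
# stimulus filename to the gloss labels of the distinct sign productions
# it elicited.  All filenames and glosses below are hypothetical examples.
from collections import defaultdict

stimulus_to_glosses = defaultdict(list)

def add_annotation(stimulus_file, gloss):
    """Record that a production elicited by `stimulus_file` was annotated `gloss`."""
    if gloss not in stimulus_to_glosses[stimulus_file]:
        stimulus_to_glosses[stimulus_file].append(gloss)

# One stimulus video may elicit more than one distinct sign:
add_annotation("example_stimulus.mov", "GLOSS-A")
add_annotation("example_stimulus.mov", "GLOSS-B")
add_annotation("example_stimulus.mov", "GLOSS-A")  # duplicate, ignored

print(stimulus_to_glosses["example_stimulus.mov"])  # ['GLOSS-A', 'GLOSS-B']
```

Keeping the stimulus filename as the key (rather than collapsing everything to one gloss) preserves the distinction between the stimulus and the possibly multiple signs produced in response to it.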
Lossless compressed videos are available for three views: front, side, and face region. Videos in QuickTime format (with MPEG-4 encoding) are available for the first few sessions and will be updated as gloss annotations are prepared.
The lossless compressed videos are about 1 GB each. If you wish to download a large number of these files, please contact us so that we can arrange a transfer method that does not overload the file server. A C++ library to read frames for this video format is available here.
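To get a sense of the transfer volume involved, a download plan can be estimated from the ~1 GB-per-file figure above and split into sequential batches rather than requested all at once. This is only an illustrative sketch: the filenames are made up, and the batch size is an arbitrary example value, not a server policy.

```python
# Rough sketch: estimate total transfer volume and split a download list
# into sequential batches so the file server is not hit with every request
# at once.  The ~1 GB-per-file figure comes from the text; the batch size
# and filenames below are hypothetical.

FILE_SIZE_GB = 1.0  # approximate size of one lossless compressed video

def plan_batches(filenames, batch_size=5):
    """Return (total_gb, batches), where batches is a list of filename chunks."""
    total_gb = len(filenames) * FILE_SIZE_GB
    batches = [filenames[i:i + batch_size]
               for i in range(0, len(filenames), batch_size)]
    return total_gb, batches

files = [f"session{i:03d}_front.vid" for i in range(12)]  # hypothetical names
total, batches = plan_batches(files, batch_size=5)
print(total)         # 12.0 (GB)
print(len(batches))  # 3 (batches of 5, 5, and 2 files)
```

Fetching one batch at a time (and pausing between batches) is a simple courtesy measure; for very large pulls, contacting us as noted above is still the right approach.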
Test video sequences for isolated sign recognition
Video sequences used for testing the performance of the isolated sign recognition system described in our CVPR4HB'08 paper are available from the links below.
For questions regarding data capture and file formats:
For queries related to sign language and linguistic annotations: