
Project Information



National Science Foundation

Project Number:
Contact PI / Project Leader:
Awardee Organization:


Abstract Text:
It is surprisingly difficult to look up an unfamiliar sign in American Sign Language (ASL). Most ASL dictionaries list signs in alphabetical order based on approximate English translations, so a user who does not understand a sign or know its English translation would not know how to find it. ASL lacks a written form or intuitive "alphabetical sorting" based on such a writing system. Although some dictionaries make available alternative ways to search for a sign, based on explicit specification of various properties, a user must often still look through hundreds of pictures of signs to find a match to the unfamiliar sign (if it is present at all in that dictionary). This research will create a framework that will enable the development of a user-friendly, video-based sign-lookup interface, for use with online ASL video dictionaries and resources, and for facilitation of ASL annotation. Input will consist of either a webcam recording of a sign by the user, or user identification of the start and end frames of a sign from a digital video. To test the efficacy of the new tools in real-world applications, the team will partner with the leading producer of pedagogical materials for ASL instruction in high schools and colleges, which is developing the first multimedia ASL dictionary with video-based ASL definitions for signs. The lookup interface will be used experimentally to search the ASL dictionary in ASL classes at Boston University and RIT. Project outcomes will revolutionize how deaf children, students learning ASL, or families with deaf children search ASL dictionaries. They will accelerate research on ASL linguistics and technology, by increasing efficiency, accuracy, and consistency of annotations of ASL videos through video-based sign lookup. And they will lay the groundwork for future technologies to benefit deaf users, such as search by video example through ASL video collections, or ASL-to-English translation, for which sign-recognition is a precursor. 
The new linguistically annotated video data and software tools will be shared publicly, for use by others in linguistic and computer science research, as well as in education. Sign recognition from video is still an open and difficult problem because of the nonlinearities involved in recognizing 3D structures from 2D video, and the complex linguistic organization of sign languages. The linguistic parameters relevant to sign production and discrimination include hand configuration and orientation, location relative to the body or in signing space, movement trajectory, and in some cases, facial expressions/head movements. An additional complication is that signs belonging to different classes have distinct internal structures, and are thus subject to different linguistic constraints and require distinct recognition strategies; yet prior research has generally failed to address these distinctions. The challenges are compounded by inter- and intra-signer variation, and, in continuous signing, by co-articulation effects (i.e., influence from adjacent signs) with respect to several of the above parameters. Purely data-driven approaches are ill-suited to sign recognition given the limited quantities of available, consistently annotated data and the complexity of the linguistic structures involved, which are hard to infer. Prior research has, for this reason, generally focused on selected aspects of the problem, often restricting the work to a limited vocabulary, and therefore resulting in methods that are not scalable. More importantly, few if any methods involve 4D (spatio-temporal) modeling and attention to the linguistic properties of specific types of signs. A new approach to computer-based recognition of ASL from video is needed. In this research, the approach will be to build a new hybrid, scalable, computational framework for sign identification from a large vocabulary, which has never before been achieved.
This research will strategically combine state-of-the-art computer vision, machine-learning methods, and linguistic modeling. It will leverage the team's existing publicly shared ASL corpora and Sign Bank - linguistically annotated and categorized video recordings produced by native signers - which will be augmented to meet the requirements of this project. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
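The abstract contrasts the planned video-based lookup with existing dictionaries that let users search by explicitly specifying a sign's linguistic parameters (hand configuration, location, movement, facial expression). As a toy illustration of that parameter-matching idea only - not the project's actual system, and with sign glosses and feature values invented for the example - such a lookup could be sketched as:

```python
from dataclasses import dataclass

# Illustrative feature set drawn from the parameters named in the abstract.
@dataclass(frozen=True)
class SignFeatures:
    handshape: str
    location: str
    movement: str
    facial_expression: str = "neutral"

FIELDS = ("handshape", "location", "movement", "facial_expression")

def match_score(a: SignFeatures, b: SignFeatures) -> int:
    """Count how many linguistic parameters two signs share."""
    return sum(getattr(a, f) == getattr(b, f) for f in FIELDS)

def lookup(query: SignFeatures, dictionary: dict, top_k: int = 3) -> list:
    """Rank dictionary entries by parameter overlap with the query."""
    ranked = sorted(dictionary.items(),
                    key=lambda kv: match_score(query, kv[1]),
                    reverse=True)
    return [gloss for gloss, _ in ranked[:top_k]]

# Hypothetical three-entry dictionary for demonstration purposes.
toy_dictionary = {
    "MOTHER": SignFeatures("open-5", "chin", "tap"),
    "FATHER": SignFeatures("open-5", "forehead", "tap"),
    "NICE":   SignFeatures("flat-B", "non-dominant-palm", "slide"),
}

query = SignFeatures("open-5", "chin", "tap")
print(lookup(query, toy_dictionary, top_k=1))  # -> ['MOTHER']
```

A real system of the kind the project proposes would extract such parameters automatically from video rather than requiring the user to specify them, which is precisely the lookup burden the abstract describes.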
Project Terms:
Address; American Sign Language; Articulation; Attention; Award; base; Boston; Child; Collection; college; Complex; Complication; computer framework; computer science; Computer Vision Systems; Computers; Data; data integration; Development; Dictionary; digital; Discrimination; Education; efficacy testing; Evaluation; Facial Expression; Family; Foundations; Future; Hand; Head Movements; Hearing Impaired Persons; high school; Hybrids; Instruction; Intuition; Learning; learning strategy; Linguistics; Location; Machine Learning; Methods; Mission; Modeling; Movement; Multimedia; Names; novel strategies; Outcome; pedagogy; Production; Property; real world application; Research; Resources; Sign Language; Software Tools; Sorting - Cell Movement; spatiotemporal; Structure; Students; System; Technology; three dimensional structure; tool; Translations; Universities; user-friendly; Variant; Video Recording; Vocabulary; Work; Writing


Contact PI / Project Leader Information:
Other PI Information:
Not Applicable
Awardee Organization:
Congressional District: NJ-06
Other Information:
Fiscal Year: 2018
Award Notice Date: 21-Jul-2018
DUNS Number: 001912864
Project Start Date: 01-Aug-2018
Budget Start Date:
CFDA Code: 47.070
Project End Date: 31-Jul-2021
Budget End Date:
Agency: National Science Foundation

Agency: The entity responsible for administering a research grant, project, or contract. This may represent a federal department, agency, or sub-agency (institute or center). Details on agencies in Federal RePORTER can be found on the FAQ page.

Project Funding Information for 2018:
FY: 2018
Agency: NSF (National Science Foundation)
Total Cost:




It is important to recognize, and to consider in any interpretation of Federal RePORTER data, that publication and patent information cannot be associated with any particular year of a research project. The lag between research being conducted and the availability of its results in a publication or patent award varies substantially, so it is difficult, if not impossible, to associate a publication or patent with any specific year of the project. Likewise, it is not possible to associate a publication or patent with any particular supplement to a research project or a particular subproject of a multi-project grant.


Publications:

Patents:

