Signing Avatars & Immersive Learning
This video introduces SAIL (Signing Avatars & Immersive Learning), an NSF-funded project housed in the Action & Brain Lab and Motion Light Labs at Gallaudet University. The project involves the development and testing of a new immersive American Sign Language (ASL) learning environment designed to teach non-signers basic ASL. Our team created signing avatars from motion capture recordings of deaf signers producing ASL. The avatars are placed in a virtual reality environment accessed through a head-mounted display, and the user’s own movements are captured by a gesture-tracking system. A “teacher” avatar guides users through an interactive ASL lesson involving both the observation and production of signs. Drawing on principles of embodied learning, users learn ASL signs from both the first-person and third-person perspectives. SAIL integrates multiple technologies, including avatars, motion capture, virtual reality, gesture tracking, and EEG, with the goal of creating a new way to learn sign language. The video highlights recent developments in the project, including the creation of the virtual signing avatar, the building of the virtual environment, and pilot testing of the system.
NSF Award: 1839379