The team at Huenerfauth’s lab is occupied with similar nuances, but it takes a different approach to computer animation: motion capture. The process starts with humans signing while wearing special gloves and other clothing covered with tiny sensors that turn every movement into data. That data can then be fed into mathematical models to solve linguistic challenges, such as how a signer uses the space around her body to “locate” the objects she’s describing, creating invisible reference points that, for example, alter verb signs linked with direct objects.
“We publish the math, showing how we address these issues, and share all our motion-capture recordings with the world,” said Huenerfauth, so that other labs can replicate and build off their findings. While it may take decades before real-time, sign-language translation avatars are available to deaf students, other applications of this research could be ready much sooner, such as avatars translating the written text of online educational materials into sign language at the press of a button.
The signing avatars can also be used in apps and games to help deaf children get early exposure to language, which is critical for their cognitive development. More than 90 percent of deaf children are born to hearing parents who don’t sign, said Hamilton, which means, “a lot of deaf children grow up with almost no language until they hit school. And that has created language deprivation.”
Parents talking and reading to hearing children helps to develop the language-processing parts of their brains that will later help them to communicate and to learn. Recent studies indicate that early sign language can develop these same brain areas, and that the more proficient deaf and hard-of-hearing students are in sign language, the better they do academically.
Hoping to bolster the sign-language skills of young children, Hamilton and fellow CATS researchers are creating a game called CopyCat, in which kids communicate with a sign-language cat named Iris, directing the cat to play with toys or take other actions to win the game. A motion-sensing camera captures the child’s signs, and if they’re incorrect, Iris stops and looks baffled. The developers are still working out the kinks. For instance, the current version of CopyCat doesn’t do well with signs that require people to cross their hands.
Meanwhile, researchers at the Motion Light Lab at Gallaudet are creating sign-language avatars that tell nursery rhymes written for deaf children (rhyming is replaced by repetitive rhythms in the signs). The project uses motion-capture technology developed by Mocaplab, a French animation and effects studio that is itself working on a sign-language translation avatar, as well as an app in which an avatar teaching the user sign language can be rotated to show each sign from a first-person point of view.
“A lot of people think ‘it’s just movement,’” said Rémi Brun, founder and CEO of Mocaplab. “But, movement can be just as subtle, rich, and powerful as the human voice.”
The post Movie magic could be used to translate for the deaf appeared first on The Hechinger Report.