Scientists have made tremendous leaps in getting computers to understand natural language, as well as in generating a series of physical poses to create realistic animations. These capabilities might as well exist in separate worlds, however, because the link between natural language and physical poses has been missing.
Louis-Philippe Morency, associate professor in the Language Technologies Institute (LTI), and Chaitanya Ahuja, an LTI Ph.D. student, are working to bring those worlds together using a neural architecture they call Joint Language-to-Pose, or JL2P. The JL2P model enables sentences and physical motions to be jointly embedded, so it can learn how language is related to action, gestures and movement.

"I think we're in an early stage of this research, but from a modeling, artificial intelligence and theory perspective, it's a very exciting moment," Morency said. "Right now, we're talking about animating virtual characters. Eventually, this link between language and gestures could be applied to robots; we might be able to simply tell a personal assistant robot what we want it to do."
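As a rough illustration of that joint-embedding idea, the sketch below pairs a sentence encoder with a pose decoder that are conditioned through a shared embedding vector. It is a minimal PyTorch sketch, not the published JL2P architecture; the class name, layer choices and dimensions are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class JointLanguagePoseModel(nn.Module):
    """Hypothetical joint language-to-pose sketch (not the authors' exact model)."""

    def __init__(self, vocab_size=5000, embed_dim=128, pose_dim=63, hidden_dim=256):
        super().__init__()
        # Language encoder: embeds word indices and summarizes the sentence with a GRU.
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.sentence_encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Pose decoder: predicts a sequence of skeleton poses (e.g. 21 joints x 3
        # coordinates = 63 values per frame), conditioned on the sentence embedding.
        self.pose_decoder = nn.GRU(pose_dim, hidden_dim, batch_first=True)
        self.pose_out = nn.Linear(hidden_dim, pose_dim)

    def forward(self, word_ids, prev_poses):
        # Encode the sentence into a single embedding shared with the pose decoder.
        _, sentence_state = self.sentence_encoder(self.word_embed(word_ids))
        # Decode a pose sequence conditioned on that embedding.
        decoded, _ = self.pose_decoder(prev_poses, sentence_state)
        return self.pose_out(decoded)

# Example: one sentence of 6 word ids, generating 30 frames of poses.
model = JointLanguagePoseModel()
words = torch.randint(0, 5000, (1, 6))
poses = torch.zeros(1, 30, 63)
predicted = model(words, poses)   # shape: (1, 30, 63)
```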
"We also could eventually go the other way -- using this link between language and animation so a computer could describe what is happening in a video," he added.Ahuja will present JL2P on Sept. 19 at the International Conference on 3D Vision in Quebec City, Canada. To create JL2P, Ahuja used a curriculum-learning approach that focuses on the model first learning short, easy sequences -- "A person walks forward" -- and then longer, harder sequences -- "A person steps forward, then turns around and steps forward again," or "A person jumps over an obstacle while running."
Verbs and adverbs describe the action and its speed or acceleration, while nouns and adjectives describe locations and directions. The ultimate goal is to animate complex sequences with multiple actions happening either simultaneously or in sequence, Ahuja said.
For now, the animations are for stick figures.
Complicating matters, many things happen at the same time, even in simple sequences, Morency explained.
"Synchrony between body parts is very important," Morency said. "Every time you move your legs, you also move your arms, your torso and possibly your head. The body animations need to coordinate these different components, while at the same time achieving complex actions. Bringing language narrative within this complex animation environment is both challenging and exciting. This is a path toward better understanding of speech and gestures."
"Virtual Reality" Was Coined in 1987. While immersive experiences (depending on the definition) have been around for decades, the actual term most people use to describe them is relatively new. The term “virtual reality” was conceived by Jaron Lanier in 1987, during an intense period of research around this form of technology.