SIMILAR ARTICLES:
Introducing DeepFocus: The AI Rendering System Powering Half Dome
How a computer learns to dribble: Practice, practice, practice: Deep reinforcement learning makes basketball video games look more realistic
Network orchestration: Researcher uses music to manage networks
Artificial intelligence system learns to diagnose, classify intracranial hemorrhage
University of Washington Researchers Demo Ability to Generate 3D Augmented Reality Content from 2D Images

Facebook’s AI extracts playable characters from real-world videos

Remember those FMV games from the ’90s — the ones that blended prerecorded clips with animated sprites and 3D models? Facebook is bringing the idea back in style. In a newly published preprint paper on arXiv.org (“Vid2Game: Controllable Characters Extracted from Real-World Videos”), scientists at Facebook AI Research describe a system capable of extracting controllable characters from real-world videos.

“Our method extracts a character from an uncontrolled video and enables us to control its motion,” the paper’s coauthors explain. “The model generates novel image sequences of that person … [and the] generated video can have an arbitrary background, and effectively capture both the dynamics and appearance of the person.”

The team’s approach relies on two neural networks, or layers of mathematical functions modeled after biological neurons: Pose2Pose, a framework that maps a current pose and a single-instance control signal to the next pose, and Pose2Frame, which composites the current pose and new pose (along with a given background) onto an output frame. The reanimation can be controlled by any “low-dimensional” signal, such as one from a joystick or keyboard, and the researchers say that the system is robust enough to position extracted characters in dynamic backgrounds.
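The paper doesn’t ship reference code, but the control loop is easy to picture. Below is a minimal, hypothetical PyTorch-style sketch of how the two networks might chain together at inference time; the class names, pose dimensionality, control-signal encoding, and tiny fully connected layers are illustrative assumptions, not the authors’ actual architecture.

```python
import torch
import torch.nn as nn

class Pose2Pose(nn.Module):
    """Maps a pose vector plus a low-dimensional control signal
    (e.g., joystick axes) to the next pose. The real model is far
    richer; a small MLP stands in here."""
    def __init__(self, pose_dim=34, control_dim=2, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + control_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, pose, control):
        return self.net(torch.cat([pose, control], dim=-1))

class Pose2Frame(nn.Module):
    """Renders a pose onto a background: predicts an RGB character
    layer and a soft blending mask, then linearly blends them over
    the background. The real network is a full image generator."""
    def __init__(self, pose_dim=34, frame_hw=(256, 256)):
        super().__init__()
        self.h, self.w = frame_hw
        self.decode = nn.Linear(pose_dim, 4 * self.h * self.w)  # RGB + mask

    def forward(self, pose, background):
        out = self.decode(pose).view(-1, 4, self.h, self.w)
        rgb, mask = out[:, :3], torch.sigmoid(out[:, 3:4])
        # Linear blend: the mask selects character-dependent pixels.
        return mask * rgb + (1 - mask) * background

# One step of the "game loop": read a control signal, advance the
# pose, render the character over an arbitrary background.
pose2pose, pose2frame = Pose2Pose(), Pose2Frame()
pose = torch.zeros(1, 34)                # current pose (e.g., 17 joints x 2)
control = torch.tensor([[1.0, 0.0]])     # e.g., joystick: "move right"
background = torch.rand(1, 3, 256, 256)  # any desired background

next_pose = pose2pose(pose, control)
frame = pose2frame(next_pose, background)
print(frame.shape)  # torch.Size([1, 3, 256, 256])
```

One apparent appeal of the split is that the per-frame control loop only pushes around low-dimensional pose vectors, while all of the heavy image synthesis is isolated in the second network.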

[Image: Facebook video game generation]

So how does it work? First, an input video containing one or more characters is fed into a Pose2Pose network trained for a specific domain (e.g., dancing), which isolates each character (plus an estimated foreground spatial mask) and their motion — the latter taken as a trajectory of their center of mass. (The masks are used to determine which regions of the background are replaced by synthesized image information.) Using these and combined pose data, Pose2Frame separates character-dependent changes in the scene, such as shadows, held items, and reflections, from those that are character-independent, and returns a pair of outputs that are linearly blended with any desired background.
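To make that final blending step concrete, here is a small NumPy sketch of the kind of mask-driven linear blend described above; the function name and the explicit alpha-blend form are illustrative assumptions, since the real Pose2Frame learns its output layers end to end.

```python
import numpy as np

def composite(character_rgb, mask, background_rgb):
    """Linearly blend a synthesized character layer over any background.

    character_rgb:  (H, W, 3) synthesized pixels, including character-
                    dependent effects such as shadows and reflections.
    mask:           (H, W, 1) soft foreground mask in [0, 1]; it decides
                    which background regions get replaced.
    background_rgb: (H, W, 3) arbitrary, possibly dynamic, background.
    """
    return mask * character_rgb + (1.0 - mask) * background_rgb

H, W = 240, 320
character = np.random.rand(H, W, 3)   # stand-in for the network's output
mask = np.zeros((H, W, 1))
mask[60:180, 100:220] = 1.0           # toy rectangular character region
background = np.random.rand(H, W, 3)  # swap in any scene you like

frame = composite(character, mask, background)
assert frame.shape == (H, W, 3)
```

Because the mask is produced fresh for every frame, the background underneath can itself be a moving video, which is what lets extracted characters be dropped into dynamic scenes.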

To train the AI system, the researchers sourced three videos, each between five and eight minutes long: one of a tennis player outdoors, one of a person swinging a sword indoors, and one of a person walking. Compared with a neural network model fed a three-minute video of a dancer, they report that their approach successfully handled dynamic elements, such as other people and differences in camera angle, in addition to variations in character clothing.

“Each network addresses a computational problem not previously fully met, together paving the way for the generation of video games with realistic graphics,” they wrote. “In addition, controllable characters extracted from YouTube-like videos can find their place in the virtual worlds and augmented realities.”

Facebook isn’t the only company investigating AI systems that might aid in game design. Startup Promethean AI employs machine learning to help human artists create art for video games, and Nvidia researchers recently demonstrated a generative model that can create virtual environments using video snippets. Machine learning has also been used to upscale old game textures in retro titles like Final Fantasy VII and The Legend of Zelda: Twilight Princess, and to generate thousands of levels in games like Doom from scratch.

SIMILAR ARTICLES:
New Artificial Intelligence Does Something Extraordinary — It Remembers
LEAP Motion Impresses with Amazing AR Table Tennis Demo and Headset
How to Record Beat Saber in 360 Degrees
Creating 3D Animations From A Single Still Image