Microsoft’s HoloLens demonstrations have not always been impressive, but the company’s new experience at the event was nothing short of remarkable.

Using a combination of body and voice capture technologies, Azure AI and HoloLens, Microsoft created an almost photo-realistic hologram of executive Julia White, and the hologram then delivered part of White’s keynote in Japanese, a language White herself does not speak.

In the demonstration, White wore a HoloLens headset and walked around her clone in 3D space. She began by summoning a “mini-me” she could appear to hold in her hand. After a flourish of sparkling green special effects, the doll-sized copy grew into a full-sized clone, which began speaking the foreign language, using samples of White’s voice to deliver sentences that had been machine-translated into Japanese.

It takes only a moment to grasp the huge potential of the technology, assuming it works as smoothly in practice as it did in the demo. Equipped with the right depth-scanning 3D cameras and AI translation tools, any speaker could quickly create convincing localized presentations: a keynote could be pre-recorded once and shown simultaneously in 30 languages. Of course, the same technology could be used for less benign purposes, putting words or actions into a body-scanned model that never came from the real person.
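The localization flow described above — capture the speaker’s words, machine-translate them, then synthesize the result in the speaker’s own voice — can be sketched conceptually. Everything below is a hypothetical stand-in: the function names, the toy phrase-table “translator,” and the voice-profile descriptor are illustrative assumptions, not Microsoft’s actual Azure APIs.

```python
# Conceptual sketch of a capture -> translate -> synthesize pipeline.
# All names here are hypothetical stand-ins, not real Azure API calls.

def machine_translate(text: str, target_lang: str) -> str:
    """Toy translator: looks up whole sentences in a tiny phrase table.
    A real system would call a neural machine-translation service."""
    phrase_table = {
        ("Hello, everyone.", "ja"): "皆さん、こんにちは。",
    }
    return phrase_table.get((text, target_lang), text)

def synthesize_in_speakers_voice(text: str, voice_profile: dict) -> dict:
    """Stand-in for neural text-to-speech conditioned on recorded voice
    samples. Returns a descriptor instead of actual audio."""
    return {"speaker": voice_profile["name"], "text": text}

def localize_keynote(sentences, voice_profile, target_lang):
    """Translate each sentence, then 'speak' it in the cloned voice."""
    return [
        synthesize_in_speakers_voice(machine_translate(s, target_lang),
                                     voice_profile)
        for s in sentences
    ]

clips = localize_keynote(["Hello, everyone."],
                         {"name": "Julia White"}, "ja")
print(clips[0]["text"])  # prints the Japanese line from the phrase table
```

The point of the sketch is the shape of the pipeline, not the components: each stage (translation, voice cloning, avatar rendering) is an independent service, which is why the same scanned model could in principle be driven by 30 translated scripts at once.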

At the moment, pulling this off requires professional-caliber hardware, from high-quality specialized cameras to expensive HoloLens headsets. But similar body-scanning technology is expected to reach next-generation smartphones within the next year or so, which could make photo-realistic avatars viewable on phone screens or on consumer AR headsets. Whether Microsoft will bring the concept to its own mixed-reality headset initiative remains to be seen.

Stay up to date with the latest virtual reality news with VRcue.
