Virtual Teleportation Is Coming: How Looking Glass Is Bringing Ideas of the Future to Life With NVIDIA Technology

Looking Glass is on a journey to push the boundaries of holographic technology. For the past decade, we've been driven by our vision of a future where everyone can experience the magic of holograms. Using NVIDIA Maxine technology and NVIDIA NIM inference microservices, we’re helping to bring virtual teleportation closer to reality than ever.

Over the past year, in collaboration with NVIDIA, we've showcased enhancements in how people experience 3D content. Crowds gathered at NVIDIA's Emerging Technologies booth at SIGGRAPH 2023 as 3D video conferencing and 3D selfies, powered by NVIDIA technologies, were displayed on various Looking Glass Displays.

The astonishing part was the simplicity and ease of use of these innovations. The AI-mediated 3D video conferencing demo worked in real time using only a single webcam and NVIDIA generative AI software and tools. Paired with our 32" Looking Glass Display, it felt like taking a call from the future, but it was happening in the present.

The demo hosted 3D conferences for up to three users by creating 3D video from a monocular RGB input. A standard webcam captures the 2D scene, NVIDIA RTX 6000 Ada Generation GPUs lift the 2D faces into neural radiance fields (NeRFs) and stream them in real time, and the NeRFs are then rendered into light fields for our Looking Glass Displays on an NVIDIA GeForce RTX 4090 GPU.
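To make the light-field rendering step concrete, here is a minimal sketch of how horizontal view angles might be spread across a display's viewing cone, with one NeRF render per angle. The function name, the 40-degree cone, and the view count are illustrative assumptions, not the actual NVIDIA or Looking Glass API.

```python
def view_cone_angles(num_views: int, cone_deg: float = 40.0) -> list[float]:
    """Evenly spaced horizontal view angles (in degrees) across a
    display's viewing cone, centered on zero. Each angle would be one
    camera pose to render from the NeRF. Illustrative sketch only."""
    if num_views == 1:
        return [0.0]
    step = cone_deg / (num_views - 1)
    return [-cone_deg / 2.0 + i * step for i in range(num_views)]

angles = view_cone_angles(45)
print(len(angles))  # number of rendered views
```

In the real pipeline each angle would drive a GPU-side NeRF render; here it only illustrates the camera geometry behind a multi-view light field.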

From AI-mediated 3D Video Conferencing SIGGRAPH Emerging Technologies 2023

Also showcased was the 3D selfie, a new way to share your beautiful face. Using a standard selfie camera and NVIDIA technologies, we're able to generate an AI-powered 3D selfie in seconds. On an NVIDIA RTX A5000 GPU-powered laptop, photorealistic 3D images are rendered from a single, unposed photo in real time by generating a 45-view light field with AI super-resolution. Participants could view their new 3D selfies on a Looking Glass Portrait display.
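The rendered views of a light field like this are commonly packed into a single "quilt" texture that the display decodes. As a rough sketch, assuming a 9-column by 5-row layout for the 45 views (the actual layout depends on the display and software), each view's pixel rectangle inside the quilt could be computed like this; `quilt_tile_rect` is a hypothetical helper, not part of any Looking Glass SDK.

```python
def quilt_tile_rect(view_index: int, cols: int = 9, rows: int = 5,
                    tile_w: int = 512, tile_h: int = 512) -> tuple:
    """Pixel rectangle (x, y, w, h) of one view inside a quilt texture.
    Views run left-to-right, bottom-to-top, a common quilt convention;
    the 9x5 layout for 45 views is an illustrative assumption."""
    col = view_index % cols
    row = view_index // cols
    # Quilts are typically indexed from the bottom row upward.
    y = (rows - 1 - row) * tile_h
    return (col * tile_w, y, tile_w, tile_h)

print(quilt_tile_rect(0))   # leftmost view, bottom row of the quilt
print(quilt_tile_rect(44))  # rightmost view, top row of the quilt
```

Packing all views into one texture lets the display's lenticular shader sample every view in a single pass.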

Generating AI-powered 3D selfies in seconds!

We had a lot of fun at SIGGRAPH last year, and it was a monumental time that made a lot of us realize our vision of making 3D accessible to everyone without headsets is now closer than ever. With NVIDIA's powerful technologies, Looking Glass is helping to make immersive experiences a seamless part of everyday life, allowing more people to engage with 3D content naturally.

That's why we were so excited to be at the NVIDIA GTC global AI conference earlier this year, to showcase updates in generating 3D visuals on our displays. At GTC, we took a deeper dive into the NVIDIA Research paper, Live 3D Portrait: Real-Time Radiance Fields for Single-Image Portrait View Synthesis.

What you're seeing here are the results of NVIDIA's one-shot method to infer and render a photorealistic 3D representation from a single unposed image in real time, shown in 3D on a Looking Glass 65" display.

From: Live 3D Portrait: Real-Time Radiance Fields for Single-Image Portrait View Synthesis

The system works in real time from a single camera, rendering a photorealistic 3D representation of a single unposed image. It also works on video by processing frames independently. It runs at 24 fps on consumer computers while still producing higher-quality results than GAN-inversion baselines that require test-time optimization and run roughly 1,000x slower.

The method predicts a triplane representation for volume rendering from an unposed RGB image. The encoder's architecture is crucial: Vision Transformer (ViT) layers are incorporated to help the model learn the highly complex correspondences between 2D pixels and the 3D representation. Without such layers, the model can still render a 3D representation but fails to capture detailed features of the subject. The model is trained using only synthetic data generated by EG3D, yet the encoder-based method still works well on out-of-domain images. To better align the synthetic training data with the real world, geometric camera parameters (such as focal length, principal point, and camera roll) are varied during training instead of being kept constant. Without this augmentation, the model predicts incorrect geometries and renders unrealistic images.
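The camera-parameter augmentation described above can be sketched as sampling slightly perturbed intrinsics and an in-plane roll for each synthetic training image. The jitter ranges and the helper name below are illustrative assumptions, not values from the paper.

```python
import math
import random

def augmented_intrinsics(width=512, height=512, base_focal=1.2,
                         focal_jitter=0.1, pp_jitter=0.02,
                         roll_jitter_deg=3.0, rng=random):
    """Sample a perturbed 3x3 intrinsics matrix plus an in-plane roll
    rotation, mirroring the idea of varying focal length, principal
    point, and camera roll during training. Ranges are assumptions."""
    f = base_focal * width * (1.0 + rng.uniform(-focal_jitter, focal_jitter))
    cx = width * (0.5 + rng.uniform(-pp_jitter, pp_jitter))
    cy = height * (0.5 + rng.uniform(-pp_jitter, pp_jitter))
    K = [[f, 0.0, cx],
         [0.0, f, cy],
         [0.0, 0.0, 1.0]]
    roll = math.radians(rng.uniform(-roll_jitter_deg, roll_jitter_deg))
    # Roll is a rotation about the camera's optical (z) axis.
    R_roll = [[math.cos(roll), -math.sin(roll), 0.0],
              [math.sin(roll),  math.cos(roll), 0.0],
              [0.0, 0.0, 1.0]]
    return K, R_roll

K, R = augmented_intrinsics(rng=random.Random(0))
```

Each synthetic training sample would then be rendered with its own perturbed camera rather than a fixed one, so the encoder never overfits to a single canonical camera setup.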

This year, we're back at SIGGRAPH with our displays featured in NVIDIA’s Innovation Zone, showcasing the Magic Mirror demo powered by NVIDIA Maxine. The Magic Mirror utilizes a simple camera setup and Maxine's advanced 3D AI capabilities to generate a real-time holographic feed of users' faces on our newly launched, group-viewable Looking Glass 16" and 32" Spatial Displays. Thanks to NVIDIA Maxine NIM microservices, we're making incredible strides in simplifying and instantly generating 3D visuals, bringing us closer to our dream of holograms for everyone.

Building on years of research, demos, and technologies, this showcase offers an inspiring glimpse into the near future. If you happen to be at SIGGRAPH this year, we hope you'll stop by to check it out.

With the power of Maxine NIM microservices, Looking Glass is poised to help revolutionize the way people communicate. Our patented, groundbreaking 3D hardware and software suite is ushering in a new era of spatial technology beyond the headset. By enabling groups to experience 3D content together, we're transforming the future of how we communicate, collaborate, and share memories. We can’t wait to bring you along.

Until then, to the future!