Issue #3: Dimensional Dispatch

What a week! It’s the middle of August, and we had this crazy notion that we’d see a slow couple of weeks given that it’s summer. We were wrong. The future waits for no one!

Can I just say — we love Siggraph? I look back fondly on my first Siggraph with Looking Glass back in 2017. We had a tiny 10×10 booth at the side of the hall, where we showed HoloPlayer One to the world for the very first time, and we were soon known as “the booth with the plants.”

That is because we were the booth with the plants.

A large portion of our booth shipment missed its delivery, and we were left piecing the booth together with furniture from Target and Office Depot. And plants. A lot of plants. If you don’t believe me, history will show:

The early team of Looking Glass at Siggraph 2017 doing the plank. Surrounded by plants.

As Shania said — and look how far we’ve come now, baby 🎵

One of the most exciting things to come out of Looking Glass land in the past week was the NVIDIA Research Emerging Technologies showcase at Siggraph, which featured the Looking Glass Portrait, 32", and 65" displays. Check out the groundbreaking AI-mediated 3D video conferencing presentation here.

To break it down simply, researchers at NVIDIA presented a way to reconstruct and auto-stereoscopically display a life-sized talking head with minimal capture equipment. And truth be told, they’re being super modest. The experience was powered by just a standard webcam, which drastically reduces the cost of 3D capture while still providing a high-fidelity representation on the receiver’s end. What’s even cooler? This entire paper was presented in light field form on a Looking Glass 65”.

I KNOW RIGHT.

3D Selfie Booth at NVIDIA's booth

And just for good measure, there were also a couple of Looking Glass Portraits set up around the area (GIF above) with a one-click capture that turned a photo into a 3D selfie and uploaded it onto Blocks. Certainly no shortage of holograms at this booth.

This, along with other landmark announcements from NVIDIA and others at Siggraph, marks a moment where I feel AI advancements have greatly accelerated the speed at which we’ll all soon be creating and consuming 3D content. That’s really the theme of the summer as far as I can tell. Most of the announcements we’ve seen over the last two weeks (and even some of our own in the coming weeks) are about the convergence of AI and 3D technology, and how the former will allow three-dimensional content and experiences to spring up more easily than before — even if 2D is the main input. I dive more into this in my own LinkedIn newsletter here.

Move over, NeRFs (already?!). Gaussian Splats are here.

3D Gaussian Splatting for Real-Time Radiance Field Rendering was originally announced prior to Siggraph, and the team behind it has now released the code for the project. The full paper, published here, details a new way of rendering radiance fields that improves quality while running around 10x faster than the previous performance king, Instant-NGP.

If you want to get deep into the weeds, we highly recommend reading the paper, but the TL;DR (which is what you’re all here for) is that instead of the ray-marching approach most NeRF techniques use, Gaussian Splatting represents the scene as a cloud of 3D Gaussians that get projected, sorted, and alpha-blended per pixel — a technique that takes advantage of the rasterization hardware in graphics cards.
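To make that concrete, here’s a minimal toy sketch of the splat-and-composite idea in plain NumPy. To be clear, this is our own simplified illustration, not the paper’s CUDA rasterizer: the random splats, their isotropic circular footprints, and every parameter below are made up for demonstration, whereas real 3DGS splats carry full anisotropic 3D covariances and are optimized from photos.

```python
import numpy as np

# Toy illustration of Gaussian Splatting's core rendering loop:
# project Gaussian "splats" to screen space, sort by depth, and
# alpha-composite them per pixel. No rays are marched and no neural
# network is queried -- it's all blending arithmetic.

H, W = 64, 64
rng = np.random.default_rng(0)

# Hypothetical splats: screen-space center (x, y), isotropic radius,
# depth, RGB color, and opacity. (Real splats are anisotropic 3D
# Gaussians projected through a camera.)
n = 50
centers = rng.uniform(0, [W, H], size=(n, 2))
radii   = rng.uniform(2.0, 6.0, size=n)
depths  = rng.uniform(0.0, 1.0, size=n)
colors  = rng.uniform(0.0, 1.0, size=(n, 3))
alphas  = rng.uniform(0.3, 0.9, size=n)

# Pixel grid
ys, xs = np.mgrid[0:H, 0:W]

image = np.zeros((H, W, 3))
transmittance = np.ones((H, W))  # how much light still passes through

# Front-to-back compositing: nearest splats (smallest depth) first.
for i in np.argsort(depths):
    # Evaluate splat i's Gaussian footprint at every pixel.
    d2 = (xs - centers[i, 0]) ** 2 + (ys - centers[i, 1]) ** 2
    footprint = alphas[i] * np.exp(-d2 / (2.0 * radii[i] ** 2))

    # Standard "over" compositing: add color weighted by the remaining
    # transmittance, then attenuate the transmittance.
    image += (transmittance * footprint)[..., None] * colors[i]
    transmittance *= 1.0 - footprint

print("composited image shape:", image.shape)
```

Even in this toy version you can see the design point: the per-splat work is simple, data-parallel blending, exactly the kind of workload GPU rasterization hardware eats for breakfast, versus the hundreds of neural network queries a NeRF makes along every camera ray.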

We’re handing off some of the more fun deep dives this week to Jonathan Stephens, a favorite of ours in the NeRF space, who already has a handful of new videos up comparing NeRFs and Gaussian Splatting. We’d recommend following along on his channel and Twitter as he powers through more experiments this weekend. Our eyes will certainly stay peeled.

Other landmark announcements from Siggraph

  • NVIDIA releases Neuralangelo, a hybrid NeRF/photogrammetry pipeline: One of the areas NeRFs typically struggle with is surface reconstruction — something photogrammetry excels at. NVIDIA’s new paper Neuralangelo combines aspects of photogrammetry with the advantages of NeRFs, so you get the best of both worlds from RGB video capture (see the toy sketch after this list). More on the paper here. The code for Neuralangelo was just released over the weekend; you can find it here.
  • Generative AI takes over Siggraph: Siggraph’s popular Real-Time Live! show was chock-full of promising new workflows, including AI-based material generation and motion-capture physics, with companies like Roblox investing in generative AI techniques that let creators build interactive objects and scenes without complex modeling or coding. It would be a herculean feat to summarize all of this in one bullet point, but I’ll do you one better: a full list of abstracts for all the Siggraph talks can be found here. If you want the Two Minute Papers recap of these announcements, that can be found here.
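Since “surface reconstruction from a learned field” can sound abstract, here’s the tiny sketch promised above. Neuralangelo’s geometry lives in a neural signed distance field (SDF); below, an analytic sphere stands in for the learned network, and marching cubes pulls an explicit mesh out of the field’s zero level set. Everything here (the grid resolution, the sphere, the scikit-image dependency) is our own illustrative choice, not NVIDIA’s code.

```python
import numpy as np
from skimage import measure  # pip install scikit-image

# Toy stand-in for the core output of an SDF-based method like
# Neuralangelo: a signed distance field whose zero level set is the
# reconstructed surface. Here the "learned" SDF is just an analytic
# sphere; the real method trains a neural network on RGB video.
res = 64
grid = np.linspace(-1.0, 1.0, res)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5  # sphere of radius 0.5

# Marching cubes extracts an explicit triangle mesh from the zero
# level set -- the step that turns a NeRF-like field into
# photogrammetry-style surface geometry.
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")
```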

Hello, Shutterstock.

Shutterstock made a big splash at Siggraph last week with a couple of announcements relevant to our audience here:

  • First, NeRF on Shutterstock: Luma Labs AI and RECON Labs’ 3Dpresso are exploring development and 3D asset publishing on Shutterstock’s TurboSquid platform. Using advanced machine learning and computer vision, 3D creators can now produce high-resolution, photorealistic 3D scenes from 2D video in minutes instead of hours. (🔗)
  • Second, Shutterstock launches text-to-360 video capabilities: Using NVIDIA’s Picasso cloud platform, Shutterstock announced the ability for creators to generate and customize 3D backgrounds. This will help artists enhance and light 3D scenes based on simple text or image prompts, all with AI models built using fully licensed, rights-reserved data. It’s meant to help artists speed up the process of creating environment maps, which will then let them focus more of their time on the hero 3D assets, arguably the most important part of an artist’s 3D scene. More details here.
Credit: NVIDIA 

I could keep adding to this list, but at this point I’ll always feel like I’m only scratching the surface of a seminal year at Siggraph (especially for NVIDIA, if you can’t already tell). In case you missed it, I’ll leave a link to NVIDIA founder and CEO Jensen Huang’s keynote below.

One. more. thing. Just as I was closing out the edits on this week’s newsletter, our friends at Luma announced Flythroughs, a brand-new app that allows anyone with an iPhone to capture cinematic flythroughs of interior (and exterior) spaces without the need for a gimbal or drone. I’m forever impressed by the speed and quality of releases from the team at Luma, and I, for one, can’t wait to give this one a go. You can download Flythroughs on the App Store now.

I’m personally really stoked (and honestly, sometimes a little overwhelmed) by the pace at which we’re advancing with some of these new releases, and I’m sure I’ll have even more to share with you in two weeks’ time. We’re at the cusp of some of our very own announcements as well, so if you’re not already signed up to receive this newsletter via email, there’s no time like the present. Until next week :-)

To the future, as always!
-Nikki