Fast (and Local!) Holograms with ComfyUI

A sneak peek of our advancing hologram-generating ComfyUI workflows

Hello, world! My name is Arturo, and I’m the Community Manager here at Looking Glass.

My face when I finally dialed in the workflow in this post.

I’m in the midst of setting up a mini production studio in our Brooklyn lab to make it easier for us to create and share tutorials, behind-the-scenes content, and more.

As part of that process, I wanted to test out an image-to-hologram ComfyUI workflow that Alec (from our Graphics team), Adrian (from our Software team), and I worked on together last year. This workflow takes an input image and converts it into a hologram for the Looking Glass in a matter of seconds, all on-device.

To stress test the laptop in the studio room, I downloaded 50+ images from Midjourney’s Top of the Month generations and got to work installing ComfyUI.

Getting Comfy with ComfyUI

ComfyUI is a powerful, open-source, node-based interface for creating and managing workflows with generative AI models. Ooh, that’s a mouthful.

The short version: ComfyUI lets you chain together different AI models, and it all runs on-device. This means you can rely on your own hardware instead of on internet speeds or the cloud.

These modular workflows in ComfyUI allow for much more than depth generation, though. Next week we’ll be covering how to get started modifying our existing ComfyUI-to-Looking Glass workflows, so be sure to subscribe to this space so you don’t miss a thing!

To prepare for the future deep-dives into this workflow, check out the Getting Started with ComfyUI documentation available here.

The Workflow

Blink and you'll miss it!

Our ComfyUI workflow takes any input image and generates a high-quality depth map from that image.

It then stitches the two together into an RGB-D image (Red, Green, Blue, Depth) that can be played back dimensionally on a Looking Glass.
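The stitching step itself is simple to picture: the color image and its depth map are packed into a single frame. As a rough sketch (not our actual ComfyUI node), here's what that combination looks like with NumPy, assuming the common side-by-side layout with RGB on the left and depth on the right; `stitch_rgbd` is a hypothetical helper name:

```python
import numpy as np

def stitch_rgbd(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Pack an RGB image and its depth map side by side (RGB left, depth right).

    rgb:   (H, W, 3) uint8 color image
    depth: (H, W) uint8 grayscale depth map at the same resolution
    """
    if rgb.shape[:2] != depth.shape[:2]:
        raise ValueError("RGB and depth must share the same resolution")
    # Replicate the single depth channel across 3 channels so both halves
    # have the same shape, then place the halves next to each other.
    depth_rgb = np.repeat(depth[:, :, None], 3, axis=2)
    return np.concatenate([rgb, depth_rgb], axis=1)  # shape (H, 2W, 3)
```

The resulting double-wide frame carries everything a viewer needs: sample the left half for color and the right half for depth at the same pixel coordinates.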

The Output

An RGB-D image consists of the original image (RGB) and its depth component (D)

Our software tools, like Looking Glass Studio and Bridge, allow for the loading and hologram playback of RGB-D images on Looking Glass displays. This video from our YouTube channel goes a little deeper into playing back RGB-D content on Looking Glass.

The laptop in the studio boasts an NVIDIA® GeForce RTX™ 4090, and it took a little over a minute to process all 75 images in the folder I loaded, generating a depth map AND an RGB-D image for each. A little over 1 second per image is insanely fast and has unlocked a faster, smoother process for us to generate and film holograms at the lab. Just imagine the possibilities when stringing together various generative models and techniques!

The Holograms


The Verdict

The freedom afforded by local depth-generation models and modular tools like ComfyUI continues to expand what’s possible with high-fidelity holograms.

Taking an existing image and converting it into a hologram on your own is now easier than ever.

Our own platform, Blocks, lets you do this in the cloud and generate holograms in a format that’s widely shareable across platforms and devices.

You can try it yourself by interacting with this hologram, or by uploading your own image: https://blocks.glass/arturojreal/178072