
Designing a Collaborative Art Creation Framework for VR and the Looking Glass

Hello! I'm Maria, a game developer and designer, and co-founder of a small studio called Synesthetic Echo. My co-founder Gordey and I develop for virtual, augmented and mixed reality. We also research and create experiences for emerging mediums and technologies.

This article describes the process of designing a VR-enabled app for the Looking Glass volumetric display and the development lessons we've learned along the way.
The Looking Glass is a patent-pending combination of lightfield and volumetric display technologies within a single three-dimensional display system. 45 unique simultaneous views of a virtual scene are captured on a computer at 60 frames per second.

The Looking Glass always seemed incredibly interesting to us as user experience (UX) researchers, developers and designers. We've been following Looking Glass Factory for a while, since they first launched their volumetric display prototype (Volume) back in 2015. This year, the Looking Glass finally became a small, versatile and truly magical fish tank with a handy Unity SDK. At Synesthetic Echo we decided to develop something daring, deeply challenging and also fun. So we made Finist Painter!
The Idea
We had been working with Looking Glass Factory for a while before we decided to create Finist Painter for the Looking Glass. We attended their first Holo-Hackathon (and won the award for best concept!) in Brooklyn back in September 2017 for their previous HoloPlayer One lightfield aerial display product. Since then, we’ve also created responsive digital avatars, music installations and touch-responsive poisonous jellyfishes, some of which premiered when they launched their Kickstarter campaign this past summer.
Everything Looking Glass Factory has developed since then has been extremely interesting technology to work with, mostly because it enables a new mode of interaction for users that is both intimate and engaging. However, we soon realized that it wasn't enough for us to just create characters and games for the Looking Glass. We wanted to create something that was never before possible.
By way of introduction, VR art has slowly and steadily become part of many enterprise workflows in the 3D creation space: architects use HMDs to sketch buildings spatially, real-time VJs create background graphics for concerts, and 3D modelers make initial sketches in VR, which allows them to iterate faster. VR art communities are growing worldwide. Artists are being invited to operas, museums and concerts to paint live. What's been missing is a tool or medium that lets people see VR art the way it's supposed to be viewed: in 3D! So far, the only way to experience a spatial VR painting is either by wearing the headset or looking at a 2D representation of the piece on a monitor.


We started to play around with the idea of using the Looking Glass as a device to preview and potentially interact with VR painting and sculpting software. First, we imagined potential use cases, ranging from art direction meetings where a group of people could preview and edit a VR project or design brief in real time without wearing a headset, to showcasing public VR art at exhibitions and conferences without making a large group of people line up and wait their turn for the headset.
The first thing we did was investigate how existing app ecosystems could be connected to a Looking Glass. Because the Looking Glass has a very versatile Unity SDK, it could potentially work with existing art software in the following ways:
- Networked interaction — works even on a mediocre GPU, but won't be real-time and involves a large amount of data being transferred (see the sketch after this list).
- Texture sharing — easier to program, but requires two computers anyway.
- All-in-one app — requires only one computer, but involves a non-trivial setup and is heavy on performance.
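To make the networked option above a little more concrete, here is a minimal, hypothetical sketch of what streaming finished strokes from a VR painting app to a separate Looking Glass viewer could look like in Unity. The class and message names (StrokeSender, StrokeMessage) and the port number are our own illustration, not part of any existing SDK.

```csharp
// Hypothetical sketch of the "networked interaction" option: the VR painting
// app streams finished strokes to a separate Looking Glass viewer over UDP.
// StrokeMessage / StrokeSender are illustrative names, not an existing API.
using System;
using System.Net.Sockets;
using UnityEngine;

[Serializable]
public class StrokeMessage
{
    public int brushId;
    public Color color;
    public Vector3[] points;   // sampled positions of the brush tip
}

public class StrokeSender : IDisposable
{
    private readonly UdpClient client;

    public StrokeSender(string viewerHost, int port = 9050)
    {
        client = new UdpClient();
        client.Connect(viewerHost, port);
    }

    // Serialize the stroke to JSON and push it to the viewer machine.
    // Real use would need chunking or compression: a single dense stroke
    // can easily contain thousands of points.
    public void Send(StrokeMessage stroke)
    {
        byte[] payload = System.Text.Encoding.UTF8.GetBytes(JsonUtility.ToJson(stroke));
        client.Send(payload, payload.Length);
    }

    public void Dispose() => client.Close();
}
```

Even in this stripped-down form, the trade-off from the bullet above is visible: the data volume grows quickly with stroke density, and the viewer is always at least one network hop behind the painter.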
Some painting apps could theoretically be compatible with the Looking Glass HoloPlay SDK for Unity, because all we would need is to render the VR painting again on the second viewing device (the Looking Glass). After doing some quick research, we concluded that this was not currently possible: none of the major VR painting apps on the market are open-source, and even if they did decide to include Looking Glass support, that would mean taking real-time rendering power away from the app itself and spending it on creating 45 more views (as that's how the Looking Glass renders 3D).
Earlier in September, we attended a special speaker session about the Quill and Medium apps at the Oculus Connect 5 conference, and their creators confirmed that their number one goal is maximum performance. They clearly see their apps as professional tools and were not willing to jeopardize their performance in order to support interfaces other than the Oculus device. At that moment, we realized that we had to create this app ourselves.
Gordey, the technical artist and programmer for the project, was always a huge fan of creative applications in VR. After trying Quill, Tilt Brush and Gravity Sketch, he concluded that none of them provided enough tools to be performative and free, and so he started to work on his own painting framework, which he called Dreamcatcher Painter. We later renamed this framework to Finist [fee-nee-st] Painter.


Finist Painter allows the artist to use their whole body to paint. The large number of gestures in the program makes the painting process itself a performative experience. Finist Painter employs an ultra-fluid UI that brings the artist into the flow state immediately. Since we had full control over Finist Painter's source code and design, we decided to try integrating Looking Glass support into it.

Prototype considerations
From the very first ideation session we decided to follow some conceptual guidelines:
- Since the Looking Glass is a very new and experimental medium, be cautious and careful about designing all the interactions. The key is to make interactions intuitive and easy to grasp.
- Include a collaborative element for the Looking Glass user.
- Carefully balance the app capabilities.
In the early stages, we focused on conducting useful user research, running a ton of experiments, and sharing the results. We were not setting out to create a professional-grade app, but rather a cool and engaging toy (or experience) that has the potential to help people create art together. Of course, if artists like it, and both VR headsets and interfaces like the Looking Glass become the norm at a visual creator's workbench in the future, who knows, perhaps Finist Painter will one day become part of that workflow too!
After making and testing the first prototype, it became obvious that we had far underestimated the challenges and problems we would eventually run into. In the rest of this article, I'll describe most of the development and design challenges we faced and our general approach to overcoming them. As most app developers can attest, we've found only temporary solutions for some of these challenges. Finist Painter is evolving constantly, and if you're working with emerging tech, the work just never ends!
Challenge #1 — Optimizing for VR frame rate
One of the most brutal challenges we faced was keeping the frame rate high. Since everything in Finist Painter: Looking Glass Edition happens inside one Unity scene, we had to find a way to render 2 high-resolution views for the VR headset and 45 views of the same scene for the Looking Glass display. On top of that, we also had real-time mesh creation and some vertex shader computation. If the frame rate in VR is lower than 90 FPS, users might feel motion sick and uncomfortable. As an added challenge, frame rate drops lead to Leap Motion lag, so hand detection stops being fluid and slows down. In short, while some people can handle 60–80 FPS, anything below 60 just ruins the experience.


How we solved it: we disabled the shadows and left only two lights (the rest we are now emulating through the vertex shader). Design-wise, we got rid of post-processing shaders and limited the number of meshes that the two users can create. Once the limit is hit, the oldest meshes disappear.


Considering that the app is meant more for experience and play than for asset creation, we decided this sacrifice could be made without much consequence.
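For readers curious what such a mesh cap can look like in Unity, here is a minimal sketch; the names (StrokeBudget, Register) are illustrative and this is not the exact code running in Finist Painter.

```csharp
// Minimal sketch of a mesh cap: strokes are queued as they are created, and
// once the configurable limit is exceeded the oldest stroke objects are destroyed.
using System.Collections.Generic;
using UnityEngine;

public class StrokeBudget : MonoBehaviour
{
    [SerializeField] private int maxStrokes = 200;   // hard-coded here; could later be derived from GPU power

    private readonly Queue<GameObject> liveStrokes = new Queue<GameObject>();

    // Call this whenever a user finishes drawing a stroke mesh.
    public void Register(GameObject strokeObject)
    {
        liveStrokes.Enqueue(strokeObject);
        while (liveStrokes.Count > maxStrokes)
        {
            Destroy(liveStrokes.Dequeue());   // the oldest meshes disappear first
        }
    }
}
```

A simple queue keeps removal predictable: the strokes that vanish are always the oldest ones.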
Moving ahead, we plan to set that limit dynamically, based on available GPU power. Speaking of GPU, we can’t forget about…
Challenge #2 — Setup
The minimum spec for running Finist Painter: Looking Glass Edition is an Nvidia GTX 1060 or better; a 1080 is optimal. We tried our hand at working with Unity's new Lightweight Render Pipeline, which was supposed to help with situations like this by giving access to render order operations. However, the current level of documentation for the Lightweight Render Pipeline and its beta status are slightly discouraging.
On top of having a beefy GPU, you'll need a handful of adapters to make the app work. For our Razer laptops, we use a USB-C adapter with an extra HDMI port, which we connect to the Looking Glass. For a PC, you can use a DisplayPort output to connect the Looking Glass and the HDMI output to connect the Oculus Rift. The setup is individual for every computer, but generally you have to make sure your computer is capable of outputting to at least three displays: a computer screen, a VR headset and a Looking Glass. Remember that the large Looking Glasses can only be used with 4K-capable cables, and sometimes the Oculus Rift is also finicky cable-wise (you can never go wrong with 4K-enabled HDMI cables). We watch every optimization opportunity closely and will update this post as we learn more!
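One small but easy-to-miss detail on the Unity side of a multi-monitor setup: standalone builds render only to the primary monitor until additional displays are activated explicitly (the Rift is driven by the XR runtime, but the Looking Glass appears to the operating system as an ordinary extra display). A general-purpose sketch, not taken from the HoloPlay SDK:

```csharp
// General Unity multi-display bootstrap: standalone players render only to
// the primary monitor until extra connected displays are explicitly activated.
using UnityEngine;

public class ActivateAllDisplays : MonoBehaviour
{
    void Start()
    {
        // Display.displays[0] is the primary display and is always active.
        for (int i = 1; i < Display.displays.Length; i++)
        {
            Display.displays[i].Activate();
        }
        Debug.Log("Connected displays: " + Display.displays.Length);
    }
}
```

Logging Display.displays.Length at startup is also a quick way to check that the adapter chain is actually delivering the extra output before a demo starts.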

Challenge #3 — Too many variables to explain to the testers
Since we are using four fairly new interfaces to power the Finist Painter experience (a VR headset, a pair of tracked hand controllers, a Looking Glass and a Leap Motion Controller), it became very difficult to set up a good, optimal user experience. There are just too many variables to explain! We accepted early on that the "what you see is what you get" approach is not super useful here, and a fair share of users actually wanted to know what the devices were and how they worked even before trying them for the first time. We were careful to always explain that it's not possible to play "wrong" or "break something" and that it's OK to explore on your own, but if a person still wants to know the specifics of how it all works, we are always happy to explain.
Challenge #4 — Imbalances and the asynchronicity of user experience
There were fundamentally two very different "modes" of interaction: the granular control a VR user has versus that of a Looking Glass user. A VR user has their body, the whole virtual world, and two controllers with multiple buttons, whereas a Looking Glass user has only a handful of gestures with which to collaborate on the same creation.

In an attempt to make the experience and the collaboration between the two users more balanced, we made complementary brushes (one user has a branch brush, the other a leaf brush) and introduced different types of agency for the two users.
The Looking Glass user is The Reviewer, which means their control of the shared painting is limited to panning, zooming and rotating it. The Reviewer is also able to create their own art, add some details to the piece and highlight certain parts with hand movements. The VR user is The Creator and sees the Looking Glass user as a pair of giant hands. This subtle re-balancing of roles roughly equalizes the two users' levels of agency. It also gives both users the option not to collaborate at all and simply paint their own thing if they want (some people prefer to do that). Additionally, each user can only erase the work they've created and can never affect the work of the other person. When playtesting, we paid close attention to what people said and did after they switched roles, and we made sure that both roles were equally fun and satisfying.
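The "erase only your own work" rule maps to a very small amount of code. Here is a hypothetical sketch of how it could be enforced in Unity; the component and role names are ours, for illustration only.

```csharp
// Hypothetical sketch of the ownership rule: every stroke remembers which
// user created it, and the eraser refuses to touch the other person's strokes.
using UnityEngine;

public enum PainterRole { Creator, Reviewer }   // VR user vs. Looking Glass user

public class StrokeOwnership : MonoBehaviour
{
    public PainterRole owner;
}

public static class Eraser
{
    // Destroys the stroke and returns true only if the requesting user owns it.
    public static bool TryErase(GameObject stroke, PainterRole requester)
    {
        var ownership = stroke.GetComponent<StrokeOwnership>();
        if (ownership == null || ownership.owner != requester)
        {
            return false;   // never affect the work of the other person
        }
        Object.Destroy(stroke);
        return true;
    }
}
```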
Challenge #5 — When Leap Motion behaves very badly
Using the Leap Motion as the main interaction input for the Looking Glass seemed like a good idea from the beginning: Leap Motion has more or less established interaction design practices, plenty of applications, and it generally works well with the public. However, Leap Motion's latest SDK was created primarily for use on VR headsets. Non-VR usage is limited and unpredictable.

Gesture design is very limited on the Leap Motion. It's usually hard to get robust, reliable tracking with the same result over and over again. Tracking varies from user to user and is rarely as accurate as we want it to be. Once we started to use more than three gestures, combine them and define our own gestures, the Leap Motion would start glitching out.
During playtesting, we discovered that the computer vision library in the SDK we were using didn't work well with female hands or with accessories like rings and bracelets. Sleeves on women's clothing also presented a problem for the Leap Motion. One theory is that Leap Motion gesture recognition might have been trained predominantly on male hands, which was very upsetting, and I hope this will change moving forward.
To combat this issue, we eventually reduced the number of gestures and made sure that everything that was left worked well with female hands.
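One generic way to make a small gesture set feel more dependable (this is not a description of Finist Painter's internals) is to debounce the raw detector over several frames, so a single-frame tracking glitch never triggers or cancels a gesture. A sketch, with DetectPinch() standing in for whatever hand-tracking query is actually used:

```csharp
// Debounced gesture: only report a gesture as active after the raw detector
// has seen it for a minimum number of consecutive frames.
using UnityEngine;

public class DebouncedGesture : MonoBehaviour
{
    [SerializeField] private int requiredFrames = 8;   // tune per gesture

    private int consecutiveFrames;

    public bool IsActive { get; private set; }

    void Update()
    {
        consecutiveFrames = DetectPinch() ? consecutiveFrames + 1 : 0;
        IsActive = consecutiveFrames >= requiredFrames;
    }

    // Placeholder for the actual hand-tracking query (hypothetical).
    private bool DetectPinch()
    {
        return false;
    }
}
```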
Challenge #6 — Public playtesting weirdness
We use an iterative playtesting approach for almost all our projects, which means that playtesting feedback guides our changes, fixes and priorities. We playtested Finist Painter more than 15 times, at very different events: a VR artists meetup, two XR expos in New York, multiple community Playtest Thursdays at NYU Game Center, and more. We tested with friends, colleagues and kids, and almost every time it was an absolute nightmare.

One of the most important playtest rules is that the creator should stay completely silent and shouldn't try to explain the design unless someone asks. With our four devices to explain through design and tutorials, maintaining this rule was very hard. We tried not to prompt anything and witnessed all the struggles people had during testing. That helped us iterate on the UX and make a better tutorial. Surprisingly, the tutorial was the hardest part of the design.

We discovered that people really really hate tutorials. We tried several types of tutorials before we finally decided to make the current version of the tutorial fully gamified and not obligatory. Unfortunately, unlike other aspects of the interaction design, it’s very hard to make a draft version of a tutorial. The tutorial should be as complete and polished as possible to avoid any possibility of misunderstanding.
As a result, we spent a lot of time designing and iterating multiple versions of the tutorial that failed spectacularly. So here are three hard lessons we learned about how to do tutorials for emerging tech:
- Make the tutorial as interesting as, or even more interesting than, the experience itself.
- Make it possible to skip.
- Try to make multiple versions and make a separate playtest for them — tutorials actually require more rigorous testing than anything else!



Call for feedback and beta-testers
Overcoming all those challenges wasn't easy, and we never could have done it without the Looking Glass Factory team! We worked closely with them on user testing, SDK integration and user experience design. Their trust and support made Finist Painter: Looking Glass Edition a reality.
As we continue development, we hope to soon deliver a demo that works with the brand new Looking Glass App Library. We often demo the experience at events and festivals and are always happy to accept honest feedback.

Currently, Finist Painter: Looking Glass Edition works with the Oculus Rift and the Looking Glass together, as well as on the Looking Glass independently. The Oculus-only Finist Painter app will come to the Oculus Store in 2019. If you have both a Looking Glass and an Oculus Rift and want to be a beta-tester, please get in touch!
Maria Mishurenko and Gordey Chernyy are the founders of Synesthetic Echo, a small studio that focuses on researching and creating new experiences for emerging mediums and technologies. You can get in touch with them here.
If you want to explore ways in which you can start experimenting with new interfaces and experiences like the Looking Glass, find out more here.