Happy Hologram, Mr. President 🎶
Design Challenge: To create an app experience that uses speech recognition so users can converse with a historic public figure. The content is based on archival footage. The interactions should uphold the integrity of the figure at hand, and the experience should strive for historical accuracy.
Holo There! Angela here, a Creative Technologist with a passion for interacting with Holograms. At Looking Glass, we’re building a mixed reality world where reality replaces headsets. It’s a holo-dream come true!
As we continue to experiment with different holographic realities at the Looking Glass lab, I could not think of anything more real than history and its people. My hologram hankering goes back to the age-old question: if you could talk to anyone from history, who would it be? H.G. Wells? Amelia Earhart? Your great-great-aunt whom you never met but have always been told you are exactly like? With the HoloPlayer One, that historical impossibility can now come to life.
“SAY WHAT? GET THE HOLO OUTTA HERE!”
Imagine a world where you could learn history just by interacting with it. Instead of reading about the advent of space travel or pop culture, you could talk directly to JFK about his plans for going to the moon or ask Ethel Merman to sing a song from her repertoire. A Holo-ed History takes old footage of these iconic (and comic) figures, and brings them to life through the magic of video editing and speech recognition.
A Holo-ed History with JFK is the first edition of this concept of jumping into history to experience it firsthand in 3-dimensional lenticular space. This interactive experience gives anyone the opportunity to talk with President Kennedy in real time, hear his ideas, and get responses in his own words.
What I am saying is, the future is the past. So step back a few weeks in time with me to explore how I brought President John F. Kennedy (back) to life in the HoloPlayer One.
The impossible is fun to play with; exploring the vast terrain of “what ifs” and transforming this into a land of “what’s happening IRL.” I spent a lot of time wondering, “What if I could have a conversation with Lincoln?” I can’t say for certain why my mind kept circling back to the 16th president, but I do have some guesses. Perhaps it’s because he was one of the most photographically documented icons of our time. Maybe it had something to do with his exquisite cheekbones. Or perhaps it’s simply because of how influenced I am by the incredible audio-animatronic experience at Disneyland’s Great Moments with Mr. Lincoln. Regardless, I was set on resurrecting a president, and I had set my sights on Lincoln.
It was in choosing Lincoln that I hit my first challenge. A major aspect of this project was to create an experience based on sourced footage found in the public domain, and unfortunately there are very few moving images of good ol’ Abe. I needed an iconic figure with ample moving content available, and that is how A Holo-ed History with JFK was born.
Step 1: Gathering content & assets
First, I began gathering the necessary content that would form the basis of the app. It was in the John F. Kennedy Presidential Library and Museum that I found the perfect footage: Kennedy’s iconic 1961 Inaugural Address.
After parsing through the footage, I found that the 16-minute speech provided enough content for at least 16 different possible conversation triggers.
The interaction would begin with JFK in a base stance, moving subtly, with the background removed. The user would ask JFK a question, and JFK would respond in turn. Each response would be a video clip triggered by preprogrammed words detected through speech recognition.
First, I cut the base clip from the address footage of him in a minimal, yet subtly moving stance. I also cut the clips for a few possible questions/interactions. I brought these clips into Adobe After Effects to mask out the background so only JFK would be visible.
One early challenge I encountered was with the video export, which I discovered when trying to bring the alpha/masked clips into Unity. With the export settings [Video Codec → Animation, Video Output → Channels → RGB + Alpha], I was able to export a perfect file of the masked footage with its alpha channel. However, this file would not work in Unity. I then had to export the video with [Video Codec → H.264, Video Output → Channels → RGB]. In this case the background was black rather than transparent, but the file imported nicely into Unity. (This comes into play again later.)
The next content piece was creating a 3D podium asset. Sticking with the sourced footage, I brought a screenshot of the inauguration into Photoshop, masked out the background, and extruded the podium into a 3D object. This worked nicely.
Step 2: Hopping into Unity
Unity 5.6 offers a pretty straightforward way to work with video through the Video Player component. Create a plane or object, add the Video Player component, and you’re off to a good start.
I started off prototyping the interactions with a keypress to see how the transitions worked. After some user testing I decided to add a backdrop curtain to give the space more depth, so I jumped back into asset building for a beat to create this new 3D object.
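Here’s a minimal sketch of that keypress prototype, assuming one plane with a Video Player component and clips assigned in the Inspector (the class and field names here are placeholders, not the actual project code):

```csharp
using UnityEngine;
using UnityEngine.Video;

[RequireComponent(typeof(VideoPlayer))]
public class JfkPrototype : MonoBehaviour
{
    public VideoClip baseClip;        // subtle idle stance, loops forever
    public VideoClip[] responseClips; // responses cut from the address footage

    VideoPlayer player;

    void Start()
    {
        player = GetComponent<VideoPlayer>();
        player.isLooping = true;
        player.clip = baseClip;
        player.Play();
    }

    void Update()
    {
        // Number keys 1..4 stand in for the speech triggers added later.
        for (int i = 0; i < responseClips.Length; i++)
        {
            if (Input.GetKeyDown(KeyCode.Alpha1 + i))
                PlayResponse(i);
        }
    }

    void PlayResponse(int index)
    {
        player.isLooping = false;
        player.clip = responseClips[index];
        player.loopPointReached += BackToBase; // return to idle when the clip ends
        player.Play();
    }

    void BackToBase(VideoPlayer vp)
    {
        vp.loopPointReached -= BackToBase;
        vp.isLooping = true;
        vp.clip = baseClip;
        vp.Play();
    }
}
```

Swapping the clip back to the idle loop on `loopPointReached` is what makes the transitions feel continuous rather than like separate videos.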
Step 3: Windows Speech Library
Another amazing upgrade in the latest versions of Unity is its seamless integration with the Windows Speech Library, readily available in Windows 10.
Outside of the Unity documentation, this is a great tutorial for getting started with Windows Speech. The most important step before building speech recognition on a Windows machine is to make sure the speech services are turned on: go to Settings → Speech, inking, & typing and enable them. Without this, you may slowly go crazy wondering why your speech isn’t being heard even though the code is perfect, the mic is on, and you’ve been repeating the same word for three days to no avail. You’re welcome!
For the first prototype of the speech recognition, I used the keyword recognizer. It worked OK. However, after a few rounds of play testing, it was obvious that the interactions were not smooth and conversational: the program only listened for exact keyword phrases, which was very limiting.
Given this, I decided to try the dictation recognizer instead. It lets the program listen to everything being said until a trigger word is heard, so I could have it listen for any of a set of specific words, each triggering a specific response. This created smoother, more natural interactions. It just goes to show how important it is to have someone just sit and listen sometimes, even if it is a machine!
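A sketch of that dictation approach, assuming Unity’s `UnityEngine.Windows.Speech` namespace on Windows 10; the trigger words and the commented-out `PlayResponse` hook are illustrative placeholders, not the project’s actual mappings:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class JfkDictation : MonoBehaviour
{
    DictationRecognizer recognizer;

    // Trigger word → response clip index (illustrative values).
    readonly Dictionary<string, int> triggers = new Dictionary<string, int>
    {
        { "president", 0 },
        { "power",     1 },
        { "forget",    2 },
        { "message",   3 },
    };

    void Start()
    {
        recognizer = new DictationRecognizer();
        // Fires with each chunk of recognized speech.
        recognizer.DictationResult += OnDictationResult;
        recognizer.Start();
    }

    void OnDictationResult(string text, ConfidenceLevel confidence)
    {
        // Scan the free-form dictation for any trigger word.
        foreach (var pair in triggers)
        {
            if (text.ToLower().Contains(pair.Key))
            {
                Debug.Log("Trigger heard: " + pair.Key);
                // PlayResponse(pair.Value); // play the matching JFK clip
                break;
            }
        }
    }

    void OnDestroy()
    {
        if (recognizer != null) recognizer.Dispose();
    }
}
```

Because the recognizer hands you the whole dictated phrase, users can ask a question any way they like, as long as the trigger word appears somewhere in it.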
The program is set to listen for key words that will trigger a response, so any number of possible questions can get him talking to you. He is currently programmed with four possible answers.
Conversation starters with anyone can be a drag. These are just some samples to get you going, if you don’t know how to start:
- Who was the former President/Vice President?
- What is the greatest power man has?
- What is something we should not forget / where do the rights of man come from?
- What message would you like to give?
Step 4: Masking out the background
Everything was finally coming together! However, I hit another visual snafu. Remember the video exporting issue? Since the JFK clips could not be imported with a transparent background, they appeared in front of the new, vibrant red curtain as a solid black video rectangle surrounding JFK, breaking the reality of the environment. I needed a way to truly mask out the background so only JFK would be visible in front of the curtain. After some research, I found this wonderful tutorial that provides an almost perfect solution.
Short story long: you export the video twice, once as RGB and once as its alpha matte, stack the two together in a single clip, import that clip into Unity, and then apply a custom shader that recombines them. The script in the tutorial did not work exactly as written, but we played around with it to get the desired outcome.
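The recombining shader might look something like this sketch. It assumes a stacked layout with the color frame on top and the matte below, which is an assumption; the tutorial’s actual shader and layout may differ:

```shaderlab
// Hypothetical "stacked alpha" shader: one video texture holds
// the RGB frame in its top half and the alpha matte in its bottom half.
Shader "Custom/StackedAlphaVideo"
{
    Properties
    {
        _MainTex ("Stacked Video (RGB over matte)", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha

        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;

            fixed4 frag (v2f_img i) : SV_Target
            {
                // Top half of the texture holds the color...
                float2 colorUV = float2(i.uv.x, 0.5 + i.uv.y * 0.5);
                // ...bottom half holds the grayscale matte.
                float2 matteUV = float2(i.uv.x, i.uv.y * 0.5);

                fixed4 col = tex2D(_MainTex, colorUV);
                col.a = tex2D(_MainTex, matteUV).r; // matte brightness → alpha
                return col;
            }
            ENDCG
        }
    }
}
```

The key idea is that H.264 carries no alpha channel, so the transparency travels inside the frame as a second grayscale image and the shader reunites the two at render time.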
Step 5: Animating the opening
To ground the experience in a historically relevant context, I wanted to create an opening scene to set the tone. The user is first greeted with Kennedy’s gravesite, the Eternal Flame. I built a 3D model of the flame in Vectorworks and the flame itself was a particle system built in Unity.
Animated titles float on screen then fly off. The user then blows out the flame, signifying the rebirth of JFK. As the eternal flame fades out, JFK fades in, inviting the user to begin the experience. The JFK circular logo was inspired by marketing materials from his actual campaign. The opening also incorporates the age-old classic, Hail to the Chief along with this incredible US Declaration font.
I do have one note on animating text in Unity. I faced an issue where the app build did not export properly: the animation and the event that triggers the scene change did not carry over into the build, and the program stayed stuck in the opening, even though everything had worked when testing the interaction in the Unity editor. Changing the animator’s culling mode from “Cull Completely” to “Always Animate” fixed the issue in the build.
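If you’d rather apply that fix from code than in the Inspector, something along these lines should work (the script name is a placeholder):

```csharp
using UnityEngine;

public class TitleAnimationFix : MonoBehaviour
{
    void Awake()
    {
        var animator = GetComponent<Animator>();
        // "Cull Completely" stops the state machine whenever the
        // renderers are culled, which can strand a build in the
        // opening scene; "Always Animate" keeps it evaluating.
        animator.cullingMode = AnimatorCullingMode.AlwaysAnimate;
    }
}
```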
It’s been really great spending time with President Kennedy. You might be thinking: why a hologram? Why not just a regular ol’ screen or smartphone? The answer is simple: there is something very special about sharing the same 3-dimensional space with someone, whether in reality or mixed reality, and especially with someone this kind of one-on-one encounter would otherwise be impossible to have. For a few moments, you get to share a very special and rare experience in time and space with someone who had an impact on the present we live in.
Cent’ anni, Mr. President.
Ask not what the HoloPlayer can do for you; ask what you can do for the HoloPlayer!
If you’d like to jump into the hologram revolution and be one of the first innovators to prototype experiences on the HoloPlayer One, you can get your hands on one here!