Designing a Musical Instrument for a 3D Volumetric Display

Hi! MDN and I are continuing to develop our multi-dimensional audiovisual instrument in Volume (known as Tesseract), and I’d like to note some interesting technical tidbits involved in making digital audio in Unity, in particular our polyrhythmic drum machine.


There’s an interesting little background to this method of rhythm generation, the origin of which lies with the ancient math-wizard Euclid (inventor of all shapes, before whom there were no shapes). The idea of distributing events geometrically was refined by E. Bjorklund in the early 2000s as an algorithm for timing pulse patterns in neutron accelerators, and was finally identified by Godfried Toussaint in 2005 as an underlying structure in many classic rhythmic patterns (particularly in African percussion). As for its use as an algorithm, the result is simpler than the inner workings of course, but the general idea is to distribute a number of pulses over a number of possible steps as evenly as possible.
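As a toy illustration (in Python rather than our Unity code), here’s one simple way to spread pulses over steps as evenly as possible; it produces the Euclidean patterns, up to rotation, using a running error bucket rather than Bjorklund’s full recursive algorithm:

```python
def euclidean_rhythm(pulses, steps):
    """Distribute `pulses` onsets over `steps` slots as evenly as possible.

    Each step adds `pulses` to a bucket; whenever the bucket overflows
    past `steps`, that slot gets a hit. The result is a rotation of the
    canonical Euclidean pattern.
    """
    pattern = []
    bucket = 0
    for _ in range(steps):
        bucket += pulses
        if bucket >= steps:
            bucket -= steps
            pattern.append(1)  # pulse lands here
        else:
            pattern.append(0)  # silent step
    return pattern

# 3 pulses over 8 steps: a rotation of the classic tresillo
print(euclidean_rhythm(3, 8))  # → [0, 0, 1, 0, 0, 1, 0, 1]
```

Because the output loops, a rotated pattern is rhythmically the same figure, just with a different downbeat.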

There’s an excellent playable polyrhythm machine here, and Toussaint’s paper on the subject can be found here.

I can’t speak to the quality of these shaders, but the polyrhythm is neat; note how it loops!

This kind of rhythm in a computer needs an accurate, reliable means of time measurement. In the case of our development platform, Unity, the issue is nearly identical to a classic one in game-making: frame rate dependence. The rate at which Unity updates (increments, calculates, draws to screen) depends on a few things and is somewhat irregular, but most importantly it depends on how taxing your game is on the hardware. That means if you script the player to move one meter per frame, a fast computer running your game at 200 FPS will move you about 200 meters in a second, whereas some potato running it at 30 FPS will move you around 30 meters. The solution is to multiply these time-sensitive inputs by the time it took to complete the last frame, Time.deltaTime in Unity. A longer last frame (a larger multiplier) simply means a larger movement in the current frame, and it evens out nicely! Your character might teleport a bit, but will be in the right place at the right time, which is generally more important.
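A quick sketch of why that multiplication works (plain Python with made-up frame times, standing in for Unity’s Time.deltaTime):

```python
def simulate(frame_times, speed=1.0):
    """Move `speed` meters per second regardless of frame rate by
    scaling each frame's movement by that frame's delta time."""
    position = 0.0
    for dt in frame_times:
        position += speed * dt  # meters/second * seconds = meters this frame
    return position

# One simulated second at 200 FPS vs. one second at 30 FPS:
fast = simulate([1 / 200] * 200)  # 200 short frames
slow = simulate([1 / 30] * 30)    # 30 long frames
print(fast, slow)  # both land on ~1.0 meter
```

The fast machine takes many tiny steps and the slow one takes a few big ones, but after one second of wall-clock time both end up in the same place.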

Showing Tesseract 2.0 at a recent Playtest Friday

To wrangle this tangent back into relevance for Tesseract: the irregularity of frame rate would result in beats playing entirely out of sync (which is unthinkably horrible for a musical instrument), so we can’t just tell the game to wait some period before playing each beat. There’s also an additional issue with playing audio repeatedly in a frame rate-dependent manner: we’re using pre-recorded drum samples for Tesseract, and if a clip is told to play again before it finishes, terrible audible clicks occur, due simply to an instant of erroneous audio data at the stop-and-restart. The same thing happens if the sample audio doesn’t start and end with complete silence, an amplitude of zero.
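One common fix for the silence problem, sketched here in Python with an assumed fade length (this isn’t our actual pipeline, just the general technique): ramp the first and last few samples of a clip so it’s guaranteed to begin and end at zero amplitude.

```python
def apply_edge_fades(samples, fade_len=64):
    """Ramp the first and last `fade_len` samples toward zero amplitude,
    so starting or stopping the clip can't produce a discontinuity (click)."""
    out = list(samples)
    n = min(fade_len, len(out))
    for i in range(n):
        gain = i / n
        out[i] *= gain                  # fade in from silence
        out[len(out) - 1 - i] *= gain   # fade out to silence
    return out

# A clip that starts at full amplitude would click; after fading it starts at 0.
clicky = [1.0] * 256
safe = apply_edge_fades(clicky)
print(safe[0], safe[-1], safe[128])  # 0.0 at the edges, untouched in the middle
```

At 44.1 kHz, a 64-sample fade is about 1.5 milliseconds, short enough to be inaudible as a fade but long enough to kill the click.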

The solution for audio timing isn’t quite as simple as using Time.deltaTime, but one possibility seemed to be Unity’s FixedUpdate loop, which claims to run at fixed intervals for physics calculations with rigid bodies. However, it’s a surprisingly little-known fact that FixedUpdate isn’t accurately timed either! It can actually fire quite irregularly; it makes up for lost time dynamically and merely simulates a fixed time step for the internal physics. Using it for almost any other application is apparently misguided.

The more fool-proof and possibly intended method is to use another of Unity’s built-in functions, OnAudioFilterRead (or OAFR; there’s an interesting documentation page about it here). OAFR creates a custom filter, in a sense, which you can fill with any data you want. It deals with sound in chunks of roughly 20 milliseconds, in a buffer of a certain size; a common case is 2048 floats per callback (1024 samples for each of two stereo channels, interleaved). The entire, complex audio output comes down to a series of values between -1 and 1!
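To illustrate the shape of the thing, here’s a Python sketch of a callback filling an interleaved stereo buffer with a sine wave, every value in [-1, 1]. The real OnAudioFilterRead hands your MonoBehaviour a float[] and a channel count; the names, sample rate, and frequency here are just assumptions for the sketch.

```python
import math

SAMPLE_RATE = 44100  # assumed output sample rate
phase = 0.0          # persists across callbacks, like a member field would

def on_audio_filter_read(data, channels, frequency=440.0):
    """Fill an interleaved buffer (shaped like Unity's OnAudioFilterRead data[])
    with a sine wave; every written value stays within [-1, 1]."""
    global phase
    step = 2 * math.pi * frequency / SAMPLE_RATE
    for i in range(0, len(data), channels):
        value = math.sin(phase)
        phase += step
        for ch in range(channels):
            data[i + ch] = value  # same signal copied to every channel
    return data

buffer = [0.0] * 2048  # 1024 stereo frames
on_audio_filter_read(buffer, channels=2)
```

The key point is that the buffer is just numbers: a 440 Hz tone, a drum sample, or silence are all the same kind of data, which is what makes OAFR so flexible.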

It took quite some time to figure out how OAFR works; there aren’t many people working with generative audio in Unity, and it’s frankly not the best platform for it. Here’s where Keijiro Takahashi, patron saint of Unity developers, comes to the rescue with his excellent method of using OAFR both for playing back samples accurately and for keeping time to sequence them.

This method first saves an audio clip as an array of values, tens of thousands for each second of audio in fact. I believe the nature and quantity of those values depends on the file’s sample rate, but I’m not sure what determines it (it uses AudioClip.GetData if you’re interested). Furthermore, Keijiro-san’s setup increments time from within OAFR, so timing is driven by the audio processing itself and the hardware’s sample rate (44.1 kHz), which gives perfectly timed beats and sequencing!
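A minimal Python sketch of that idea (the structure, not Keijiro’s actual code; the BPM and buffer size are assumed): instead of waiting on frame time, count samples inside the audio callback and trigger a beat every time the counter crosses a beat boundary.

```python
SAMPLE_RATE = 44100
BPM = 120
samples_per_beat = SAMPLE_RATE * 60 // BPM  # 22050 samples per beat at 120 BPM

sample_counter = 0
triggered_at = []  # sample positions where a beat fired

def audio_callback(buffer_size):
    """Advance time by counting samples inside the audio callback, so beat
    timing is sample-accurate regardless of frame rate."""
    global sample_counter
    for _ in range(buffer_size):
        if sample_counter % samples_per_beat == 0:
            triggered_at.append(sample_counter)  # start reading a drum sample here
        sample_counter += 1

for _ in range(100):      # process 100 buffers of 2048 samples (~4.6 seconds)
    audio_callback(2048)

print(triggered_at[:3])  # → [0, 22050, 44100]
```

Because the counter only moves when audio is actually processed, a dropped frame can’t drift the beat; at worst a beat lands a buffer late, never out of time.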

If you ever want to dig into audio synthesis in Unity, the best resource without any trace of a doubt lies right here! Also worth noting: it’s a five-year-old project that opens without any issue in Unity 5.5. REMARKABLE!

Thanks for reading! Expect another post from me about our FM synthesis soon.
