Spatialized audio and lights in a suspended spherical auditorium


The Sonic Sphere KA11 is the 11th kugel auditorium (KA), built and displayed for a six-week exhibit at The Shed in New York City. Designed to seat 200+ in auditorium-style seating around a sound- and light-transparent stage, it combined 128 individually addressable speakers with 64,000 surrounding LEDs to create a sonic experience intended to alter the listener's state of consciousness.
Much of the team from the KA9 and KA10 (presented at LoveBurn in Miami, 2023) returned to help with this ambitious, entirely new build: an increased radius of 20 meters (up from 14 m), more than double the speakers, and significant structural challenges in suspending it.
Maestro Ed Cooke, with lead sound engineer Merijn Royaards and lead structural engineer Nicholas Christie, orchestrated this complex and rapid turnaround, going from ideation to completed build in three months.
The Sonic Sphere shows were organized as multiple preset audio shows, each with one or more matching light shows. The system also supported live performances on the stage, with a DJ and VJ driving the spherical audio and lights, which raised the complexity requirements.
The exhibition drew notable press, with the experience called "The Show of the Summer in NYC" for 2023. From hosting live performances by Igor Levit and Madame Gandhi to the deep, intricate soundscapes created and selected by The XX, Carl Craig, Steve Reich, and Yaeji, my experience of designing the complex data system that shared information between the sound and visual components, while under deep time pressure, was so much fun.


The software system needed to track the spatial locations of the audio voices via the show timeline and cues in Reaper. This data, along with OSC parameter cues and MIDI commands, was received by either TouchDesigner or Chromatik (with Mark Slee's assistance), allowing artists to create visual shows that matched the spatial and sonic content.
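To make the OSC leg of that pipeline concrete, here is a minimal, stdlib-only sketch of encoding a voice's spatial position as an OSC message (null-terminated strings padded to 4 bytes, big-endian float32 arguments, per the OSC 1.0 spec). The `/voice/3/pos` address scheme and the azimuth/elevation/radius arguments are illustrative assumptions, not the show's actual namespace.

```python
import struct

def _pad(b: bytes) -> bytes:
    # OSC strings are null-terminated, then padded to a multiple of 4 bytes
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, *args: float) -> bytes:
    """Encode a minimal OSC message whose arguments are all float32."""
    msg = _pad(address.encode("ascii"))
    # Type tag string: a comma followed by one 'f' per float argument
    msg += _pad(("," + "f" * len(args)).encode("ascii"))
    for value in args:
        msg += struct.pack(">f", value)  # big-endian float32 per the spec
    return msg

# Hypothetical cue: voice 3 at azimuth 1.57 rad, elevation 0.25 rad, radius 10 m
packet = osc_message("/voice/3/pos", 1.57, 0.25, 10.0)
```

A packet like this could then be dropped onto UDP toward whichever visual host is listening; real deployments would typically use a library such as python-osc rather than hand-rolling the encoding.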
The challenge was getting all of the software pieces to integrate in near-real time, resulting in a flawless (and stutter-free) performance.
My task was to create the brain, the control software that could communicate across the entire system.
While other team members worked on the LED hardware and mapping, and others handled the spatialized audio, I created the software abstraction layers in a combination of Java, JavaScript, and Python, using OSC and MIDI as the data layers for cross-platform networked parameter sharing.
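One way such a cross-platform sharing layer can work is as a fan-out relay: a single incoming parameter packet is forwarded unchanged to every registered consumer. The sketch below is an illustrative stdlib-only UDP version, not the show's actual implementation; the consumer endpoints are placeholder values.

```python
import socket

class ParamRelay:
    """Fan one incoming parameter packet (e.g. an OSC datagram) out to
    every registered consumer, such as TouchDesigner and Chromatik hosts.
    Hosts and ports here are illustrative, not the production config."""

    def __init__(self, consumers):
        self.consumers = list(consumers)  # list of (host, port) tuples
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def forward(self, packet: bytes) -> int:
        """Send the raw packet to every consumer; return how many were sent."""
        sent = 0
        for host, port in self.consumers:
            self.sock.sendto(packet, (host, port))
            sent += 1
        return sent

# Hypothetical wiring: one TouchDesigner host and one Chromatik host
relay = ParamRelay([("10.0.0.21", 7000), ("10.0.0.22", 7001)])
```

Keeping the relay a dumb byte-forwarder is one reason a design like this can stay near-real time: no parsing or transformation happens on the hot path, so each endpoint interprets the OSC or MIDI payload itself.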
The software team I led had four weeks to deliver four fully functional, production-ready shows. Starting from nothing and rapidly reaching out to other artists, we brought together over 16 collaborators from around the globe, building the components and stitching together the data layer that integrated such a unique presentation of sound, light, time, and space.
