Enhancing Immersive Theater with AI: A Collaborative Journey by UCLA’s OARC and REMAP
Transforming Theater with Technology: The UCLA Production of Xanadu
This post was co-written with Andrew Browning, Anthony Doolan, Jerome Ronquillo, Jeff Burke, Chiheb Boussema, and Naisha Agarwal from UCLA.
Introduction
The University of California, Los Angeles (UCLA) stands out as a beacon of academic excellence, home to 16 Nobel Laureates and named the #1 public university in the United States for eight consecutive years. The Office of Advanced Research Computing (OARC) has emerged as a crucial partner in advancing UCLA's research endeavors, equipping researchers with the technological resources needed to turn innovative ideas into reality. This collaboration took center stage in the recent immersive production of the musical Xanadu, which integrated artificial intelligence (AI) into a traditional theater experience.
The Vision for Xanadu
The UCLA Center for Research and Engineering in Media and Performance (REMAP) sought OARC’s expertise to create AI microservices that would enhance the audience experience during Xanadu. This production, in collaboration with the UCLA Department of Theater’s Ray Bolger Musical Theater program, aimed to engage audiences in a novel way. Spectators weren’t merely passive viewers; they became co-creators of the media displayed throughout the performance.
Imagine a stage where audience members, using only their mobile phones, could sketch images that would then be transformed into vibrant visual elements on massive LED screens—referred to as "shrines." This immersive setup was made possible through the expertise of 4Wall Entertainment and advanced tracking technologies like Mo-Sys StarTrackers.
Over the course of seven performances held between May 15 and May 23, 2025, about 500 audience members collaborated in this breathtaking digital co-creation, with up to 65 actively contributing media at any one time.
Overcoming Constraints
Creating a live performance interwoven with media generated in real time posed several technical challenges. The requirements included:
- Concurrent users: The system had to support at least 80 concurrent mobile users (65 audience members plus 15 performers).
- Response time: The mean round-trip time from sketch to media presentation needed to be under 2 minutes.
- Fault tolerance: The AI infrastructure had to remain highly available and fault tolerant during performances; graceful degradation was not an option.
- Human oversight: A human-in-the-loop dashboard had to allow manual control over resources when necessary.
- Flexibility: The architecture needed to adapt quickly to show-to-show modifications.
These constraints shaped OARC's choice of a serverless architecture to meet the demanding requirements.
The Serverless Solution
The design incorporated Amazon Web Services (AWS) to enable a robust, event-driven architecture. Key components included:
- Microservices: A low-latency Firebase orchestration layer facilitated communication between audience members' sketches and OARC's AWS microservices.
- Inference pipeline: Hugging Face models served through Amazon SageMaker AI, alongside Amazon Bedrock, provided an inference pipeline that blended flexibility with reliability.
To carry user sketches through to media production, messages were sent to Amazon Simple Queue Service (Amazon SQS), which routed processing based on the type of media requested. This decoupling allowed the application to manage multiple pipelines without interruption.
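As a rough illustration of this pattern, the producer side might enqueue each sketch request with a message attribute naming the requested media type, so downstream consumers can branch on it. The queue URL, attribute name, and payload shape below are assumptions for illustration, not the production code.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL; the production value is not public.
QUEUE_URL = "https://sqs.us-west-2.amazonaws.com/123456789012/xanadu-sketch-requests"

def enqueue_sketch(sketch_s3_key: str, media_type: str, user_id: str) -> str:
    """Send a sketch-processing request to SQS, tagged with the requested media type."""
    response = sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"sketch_key": sketch_s3_key, "user_id": user_id}),
        MessageAttributes={
            "MediaType": {  # e.g. "image_2d" or "mesh_3d"; consumers branch on this
                "DataType": "String",
                "StringValue": media_type,
            }
        },
    )
    return response["MessageId"]
```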
Modular AI Workflows
The production relied on three distinct inference modules, each tailored to generate 2D images and 3D mesh objects from audience sketches. Participants began by sketching their ideas; the AI then evaluated each sketch and generated a corresponding artistic representation.
By drawing on models from both SageMaker AI and Amazon Bedrock, the workflows capitalized on the strengths of each service. The audience's contributions guided the creation of customized assets, enhancing the immersive experience.
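To make that division of labor concrete, here is a minimal sketch of how one module might call a Hugging Face model hosted on a SageMaker endpoint while another calls a model on Amazon Bedrock. The endpoint name and payload formats are assumptions, and the Bedrock call uses a generally available Anthropic model as a stand-in for whatever the production actually used.

```python
import json
import boto3

sm_runtime = boto3.client("sagemaker-runtime")
bedrock = boto3.client("bedrock-runtime")

def generate_image_sagemaker(sketch_png: bytes) -> bytes:
    """Invoke a (hypothetical) sketch-to-image model hosted on a SageMaker endpoint."""
    resp = sm_runtime.invoke_endpoint(
        EndpointName="xanadu-sketch-to-image",  # assumed endpoint name
        ContentType="application/x-image",
        Body=sketch_png,
    )
    return resp["Body"].read()

def describe_sketch_bedrock(prompt: str) -> str:
    """Ask a Bedrock-hosted model to expand a sketch description into an art prompt."""
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # stand-in model ID
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 200,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    return json.loads(resp["Body"].read())["content"][0]["text"]
```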
Cost Management and Future Considerations
While the infrastructure proved reliable, SageMaker AI was a significant cost driver, accounting for roughly 40% of overall costs. To mitigate this, automatic shutdown processes were implemented so that idle endpoints would not incur unnecessary expenses during downtime.
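Here is a minimal sketch of such a shutdown job, assuming a naming prefix for show endpoints and treating the CloudWatch Invocations metric as the idleness signal; both choices are assumptions, not the team's actual implementation.

```python
from datetime import datetime, timedelta, timezone

import boto3

sm = boto3.client("sagemaker")
cw = boto3.client("cloudwatch")

IDLE_WINDOW = timedelta(hours=1)  # assumed idle threshold

def invocations_in_window(endpoint_name: str) -> float:
    """Sum the CloudWatch Invocations metric for an endpoint over the idle window."""
    now = datetime.now(timezone.utc)
    stats = cw.get_metric_statistics(
        Namespace="AWS/SageMaker",
        MetricName="Invocations",
        Dimensions=[
            {"Name": "EndpointName", "Value": endpoint_name},
            {"Name": "VariantName", "Value": "AllTraffic"},  # assumed variant name
        ],
        StartTime=now - IDLE_WINDOW,
        EndTime=now,
        Period=3600,
        Statistics=["Sum"],
    )
    return sum(point["Sum"] for point in stats["Datapoints"])

def shut_down_idle_endpoints(prefix: str = "xanadu-") -> None:
    """Delete in-service endpoints matching the (assumed) show prefix that saw no traffic."""
    for ep in sm.list_endpoints(StatusEquals="InService", NameContains=prefix)["Endpoints"]:
        if invocations_in_window(ep["EndpointName"]) == 0:
            sm.delete_endpoint(EndpointName=ep["EndpointName"])
```

Run on a schedule (for example, from a Lambda function between performances), this keeps expensive GPU-backed endpoints from idling overnight while leaving show-time endpoints untouched.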
Looking ahead, the team’s recommendations emphasize the use of tools like AWS CloudFormation to automate deployments, thereby reducing manual errors and enhancing overall efficiency. By continuing to leverage AWS’s generative AI and managed services, REMAP can further realize the vision for immersive theater experiences.
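As a hint of what that automation could look like, a deployment script might create a CloudFormation stack from a local template and wait for it to finish; the stack name and template path below are placeholders, not artifacts from the production.

```python
import boto3

cfn = boto3.client("cloudformation")

def deploy_stack(stack_name: str, template_path: str) -> str:
    """Create a CloudFormation stack from a local template and wait for completion."""
    with open(template_path) as f:
        template_body = f.read()
    resp = cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],  # needed if the template defines IAM roles
    )
    # Block until the stack finishes creating, so automated jobs fail fast on errors.
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
    return resp["StackId"]

# Hypothetical usage:
# deploy_stack("xanadu-inference", "templates/inference.yaml")
```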
Conclusion
The integration of AI into the Xanadu production was groundbreaking, illustrating how new technologies can breathe life into traditional storytelling. By harnessing the power of AWS, OARC and REMAP showcased a forward-thinking approach that not only captivated audiences but also set a precedent for future immersive artistic endeavors.
Xanadu is more than just a musical: it's a testament to the fusion of technology and art, exemplifying what's possible when creative expression and cutting-edge technology come together.
By embracing technology in this way, we not only find innovative solutions to complex problems but also expand the horizons of what theater can be.
About the Authors
- Andrew Browning: Research Data and Web Platforms Manager at OARC, UCLA.
- Anthony Doolan: Application Programmer and AV Specialist at OARC, UCLA.
- Jerome Ronquillo: Web Developer & Cloud Architect at OARC, UCLA.
- Jeff Burke: Professor and Chair of the Department of Theater at UCLA, co-director of REMAP.
- Chiheb Boussema: Applied AI Scientist at REMAP, UCLA.
- Naisha Agarwal: Rising senior in computer science at UCLA and co-lead for generative AI workflows in Xanadu.