[talks] Colloquium Speaker, Tues April 6 - Rebecca Fiebrink

Nicole E. Wagenblast nwagenbl at CS.Princeton.EDU
Wed Mar 31 14:57:26 EDT 2010


Real-time Human-Computer Interaction with Supervised Learning Algorithms for Music Composition and Performance 
Rebecca Fiebrink, Princeton University, Computer Science Department
Tuesday, April 6, 2010 - 4:30pm
Small Auditorium CS105

Supervised learning offers a useful set of algorithmic tools for many problems in computer music composition and performance. Through the use of training examples, these algorithms offer human musicians a means to implicitly specify the relationship between low-level, human-generated control signals (such as gesturally manipulated sensor outputs or audio captured by a microphone) and the desired computer response (such as a change in synthesis or structural parameters of dynamically generated audio).
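To make the idea concrete, here is a minimal, illustrative sketch of a trained-by-example mapping: a handful of (control vector, synthesis parameter) training pairs, plus a simple nearest-neighbour predictor applied to a new real-time input. The gestures, feature values, and pitch targets below are invented for illustration; they are not the Wekinator's actual interface or algorithms.

```python
import math

# Toy training set: each example pairs a low-level control vector
# (e.g. two normalized sensor readings) with a desired synthesis
# parameter (here, a pitch in Hz). All values are made up.
training = [
    ([0.1, 0.2], 220.0),  # gentle gesture -> low pitch
    ([0.9, 0.8], 880.0),  # energetic gesture -> high pitch
    ([0.5, 0.5], 440.0),  # neutral gesture -> middle pitch
]

def predict(features):
    """1-nearest-neighbour regression: return the synthesis
    parameter of the closest training example."""
    _, target = min(training, key=lambda ex: math.dist(ex[0], features))
    return target

# A new real-time input, closest to the "neutral" example:
print(predict([0.45, 0.55]))  # -> 440.0
```

In practice a system like the one described would run such a trained model continuously on incoming sensor or audio features, but the core loop of example-giving, training, and real-time prediction is as simple as this.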

In my work, I explore how to most effectively enable users to interact with supervised learning algorithms to compose and perform new music. I have built a general-purpose software system for applying standard supervised learning algorithms in real-time problem domains. This system, called the Wekinator, supports human interaction throughout the entire supervised learning process, including the generation of training examples and the application of trained models to real-time inputs. Already, the Wekinator has enabled the creation of several new compositions and instruments. Furthermore, this system has enabled me to study several aspects of human-computer interaction with supervised learning in computer music. I have used the Wekinator as a foundation for a participatory design process with practicing composers, ongoing work with non-expert users in a classroom context, and the design of a gesture recognition system for a sensor-augmented cello bow.

This research has led to a clearer characterization of the requirements and goals of instrument builders and composers, a better understanding of how to design user interfaces for supervised learning in both real-time and creative application domains, and a greater insight into the roles that interaction (encompassing both human-computer control and computer-human feedback) can play in the development of systems containing supervised learning components. This work highlights how music and other creative endeavors differ from more traditional applications of supervised learning, and it contributes to a broader HCI perspective on machine learning practice. 

