[talks] R Fiebrink preFPO

Rebecca A. Fiebrink fiebrink at CS.Princeton.EDU
Mon Jun 7 14:31:48 EDT 2010

Rebecca Fiebrink will present her preFPO on Monday, June 14 at 3:00 PM in room 402.  The members of her committee are:  Perry Cook, advisor; Dan Morris of Microsoft Research and Dan Trueman, readers; Adam Finkelstein and Ken Steiglitz, nonreaders.  Everyone is invited to attend her talk. Her abstract follows.

Title: Real-time Human-Computer Interaction with Supervised Learning Algorithms for Music Composition and Performance


Supervised learning offers a useful set of algorithmic tools for many problems in computer music composition and performance. Through the use of training examples, these algorithms offer human musicians a means to implicitly specify the relationship between low-level, human-generated control signals (such as gesturally manipulated sensor outputs or audio captured by a microphone) and the desired computer response (such as a change in synthesis or structural parameters of dynamically generated audio).
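The mapping described above can be sketched in a few lines. This is not the Wekinator's actual implementation (which builds on standard supervised learning algorithms in a real-time system): here a 1-nearest-neighbor regressor stands in for the learning algorithm, and the sensor readings and synthesis parameters are hypothetical examples chosen only to illustrate the idea of training by example.

```python
# Illustrative sketch: a performer records (sensor reading -> synthesis
# parameter) training examples; a supervised model then maps new readings
# to parameters in real time. 1-nearest-neighbor is used as a stand-in
# for the learning algorithm; all names and numbers are hypothetical.

def train(examples):
    """examples: list of (sensor_vector, synth_params) pairs."""
    return list(examples)  # 1-NN "training" simply stores the examples

def predict(model, sensor_vector):
    """Return the synth params paired with the nearest stored sensor vector."""
    def sq_dist(pair):
        x, _ = pair
        return sum((a - b) ** 2 for a, b in zip(x, sensor_vector))
    _, params = min(model, key=sq_dist)
    return params

# Two training examples: (tilt, pressure) readings mapped to
# (filter cutoff in Hz, grain size in ms) -- invented for illustration.
model = train([
    ((0.1, 0.9), (440.0, 20.0)),
    ((0.8, 0.2), (2200.0, 80.0)),
])

# A new reading near the first example recalls that example's parameters.
print(predict(model, (0.15, 0.85)))  # -> (440.0, 20.0)
```

The point of the sketch is the interaction style, not the particular model: the musician never writes the mapping explicitly, but specifies it implicitly by demonstrating input-output pairs.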

In my work, I explore how to most effectively enable users to interact with supervised learning algorithms to compose and perform new music. I have built a general-purpose software system for applying standard supervised learning algorithms in real-time problem domains. This system, called the Wekinator, supports human interaction throughout the entire supervised learning process, including the generation of training examples and the application of trained models to real-time inputs. Already, the Wekinator has enabled the creation of several new compositions and instruments. Furthermore, this system has enabled me to study several aspects of human-computer interaction with supervised learning in computer music. I have used the Wekinator as a foundation for a participatory design process with practicing composers, work with non-expert users in a classroom context, and the design of a gesture recognition system for a sensor-augmented cello bow. 

This research has led to a clearer characterization of the requirements and goals of instrument builders and composers, a better understanding of how to design user interfaces for supervised learning in both real-time and creative application domains, and a greater insight into the roles that interaction (encompassing both human-computer control and computer-human feedback) can play in the development of systems containing supervised learning components. This work highlights how music and other creative endeavors differ from more traditional applications of supervised learning, and it contributes to a broader HCI perspective on machine learning practice.
