Rebecca Fiebrink: Real-time, Interactive Machine Learning for Music Composition and Performance
Rebecca Fiebrink shows how supervised learning offers a useful set of computational tools for many problems in computer music composition and performance.
Mar 09, 2012
from 04:00 PM to 05:00 PM
Where: 35 W 4th St, Room 610, New York, NY 10003
Supervised learning offers a useful set of computational tools for many problems in computer music composition and performance. Through the use of training data, these algorithms offer composers and instrument builders a means to specify the relationship between low-level, human-generated control signals (such as the outputs of gesturally-manipulated sensor interfaces, or audio captured by a microphone) and the desired computer response (such as a change in the parameters that dynamically drive computer-generated audio). In my recent work, I have focused on building tools that enable composers, musicians, and instrument-builders to interactively apply supervised learning to their work designing human-computer music systems.

In this talk, I will provide a brief introduction to interactive computer music and the use of supervised learning in this field. I will then show a live musical demo of the software that I have created to enable non-computer-scientists to interactively apply standard supervised learning algorithms to music and other real-time problem domains. This software, called the Wekinator, supports human interaction throughout the entire supervised learning process, including the generation of training data by real-time demonstration and the evaluation of trained models through hands-on application to real-time inputs.

In the rest of the talk, I will discuss some of my research with users applying the Wekinator to real-world problems. I will describe how Wekinator users have applied machine learning to accomplish diverse musical goals, how users often learn and adapt as a result of their interactions with the machine learning process, and how interactive supervised learning can function as a tool for supporting creativity and an embodied approach to design.
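To make the idea concrete, here is a minimal sketch (not the Wekinator itself, and all names and values are hypothetical) of the kind of mapping the abstract describes: a few training examples "recorded by demonstration" pair sensor poses with desired synthesis parameters, and a simple nearest-neighbor model then maps each incoming sensor frame to a parameter value in real time.

```python
import math

# Hypothetical training examples recorded by demonstration:
# a 3-axis sensor pose -> a desired filter cutoff frequency (Hz).
training = [
    ([0.1, 0.2, 0.0], 200.0),    # low gesture  -> dark timbre
    ([0.5, 0.5, 0.5], 1000.0),   # mid gesture  -> medium brightness
    ([0.9, 0.8, 1.0], 4000.0),   # high gesture -> bright timbre
]

def map_gesture(sensor_frame):
    """Return the parameter of the nearest training example (1-NN)."""
    def distance(example):
        pose, _ = example
        return math.dist(pose, sensor_frame)
    _, param = min(training, key=distance)
    return param

# At performance time, each incoming sensor frame drives the synthesizer:
print(map_gesture([0.85, 0.75, 0.95]))  # -> 4000.0
```

A real system would use richer models (the Wekinator wraps standard supervised learning algorithms) and stream predictions continuously to a synthesis engine, but the core loop is the same: demonstrate input-output pairs, train, then play through the trained model.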
Rebecca Fiebrink is an Assistant Professor of Computer Science and Affiliated Faculty in Music at Princeton University. As both a computer scientist and a musician, she creates, studies, and uses new technologies for music composition and performance. Much of her current work focuses on applications of machine learning to music: for example, how can machine learning algorithms help people to create new digital musical instruments, by supporting rapid prototyping and a more embodied approach to design? How can these algorithms support composers in creating real-time, interactive performances in which computers listen to or observe human performers, then respond in musically appropriate ways? She is interested both in how techniques from computer science can support new forms of music-making, and in how applications in music and other creative domains demand new computational techniques and bring new perspectives to how technology might be used and by whom.
Fiebrink is the developer of the Wekinator system for real-time interactive machine learning, and she frequently collaborates with composers and artists on digital media projects. She is a co-director, performer, and composer with the Princeton Laptop Orchestra, which has performed at Carnegie Hall and has been featured in the New York Times, the Philadelphia Inquirer, and NPR's All Things Considered. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule, Inc., where she helped to build the #1 iTunes app "I Am T-Pain." Recently, Rebecca has enjoyed performing as the principal flutist in the Timmins Symphony Orchestra in Timmins, ON, as the keyboardist in the University of Washington computer science rock band, "The Parody Bits," and as a laptopist in the new Princeton-based digital music ensemble, Sideband.