When: Friday, May 25, 2012, 12:00 PM - 1:00 PM
Where: Ford Motor Company Engineering Design Center, 1.350 (ITW Classroom), 2133 Sheridan Road, Evanston, IL 60208
Audience: Faculty/Staff - Student - Public
Cost: Free
Contact: (847) 497-0028
Group: Electrical Engineering & Computer Science
Category: Lectures & Meetings
The EECS Department welcomes George Tzanetakis of the University of Victoria, British Columbia, Canada.
Prof. Tzanetakis will speak on Friday, May 25, at 12:00 noon in the ITW Classroom of the Ford Motor Company Engineering Design Center at Northwestern University.
Abstract: Music today is to a large extent produced, distributed, and consumed digitally. Music Information Retrieval (MIR) is the interdisciplinary research field that deals with all aspects of extracting information from and about music using computers. Applications of MIR such as automatic music recommendation, query by singing, and predicting music mood that used to be research curiosities are now part of commercially available systems. In this talk I will describe three fringe case studies of MIR research that my group has been working on.
Physical modelling synthesis refers to methods that generate musical instrument sounds synthetically, using equations and algorithms that simulate the physics of sound production. These methods provide realistic sounds with controls that are physically meaningful; however, controlling physical modelling algorithms is challenging. Using machine learning techniques, we show how a virtual violinist can "learn" to bow in a way similar to a beginning violin student.
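To give a flavor of what physical modelling synthesis looks like in practice, the sketch below implements the classic Karplus-Strong plucked-string algorithm, a deliberately minimal example chosen for illustration; it is not the bowed-violin model discussed in the talk. A noise burst circulating through a delay line with low-pass feedback mimics a vibrating string.

```python
import numpy as np

def karplus_strong(freq_hz, duration_s, sample_rate=44100, damping=0.996):
    """Pluck a virtual string: a noise burst fed through a delay line
    with averaging feedback simulates the string's decaying vibration."""
    period = int(sample_rate / freq_hz)        # delay-line length ~ one period
    buf = np.random.uniform(-1, 1, period)     # initial excitation (the "pluck")
    out = np.empty(int(sample_rate * duration_s))
    for i in range(len(out)):
        out[i] = buf[i % period]
        # Low-pass feedback: average adjacent delay-line samples, then damp.
        buf[i % period] = damping * 0.5 * (buf[i % period] + buf[(i + 1) % period])
    return out

# Synthesize roughly an A4 (about 440 Hz) pluck lasting two seconds.
samples = karplus_strong(440.0, 2.0)
```

Even in this tiny model the control parameters are physically meaningful, as the abstract notes: the delay-line length sets the pitch and the damping factor sets how quickly the string rings out.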
Hyperinstruments are acoustic instruments augmented with digital sensors that report what is being played to a computer. Although they provide enhanced digital control possibilities, they are hard to build and require invasive modifications to the instrument.
Surrogate sensing is a technique in which direct sensors are used to train a "surrogate sensor" based on audio analysis (see the sketch below). The direct sensors provide "ground truth" for a machine learning process that learns the mapping from audio features to sensor values.
Finally, I will discuss how we can imbue robotic percussion instruments with the ability to listen to themselves and to other musicians in the context of live music improvisation involving humans and machines. A common theme underlying these three case studies is the use of machine learning and the blending of the physical and virtual worlds.
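The surrogate-sensor idea can be sketched as a straightforward regression problem. The following Python example is a hypothetical illustration, not the pipeline used in this research: the feature matrix and sensor readings are synthetic placeholders standing in for, say, per-frame spectral features paired with a bow-pressure sensor.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Placeholder data: rows are per-frame audio feature vectors, and
# sensor_values plays the role of a direct sensor's ground-truth readings.
rng = np.random.default_rng(0)
audio_features = rng.normal(size=(5000, 13))
sensor_values = audio_features @ rng.normal(size=13) + 0.1 * rng.normal(size=5000)

X_train, X_test, y_train, y_test = train_test_split(
    audio_features, sensor_values, test_size=0.2, random_state=0)

# Learn the mapping from audio features to sensor values; once trained,
# the model acts as a "surrogate sensor" and the physical sensor (with its
# invasive modifications to the instrument) can be removed.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_train, y_train)
print("held-out R^2:", surrogate.score(X_test, y_test))
```

The appeal of this setup is that the invasive hardware is only needed during training; at performance time the "sensor" is just audio analysis.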
To read Prof. Tzanetakis's bio, please click the "More Info" link above.