Northwestern Events Calendar


Data Science Nights - January 2021 Meeting (Speaker: Bryan Pardo)

Data Science Nights

When: Monday, January 25, 2021
5:15 PM - 7:30 PM Central

Where: Online
Webcast Link

Audience: Faculty/Staff - Student - Public - Post Docs/Docs - Graduate Students

Contact: Sarah Ben Maamar  

Group: Northwestern Institute on Complex Systems (NICO)

Category: Academic


JANUARY MEETING: Wednesday, January 27, 2021 at 5:15pm (Central) via Zoom and Gather

DATA SCIENCE NIGHTS are monthly hack nights on popular data science topics, organized by Northwestern University graduate students and scholars. Aspiring, beginning, and advanced data scientists are welcome!


5:15: Welcome to Data Science Nights via Zoom
* Zoom Link:
* Passcode: DSN2021
5:30: Presentation by Bryan Pardo, Northwestern University
6:00: Hacking session via Gather
* Gather link:

SPEAKER: Bryan Pardo, Associate Professor, McCormick School of Engineering, Northwestern University

TOPIC: New directions in deep audio source separation: training without ground truth and automatic model selection

Audio source separation is the task of separating an audio scene containing multiple concurrent sound sources into individual streams/tracks, each containing a source (or group of sources) of interest to the user. Source separation is an enabling technology for a variety of tasks, including speech recognition, music transcription, sound object identification, and hearing assistance. Deep learning models are the state of the art in source separation, but they are typically trained on synthetic audio mixtures made from isolated sound source recordings so that ground truth for the separation is known. However, the vast majority of available audio is not isolated, limiting the range of scenes where deep models trained on isolated data are effective. Furthermore, a deep model is typically only successful in separating audio mixtures similar to the mixtures it was trained on. This requires the end user to know enough about each model’s training to select the correct model for a given audio mixture.

In this talk, Prof. Pardo will outline proposed solutions to both problems. First, he will present a method to train a deep source separation model in an unsupervised way by bootstrapping from multiple primitive cues, without the need for ground-truth isolated sources or artificial training mixtures. He will then outline a proposed confidence measure that can be broadly applied to any clustering-based source separation model. The proposed confidence measure does not require ground truth to estimate the quality of a separated source, allowing automatic selection of the appropriate deep clustering model for a given audio mixture.
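To caricature the clustering-based separation idea: a network embeds each time-frequency bin, the embeddings are clustered, and each cluster becomes a source mask. A minimal sketch with ordinary k-means on synthetic 2-D "embeddings" is below; the confidence score (mean sharpness of the soft cluster assignments, needing no ground truth) is an illustrative stand-in invented for this sketch, not the measure Prof. Pardo will present, and all names and numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D embeddings for time-frequency bins: two well-separated
# "sources" (stand-in for a deep clustering network's learned output).
src_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(500, 2))
src_b = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(500, 2))
emb = np.vstack([src_a, src_b])

def kmeans(x, k=2, iters=50):
    """Plain Lloyd's k-means with farthest-point initialization."""
    cent = [x[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(x - c, axis=1) for c in cent], axis=0)
        cent.append(x[int(d.argmax())])
    cent = np.stack(cent)
    for _ in range(iters):
        d = np.linalg.norm(x[:, None, :] - cent[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        cent = np.stack([x[labels == j].mean(axis=0) for j in range(k)])
    return cent, labels

def clustering_confidence(x, cent, temp=1.0):
    """Ground-truth-free confidence: how sharply bins commit to clusters.
    Near 1.0 when every point sits close to exactly one centroid; near
    1/k when cluster membership is ambiguous."""
    d = np.linalg.norm(x[:, None, :] - cent[None, :, :], axis=-1)
    post = np.exp(-d / temp)
    post /= post.sum(axis=1, keepdims=True)
    return post.max(axis=1).mean()

cent, labels = kmeans(emb)
print(f"confidence: {clustering_confidence(emb, cent):.3f}")  # high: sources separable
```

In a real separator, `labels` would index time-frequency masks applied to the mixture spectrogram; a low confidence score would signal that this particular model is a poor fit for the mixture, motivating automatic model selection.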

SPEAKER BIO: Bryan Pardo is head of Northwestern University’s Interactive Audio Lab and co-director of the Northwestern University HCI+Design institute. Prof. Pardo holds appointments in the Department of Computer Science and the Department of Radio, Television and Film. He received an M.Mus. in Jazz Studies in 2001 and a Ph.D. in Computer Science in 2005, both from the University of Michigan. He has authored over 100 peer-reviewed publications. He has developed speech analysis software for the Speech and Hearing department of the Ohio State University and statistical software for SPSS, and has worked as a machine learning researcher for General Dynamics. He has collaborated on and developed technologies acquired and patented by companies such as Bose, Adobe, and Ear Machine. While finishing his doctorate, he taught in the Music Department of Madonna University. When he is not teaching or researching, he performs on saxophone and clarinet with the bands Son Monarcas and The East Loop.

Supporting Groups:

This event is supported by the Northwestern Institute for Complex Systems and the Northwestern Data Science Initiative.
