Northwestern Events Calendar

WED@NICO SEMINAR: Asma Ghandeharioun, Google DeepMind "Model Interpretability: from Illusions to Opportunities"

When: Wednesday, October 9, 2024
12:00 PM - 1:00 PM CT

Where: Chambers Hall, Lower Level, 600 Foster St, Evanston, IL 60208

Audience: Faculty/Staff - Student - Public - Post Docs/Docs - Graduate Students

Cost: Free

Contact: Emily Rosman   (847) 491-2527

Group: Northwestern Institute on Complex Systems (NICO)

Category: Academic, Lectures & Meetings

Description:

Speaker:

Asma Ghandeharioun, Senior Research Scientist, People + AI Research Team, Google DeepMind

Title:

Model Interpretability: from Illusions to Opportunities

Abstract: 

While the capabilities of today’s large language models (LLMs) are reaching—and even surpassing—what was once thought impossible, concerns remain about their misalignment, such as generating misinformation or harmful text, and mitigating such failures continues to be an open area of research. Understanding LLMs’ internal representations can help explain their behavior, verify their alignment with human values, and mitigate instances where they produce errors. In this talk, I begin by challenging common misconceptions about the connections between LLMs' hidden representations and their downstream behavior, highlighting several “interpretability illusions.” For example, I demonstrate that, counterintuitively, localizing and editing facts within an LLM’s hidden representations can be disconnected; that model failure and success in the wild cannot necessarily be predicted from a relatively faithful proxy at training time; and that, even within the same architecture, representation similarity is not always indicative of prediction similarity.

Next, I introduce Patchscopes, a new framework that leverages the model itself to explain its internal representations in natural language. I’ll show how it can be used to answer a wide range of questions about an LLM's computation. I also demonstrate that many prior interpretability methods—based on projecting representations into the vocabulary space and intervening in LLM computation—can be viewed as instances of this framework. Furthermore, several of their shortcomings, such as difficulty inspecting early layers or lack of expressivity, can be mitigated by Patchscopes. Beyond unifying prior inspection techniques, Patchscopes opens up new possibilities, such as using a more capable model to explain the representations of a smaller model and multihop reasoning error correction.
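For readers curious about the mechanics, the sketch below illustrates the general activation-patching idea described above: a hidden representation is captured from one forward pass, patched into a second "inspection" prompt, and the model's continuation is read as a natural-language account of that representation. This is a minimal sketch, not the Patchscopes API; the model (GPT-2 via Hugging Face Transformers), the layer and position, and the inspection prompt are all illustrative assumptions.

# Minimal sketch of the activation-patching idea described above; not the
# Patchscopes API. Model (GPT-2), layer, position, and inspection prompt
# are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# 1) Run a source prompt and capture one hidden state.
src = tok("The Eiffel Tower is located in the city of", return_tensors="pt")
with torch.no_grad():
    out = model(**src, output_hidden_states=True)
layer, pos = 6, -1                                   # illustrative choices
captured = out.hidden_states[layer + 1][0, pos]      # output of block `layer`

# 2) Run an inspection prompt and overwrite one position's hidden state
#    at the same layer with the captured vector, via a forward hook.
insp = tok("cat -> cat; 1135 -> 1135; hello -> hello; ?", return_tensors="pt")
target_pos = insp["input_ids"].shape[1] - 1          # patch the final token

def patch_hook(module, inputs, output):
    hidden = output[0]                               # (batch, seq, hidden_dim)
    if hidden.shape[1] > target_pos:                 # only on the prompt pass
        hidden[0, target_pos] = captured
    return (hidden,) + output[1:]

handle = model.transformer.h[layer].register_forward_hook(patch_hook)
with torch.no_grad():
    gen = model.generate(**insp, max_new_tokens=5, do_sample=False)
handle.remove()

# The continuation is the model's natural-language reading of the patched state.
print(tok.decode(gen[0][insp["input_ids"].shape[1]:]))

The framework described in the talk generalizes this pattern, for instance by decoding a smaller model's representations with a more capable model, as noted in the abstract.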

Finally, I discuss a few failure cases in today’s most capable LLMs and show how Patchscopes can shed light on their mechanics and suggest mitigation strategies. For example, we observe that safety-tuned models may still divulge harmful information, and whether they do so often depends significantly on who they are talking to—what we refer to as the user persona. Using Patchscopes, we show that harmful content can persist in hidden representations and can be easily extracted. Additionally, we demonstrate that certain user personas can induce the model to form more charitable interpretations of otherwise dangerous queries.

Speaker Bio:

Asma Ghandeharioun, Ph.D., is a senior research scientist with the People + AI Research team at Google DeepMind. She works on aligning AI with human values through better understanding and controlling (language) models, uniquely by demystifying their inner workings and correcting collective misconceptions along the way. While her current research is mostly focused on machine learning interpretability, her previous work spans conversational AI, affective computing, and, more broadly, human-centered AI. She holds a doctorate and a master’s degree from MIT and a bachelor’s degree from Sharif University of Technology. She was trained as a computer scientist and engineer and has research experience at MIT, Google Research, Microsoft Research, and Ecole Polytechnique Fédérale de Lausanne (EPFL), among others.

Her work has been published in premier peer-reviewed machine learning venues such as ICLR, NeurIPS, ICML, EMNLP, AAAI, ACII, and AISTATS. She has received awards at NeurIPS, and her work has been featured in Wired, The Wall Street Journal, and New Scientist.

Location:

In person: Chambers Hall, 600 Foster Street, Lower Level
Remote option: https://northwestern.zoom.us/j/91475935376
Passcode: NICO24

About the Speaker Series:

Wednesdays@NICO is a vibrant weekly seminar series focusing broadly on the topics of complex systems, data science, and network science. It brings together attendees ranging from graduate students to senior faculty, spanning all of Northwestern's schools, from applied math to sociology to biology and every discipline in between. Please visit https://bit.ly/WedatNICO for information on future speakers.

