Northwestern Events Calendar

Linguistics Colloquium Series: Bob Frank - Linguistic Productivity in Neural Networks: Representation and Inductive Bias

When: Friday, April 29, 2022, 3:30 PM - 6:00 PM CT

Where: Chambers Hall, Hybrid (in person and Zoom), 600 Foster St, Evanston, IL 60208

Audience: Faculty/Staff - Student - Public - Post Docs/Docs - Graduate Students

Contact: Talant Abdykairov, (847) 467-3384

Group: Linguistics Department

Category: Academic, Lectures & Meetings

Description:

A fundamental fact about human language is its productivity: speakers are able to understand and produce forms different from those that they have previously encountered. Linguists typically account for this fact by positing abstract grammars that characterize structural representations for an infinity of possible forms. At the same time, recent neural network models have achieved extraordinary levels of performance on practical NLP tasks without any explicit abstract grammar or structured representations. This remarkable success raises the question of whether these models do in fact exhibit productivity of the sort human speakers are capable of, even in the absence of abstract grammar. In this talk, I will explore this question from two perspectives. First, I will discuss a line of work that investigates this question in the context of large pre-trained language models that have been at the forefront of contemporary NLP. We ask whether these models show evidence of productive knowledge of selectional restrictions that cuts across variation in syntactic context (deriving from argument structure alternations like dative shift or syntactic “transformations” like passivization). For the second part of the talk, I will drill down to the properties of the neural network models themselves, and consider what kinds of biases they exhibit when they learn: how do they perform in the face of ambiguous data, and how do their biases differ from human generalization? To conclude, I will consider the viability of different approaches to fostering linguistic abstraction in these models, by modifying the structure of the model itself or the way in which training takes place.
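As a rough illustration of the kind of probing the abstract describes (not a method attributed to the speaker), the sketch below compares a masked language model's preferences for the same noun in two syntactic frames, an active double-object sentence and its passivized counterpart. It assumes the Hugging Face transformers library and the bert-base-uncased model; the stimulus frames and candidate words are hypothetical examples, not materials from the talk.

```python
# Minimal sketch: does a masked LM's preference for a plausible vs. an
# implausible object hold up across an argument structure alternation?
# Assumes: pip install transformers; bert-base-uncased uses the [MASK] token.

from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Hypothetical stimulus frames (active vs. passive versions of one event).
frames = {
    "active": "The chef gave the guest a [MASK].",
    "passive": "A [MASK] was given to the guest by the chef.",
}

# Candidate fillers: a selectionally plausible object and a less plausible one.
candidates = ["meal", "idea"]

for name, sentence in frames.items():
    # Restrict scoring to the candidate tokens and collect their probabilities.
    scores = {pred["token_str"]: pred["score"]
              for pred in fill(sentence, targets=candidates)}
    print(name, scores)
```

If the model's knowledge of selectional restrictions is productive in the sense discussed in the abstract, the relative ranking of the candidates should be stable across the two frames rather than tied to one surface configuration.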
