Northwestern Events Calendar


CS Colloquium - Yejin Choi - The Enigma of Neural Text Degeneration as the First Defense Against Neural Fake News

When: Wednesday, October 9, 2019, 2:00 PM - 3:00 PM

Where: Mudd Hall (formerly Seeley G. Mudd Library), Room 3514, 2233 Tech Drive, Evanston, IL 60208

Audience: Faculty/Staff - Student - Post Docs/Docs - Graduate Students

Contact: Brianna White   847.467.6558

Group: Department of Computer Science

Category: Academic



Despite considerable advances in deep neural language models, the enigma of neural text degeneration persists when these models are used as text generators—especially for open-ended, long-form text generation. At the same time, there has been growing concern about the potential misuse of neural language models, in particular the mass production of neural fake news. How can we effectively address the major limitations of neural text generation, while also becoming better prepared against potential misuse?

In this talk, I will introduce GROVER, a fake news generator (and detector) that can generate fake news with remarkable coherence and quality, sometimes appealing more to human readers than human-written propaganda. Perhaps counter-intuitively, however, the best defense against neural fake news turns out to be the generator itself. Why is this the case? To answer this question, I will first share another seemingly counter-intuitive observation: even though using likelihood as a training objective leads to high-quality models for a broad range of language understanding tasks, using likelihood as a decoding objective leads to text that is bland and strangely repetitive. I will then reveal surprising distributional differences between human text and machine text, and propose Nucleus Sampling, a simple but effective approach that can dramatically improve neural text generation. Despite this remarkable improvement, however, neural text leaves a distinct distributional signature that is easy for machines to detect—especially for the generator itself. I will conclude the talk by discussing the importance of threat modeling and platform-based approaches to being better prepared against neural fake news. I will also discuss the fundamental limits of current neural language models for robust text generation and advocate the need for "knowledge models" to represent true understanding of how the physical and social world works, with distinct emphasis on commonsense knowledge.
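The core idea of Nucleus Sampling (top-p sampling) described above—truncating the next-token distribution to the smallest set of tokens whose cumulative probability exceeds a threshold p, then renormalizing and sampling—can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the speaker's reference implementation; the function name and the toy probability vectors are assumptions for the example.

```python
import random

def nucleus_sample(probs, p=0.9, rng=random):
    """Sample a token index via Nucleus (top-p) Sampling.

    Keep the smallest set of tokens (by descending probability) whose
    cumulative mass reaches p, renormalize within that set, and sample.
    `probs` is a list of token probabilities summing to 1.
    """
    # Order token indices from most to least probable.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    nucleus, cumulative = [], 0.0
    for i in order:
        nucleus.append(i)
        cumulative += probs[i]
        if cumulative >= p:  # smallest prefix covering mass p
            break
    # Renormalize over the nucleus and draw one sample.
    total = sum(probs[i] for i in nucleus)
    weights = [probs[i] / total for i in nucleus]
    return rng.choices(nucleus, weights=weights, k=1)[0]
```

For example, with probabilities [0.5, 0.3, 0.15, 0.05] and p = 0.8, the nucleus is the top two tokens, so the low-probability tail—the source of the degenerate text the abstract describes—can never be sampled.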



Yejin Choi is an associate professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and a senior research manager at AI2, overseeing the Mosaic project. Her research interests include language grounding with vision, physical and social commonsense knowledge, language generation with long-term coherence, conversational AI, and AI for social good. She was a recipient of the Borg Early Career Award (BECA) in 2018, among the IEEE's AI Top 10 to Watch in 2015, a co-recipient of the Marr Prize at ICCV 2013, and a faculty advisor for the Sounding Board team that won the inaugural Alexa Prize Challenge in 2017. Her work on detecting deceptive reviews, predicting literary success, and interpreting bias and connotation has been featured by numerous media outlets, including NBC News for New York, NPR Radio, the New York Times, and Bloomberg Businessweek. She received her Ph.D. in Computer Science from Cornell University.
