- When: Tuesday, March 01, 2022 from 02:00 PM to 03:00 PM
- Speaker: Abhilasha Ravichander
- Location: Zoom only
Abstract: Millions of users interact every day with technologies built on top of computational natural language understanding (NLU) systems, such as voice assistants, search engines, and dialog agents. While these technologies play an increasingly central role in modern life, they can also magnify risks, inequities, and dissatisfaction when practitioners deploy unreliable systems. In this talk, we will examine obstacles blocking our progress towards robust and trustworthy natural language understanding.
First, I will discuss undesirable mechanisms that neural models employ to solve NLU tasks, such as exploiting dataset-specific “shortcuts”. While these shortcuts can help models achieve high accuracy on a particular dataset, they are harmful to generalization. In the second part, I will discuss desirable reasoning capabilities we would like NLU models to have, focusing in particular on the extent to which they perform numerical reasoning. I will conclude with a roadmap for making natural language understanding systems more interpretable, robust, and usable.
Bio:
Abhilasha is a Ph.D. student at the Language Technologies Institute, Carnegie Mellon University. Her research focuses on understanding neural model performance, with the goal of facilitating more robust and trustworthy NLP technologies. In the past, she interned at the Allen Institute for AI and Microsoft Research, where she worked on understanding how deep learning models process challenging semantic phenomena in natural language. Her work received the “Area Chair Favorite Paper” award at COLING 2018, and she was selected as a “Rising Star in Data Science” by the University of Chicago Rising Stars workshop. She also serves as co-chair of the socio-cultural inclusion committee for NAACL 2022 and co-organizes the ‘NLP With Friends’ seminar series.