•   When: Tuesday, February 23, 2021 from 11:00 AM to 12:00 PM
  •   Speaker: Shagun Jhaver, Postdoctoral Scholar, Allen School of Computer Science & Engineering at the University of Washington; Affiliate, Berkman Klein Center for Internet & Society, Harvard University
  •   Location: Zoom

 Abstract

Social media sites like Facebook, Twitter, and Reddit make millions of decisions every day about which posts are allowed to stay online and which posts are removed. How these moderation decisions are made has important consequences for many of the key problems that the Internet faces today: fake news and misinformation campaigns, trolling and online harassment, virulent misogyny and online radicalization. My research builds a foundation for designing fair and efficient content moderation systems. In this talk, I will focus on two challenges of content moderation: (1) ensuring fairness in content removals and (2) addressing the rise of online hate groups.

 First, I will discuss how I use a mixed-methods approach to explore what fairness in content moderation means from the perspective of Reddit users whose posts get removed. In the first phase of this study, I conduct a survey of 907 moderated users to show that users who are notified of their post removals or who receive explanations for post removals are more likely to perceive the removal as fair. In the second phase, I present a large-scale behavioral analysis of moderated Reddit users showing that offering explanations for moderation decisions reduces the odds of future post removals. Together, these analyses show that increasing transparency in content moderation improves both user attitudes and user behaviors.

 Next, I will describe how I audit different moderation interventions for their role in disrupting online hate groups. I will examine the effectiveness of a community-wide moderation intervention called quarantining, a softer alternative to outright bans on Reddit that impedes direct access to controversial communities. Applying causal inference methods to over 85 million Reddit posts, this research shows that quarantining makes it more difficult for hate groups to recruit new members. I will also present another study showing how deplatforming offensive influencers curbs the spread of hate speech on Twitter. This line of research contributes computational frameworks for systematically examining the effectiveness of different moderation strategies.

 I will articulate the lessons learned from this work for the benefit of site managers, moderators, and designers of moderation systems. Finally, I will present future directions for my research.

 Bio

Shagun Jhaver is a postdoctoral scholar in the Allen School of Computer Science & Engineering at the University of Washington and an affiliate at the Berkman Klein Center for Internet & Society at Harvard University. He received a PhD in Computer Science from the School of Interactive Computing at Georgia Tech. His research examines the governance mechanisms of internet platforms to understand how their design, technical affordances, and policies affect public discourse. He has worked with social media sites like Facebook, Reddit, and Twitch, and his research has impacted their efforts to improve online governance. His work has been published in prestigious HCI venues such as CHI, CSCW, TOCHI, and ICWSM. It has received two Best Paper Awards (at CSCW and ICWSM) and one Best Paper Honorable Mention Award (at CSCW), and has been featured in the Editor's Spotlight in TOCHI. His research has also received attention in the popular press, including The Washington Post, Forbes, New Scientist, and MIT Technology Review.
