Arunesh Mathur will present his FPO "Identifying and measuring manipulative user interfaces at scale on the web" on Wednesday, August 26, 2020 at 2pm via Zoom.


The members of his committee are as follows: Marshini Chetty (adviser); Readers: Marshini Chetty and Arvind Narayanan; Examiners: Jonathan Mayer, Brandon Stewart, and Janet Vertesi.

A copy of his thesis is available upon request. Please email ngotsis@cs.princeton if you would like a copy.

Everyone is invited to attend his talk. The abstract follows below.

Powerful and otherwise trustworthy actors on the web stand to gain from
manipulating users and pushing them into making sub-optimal decisions.
While prior work has documented examples of such manipulative
practices, we lack a systematic understanding of their characteristics and
their prevalence on the web. Building this knowledge can lead to
solutions that protect individuals and society from their harms.
In this dissertation, I focus on manipulative practices that manifest in the
user interface. I first describe the attributes of manipulative user interfaces.
I show that these interfaces engineer users' choice architectures by either
modifying the information available to users, or by modifying the set of
choices available to users—eliminating and suppressing choices that
disadvantage the manipulator.
I then present the core contribution of this dissertation: automated methods
that combine web automation and machine learning to identify
manipulative interfaces at scale on the web. Using these methods, I conduct
three measurements. First, I examine the extent to which content creators
fail to disclose their endorsements on social media—misleading users into
believing they are viewing unbiased, non-advertising content. Collecting
and analyzing a dataset of 500K YouTube videos and 2 million Pinterest
pins, I discover that ~90% of these endorsements go undisclosed. Second, I
quantify the prevalence of dark patterns on shopping websites. Analyzing
data I collected from 11K shopping websites, I discover 1,818 dark patterns
on 1,254 websites that mislead, deceive, or coerce users into making more
purchases or disclosing more information than they would otherwise.
Finally, I quantify the prevalence of dark patterns and clickbait in political
campaign emails. Collecting and analyzing a dataset of over 100K
campaign emails in the U.S., I discover a long tail of candidates and
organizations that use these interfaces to influence public opinion and to
solicit donations. The median political campaign sends emails containing
such interfaces with a probability of ~30%. I conclude by describing how the
lessons learned from these measurements can be used to build technical
defenses and to inform policy recommendations that mitigate the spread of
these interfaces.