
Kart Kandula will present his General Exam, "Competition by Design: Experimental Evidence on Search Engine Use and Choice Architecture Interventions," on May 6, 2025 at 2:30pm in 306 Sherrerd Hall.

Committee: Jonathan Mayer (adviser), Peter Henderson, Andrés Monroy-Hernández

All are welcome to attend the talk.

Abstract:

On both sides of the Atlantic, regulators have intensified scrutiny of Google’s dominance in the market for general search services, concerned that the company has engaged in exclusionary practices to maintain its position as the default gateway to the internet. In the United States, this culminated in a landmark ruling on August 5, 2024, in which a federal court found Google in violation of § 2 of the Sherman Act for unlawfully monopolizing the online search market. Now in the remedies phase, the U.S. Department of Justice has proposed a suite of interventions, including the use of “choice screens” to prompt users to select a default search engine. Yet critics argue that such measures, introduced too late and with limited enforcement mechanisms, fall short of meaningfully disrupting Google’s entrenchment—an argument bolstered by the European Union’s tepid results following a similar Android-based remedy in 2019.

To assess the real-world effectiveness of such interventions, this study examines how default search engine settings shape user choices and how a variety of design interventions—modeled on plausible remedial frameworks available to regulators and courts—influence search behavior. We employ a two-phase experimental design targeting both desktop and mobile platforms, with participants recruited via Prolific. The first phase establishes a behavioral baseline while administering the intervention across the experimental conditions. The second phase, conducted after a delay, assesses the durability of observed behavioral shifts, enabling us to evaluate the long-term efficacy of choice architecture interventions.
For mobile participants, we target iPhone users and implement choice architecture interventions using iOS Shortcuts, collecting behavioral data through Safari history exports. These methods, made possible by recent iOS updates, enable a novel and scalable approach to studying user behavior in naturalistic settings and open new possibilities for field experiments in digital platform research. Our findings will provide empirical insight into the real-world potency of behavioral remedies in digital platform markets—an area where policymaking has outpaced empirical evidence and measurement.

Reading List

1. Dark Patterns and Privacy Choice
   1. Jamie Luguri and Lior Jacob Strahilevitz. 2021. Shining a light on dark patterns. Journal of Legal Analysis 13, 1 (2021), 43–109.
   2. Arunesh Mathur, Mihir Kshirsagar, and Jonathan Mayer. 2021. What makes a dark pattern... dark? Design attributes, normative considerations, and measurement methods. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1–18.
   3. Christine Utz, Martin Degeling, Sascha Fahl, Florian Schaub, and Thorsten Holz. 2019. (Un)informed consent: Studying GDPR consent notices in the field. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS ’19). ACM, 973–990.
   4. Midas Nouwens, Ilaria Liccardi, Michael Veale, David Karger, and Lalana Kagal. 2020. Dark patterns after the GDPR: Scraping consent pop-ups and demonstrating their influence. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13.
2. Field Experiments Involving Online Activity
   1. Hunt Allcott, Juan Camilo Castillo, Matthew Gentzkow, Leon Musolff, and Tobias Salz. 2024. Sources of market power in web search: Evidence from a field experiment. Working Paper (2024).
   2. Andrew M. Guess, Neil Malhotra, Jennifer Pan, Pablo Barberá, Hunt Allcott, Taylor Brown, Adriana Crespo-Tenorio, Drew Dimmery, Deen Freelon, Matthew Gentzkow, et al. 2023. How do social media feed algorithms affect attitudes and behavior in an election campaign? Science 381, 6656 (2023), 398–404.
   3. Erik Brynjolfsson, Avinash Collis, and Felix Eggers. 2019. Using massive online choice experiments to measure changes in well-being. Proceedings of the National Academy of Sciences 116, 15 (2019), 7250–7255.
3. Human-Computer Interaction
   1. Alma Whitten and J. Doug Tygar. 1999. Why Johnny can’t encrypt: A usability evaluation of PGP 5.0. In Proceedings of the USENIX Security Symposium. USENIX Association, Berkeley, CA, 1–16.
   2. Alan Dix, Janet Finlay, Gregory D. Abowd, and Russell Beale. 2004. Human-Computer Interaction (3rd ed.). Pearson Education Limited.