November 6, 2018
Proceedings of the ACM: Human-Computer Interaction
There is a growing consensus that online platforms have a systematic influence on the democratic process. However, research beyond social media is limited. In this paper, we report the results of a mixed-methods algorithm audit of partisan audience bias and personalization within Google Search. Following Donald Trump's inauguration, we recruited 187 participants to complete a survey and install a browser extension that enabled us to collect Search Engine Results Pages (SERPs) from their computers. To quantify partisan audience bias, we developed a domain-level score by leveraging the sharing propensities of registered voters on a large Twitter panel. We found little evidence for the "filter bubble" hypothesis. Instead, we found that results positioned toward the bottom of Google SERPs were more left-leaning than results positioned toward the top, and that the direction and magnitude of overall lean varied by search query, component type (e.g., "answer boxes"), and other factors. Utilizing rank-weighted metrics that we adapted from prior work, we also found that Google's rankings shifted the average lean of SERPs to the right of their unweighted average.
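The bias score and the rank-weighted averaging are described above only at a high level. The sketch below shows one way such measures could be computed, assuming a share-ratio score of the form (R - D) / (R + D) over shares by registered Republicans and Democrats, and simple 1/rank weights; neither formula is taken from the paper, both are illustrative assumptions.

```python
# Illustrative sketch (not the paper's exact formulas): a domain-level
# partisan audience bias score and a rank-weighted SERP lean.

def domain_bias(dem_shares: int, rep_shares: int) -> float:
    """Score in [-1, 1]; negative = left-leaning audience, positive = right-leaning."""
    total = dem_shares + rep_shares
    return 0.0 if total == 0 else (rep_shares - dem_shares) / total

def serp_lean(domain_scores: list[float], rank_weighted: bool = True) -> float:
    """Average lean of a SERP, optionally weighting top-ranked results more (assumed 1/rank weights)."""
    if not domain_scores:
        return 0.0
    if rank_weighted:
        weights = [1.0 / rank for rank in range(1, len(domain_scores) + 1)]
    else:
        weights = [1.0] * len(domain_scores)
    return sum(w * s for w, s in zip(weights, domain_scores)) / sum(weights)

# Example: a SERP whose top results lean right and bottom results lean left.
scores = [0.4, 0.2, -0.1, -0.3, -0.5]
print(serp_lean(scores, rank_weighted=False))  # unweighted mean (-0.06)
print(serp_lean(scores, rank_weighted=True))   # weighted mean shifts toward the top-ranked lean
```

With this toy page, the rank-weighted lean sits to the right of the unweighted mean, which is the kind of gap the abstract describes, though the study's actual weights were derived from prior work rather than a fixed 1/rank rule.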
November 1, 2018
Proceedings of the ACM: Human-Computer Interaction
A recent series of experiments demonstrated that introducing ranking bias to election-related search engine results can have a strong and undetectable influence on the preferences of undecided voters. This phenomenon, called the Search Engine Manipulation Effect (SEME), exerts influence largely through order effects that are enhanced in a digital context. We present data from three new experiments involving 3,600 subjects in 39 countries in which we replicate SEME and test design interventions for suppressing the effect. In the replication, voting preferences shifted by 39.0%, a number almost identical to the shift found in a previously published experiment (37.1%). Alerting users to the ranking bias reduced the shift to 22.1%, and more detailed alerts reduced it to 13.8%. Users' browsing behaviors were also significantly altered by the alerts, with more clicks and time going to lower-ranked search results. Although bias alerts were effective in suppressing SEME, we found that SEME could be completely eliminated only by alternating search results -- in effect, with an equal-time rule. We propose a browser extension capable of deploying bias alerts in real time and speculate that SEME might be impacting a wide range of decision-making, not just voting, in which case search engines might need to be strictly regulated.
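The "equal-time rule" mentioned above amounts to alternating results so that neither candidate monopolizes the top ranks. A minimal sketch of that idea follows, assuming results can be grouped by the candidate they favor; the grouping and the round-robin merge are illustrative assumptions, not the authors' implementation.

```python
# Sketch of an "equal-time" interleave: alternate results favoring each
# candidate so neither dominates the top ranks. Illustrative only.
from itertools import chain, zip_longest

def equal_time_interleave(pro_a: list[str], pro_b: list[str]) -> list[str]:
    """Round-robin merge of two ranked result lists."""
    merged = chain.from_iterable(zip_longest(pro_a, pro_b))
    return [result for result in merged if result is not None]

print(equal_time_interleave(
    ["a1.example", "a2.example", "a3.example"],
    ["b1.example", "b2.example"],
))
# ['a1.example', 'b1.example', 'a2.example', 'b2.example', 'a3.example']
```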
September 6, 2018
Cambridge University Press
August 13, 2018
PNAS
People influence each other when they interact to solve problems. Such social influence introduces both benefits (higher average solution quality due to exploitation of existing answers through social learning) and costs (lower maximum solution quality due to a reduction in individual exploration for novel answers) relative to independent problem solving. In contrast to prior work, which has focused on how the presence and network structure of social influence affect performance, here we investigate the effects of time. We show that when social influence is intermittent it provides the benefits of constant social influence without the costs. Human subjects solved the canonical traveling salesperson problem in groups of three, randomized into treatments with constant social influence, intermittent social influence, or no social influence. Groups in the intermittent social-influence treatment found the optimum solution frequently (like groups without influence) but had a high mean performance (like groups with constant influence); they learned from each other, while maintaining a high level of exploration. Solutions improved most on rounds with social influence after a period of separation. We also show that storing subjects' best solutions so that they could be reloaded and possibly modified in subsequent rounds—a ubiquitous feature of personal productivity software—is similar to constant social influence: It increases mean performance but decreases exploration.
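As a rough illustration of the round structure described above, the sketch below simulates three agents improving traveling-salesperson tours over several rounds, adopting the group's best tour only on designated "influence" rounds. The solver behavior, round schedule, and swap-based exploration are hypothetical stand-ins for the human subjects' task, not the study's procedure.

```python
# Hypothetical simulation of intermittent social influence on TSP solving.
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def run_rounds(dist, n_rounds=10, influence_rounds=frozenset({2, 5, 8})):
    cities = list(range(len(dist)))
    group = [random.sample(cities, len(cities)) for _ in range(3)]  # 3 subjects
    for r in range(n_rounds):
        if r in influence_rounds:                       # intermittent social influence
            best = min(group, key=lambda t: tour_length(t, dist))
            group = [list(best) for _ in group]         # exploitation of the group's best
        for i, tour in enumerate(group):                # individual exploration (2-swap)
            a, b = random.sample(range(len(cities)), 2)
            candidate = list(tour)
            candidate[a], candidate[b] = candidate[b], candidate[a]
            if tour_length(candidate, dist) < tour_length(tour, dist):
                group[i] = candidate
    return min(tour_length(t, dist) for t in group)

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(run_rounds(dist))
```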
April 23, 2018
WWW '18
Search engines are a primary means through which people obtain information in today's connected world. Yet, apart from the search engine companies themselves, little is known about how their algorithms filter, rank, and present the web to users. This question is especially pertinent with respect to political queries, given growing concerns about filter bubbles, and the recent finding that bias or favoritism in search rankings can influence voting behavior. In this study, we conduct a targeted algorithm audit of Google Search using a dynamic set of political queries. We designed a Chrome extension to survey participants and collect the Search Engine Results Pages (SERPs) and autocomplete suggestions that they would have been exposed to while searching our set of political queries during the month after Donald Trump's Presidential inauguration. Using this data, we found significant differences in the composition and personalization of politically-related SERPs by query type, subjects' characteristics, and date.
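Audits of this kind typically quantify personalization by comparing the SERPs that different participants received for the same query. The sketch below uses Jaccard similarity over result domains as one such comparison; the metric and the example data are illustrative and may differ from the measures used in the study.

```python
# Illustrative personalization measure: set overlap between two users' SERPs
# for the same query (1.0 = identical result sets, lower = more personalization).

def jaccard(serp_a: list[str], serp_b: list[str]) -> float:
    a, b = set(serp_a), set(serp_b)
    return len(a & b) / len(a | b) if a | b else 1.0

serp_user1 = ["nytimes.com", "foxnews.com", "wikipedia.org", "cnn.com"]
serp_user2 = ["foxnews.com", "nytimes.com", "breitbart.com", "cnn.com"]
print(jaccard(serp_user1, serp_user2))  # 0.6 -> partial overlap, some personalization
```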
March 9, 2018
Science
The rise of fake news highlights the erosion of long-standing institutional bulwarks against misinformation in the internet age. Concern over the problem is global. However, much remains unknown regarding the vulnerabilities of individuals, institutions, and society to manipulations by malicious actors. A new system of safeguards is needed. Below, we discuss extant social and computer science research regarding belief in fake news and the mechanisms by which it spreads. Fake news has a long history, but we focus on unanswered scientific questions raised by the proliferation of its most recent, politically oriented incarnation. Beyond selected references in the text, suggested further reading can be found in the supplementary materials.