Publications

Recent publications

July 6, 2019

Kenneth Joseph, Briony Swire-Thompson, Hannah Masuga, Matthew A. Baum, David Lazer

Proceedings of the International AAAI Conference on Web and Social Media

Abstract

Using both survey- and platform-based measures of support, we study how polarization manifests for 4,313 of President Donald Trump's tweets since he was inaugurated in 2017. We find high levels of polarization in response to Trump's tweets. However, after controlling for mean differences, we surprisingly find a high degree of agreement across partisan lines across both survey- and platform-based measures. This suggests that Republicans and Democrats, while disagreeing on an absolute level, tend to agree on the relative quality of Trump's tweets. We assess potential reasons for this, for example, by studying how support changes in response to tweets containing positive versus negative language. We also explore how Democrats and Republicans respond to tweets containing insults of individuals with particular socio-demographic characteristics, finding that Republican support decreases when Republicans, relative to Democrats, are insulted, and that Democrats respond negatively to insults of women and members of the media.

January 25, 2019

Nir Grinberg, Kenneth Joseph, Lisa Friedland, Briony Swire-Thompson, David Lazer

Abstract

The spread of fake news on social media became a public concern in the United States after the 2016 presidential election. We examined exposure to and sharing of fake news by registered voters on Twitter and found that engagement with fake news sources was extremely concentrated. Only 1% of individuals accounted for 80% of fake news source exposures, and 0.1% accounted for nearly 80% of fake news sources shared. Individuals most likely to engage with fake news sources were conservative leaning, older, and highly engaged with political news. A cluster of fake news sources shared overlapping audiences on the extreme right, but for people across the political spectrum, most political news exposure still came from mainstream media outlets.

November 6, 2018

Ronald E. Robertson, Shan Jiang, Kenneth Joseph, Lisa Friedland, David Lazer, Christo Wilson

Proceedings of the ACM: Human-Computer Interaction

Abstract

There is a growing consensus that online platforms have a systematic influence on the democratic process. However, research beyond social media is limited. In this paper, we report the results of a mixed-methods algorithm audit of partisan audience bias and personalization within Google Search. Following Donald Trump's inauguration, we recruited 187 participants to complete a survey and install a browser extension that enabled us to collect Search Engine Results Pages (SERPs) from their computers. To quantify partisan audience bias, we developed a domain-level score by leveraging the sharing propensities of registered voters on a large Twitter panel. We found little evidence for the "filter bubble" hypothesis. Instead, we found that results positioned toward the bottom of Google SERPs were more left-leaning than results positioned toward the top, and that the direction and magnitude of overall lean varied by search query, component type (e.g. "answer boxes"), and other factors. Utilizing rank-weighted metrics that we adapted from prior work, we also found that Google's rankings shifted the average lean of SERPs to the right of their unweighted average.

November 1, 2018

Robert Epstein, Ronald E. Robertson, David Lazer, Christo Wilson

Proceedings of the ACM: Human-Computer Interaction

Abstract

A recent series of experiments demonstrated that introducing ranking bias to election-related search engine results can have a strong and undetectable influence on the preferences of undecided voters. This phenomenon, called the Search Engine Manipulation Effect (SEME), exerts influence largely through order effects that are enhanced in a digital context. We present data from three new experiments involving 3,600 subjects in 39 countries in which we replicate SEME and test design interventions for suppressing the effect. In the replication, voting preferences shifted by 39.0%, a number almost identical to the shift found in a previously published experiment (37.1%). Alerting users to the ranking bias reduced the shift to 22.1%, and more detailed alerts reduced it to 13.8%. Users' browsing behaviors were also significantly altered by the alerts, with more clicks and time going to lower-ranked search results. Although bias alerts were effective in suppressing SEME, we found that SEME could be completely eliminated only by alternating search results — in effect, with an equal-time rule. We propose a browser extension capable of deploying bias alerts in real time and speculate that SEME might be impacting a wide range of decision-making, not just voting, in which case search engines might need to be strictly regulated.

September 6, 2018

Michael A. Neblo, Kevin M. Esterling, David M. J. Lazer

Cambridge University Press

August 13, 2018

Ethan Bernstein, Jesse Shore, and David Lazer

Abstract

People influence each other when they interact to solve problems. Such social influence introduces both benefits (higher average solution quality due to exploitation of existing answers through social learning) and costs (lower maximum solution quality due to a reduction in individual exploration for novel answers) relative to independent problem solving. In contrast to prior work, which has focused on how the presence and network structure of social influence affect performance, here we investigate the effects of time. We show that when social influence is intermittent it provides the benefits of constant social influence without the costs. Human subjects solved the canonical traveling salesperson problem in groups of three, randomized into treatments with constant social influence, intermittent social influence, or no social influence. Groups in the intermittent social-influence treatment found the optimum solution frequently (like groups without influence) but had a high mean performance (like groups with constant influence); they learned from each other, while maintaining a high level of exploration. Solutions improved most on rounds with social influence after a period of separation. We also show that storing subjects' best solutions so that they could be reloaded and possibly modified in subsequent rounds—a ubiquitous feature of personal productivity software—is similar to constant social influence: It increases mean performance but decreases exploration.

April 23, 2018

Ronald E. Robertson, David Lazer, Christo Wilson

Abstract

Search engines are a primary means through which people obtain information in today's connected world. Yet, apart from the search engine companies themselves, little is known about how their algorithms filter, rank, and present the web to users. This question is especially pertinent with respect to political queries, given growing concerns about filter bubbles, and the recent finding that bias or favoritism in search rankings can influence voting behavior. In this study, we conduct a targeted algorithm audit of Google Search using a dynamic set of political queries. We designed a Chrome extension to survey participants and collect the Search Engine Results Pages (SERPs) and autocomplete suggestions that they would have been exposed to while searching our set of political queries during the month after Donald Trump's Presidential inauguration. Using this data, we found significant differences in the composition and personalization of politically-related SERPs by query type, subjects' characteristics, and date.

March 9, 2018

David M. J. Lazer, Matthew A. Baum, Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, Michael Schudson, Steven A. Sloman, Cass R. Sunstein, Emily A. Thorson, Duncan J. Watts, Jonathan L. Zittrain

Abstract

The rise of fake news highlights the erosion of long-standing institutional bulwarks against misinformation in the internet age. Concern over the problem is global. However, much remains unknown regarding the vulnerabilities of individuals, institutions, and society to manipulations by malicious actors. A new system of safeguards is needed. Below, we discuss extant social and computer science research regarding belief in fake news and the mechanisms by which it spreads. Fake news has a long history, but we focus on unanswered scientific questions raised by the proliferation of its most recent, politically oriented incarnation. Beyond selected references in the text, suggested further reading can be found in the supplementary materials.

August 25, 2017

B. R. Jasny, N. Wigginton, M. McNutt, T. Bubela, S. Buck, R. Cook-Deegan, T. Gardner, B. Hanson, C. Hustad, V. Kiermer, D. Lazer, A. Lupia, A. Manrai, L. McConnell, K. Noonan, E. Phimister, B. Simon, K. Strandburg, Z. Summers, D. Watts

Abstract

Many companies have proprietary resources and/or data that are indispensable for research, and academics provide the creative fuel for much early-stage research that leads to industrial innovation. It is essential to the health of the research enterprise that collaborations between industrial and university researchers flourish. This system of collaboration is under strain. Financial motivations driving product development have led to concerns that industry-sponsored research comes at the expense of transparency (1). Yet many industry researchers distrust quality control in academia (2) and question whether academics value reproducibility as much as rapid publication. Cultural differences between industry and academia can create or increase difficulties in reproducing research findings. We discuss key aspects of this problem that industry-academia collaborations must address and for which other stakeholders, from funding agencies to journals, can provide leadership and support. Here we are not talking about irreproducibility caused by fundamental gaps in knowledge, which are intrinsic to the nature of scientific research, but situations in which incomplete communication and sharing of techniques, data, or materials interferes with independent validation or future investigations. Irreproducibility has serious economic consequences. Representatives of venture firms and industries such as biopharma argue that they must replicate findings from academic research before investing. For preclinical research, this can involve, on average, two to six researchers, 1 to 2 years, and $500,000 to $2,000,000 per project (2). For academic scientists, an inability to trust research findings means an erosion of confidence from the scientific community, decision-makers, and the general public, as well as the waste of scarce resources.

March 3, 2017

Michael A. Neblo, William Minozzi, Kevin M. Esterling, Jon Green, Jonathon Kingzette, David M. J. Lazer
