August 16, 2022
Following the 2020 general election, Republican elected officials, including then-President Donald Trump, promoted conspiracy theories claiming that Joe Biden’s close victory in Georgia was fraudulent. Such conspiratorial claims could affect participation in the Georgia Senate runoff election in several ways—signaling that voting doesn’t matter, distracting from ongoing campaigns, stoking political anger at out-partisans, or providing rationalizations for (lack of) enthusiasm for voting during a transfer of power. Here, we test for an on-average relationship with turnout by combining behavioral measures of engagement with election conspiracies online and administrative data on voter turnout for 40,000 Twitter users registered to vote in Georgia. We find small, limited associations: liking or sharing messages opposed to conspiracy theories was associated with higher-than-expected turnout in the runoff election, while those who liked or shared tweets promoting fraud-related conspiracy theories were slightly less likely to vote.
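A minimal sketch of the kind of linkage analysis this abstract describes: regressing administrative turnout on behavioral engagement measures. The column names (`voted_runoff`, `n_pro_conspiracy`, `n_anti_conspiracy`, `age`, `party`) are hypothetical placeholders, not the paper's actual variables or model.

```python
# Sketch: logistic regression of runoff turnout on conspiracy engagement,
# assuming a pandas DataFrame `df` with one row per linked Twitter user.
import pandas as pd
import statsmodels.formula.api as smf

def turnout_model(df: pd.DataFrame):
    """Fit turnout ~ engagement with basic demographic controls."""
    model = smf.logit(
        "voted_runoff ~ n_pro_conspiracy + n_anti_conspiracy + age + C(party)",
        data=df,
    )
    return model.fit()

# result = turnout_model(df)
# print(result.summary())  # sign and size of the engagement coefficients
```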
June 23, 2022
With a dataset of testing and case counts from over 1,400 institutions of higher education (IHEs) in the United States, we analyze the number of infections and deaths from SARS-CoV-2 in the counties surrounding these IHEs during the Fall 2020 semester (August to December 2020). We find that counties with IHEs that remained primarily online experienced fewer cases and deaths during the Fall 2020 semester, whereas before and after the semester these two groups had almost identical COVID-19 incidence. Additionally, we see fewer cases and deaths in counties with IHEs that reported conducting any on-campus testing compared to those that reported none. To perform these two comparisons, we used a matching procedure designed to create well-balanced groups of counties aligned as closely as possible on age, race, income, population, and urban/rural categories—demographic variables that have been shown to be correlated with COVID-19 outcomes. We conclude with a case study of IHEs in Massachusetts—a state with especially detailed coverage in our dataset—which further highlights the importance of IHE-affiliated testing for the broader community. The results in this work suggest that campus testing can itself be thought of as a mitigation policy, and that allocating additional resources to IHEs to support regular testing of students and staff would help mitigate the spread of COVID-19 in a pre-vaccine environment.
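A simplified sketch of covariate matching in the spirit of the procedure described above: pair each county containing a primarily-online IHE with its nearest in-person counterpart on standardized demographics. The column names and the plain nearest-neighbor rule are illustrative assumptions, not the paper's exact matching procedure.

```python
# Sketch: match counties on standardized demographic covariates.
import numpy as np
import pandas as pd

COVARIATES = ["median_age", "pct_white", "median_income", "population", "pct_urban"]

def match_counties(online: pd.DataFrame, in_person: pd.DataFrame) -> pd.DataFrame:
    # Standardize covariates with the pooled mean/std so distances are comparable.
    pooled = pd.concat([online[COVARIATES], in_person[COVARIATES]])
    mu, sd = pooled.mean(), pooled.std()
    a = ((online[COVARIATES] - mu) / sd).values
    b = ((in_person[COVARIATES] - mu) / sd).values
    # For each online-IHE county, find the closest in-person-IHE county.
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    idx = dists.argmin(axis=1)
    return pd.DataFrame({
        "online_county": online.index,
        "matched_county": in_person.index[idx],
        "distance": dists[np.arange(len(online)), idx],
    })
```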
May 5, 2022
The public often turns to science for accurate health information, which, in an ideal world, would be error-free. However, limitations of scientific institutions and scientific processes can sometimes amplify misinformation and disinformation. The current review examines four mechanisms through which this occurs: (1) predatory journals that accept publications for monetary gain but do not engage in rigorous peer review; (2) pseudoscientists who provide scientific-sounding information but whose advice is inaccurate, unfalsifiable, or inconsistent with the scientific method; (3) occasions when legitimate scientists spread misinformation or disinformation; and (4) miscommunication of science by the media and other communicators. We characterize this article as a “call to arms,” given the urgent need for the scientific information ecosystem to improve. Improvements are necessary to maintain the public’s trust in science, foster robust discourse, and encourage a well-educated citizenry.
May 5, 2022
Scholars have long documented unequal access to the benefits of science among different groups in the United States. Particular populations, such as low-income, non-white, and Indigenous people, fare worse when it comes to health care, infectious diseases, climate change, and access to technology. These types of inequities can be partially addressed with targeted interventions aimed at facilitating access to scientific information. Doing so requires knowledge about what different groups think when it comes to relevant scientific topics. Yet data collection efforts for the study of most science-based issues do not include enough respondents from these populations. We discuss this gap and offer an overview of pertinent sampling and administrative considerations in studying underserved populations. A sustained effort to study diverse populations, including through community partnerships, can help to address extant inequities.
April 24, 2022
Research on complex contagions suggests that individuals need social reinforcement from multiple sources before they are convinced to adopt costly behaviors. We tested the causal foundation of complex contagions in a country-scale viral marketing field experiment. The experiment used a peer encouragement design in which a randomly sampled set of customers were encouraged to share a coupon for a mobile data product with their friends. This experimental design allowed us to test the causal effects of neighboring adopters on the product adoption of their own neighbors. We find causal evidence of complex contagions in viral marketing: contact with one neighboring adopter increases product adoption 3.5-fold, while contact with a second neighbor increases it 4-fold. We also find that social reinforcement crucially depends on the local network structure that supports or constrains peer influences: the more friends two individuals have in common—the more embedded their relationship is—the stronger the effect. While the effect of social reinforcement is quite large, we show that the ability to generate such reinforcement in a realistic setting may be limited when the marketer cannot directly control the messages that customers send to their friends.
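A minimal sketch of the "embeddedness" measure this abstract refers to: the number of friends two connected individuals share. The toy graph below is illustrative, not the experiment's data.

```python
# Sketch: tie embeddedness as the count of common neighbors, using networkx.
import networkx as nx

def tie_embeddedness(G: nx.Graph, u, v) -> int:
    """Number of common neighbors of u and v (their shared friends)."""
    return len(set(G[u]) & set(G[v]))

G = nx.Graph([("a", "b"), ("a", "c"), ("b", "c"), ("b", "d")])
print(tie_embeddedness(G, "a", "b"))  # 1: "a" and "b" share friend "c"
```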
December 31, 2021
Popular online platforms such as Google Search have the capacity to expose billions of users to partisan and unreliable news. Yet the content they show real users is understudied, owing to the technical challenges of independently obtaining such data and the lack of data sharing agreements that include it. Here we build on existing digital trace methods with a two-wave study in which we captured not only the URLs participants clicked on while browsing the web (engagement), but also the URLs they saw while using Google Search (exposure). Using surveys paired with engagement and exposure data collected around the 2018 and 2020 US elections, we found that strong Republicans engaged with more partisan and unreliable news than strong Democrats did, despite the two groups being exposed to similar amounts of partisan and unreliable news in their Google Search results. Our results suggest the search engine is not pushing strong partisans into filter bubbles; rather, strong Republicans are asymmetrically selecting into echo chambers. These findings hold across both study waves, align with work on social media and web browsing, and provide a rare look at the relationship between exposure and engagement. Our research highlights the importance of users' choices, and our approach moves the field closer to the independent, longitudinal, and cross-platform studies it needs to evaluate the impact of online search and social media platforms.
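A rough sketch of the exposure-versus-engagement comparison described above, assuming per-user lists of domains seen in search results (exposure) and domains clicked (engagement), plus a domain-level partisanship score in [-1, 1]. The score table and record layout are assumptions for illustration, not the study's actual measures.

```python
# Sketch: compare the average slant of what a user saw vs. what they clicked.
from statistics import mean

partisan_score = {"leftnews.example": -0.8, "wire.example": 0.1, "rightnews.example": 0.9}

def mean_slant(domains: list[str]) -> float | None:
    scored = [partisan_score[d] for d in domains if d in partisan_score]
    return mean(scored) if scored else None

user = {
    "exposure": ["leftnews.example", "wire.example", "rightnews.example"],
    "engagement": ["rightnews.example"],
}
print(mean_slant(user["exposure"]), mean_slant(user["engagement"]))
# Balanced exposure but skewed engagement would indicate selective clicking.
```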
December 29, 2021
Given that being misinformed can have negative ramifications, finding optimal corrective techniques has become a key focus of research. In recent years, several divergent correction formats have been proposed as superior based on distinct theoretical frameworks. However, these correction formats have not been compared in controlled settings, so the suggested superiority of each format remains speculative. Across four experiments, the current paper investigated how altering the format of corrections influences people’s subsequent reliance on misinformation. We examined whether myth-first, fact-first, fact-only, or myth-only correction formats were most effective, using a range of different materials and participant pools. Experiments 1 and 2 focused on climate change misconceptions; participants were Qualtrics online panel members and students taking part in a massive open online course, respectively. Experiments 3 and 4 used misconceptions from a diverse set of topics, with Amazon Mechanical Turk crowdworkers and university student participants. We found that the impact of a correction on beliefs and inferential reasoning was largely independent of the specific format used. The clearest evidence for any potential relative superiority emerged in Experiment 4, which found that the myth-first format was more effective at myth correction than the fact-first format after a delayed retention interval. However, in general it appeared that as long as the key ingredients of a correction were presented, format did not make a considerable difference. This suggests that simply providing corrective information, regardless of format, is far more important than how the correction is presented.
November 1, 2021
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Stance detection, which aims to determine whether an individual is for or against a target concept, promises to uncover public opinion from large streams of social media data. Yet even human annotation of social media content does not always capture “stance” as measured by public opinion polls. We demonstrate this by directly comparing an individual’s self-reported stance to the stance inferred from their social media data. Leveraging a longitudinal public opinion survey with respondent Twitter handles, we conducted this comparison for 1,129 individuals across four salient targets. We find that recall is high for both “Pro” and “Anti” stance classifications, but precision is variable in a number of cases. We identify three factors leading to the disconnect between text and author stance: temporal inconsistencies, differences in constructs, and measurement errors from both survey respondents and annotators. By presenting a framework for assessing the limitations of stance detection models, this work provides important insight into what stance detection truly measures.
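A minimal sketch of the validation exercise this abstract describes: treat each respondent's survey-reported stance as ground truth and score the stance inferred from their tweets with per-class precision and recall. The label arrays are toy values, not the study's data.

```python
# Sketch: per-class precision/recall of inferred stance against survey self-reports.
from sklearn.metrics import classification_report

survey_stance = ["Pro", "Pro", "Anti", "Anti", "Pro", "Anti"]    # ground truth
inferred_stance = ["Pro", "Anti", "Anti", "Anti", "Pro", "Pro"]  # from tweets

# Per-class breakdown shows the pattern the paper reports:
# recall can be high even where precision varies.
print(classification_report(survey_stance, inferred_stance, zero_division=0))
```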
August 5, 2021
Social media data can provide new insights into political phenomena, but users do not always represent people, posts and accounts are not typically linked to demographic variables for use as statistical controls or in subgroup comparisons, and activities on social media can be difficult to interpret. For data scientists, adding demographic variables and comparisons to closed-ended survey responses has the potential to improve interpretations of inferences drawn from social media—for example, through comparisons of online expressions and survey responses, and by assessing associations with offline outcomes like voting. For survey methodologists, adding social media data to surveys allows for rich behavioral measurements, including comparisons of public expressions with attitudes elicited in a structured survey. Here, we evaluate two popular forms of linkages—administrative and survey—focusing on two questions: How does the method of creating a sample of Twitter users affect its behavioral and demographic profile? What are the relative advantages of each of these methods? Our analyses illustrate where and to what extent the sample based on administrative data diverges in demographic and partisan composition from surveyed Twitter users who report being registered to vote. Despite demographic differences, each linkage method results in behaviorally similar samples, especially in activity levels; however, conventionally sized surveys are likely to lack the statistical power to study subgroups and heterogeneity (e.g., comparing conversations of Democrats and Republicans) within even highly salient political topics. We conclude by developing general recommendations for researchers looking to study social media by linking accounts with external benchmark data sources.
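A minimal sketch of one comparison described above: testing whether two linked samples (administratively linked vs. survey-linked Twitter users) have similar activity levels. The tweet-count arrays are simulated placeholders, not the study's data.

```python
# Sketch: compare tweets-per-user distributions across two linkage methods.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
admin_linked_tweets = rng.lognormal(mean=3.0, sigma=1.2, size=5000)
survey_linked_tweets = rng.lognormal(mean=3.1, sigma=1.2, size=800)

# Two-sample Kolmogorov-Smirnov test: a small statistic suggests the samples
# are behaviorally similar in activity, as the paper reports.
stat, p = ks_2samp(admin_linked_tweets, survey_linked_tweets)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
```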
July 12, 2021
The backfire effect occurs when a correction increases belief in the very misconception it is attempting to correct, and it is often cited as a reason not to correct misinformation. The current study aimed to test whether correcting misinformation increases belief more than a no-correction control. Furthermore, we aimed to examine whether item-level differences in backfire rates were associated with test-retest reliability or theoretically meaningful factors. These factors included worldview-related attributes, namely perceived importance and strength of pre-correction belief, and familiarity-related attributes, namely perceived novelty and the illusory truth effect. In two nearly identical experiments, we used a longitudinal pre/post design with N = 388 and N = 532 participants. Participants rated 21 misinformation items and were assigned to a correction condition or a test-retest control. We found that no items backfired more in the correction condition than in the test-retest control, or relative to initial belief ratings. Item backfire rates were strongly negatively correlated with item reliability (ρ = −.61 / −.73) and did not correlate with worldview-related attributes. Familiarity-related attributes were significantly correlated with backfire rate, though they did not consistently account for unique variance beyond reliability. While previous papers have highlighted the non-replicable nature of backfire effects, the current findings provide a potential mechanism for this poor replicability. It is crucial for future research into backfire effects to use reliable measures, report the reliability of those measures, and take reliability into account in analyses. Furthermore, fact-checkers and communicators should not avoid giving corrective information due to backfire concerns.
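A schematic sketch of the item-level analysis this abstract describes: compute each item's backfire rate (the share of corrected participants whose belief increased) and correlate it with the item's test-retest reliability. The data are simulated placeholders, so the output will not reproduce the reported correlations; it only shows the shape of the computation.

```python
# Sketch: item-level backfire rate vs. test-retest reliability.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
n_items, n = 21, 250

# Control group (no correction): two belief ratings per item, used to
# estimate each item's test-retest reliability.
t1 = rng.normal(5, 2, size=(n, n_items))
t2 = t1 + rng.normal(0, 1.5, size=t1.shape)
reliability = np.array([pearsonr(t1[:, i], t2[:, i])[0] for i in range(n_items)])

# Correction group: belief before and after the correction; an item
# "backfires" for a participant when their belief goes up.
pre = rng.normal(5, 2, size=(n, n_items))
post = pre - 1.0 + rng.normal(0, 1.5, size=pre.shape)
backfire_rate = (post > pre).mean(axis=0)

rho, p = spearmanr(backfire_rate, reliability)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```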