This report is based on work supported by the National Science Foundation under grants SES-2029292 and SES-2029297. Any opinions, findings, and conclusions or recommendations expressed here are those of the authors and do not necessarily reflect the views of the National Science Foundation. This research was also supported in part by a generous grant from the Knight Foundation.
An individual’s issue preferences are non-separable when they depend on other issue outcomes (Lacy 2001a), presenting measurement challenges for traditional survey research. We extend this logic to the broader case of conditional preferences, in which policy preferences depend on the status of conditions that carry inherent uncertainty and are not necessarily policies themselves.
Social media platforms rarely provide data to misinformation researchers. This is problematic because platforms play a major role in the diffusion and amplification of mis- and disinformation narratives. Scientists are often left working with partial or biased data and must rush to archive relevant data as soon as it appears on a platform, before it is suddenly and permanently removed by deplatforming operations.
One of the most concerning notions for science communicators, fact-checkers, and advocates of truth is the backfire effect, whereby a correction increases an individual’s belief in the very misconception it aims to rectify.
Agent-based models present an ideal tool for interrogating the dynamics of communication and exchange. Such models allow individual aspects of human interaction to be isolated and controlled in a way that sheds new light on complex behavioral phenomena. This approach is particularly valuable in settings beset by confounding factors and mixed empirical evidence.
Research at the intersection of machine learning and the social sciences has provided critical new insights into social behavior. At the same time, a variety of issues have been identified with the machine learning models used to analyze social data.
The internet has become a popular resource to learn about health and to investigate one's own health condition. However, given the large amount of inaccurate information online, people can easily become misinformed. Individuals have always obtained information from outside the formal health care system, so how has the internet changed people's engagement with health information?
Using both survey- and platform-based measures of support, we study how polarization manifests for 4,313 of President Donald Trump’s tweets since he was inaugurated in 2017. We find high levels of polarization in response to Trump’s tweets. However, after controlling for mean differences, we surprisingly find a high degree of agreement across partisan lines in both survey- and platform-based measures.
Individuals increasingly acquire their political information from social media, and ever more of that online time is spent in interpersonal, peer-to-peer communication and conversation. Yet many of these conversations can be either acrimoniously unpleasant or pleasantly uninformative. Why do we seek out and engage in these interactions? Whom do people choose to argue with, and what brings them back for repeated exchanges?
The spread of fake news on social media became a public concern in the United States after the 2016 presidential election. We examined exposure to and sharing of fake news by registered voters on Twitter and found that engagement with fake news sources was extremely concentrated. Only 1% of individuals accounted for 80% of fake news source exposures, and 0.1% accounted for nearly 80% of fake news sources shared.