Question
Summarise the article below.
Misinformation: Strategic sharing, homophily, and endogenous echo chambers
- Asuman Ozdaglar
- Daron Acemoglu

30 Jun 2021
Misinformation spreads rapidly on social media platforms. This column uses a model of online content-sharing to show that a social media platform that wishes to maximise content engagement will propagate extreme articles amongst its most extremist users. 'Filter bubbles' prevent the content from spreading beyond its extremist demographic, creating 'echo chambers' in which misinformation circulates. The threat of censorship and a corresponding loss in engagement could pressure platforms to fact-check themselves, while regulating their algorithms could mitigate the consequences of filter bubbles.
"Virginia is eliminating advanced high-school math courses." "Donald Trump tried to impeach Mike Pence." "President Biden is passing a bill forcing all Americans to cut out red meat."
These headlines were among the many circulating on social media over the last few months. Each of the articles was found to contain misinformation - i.e. misleading information or arguments, often aiming to influence (a subset of) the public. Articles containing misinformation were also among the most viral content, with "falsehoods diffusing significantly farther, faster, deeper, and more broadly than the truth in all categories of information" (Vosoughi et al. 2018). There are increasing concerns that misinformation propagated on social media is further polarising the electorate and undermining democratic discourse.
Why does misinformation spread?
What makes misinformation spread virally on social media? What role do the algorithms of social media platforms play in this process? How can we control misinformation? In recent work (Acemoglu et al. 2021), we address these questions.
As Pennycook et al. (2021) show experimentally, social media users care about sharing accurate content online. Sharing misinformation, and being called out by others, can give the user a reputation for irresponsibility or recklessness and reduce her status on social media (see Altay et al. 2020). At the same time, online users obtain value from social affirmation, or 'peer encouragement', in the form of likes or re-tweets (see Eckles et al. 2016).
We capture these choices by allowing users to decide whether to share an article, to kill it (not share it at all), or to inspect (fact-check) it to find out whether it contains misinformation. Sharing brings direct benefits but may be costly if the article contains misinformation that is discovered by some of its recipients. This trade-off involves two considerations for the user. The first is whether the article is likely to contain misinformation. Because the user has a pre-existing belief/ideology, she will assess the veracity of the article depending on the distance between its message and her viewpoint, and is more likely to share content that is ideologically aligned with her views. The second is how the article shared will be perceived by those who receive it in her social circle. This depends, among other things, on whether her followers will fact-check it themselves, which in turn depends on the degree of 'homophily' in her network, that is, whether her social circle shares her views. The importance of strategic calculations is evident here. If she expects recipients to fact-check an article, this will encourage her to fact-check it first, since misinformation is more likely to be discovered. In our paper, we explore these strategic considerations and how they affect the spread of misinformation.
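To make this trade-off concrete, here is a minimal Python sketch of one plausible version of the share/inspect/kill choice. The payoff values, the function name sharing_decision, and the assumption that perceived misinformation risk rises linearly with ideological distance are illustrative choices for this column summary, not the formal model in Acemoglu et al. (2021).

```python
# Stylised sketch of a user's share / inspect / kill decision.
# All payoffs, parameters, and functional forms are illustrative
# assumptions, not the formal model in Acemoglu et al. (2021).

def sharing_decision(ideological_distance, p_follower_checks,
                     share_benefit=1.0, reputation_cost=2.0,
                     inspect_cost=0.3):
    """Choose 'share', 'inspect', or 'kill' by comparing rough expected payoffs.

    ideological_distance: distance between the article's message and the
        user's viewpoint, in [0, 1]; a larger distance means the user thinks
        the article is more likely to contain misinformation.
    p_follower_checks: probability that at least one follower fact-checks
        the article (higher in low-homophily networks).
    """
    # Perceived probability of misinformation grows with ideological distance.
    p_misinfo = ideological_distance

    # Share without checking: gain engagement, but risk a reputation loss if
    # the article is false *and* a follower discovers it.
    v_share = share_benefit - reputation_cost * p_misinfo * p_follower_checks

    # Inspect first: pay a fact-checking cost, then share only if it is true.
    v_inspect = -inspect_cost + (1 - p_misinfo) * share_benefit

    # Kill: no benefit, no risk.
    v_kill = 0.0

    return max([("share", v_share), ("inspect", v_inspect), ("kill", v_kill)],
               key=lambda kv: kv[1])[0]


# Aligned content inside a homophilous circle is shared without checking...
print(sharing_decision(ideological_distance=0.1, p_follower_checks=0.1))  # 'share'
# ...while the same user facing vigilant, ideologically distant followers
# fact-checks first.
print(sharing_decision(ideological_distance=0.6, p_follower_checks=0.9))  # 'inspect'
```

The second call illustrates the strategic effect described above: a higher chance that recipients will fact-check raises the expected reputation cost of blind sharing and tilts the user towards inspecting the article herself.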
A user's sharing network, and the degree of homophily therein, is determined by her social network and the platform's recommendation algorithm. Perhaps unsurprisingly, social media users tend to engage (e.g. 'follow' or 'friend') other users with similar ideological beliefs (Bakshy et al. 2015). In other words, conservatives tend to interact with other conservatives and liberals tend to interact with other liberals. This forms an exogenous 'echo chamber' with a high degree of homophily, whereby users associate with other like-minded users who echo each other's opinions. Social media platforms' algorithms can exacerbate homophily by linking users of similar beliefs, and not linking those of opposing beliefs. This creates an endogenous echo chamber (or 'filter bubble').
Our findings
One of our main findings concerns the role of echo chambers in the spread of misinformation. When echo chambers and the extent of homophily are limited, misinformation does not spread very far. A piece of online content will circulate until it reaches a user who disagrees with it, who will then fact-check it and reveal whether it contains misinformation. This fact-checking disciplines other users, who will then be induced to inspect articles themselves before sharing them. Conversely, when homophily is high and there are extensive exogenous or endogenous echo chambers, users of similar beliefs associate strongly with each other and, recognising this, fact-check much less. As a result, misinformation spreads virally.
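The following toy simulation illustrates this mechanism under simplifying assumptions of our own (a single chain of recipients and a fixed disagreement threshold for fact-checking); it is not the paper's network model, but it shows how raising homophily lengthens the cascade of an unchecked falsehood.

```python
# Toy cascade: how homophily changes how far a false article travels.
# Purely illustrative assumptions; not the network model in Acemoglu et al. (2021).
import random

def cascade_length(homophily, n_steps=50, seed_ideology=0.9, threshold=0.4):
    """Pass a false article along a chain of recipients and count shares.

    homophily: probability each recipient is drawn from the sender's own
        ideological neighbourhood rather than from the whole population.
    threshold: a recipient who disagrees with the article by more than this
        amount fact-checks it, exposing the misinformation and ending the cascade.
    """
    article_slant = seed_ideology
    shares = 0
    for _ in range(n_steps):
        if random.random() < homophily:
            # Recommended to a like-minded user (echo chamber / filter bubble).
            recipient = seed_ideology + random.uniform(-0.1, 0.1)
        else:
            # Reaches a user drawn from the full ideological spectrum.
            recipient = random.uniform(-1, 1)
        if abs(recipient - article_slant) > threshold:
            break  # a disagreeing user fact-checks; the falsehood is exposed
        shares += 1
    return shares

random.seed(0)
for h in (0.2, 0.9):
    avg = sum(cascade_length(h) for _ in range(2000)) / 2000
    print(f"homophily={h}: average shares before exposure ~ {avg:.1f}")
```

With low homophily, the article quickly meets a disagreeing user and dies after a handful of shares; with high homophily, it circulates among like-minded users many times longer before anyone fact-checks it.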
Another main conclusion of our analysis concerns the role of platforms in propagating misinformation. For platforms that wish to maximise user engagement - in the form of clicks or shares on their site - echo chambers can be highly advantageous. When the platform recommends content to the demographic most likely to agree with it, the content is more likely to be received positively and less likely to be fact-checked and discarded (when it contains misinformation), increasing engagement. This engagement effect can lead to endogenous echo chambers, as documented by Levy (2020) for Facebook.
In fact, our results show that echo chambers and the viral spread of misinformation are more likely when articles contain extreme content. When content is not politically charged, such as wedding photos or cooking videos, the platform does not have a strong incentive to create filter bubbles, and may even decide to inspect the veracity of an article and eradicate the misinformation itself. The same is true when the platform's users hold moderate ideological beliefs. This is because viral spread is less likely for moderate content or among users with moderate ideologies. In contrast, with politically divisive content or strong polarisation of beliefs in the community, not only will the platform find it beneficial to create an echo chamber in order to maximise engagement, but it will do so without verifying the veracity of the article. In other words, the optimal platform algorithm is to recommend extreme content that aligns with the most extremist users, while adopting a filter bubble that prevents the content from spreading beyond this demographic. Though beneficial for the platform, these endogenous echo chambers for politically charged content lead to the viral spread of misinformation.
Regulation can help
Regulation can help mitigate the effects of endogenous echo chambers. We show that three types of policies can be effective: article provenance, censorship, and algorithm regulation. First, if the platform must be more transparent about the provenance of an article, users will be encouraged to fact-check content from less reliable sources more often. However, we also find that such a policy can backfire because of an 'implied truth' effect: content coming from well-known sources can lead to lower than optimal fact-checking. Second, if regulators threaten to censor a small subset of articles that contain extreme messages or might contain misinformation, the platform is incentivised to act more responsibly. In particular, the threat of censorship and the corresponding loss in engagement is enough to push the platform towards reducing the extent of homophily and fact-checking itself in instances which, without censorship, would have created filter bubbles. Finally, a policy that directly regulates the platform's algorithms can mitigate the consequences of filter bubbles. An ideological segregation standard, whereby echo chambers within the sharing network are limited and content across the ideological spectrum is presented to all users, can lead both to more responsible platform algorithms and to more fact-checking by users themselves.
References
Acemoglu, D, A Ozdaglar and J Siderius (2021), "Misinformation: Strategic Sharing, Homophily, and Endogenous Echo Chambers", NBER Working Paper 28884.
Altay, S, A-S Hacquin and H Mercier (2020), "Why do so few people share fake news? It hurts their reputation", New Media & Society, 24 November.
Bakshy, E, S Messing and L A Adamic (2015), "Exposure to ideologically diverse news and opinion on Facebook", Science, 348: 1130-1132.
Eckles, D, R F Kizilcec and E Bakshy (2016), "Estimating peer effects in networks with peer encouragement designs", Proceedings of the National Academy of Sciences, 113: 7316-7322.
Levy, R (2020), "Social Media, News Consumption, and Polarization: Evidence from a Field Experiment", SSRN Scholarly Paper ID 3653388.
Pennycook, G, Z Epstein, M Mosleh, A A Arechar, D Eckles and D Rand (2021), "Shifting attention to accuracy can reduce misinformation online", Nature, 592: 590-595.
Vosoughi, S, D Roy and S Aral (2018), "The spread of true and false news online", Science, 359: 1146-1151.