*Note: author affiliations are from 2022.*
Infodemic: Misinformation in the Age of COVID-19
Steven Luo1, Jessica Guo2, Sneha Sunder3, Jacob Zerykier4, Junzhi Xie5, Matthew Lim6, Brooke Ellison7
1Evergreen Valley High School, San Jose, CA 95148; 2Ward Melville High School, East Setauket, NY 11733; 3Fairfield Warde High School, Fairfield, CT 06824; 4Rambam Mesivta-Maimonides High School, Lawrence, NY 11559; 5Richard Montgomery High School, Rockville, MD 20852; 6Plainview-Old Bethpage JFK High School, Plainview, NY 11803; 7Center for Compassionate Care, Medical Humanities, and Bioethics, Health Science Center, Stony Brook University, Stony Brook, NY 11794
Abstract
The COVID-19 pandemic has witnessed a rise in the spread of misinformation and disinformation through both traditional and technological means. It is becoming increasingly evident that a second pandemic is running in tandem with COVID-19: the “infodemic” (a portmanteau of “information” and “pandemic”). False information about health precautions, safety, and the vaccine plagues American society across the news and social media[1]. In particular, twelve powerful and respected individuals, now dubbed the “disinformation dozen,” were the force behind anti-vaccine claims that, owing to their perceived credibility, have been reshared over 812,000 times across different online platforms[2]. This spread is amplified by social media algorithms originally designed to provide users with relevant and interesting content. Instead, these algorithms began to create “echo chambers” and exacerbated the spread of false information, all while benefiting the companies that designed them[3]. Furthermore, individuals and organizations who spread this information are not simply acting out of ignorance; rather, they exploit racial demographics whose histories make them more susceptible to medical misinformation[4], and capitalize on behavioral economics to profit from the neglect of truthful information[5]. In this paper, we examine the causes, motives, and implications of false information in the age of COVID-19. We also discuss the ethics of misinformation from the perspectives of the news media, social media users, corporations, and other affected groups, and propose possible solutions.
- Introduction
The 21st century has seen a rise in the spread of information on a global scale, leading many to call it (along with part of the 20th century) “the information age.” With the arrival of a pandemic in this globally connected age, these new modes of information spread have proven consequential. The COVID-19 pandemic has witnessed a rise in the spread of disinformation through both traditional and technological means. Though the spread of information has traditionally been considered a positive development, the volatile and fast-moving nature of the pandemic has brought out the darker side of the information age. This has led some to say that a second pandemic is running in tandem with the first: the infodemic (a portmanteau of “information” and “pandemic”). Fundamentally, the infodemic is a symptom not of too little data but of too much: a sea of articles in which false information hides. These effects have been especially pronounced during COVID-19. The mountain of daily news published regarding cases, policy, and announcements makes it difficult to keep track of which stories are true and which are headline-grabbing speculation. The disinformation disseminated by the COVID-19 infodemic (which has also earned it the name “disinfodemic”) has proven, and will continue to prove, fatal to every facet of society.
- News Media
During the COVID-19 news coverage of 2020, Bruce Sacerdote, an economics professor, noticed a pattern. In his paper titled “Why Is All COVID-19 News Bad News?”[6], Sacerdote employed linguistic analysis to determine that eighty-seven percent of stories covered by U.S. media were negative, compared to fifty percent from international sources. This leaves the U.S. as an outlier, which may be explained by the fact that American news companies respond to consumer demand, while well-established, government-funded news sources such as the BBC are less beholden to it. David Leonhardt[7], a New York Times writer, states on the matter: “If we’re constantly telling a negative story, we are not giving our audience the most accurate portrait of reality. We are shading it.”
While some aver that news bias should be eliminated, the American Press Institute claims that bias may sometimes be beneficial, and that understanding bias, and using it when appropriate, should be the focus of journalism[8]. “Media,” by definition, is the plural of “medium” (of information). If media bias were subdued, what would differentiate these mediums of information? Marjorie Hershey[9], a professor of political science at Indiana University, states that “The media present a variety of different perspectives. That’s the way a free press works.” In addition, the First Amendment guarantees freedom of the press; it does not require an unbiased press. On the other hand, a drop in vaccine administrations can be attributed in part to misinformation regarding the pandemic and vaccines, leaving America less safe and further from being virus-free. Thus, while media bias can distort and radicalize the perspectives of Americans, it is also integral to democracy and to the integrity of journalism in the United States.
One approach to easing the problem of news radicalization and misinformation is to consume information from multiple news sources across the political and social spectrum. This way, a wide variety of opinions can be compared and analyzed before reaching a conclusion. Another approach to consider with regard to COVID misinformation is the use of independent fact-checkers. For instance, Facebook has recently adopted an independent fact-checking system that reduces a news article’s distribution based on the validity of the facts it publishes. This may be a viable solution for major publications, as many rely on in-house fact-checking that may not filter out inherent bias. According to Ballotpedia, fact-checking a statement involves three main components: a selection process, research methods, and claim evaluation. The selection process determines which statements to fact-check, using metrics such as a statement’s significance, its potential to mislead, and its likelihood of spreading. Some ethical concerns may be raised, as some organizations refuse to fact-check certain statements such as opinions; however, opinions and controversial statements must be fact-checked thoroughly, as they leave a lasting impact on citizens that can alter perspectives and promote radicalization. In an ever-polarizing age, independent fact-checkers must be employed to scrutinize significant speeches, debates, social media posts, and unofficial statements.
Fact-checking on social media, however, can create issues of its own, according to Scientific American[11]. In a world of opinions, it becomes hard to determine what is fact and what is fiction, making the very idea of “fact-checking” repulsive to some. Although bias can sometimes shed light on different views, when it plays out against a backdrop of social media misinformation, something has to be done.
- Social Media
- Experts
In early February 2021, physician Joseph Mercola and eleven others released an article declaring that coronavirus vaccines are a “medical fraud,” leading to 812,000 Facebook shares[12]. Mercola, an osteopathic physician, believes all policies implemented by the government for COVID-19 were enacted with ulterior motives. He writes that everything the government has done, including its work with companies to develop the COVID-19 vaccine, was done to gain power and wealth. Instead of treatment through vaccination and medication, Mercola practices alternative methods. He also believes that the symptoms of COVID-19 and other medical issues can be treated through a healthy diet, vitamin C, and natural supplements, which he sells. Mercola’s educational credentials and savvy internet use have gained him millions of followers on Facebook and Twitter, allowing him to further disseminate his false claims. The other members of this “disinformation dozen” are also quite popular; among them are a bodybuilder, a wellness blogger, a religious zealot, and, most notably, Robert F. Kennedy Jr., who has claimed that vaccines cause autism and that 5G broadband cellular networks caused the coronavirus pandemic[13]. A majority of the dozen’s posts have since been taken down, but the damage has already been done. Vaccine hesitancy looms large within the United States, and it is still on the rise. Statements made by credentialed individuals such as Joseph Mercola carry more weight than those of a layperson, even when the statements are false, making it difficult to reason with vaccine skeptics and believers of misinformation.
This same mistrust dilemma is seen in scientific papers as well. Scientific studies with inaccurate findings are being published, exacerbating the already urgent infodemic[14]. Particularly as the Delta variant of COVID-19 surges around the world, discouraging vaccination can put an immense number of lives at risk. In June 2021, a peer-reviewed paper titled “The Safety of COVID-19 Vaccinations — We Should Rethink the Policy” was published, concluding that the vaccine shots were causing two people to die for every three they saved[14]. Scientist and COVID-19 vaccine critic Robert Malone tweeted a summary of the paper that accumulated thousands of retweets, alongside a video of conservative pundit Liz Wheeler agreeing with the study. This misinformation was reshared more than 250,000 times across different social media platforms[14]. The paper was later retracted and Malone’s tweet taken down, but by then the damage had already been done. While civilians spreading false claims is dangerous enough, experts doing so in peer-reviewed scientific journals can elevate the crisis to new levels.
Thus, the principles of autonomy and beneficence are violated. Autonomy requires that individuals have full control over their decisions and be aware of all the risks and benefits. When false information is spread, some will not be given the knowledge they need to make a well-informed decision. Additionally, beneficence requires the expert to give advice that is in the patient’s best interest. Yet, as previously discussed, Dr. Joseph Mercola may have intentionally distributed anti-vaccine content in an effort to sell his supplements and earn a profit.
Two measures are needed to mitigate the spread of misinformation. First, scientific experts must take extra precautions to ensure their claims are validated with thorough data. Second, online platforms such as Google, Twitter, Facebook, and Instagram must incorporate stronger algorithms that prevent false claims from going viral, if not remove the misinformation outright, especially as it relates to vaccines.
- Algorithms
Misinformation doesn’t spread in a purely organic way — the same algorithms that serve concert videos to classical music aficionados and “ASMR” clips to slime-loving kids help propagate the spread of misinformation.
Even before the beginning of the coronavirus pandemic, social media had been a breeding ground for anti-vaccination sentiment. For example, distrust in modern medicine and scientific advancements was evident in a Polish anti-vaccine Facebook group, where tens of thousands of comments were posted, a majority containing negative sentiment toward vaccines (Klimiuk et al., 2021)[15]. However, as the Food and Drug Administration began to issue its first Emergency Use Authorizations, anti-vaccine accounts, posts, and pages gained newfound traction, and vaccine skepticism became more mainstream.
Different social media platforms serve content and pursue user retention in different ways. For example, Facebook prioritizes posts that come from those with some connection to the user, such as a “friend” from the friends list.
These platforms serve as entry points into opinion “echo chambers”: scenarios in which a user is exposed to like-minded others whose shared opinions reinforce an existing belief (homogeneous thinking). “Echo chambers” differ from “epistemic bubbles” in that echo chambers are structures in which different opinions are actively excluded and maligned, though not always intentionally (Nguyen 2018)[16]. In this paper, we use the two terms interchangeably. Though different platforms influence the creation of echo chambers in different ways, Twitter and Facebook are uniquely prone to it due to their emphasis on social networking and their news feed algorithms. Unfortunately, this can lead to the segregation of users based on their interests and opinions, fostering distrust of those who hold opposing or differing views[17]. Such polarization is dangerous from a public health perspective, making it harder for scientific institutions to communicate factually with users inside a bubble and to promote preventative and responsive action against epidemics and disease outbreaks. Once again, this is best reflected by anti-COVID-vaccine users who do not trust scientific institutions (such as the U.S. Centers for Disease Control and Prevention) or leaders (including advice from President Donald Trump, President Joseph Biden, and Dr. Anthony Fauci), resulting in their refusal or hesitancy to get vaccinated.
In fact, social media algorithms generally favor the creation of echo chambers by design. From a graph theory-inspired perspective, each interaction with a piece of content strengthens the link between two nodes on Twitter (where nodes represent accounts and edges represent connections, such as accounts following each other), making it more likely for the user to see content similar to what the social networking algorithm infers the user would be interested in. “Likes” on Facebook produce a similar effect, changing the Facebook algorithm’s quantification of a user’s viewpoint on a particular subject. Therefore, if a user were to repeatedly engage positively with vaccine skeptics or COVID-related conspiracy theories, the algorithm would learn their COVID skepticism and serve them more of the same (Cinelli et al., 2021)[18]. While this behavior does cause the formation of echo chambers, it is by design; through such formation, social media sites can push fans of a TV show together, nudge gardeners looking to buy a new set of tools toward a particular brand, and in general bring users and their interests together. This usually provides a more relevant, interesting experience for the user while benefiting the social media company by bolstering content engagement, incentivizing users to remain active on the platform, and generating revenue through interest-based advertisements and targeted product placements.
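This reinforcement dynamic can be sketched as a toy simulation. The topics, weights, and update rule below are our own illustrative assumptions, not any platform's actual recommendation algorithm; the sketch only shows how engagement-weighted sampling drifts a feed toward whatever a user engages with.

```python
import random

def simulate_feed(steps, bias_topic="skeptic", seed=0):
    """Toy model: topics are served with probability proportional to
    edge weights, and each positive engagement strengthens the weight
    of the engaged topic, feeding back into future serving odds."""
    random.seed(seed)
    weights = {"mainstream": 1.0, "skeptic": 1.0}
    served = []
    for _ in range(steps):
        total = weights["mainstream"] + weights["skeptic"]
        draw = random.uniform(0, total)
        topic = "mainstream" if draw < weights["mainstream"] else "skeptic"
        served.append(topic)
        if topic == bias_topic:      # user engages only with this topic
            weights[topic] += 0.5    # engagement strengthens that link
    return served, weights

served, weights = simulate_feed(300)
early = served[:50].count("skeptic") / 50
late = served[-50:].count("skeptic") / 50
print(f"skeptic share of feed: first 50 = {early:.2f}, last 50 = {late:.2f}")
```

Even starting from equal weights, the feedback loop drives the feed toward the engaged topic, which is the echo-chamber formation described above in miniature.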
In addition, algorithms are not necessarily designed to connect people or make them happy; they are designed to make money[19], since social media companies are, despite their mission statements, for-profit businesses. The primary way these platforms generate revenue is through advertisements[20], meaning these algorithms aim to keep users engaged and on the platform. Showing something novel or counterintuitive usually works against this motivation, as it might pull people off the platform to research it further. Likewise, exposing users to opposing viewpoints is counterproductive, as users disengage when shown content they are not interested in.
Many social media networks, notably Facebook[21] and Twitter[22], have begun adding “warning labels” to COVID-19-related content; however, the efficacy of this action is unclear. First, users who hold strong preexisting beliefs, particularly if they are already distrustful of scientific authority, are unlikely to trust information linked from content labels. On Facebook (and Instagram), the “COVID-19 Information Center” sources information from organizations like the World Health Organization as well as local health authorities, which is likely ineffective against close-minded individuals who do not trust these institutions. This is exacerbated by the human connection aspect: one is more likely to trust a genuine person they can speak to[23] than a large organization that they feel is acting purely for profit. In addition, Instagram’s practice of labeling all COVID-related posts has sparked debate over whether content containing offensive material should also be labeled to provide background information and informational resources; examples include the National Association for the Advancement of Colored People uploading a post containing a racial slur and the Trevor Project uploading a post containing a homophobic slur. Finally, labeled content continues to spread: on Twitter, an analysis found that flagged tweets posted by political leaders continued to receive interactions despite limits on likes and retweets[24]. That is not to say this cautionary action was useless; rather, there are more effective measures social media companies must consider in order to counter their role in the spread of COVID-19 misinformation.
One-on-one conversations have been effective in countering vaccine hesitancy caused by COVID-19 related misinformation, and social media companies can implement this by connecting users with local health providers and medical professionals. For example, physicians who have prior history with a patient are in the unique position to dispel medical misinformation and provide sound advice and credible information. Though prompting user interaction and exploration of such resources on the platform may prove challenging given a user’s beliefs, this can be addressed through pop-ups, an easily accessible and highly visible placement of resources within the platform, unskippable informational videos, and repeated notifications allowing the user to explore provided resources, particularly if the user’s content preferences and opinions result in them being served COVID-19 misinformation. Social media platforms should also limit the creation of COVID-misinformation echo chambers by limiting or even preventing their algorithms from repeatedly serving labeled or dubious COVID-related content. This would essentially stop the formation of echo chambers featuring COVID misinformation, and severely limit the reach of the “disinformation dozen” and similar users online. Corporations should also restrict advertising of sensitive content, as it can be used to target misinformation towards certain users based on their interests and demographics, some of whom may be uniquely susceptible to medical disinformation. Finally, algorithmic transparency should be provided to better understand how echo chambers form as a result of collected user data, as well as what other steps we can take to prevent the echoing of misinformation. 
Though algorithms are generally proprietary and confidential, partial disclosures can be made without exposing trade secrets, benefiting user privacy and giving researchers a better understanding of how echo chambers form and how they can be stopped.
- Racial Targeting Through Misinformation
Though the most prominent voices in the anti-vaccine movement are often those of white conservatives, vaccine misinformation targets Black and Hispanic communities too, often at the community level. According to a March 2021 analysis by The New York Times, the vaccination rate for Black Americans is half that of white Americans, and the rate for Hispanic Americans is even lower[25]. While this disparity in vaccination rates among different races is often attributed to a lack of access to vaccines in minority communities, the greater cause may be vaccine hesitancy.
To combat the lack of vaccine access, several states have taken affirmative steps to extend access to Black and Hispanic communities. For example, North Carolina has partnered with faith leaders to extend access through methods such as releasing appointments to Black and Latinx church attendees before the general public[26]. Some states have offered special registration codes to Black and Hispanic residents to encourage them to sign up for vaccine appointments[25]. However, these efforts may simply add fuel to the rampant fire of vaccine misinformation. To some minority communities, what is intended as affirmative action on their behalf bears an eerie resemblance to a long history of being deceived and exploited by American medicine.
Perhaps the most infamous example of doctors experimenting on Black Americans is the U.S. Public Health Service Syphilis Study at Tuskegee, Alabama. At the beginning of the study, 399 of the 600 participants, all of whom were Black men, had syphilis. The men were offered free medical exams and meals to entice them to participate. However, the researchers collected information without the informed consent of the participants. Furthermore, although penicillin was widely available as a treatment for syphilis by 1943, it was withheld from the participants, making the study both unethical and medically unjustifiable, since withholding treatment did nothing to advance the treatment of syphilis[27].
This is not just one isolated instance of exploitation and immorality toward people of color in medical research; rather, it is part of a frequent, yet often forgotten, pattern in medical history. In the nineteenth century, J. Marion Sims performed experimental gynecological procedures on enslaved women and African-American corpses[28]. Seventy years ago, a Baltimore hospital collected cancer cells from Henrietta Lacks, a Black woman, without her consent[25]. Anti-vaccine activists are swift to draw upon these historical examples to spread misinformation among demographics already prone to skepticism toward medical developments.
Children’s Health Defense, a prominent organization in the anti-vaccine movement, released a film entitled Medical Racism: The New Apartheid. The film seized upon interviews with experts, such as medical historian Naomi Rogers of Yale University, without making the context of the interview segments clear — essentially “cherry-picking” the interviewees’ words.
Rogers believed that she was being interviewed for a documentary about past issues of racism and experimentation; however, she felt “used” when she realized her words had been twisted to support an “advocacy piece for anti-vaxxers”[25]. Educating and informing people about medical racism throughout history is a good thing. After all, we must raise awareness of the tragic history of the medical industry’s exploitation of people of color in order to prevent it from repeating itself. However, it is unethical for anti-vaccine organizations to use such information to manipulate susceptible populations to fit their own agenda, especially when the vaccine has been repeatedly proven safe. The spread of historical information is ethical when the purpose is to educate, but unethical when the purpose is to manipulate and mislead people into endangering themselves.
People of color have been exploited by the medical community throughout history, and now many view the government’s efforts to vaccinate them as a continuation of medical racism. Thus, these groups are more vulnerable to the specific targeting of anti-vaccine misinformation groups. The only solution is to restore people’s trust, often on an individual level. Volunteers of color have had success going door-to-door within their own communities, encouraging others to register for the vaccine and addressing their apprehensions personally[25]. Because misinformation often targets minority communities directly, perhaps the best remedy for the misinformation epidemic will come from figures within these communities. Local leaders are familiar faces in these communities, making them more effective at fighting vaccine hesitancy at a grassroots level. Celebrity participation in public health campaigns can also bolster the appeal of governmental efforts to promote vaccination. The racial vaccination gap will continue to narrow not merely through governmental efforts, but also through people of color encouraging each other to get vaccinated.
- Behavioral Economics
There are several ways the COVID-19 infodemic exploited principles of behavioral economics, or the factors behind economic decision making, to facilitate mass public danger.
First, when individuals stand to benefit only marginally from making the correct decision, misinformation spreads more readily.[29] While curricula about internet safety and evidence standards are increasingly taught in public education[30], applying them requires what some may view as “unnecessary” work. From the consumer’s perspective, the consequences of misperceiving a piece of information feel too detached to necessitate thorough investigation, which may be one reason internet users continue to magnify the COVID-19 infodemic. Wearing a mask, for example, carries little perceived individual benefit, especially to one who has never encountered the virus firsthand.
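This first principle can be made concrete with a toy cost-benefit calculation. All utility values below are illustrative assumptions of ours, not empirical estimates; the point is only that when the private cost of verification exceeds the private harm of being wrong, a self-interested consumer rationally skips verification.

```python
# Toy model: a consumer decides whether to verify a claim before
# believing and sharing it. All numbers are illustrative assumptions.
VERIFY_COST = 5.0    # effort of checking sources (utility units)
PRIVATE_HARM = 0.1   # expected harm to the individual from one false belief
SOCIAL_HARM = 50.0   # expected harm to others if the false belief spreads

def net_payoff(verifies: bool, counts_social_harm: bool) -> float:
    """Net utility of the decision under the toy assumptions."""
    if verifies:
        return -VERIFY_COST
    harm = PRIVATE_HARM + (SOCIAL_HARM if counts_social_harm else 0.0)
    return -harm

# A purely self-interested consumer prefers not to verify:
print(net_payoff(True, False), net_payoff(False, False))  # -5.0 vs -0.1
# Accounting for harm to others reverses the preference:
print(net_payoff(True, True), net_payoff(False, True))    # -5.0 vs -50.1
```

Under these assumed numbers, verification only becomes the rational choice once the harm borne by others is counted, which the individual decision-maker has little incentive to do.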
Second, media outlets stand to gain large returns by magnifying the spread of misinformation.[31] A similar phenomenon can be seen in public elections, where politicians stand to gain favor in particular districts by campaigning on misinformation about minorities.[32] Since much of the prominence of corporations, especially those built on information-sharing, is contingent on the popularity of their information, many may seek to amplify leads with poor verification. Although a cost-benefit analysis might predict a loss of consumer trust, the unique context of COVID-19 assuaged these concerns: intense competition, in tandem with weak verification and the sheer volume of primary sources, gave outlets the opportunity to capitalize on information sharing with little fear of focused backlash.[33] This shift in equilibrium meant that consumers developing false beliefs cost suppliers little, and, combined with inelastic demand for information about the pandemic, it produced a dangerously stable market.
Third, consumers are less likely to challenge misconceptions if doing so would reduce their happiness.[34] This was magnified by the large number of claims made in the early days of the COVID-19 pandemic: in tandem with the heavy value placed on individual liberty in countries like the United States, misconceptions about social distancing, mask-wearing, and lockdowns were easy to consume selectively.[35] This made efforts to reverse these claims particularly challenging, as many of those who adhered to them were already set in their beliefs. This principle was particularly pernicious in that it lowered the entry threshold for third-party suppliers, who could appeal to the demand for new claims that did not have to be thoroughly substantiated to be believable.[36]
- Conclusion
Evidently, the misinformation surrounding COVID-19 has become another pandemic in itself. False information about health precautions, safety, and vaccines plagues American society, spanning news media and social media. The publication of flawed scientific findings has fueled mistrust of the COVID-19 vaccine, the government, and scientists in general. The fame of public figures and the design of platform algorithms have together enabled the rapid spread of misinformation. Furthermore, individuals and organizations who spread this information are not simply acting out of ignorance; rather, they exploit racial demographics whose tragic histories make them more susceptible to medical misinformation, and manipulate their audiences by carefully applying the principles of behavioral economics.
Many news media channels provide biased opinions, and those interested should read/watch multiple sources to understand the full picture. Regarding the expansion of independent fact checkers on news media, a wider range of statements, including political opinions, must be analyzed, as they influence the decisions of Americans.
For social media, experts should conduct extensive research before publishing or retweeting claims to ensure that the information they disseminate is correct. Algorithms should also be improved to prevent false information from going viral. It seems the only way to truly solve the issue is through government intervention, or at least the threat of it. Until such a major change occurs, social media companies will continue to do what is best for shareholders rather than their users. The resulting online echo chambers violate users’ autonomy and the principle of beneficence, because they force content upon users in service of ulterior motives.
Additionally, despite efforts to equalize access to vaccines, Black and Hispanic vaccination rates still trail white vaccination rates—evidence of the severe impact of vaccine misinformation within these groups especially. Anti-vaccination organizations manipulate historical information about medical history for unethical purposes. In order to inspire trust in those affected by racial targeting through misinformation, people of color must encourage members of their own communities to get vaccinated.
Another fundamental concern of the infodemic is also a conflicting one: with so much information readily available, why do consumers gravitate toward the least reliable of it? Examining the patterns behind rational decision-making points toward a tangle of communication that makes the solution all the more difficult, and leads to questions such as: who is at fault?
Overall, a world in which the public cannot trust the government or science cannot continue. The integrity and credibility of professional expertise must be rebuilt. This paper has integrated the aforementioned issues, explored their possible ethical violations, and proposed next steps. Hopefully, these steps can be taken to prevent the recurrence of an infodemic that was perhaps more deadly than the underlying pandemic.
CRediT Statement:
Steven Luo: Abstract, Introduction, Social Media – Algorithms, Conclusion, Writing – Review & Editing
Jessica Guo: Abstract, Social Media – Experts, Conclusion, Writing – Review & Editing
Sneha Sunder: Abstract, Racial Targeting, Conclusion
Jacob Zerykier: Abstract, Introduction, Social Media – Algorithms, Conclusion
Junzhi Xie: Introduction, Behavioral Economics, Conclusion
Matthew Lim: News Media, Conclusion
Acknowledgements:
We would like to thank the Garcia Summer Research Program, as well as Dr. Brooke Ellison, for providing this opportunity and for all of their help and support in writing this ethics paper.
References:
[1] Melki, Jad, et al. “Mitigating Infodemics: The Relationship between News Exposure and Trust and Belief in COVID-19 Fake News and Social Media Spreading.” PLOS ONE, Public Library of Science, 4 June 2021, journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0252830.
[2] Bond, Shannon. “Just 12 People Are behind Most Vaccine Hoaxes on Social Media, Research Shows.” NPR, NPR, 14 May 2021, www.npr.org/2021/05/13/996570855/disinformation-dozen-test-facebooks-twitters-ability-to-curb-vaccine-hoaxes.
[3] Nguyen, C. Thi. “Echo Chambers and Epistemic Bubbles.” Episteme, vol. 17, no. 2, 2020, pp. 141–161., doi:10.1017/epi.2018.32.
[4] Frenkel, Sheera. “Black and Hispanic Communities Grapple with Vaccine Misinformation.” The New York Times, The New York Times, 10 Mar. 2021, www.nytimes.com/2021/03/10/technology/vaccine-misinformation.html.
[5] Glaeser, Edward L. “Psychology and the Market.” The American Economic Review, vol. 94, no. 2, 2004, pp. 408–413. JSTOR, www.jstor.org/stable/3592919. Accessed 12 Aug. 2021.
[6] Sacerdote, Bruce, et al. “Why Is All Covid-19 News Bad News?” National Bureau of Economic Research, 2020, doi:10.3386/w28110.
[7] Leonhardt, David. “Bad News Bias.” The New York Times, The New York Times, 24 Mar. 2021, www.nytimes.com/2021/03/24/briefing/boulder-shooting-george-segal-astrazeneca.html.
[8] “Understanding Bias.” American Press Institute, 18 July 2017, www.americanpressinstitute.org/journalism-essentials/bias-objectivity/understanding-bias/.
[9] Hershey, Marjorie. “Political Bias in Media Doesn’t Threaten Democracy – Other, Less Visible Biases Do.” The Conversation, 20 Nov. 2020, theconversation.com/political-bias-in-media-doesnt-threaten-democracy-other-less-visible-biases-do-144844.
[10]
[11] Ceci, Stephen J. “The Psychology of Fact-Checking.” Scientific American, Scientific American, 25 Oct. 2020, www.scientificamerican.com/article/the-psychology-of-fact-checking1/.
[12] “Majority of Covid Misinformation Came from 12 People, Report Finds.” The Guardian, Guardian News and Media, 17 July 2021, www.theguardian.com/world/2021/jul/17/covid-misinformation-conspiracy-theories-ccdh-report.
[13] “One of the Most Influential Voices in Vaccine Misinformation Is a Doctor.” NPR, NPR, 8 Aug. 2021, www.npr.org/2021/08/08/1025845675/one-of-the-most-influential-voices-in-vaccine-misinformation-is-a-doctor.
[14] “Flawed Scientific Papers Fueling Covid-19 Misinformation.” France 24, 30 July 2021, www.france24.com/en/live-news/20210730-flawed-scientific-papers-fueling-covid-19-misinformation.
[15] Klimiuk, Krzysztof, et al. “Vaccine Misinformation on Social Media – Topic-Based Content and Sentiment Analysis of Polish Vaccine-Deniers’ Comments on Facebook.” Human Vaccines & Immunotherapeutics, vol. 17, no. 7, 2021, pp. 2026–2035., doi:10.1080/21645515.2020.1850072.
[16] Nguyen, C. Thi. “Echo Chambers and Epistemic Bubbles.” Episteme, vol. 17, no. 2, 2018, pp. 141–161., doi:10.1017/epi.2018.32.
[17] Nguyen, C. Thi. “The Problem of Living inside Echo Chambers.” The Conversation, 28 Apr. 2021, theconversation.com/the-problem-of-living-inside-echo-chambers-110486.
[18] Cinelli, Matteo, et al. “The Echo Chamber Effect on Social Media.” Proceedings of the National Academy of Sciences, 21 Mar. 2021, doi:10.1073/pnas.2023301118.
[19] Rose-Stockwell, Tobias. “This Is How Your Fear and Outrage Are Being Sold for Profit.” Quartz, Quartz, qz.com/1039910/how-facebooks-news-feed-algorithm-sells-our-fear-and-outrage-for-profit/.
[20] McFarlane, Greg. “How Facebook, Twitter, Social Media Make Money from You.” Investopedia, Investopedia, 19 May 2021, www.investopedia.com/stock-analysis/032114/how-facebook-twitter-social-media-make-money-you-twtr-lnkd-fb-goog.aspx.
[21] Culliford, Elizabeth. “Facebook to Label All Posts about COVID-19 Vaccines.” Reuters, Thomson Reuters, 15 Mar. 2021, www.reuters.com/article/us-health-coronavirus-facebook/facebook-to-label-all-posts-about-covid-19-vaccines-idUSKBN2B70NJ.
[22] “COVID-19 Misleading Information Policy.” Twitter, 2021, help.twitter.com/en/rules-and-policies/medical-misinformation-policy.
[23] Brown, Eileen. “9 Out of 10 Americans Don’t Fact-Check Information They Read on Social Media.” ZDNet, ZDNet, 10 May 2017, www.zdnet.com/article/nine-out-of-ten-americans-dont-fact-check-information-they-read-on-social-media/.
[24] Brown, Megan A., and Zeve Sanderson. “Analysis | Twitter Put Warning Labels on Hundreds of Thousands of Tweets. Our Research Examined Which Worked Best.” The Washington Post, WP Company, 10 Dec. 2020, www.washingtonpost.com/politics/2020/12/09/twitter-put-warning-labels-hundreds-thousands-tweets-our-research-examined-which-worked-best/.
[25] Frenkel, Sheera. “Black and Hispanic Communities Grapple with Vaccine Misinformation.” The New York Times, The New York Times, 10 Mar. 2021, www.nytimes.com/2021/03/10/technology/vaccine-misinformation.html.
[26] Levisohn, Ariella. “The Disturbing History of African-Americans and Medical Research Goes Beyond Henrietta Lacks.” The National Academy for State Health Policy, 26 May 2021, www.nashp.org/states-identify-and-address-covid-19-vaccine-disparities-through-targeted-rollout-and-outreach/.
[27] “Tuskegee Study – Timeline – CDC – NCHHSTP.” Centers for Disease Control and Prevention, 22 Apr. 2021, www.cdc.gov/tuskegee/timeline.htm.
[28] Rothman, Lily. “Immortal Life of Henrietta Lacks on HBO: Her History.” Time, Time, 7 Mar. 2019, time.com/4746297/henrietta-lacks-movie-history-research-oprah/.
[29] Glaeser, Edward L. “Political Economy of Hatred.” OUP Academic, Oxford University Press, 1 Feb. 2005, academic.oup.com/qje/article-abstract/120/1/45/1931471.
[30] Tugend, Alina. “These Students Are Learning about Fake News and How to Spot It.” The New York Times, The New York Times, 20 Feb. 2020, www.nytimes.com/2020/02/20/education/learning/news-literacy-2016-election.html.
[31] Glaeser, Edward L. “Psychology and the Market.” NBER Working Papers 10203, National Bureau of Economic Research, 2004, https://ideas.repec.org/p/nbr/nberwo/10203.html.
[32] Glaeser, Edward L. “Political Economy of Hatred.” OUP Academic, Oxford University Press, 1 Feb. 2005, academic.oup.com/qje/article-abstract/120/1/45/1931471.
[33] “Fighting the Spread of Covid-19 Misinformation.” Harvard T.H. Chan School of Public Health News, 5 Aug. 2021, www.hsph.harvard.edu/news/features/fighting-the-spread-of-covid-19-misinformation/.
[34] “Optimal Expectations | Markus K. Brunnermeier.” Princeton University, The Trustees of Princeton University, scholar.princeton.edu/markus/publications/optimal-expectations.
[35] Appiah, Kwame Anthony. “The True Face of Freedom Wears a Mask.” The Wall Street Journal, 16 Aug. 2020, https://www.wsj.com/articles/the-true-face-of-freedom-wears-a-mask-11596727495.
[36] Simpson, Eric, and Adam Conner. “Fighting Coronavirus Misinformation and Disinformation.” Center for American Progress, 18 Aug. 2020, https://www.americanprogress.org/issues/technology-policy/reports/2020/08/18/488714/fighting-coronavirus-misinformation-disinformation/.