MISINFORMATION AND MORAL PANIC

The riot of January 6, 2021, at the United States Capitol building and the legacy of COVID-19 are two factors that have increased awareness of the rise of misinformation. With this increased awareness there has also been a rise in moral panic about the impact of misinformation, particularly on democracies and on individuals’ health-seeking behaviour.

While it seems that more misinformation is circulating within society than ever before, has it actually increased, or is that impression simply a consequence of the heightened fear the moral panic generates? Furthermore, is the moral panic generated by people’s concerns and fear around misinformation reasonable, and if not, who is served by the moral panic being generated?

Misinformation – old tricks in new clothing

It is easy to think of misinformation as a new phenomenon that came with the digital age and declining trust in mainstream media.

This view fails to account for the long history of misinformation, which dates back to the Roman Empire, where it was used to manipulate public opinion and discredit opponents. With the development of the printing press in the 15th century, sensational stories and fake news became common tools for attracting readers to newspapers. In the 19th century, the term “yellow journalism” was coined to describe the practice of newspapers creating sensational and fabricated stories to generate sales.

Misinformation has existed as long as people have been in communities. However, it has evolved with technology, and with the Internet and social media, it can spread faster than with older technologies.

What is misinformation?

Given the historical and ongoing presence of misinformation within societies, what exactly is it? If we claim it is on the increase in the 21st century, we must be able to define the term so that any increase can be measured.

This is where things become challenging because there is no agreed-upon definition of misinformation. It is often linked with disinformation, which is inaccurate information spread maliciously, and propaganda, which is biased information designed to sway people politically. The most common definition of misinformation is anything factually inaccurate but not intended to deceive.

The difficulty with this common definition is that almost anything could qualify as misinformation. For example, if the weather forecast is for 20 degrees with the possibility of rain, and the day turns out to be 25 degrees with no rain, does the forecast qualify as misinformation?

Even with COVID-19, as more scientific information and facts were discovered throughout the pandemic, did the new data mean that earlier data was misinformation? Most reasonable people would acknowledge that information can change and evolve without earlier information being labelled as misinformation.

Claims about the rise of misinformation often bundle together these innocuous inaccuracies with more extreme and potentially serious conspiracy theories. To support any claim that misinformation is on the increase would require an agreed standard definition and one that excluded these innocuous inaccuracies.

Scale and impact of misinformation

Is the scale and impact of misinformation as great as many would have us believe?

A recent paper by Ceren Budak et al. identified three common misperceptions about misinformation:

      That average exposure to problematic content is high;

      That algorithms are primarily responsible for this exposure; and

      That social media is a primary cause of broader social problems such as polarisation.

The key findings from the authors were:

      Exposure to false and inflammatory content is remarkably low. For example, only 6.3% of YouTube users were responsible for 79.8% of exposure to extremist channels from July to December 2020, and 85% of vaccine-sceptical content was consumed by less than 1% of US citizens in the 2016–19 period.

      User preferences, not platform algorithms, play the dominant role in exposure to misinformation, contrary to conventional wisdom. For example, only 0.04% of YouTube algorithmic recommendations directed users to extremist content.

      Many draw a straight line between social media usage and societal ills. This assumption takes the form of a causal chain: misinformation leads to bad beliefs, which result in bad behaviour. Human behaviour is much more nuanced, and studies designed to untangle this cause-and-effect chain invariably come up short.

It may be that there is misinformation about the extent and impact of misinformation.

Misinformation and moral panic

Perhaps our concern and fear around the rise and spread of misinformation has more to do with deeper anxieties about the impact of technology on society and how it is changing our daily lives.

What is moral panic?

Moral panic is the widespread feeling of fear that some evil threatens the values and well-being of the community and society. The fear is often exaggerated, or the perception of threat is false, but it leads to heightened public concern and drastic measures to address the perceived issue.

The late South African criminologist Stanley Cohen developed the concept of moral panic when he explained the public reaction to “mods and rockers,” groups of young people in Brighton, England, during the 1960s. The public’s response to the mods and rockers influenced the formation and enforcement of social policy, law, and societal perceptions of the threat posed by groups of young people.

Distinguishing characteristics of moral panics

There are three distinguishing characteristics of issues that become moral panics.

1. Firstly, attention is focused on the behaviour or issue, whether real or imagined. The media strips the issue or the group of people of any positive characteristics, and the negative qualities are emphasised.

2. The second characteristic is a gap between the concern and the objective threat it poses. Typically, the objective threat is far less than the popularly perceived threat.

3. Thirdly, the level of concern fluctuates over time. There is often a rapid rise in concern when the “threat” is first discovered. This peaks and then can quickly subside. However, while public concern is high, it often results in legislation being passed that is punitive and unnecessary but shows the public that those in power are “doing something.”

These three characteristics are seen in the concerns over misinformation, particularly on social media. There is focused attention on the evils and dangers of social media, especially for young people. This focused attention rests on a reductionist model that assumes young people lack agency: trapped by the platforms’ algorithms, they start believing false information, which in turn leads to harmful and/or destructive behaviour that threatens society.

As the findings from Budak et al. point out, there is a gap between the general public’s perception of the amount and impact of misinformation and what the data actually shows. Despite this gap, public concern feeds into the political arena. Politicians hold inquiries and seek to pass legislation to control social media platforms and young people’s access to them. How successful such legislation will be remains to be seen.

Technological moral panics of the past

It is important to remember that the current moral panic about misinformation and social media is not the first technologically inspired one. Virtually every form of communication technology has been met with a moral panic.

In mid-15th-century Europe, people destroyed print shops in a wave of anti-Gutenberg sentiment, fearing the printing press would ruin society. The New York Times in 1858 stated that:

“so far as the influence of the newspaper upon the mind and moral of the people is concerned, there can be no rational doubt that the telegraph has caused vast injury.”

It was assumed that the telegraph would spread propaganda, destabilise society, disconnect people from the real world, and instil false ideas in their heads.

The introduction of radio in the 1930s led some American parents to worry about its corrupting influence on their children.

We may look at these historical examples and view them as quaint histrionics over progress. Yet, the fears and concerns of people in mid-15th-century Europe or Americans in 1858 or the 1930s were no less real than the fears many have today about the extent and impact of misinformation.

How, then, should we manage the complexities around misinformation and the moral panic many feel?

Managing the complexities of misinformation and moral panic

How do we manage the influence and impact of misinformation in a way that is constructive and helpful rather than becoming caught up in the fear response of moral panic and overreacting?

For many, getting caught up in the moral panic surrounding misinformation gives their general sense of anxiety and uncertainty somewhere to go. The anxiety has a focus rather than being free-floating. At a personal level, the moral panic justifies and provides reasons for their anxiety and concern: social media platforms are to blame, and the government should be doing more to protect vulnerable young people and citizens.

Maintaining a middle ground that holds in tension the need to combat misinformation without getting caught up in the panic can be challenging, because it requires recognising that there are nuances and that things are never as clearly right or wrong as we would like them to be.

Constructively managing the fear of misinformation involves a combination of education, critical thinking, and proactive strategies. Examples of these strategies include:

      Promoting media literacy. It is easy to ban what we think is wrong; educating people on critically evaluating information is more challenging. The ability to think critically includes understanding biases, verifying facts, and checking sources. These skills can empower people to distinguish misinformation from factual information.

      Encouraging skepticism and reflection. Educating people on being constructively skeptical of information and reflecting on its validity can help them avoid falling for misinformation. Part of the skill of reflection is the willingness to consider alternative viewpoints. This willingness links to another strategy.

      Promoting and fostering open communication. To consider alternative viewpoints, we must create environments where open and honest discussion is encouraged and where people feel safe to express different views. Often, social media platforms are not safe spaces because dissenting views result in a media pile-on where the person who is trying to express a different point of view is attacked and vilified. Misinformation can spread quickly when open communication and other opinions are shut down.

      Teaching people to use technology wisely. This includes leveraging technology to flag or filter out misinformation. Social media platforms and search engines can help identify and limit the spread of false information.

We will never eliminate misinformation, but that doesn’t mean we shouldn’t do everything we can to combat its spread. Rather than being caught up in the moral panic surrounding its spread, educating ourselves and those in our networks about identifying and debunking misinformation when we come across it is essential.
