The Echo Chamber Effect

As social media platforms become ever more adept at feeding users self-affirming posts and pleasant experiences, the adverse effect is that people become trapped in their mental comfort zones, losing touch with other communities and alternative viewpoints. Opinion leaders find themselves in an endless pursuit of reach, public opinion becomes polarised, and people with opposing viewpoints lose both the capacity and the interest to converse. Remedies to this social engineering experiment can be found in fact-checking and assistive intelligence.

It is natural to surround oneself with like-minded people. However, once an echo chamber forms, people are cut off from the outside world and lose contact with those who hold different opinions. This can lead to many social problems, including the polarisation of public opinion.


Algorithms deliver different information to different people

The echo chamber effect is closely associated with information technology, especially the rise of social media. Before the widespread use of the internet, information came from a limited number of sources — newspapers, magazines, radio and TV stations. Even though people had different interests and often disagreed, the information they received generally overlapped. In the 1990s, the internet came along and democratised the distribution of information, introducing a diversity of information channels. For example, RSS feeds, which provide website updates in a standardised format, became a popular way to subscribe to news, blogs and websites. Then, in 2004, Facebook was founded, and we turned away from professional media to ‘friends’ for information. Whereas RSS feeds let users decide where the information came from, in what order it was presented and how it was labelled, Facebook began to dictate users’ news feeds. Later on, it even scrapped explicit user settings and handed the decision-making over to algorithms. In short, algorithms get the final say, and news feeds are simply spoon-fed to users.
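To make the contrast concrete, the toy sketch below (in Python, with invented field names and data) compares an RSS-style chronological feed, where the user controls the ordering, with an engagement-ranked feed, where an opaque score controls it. It is purely illustrative and does not reflect any platform's actual ranking formula.

```python
# Illustrative only: contrasting a chronological feed (RSS-style, user-controlled)
# with an engagement-ranked feed (algorithm-controlled). The fields and numbers
# are invented for the example, not any platform's real formula.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int               # e.g. minutes since midnight
    predicted_engagement: float  # a model's guess at likes, comments and shares

posts = [
    Post("newspaper", timestamp=540, predicted_engagement=0.2),
    Post("friend", timestamp=600, predicted_engagement=0.9),
    Post("ngo", timestamp=620, predicted_engagement=0.1),
]

# RSS-style feed: the user decides the order (here, simply newest first).
chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# Algorithmic feed: an opaque score decides the order; the user has no say.
ranked = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

print([p.author for p in chronological])  # ['ngo', 'friend', 'newspaper']
print([p.author for p in ranked])         # ['friend', 'newspaper', 'ngo']
```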

Behind the algorithms lies artificial intelligence (AI), which in turn is supported by machine learning. Unlike traditional software, whose logic is specifically designed by programmers, the process of machine learning remains opaque, and the results generated by algorithms are often inexplicable. This is even more so with advanced machine learning. For instance, AlphaGo, the computer programme that defeated the world’s best Go players, often made moves that even Go masters could not decipher.

Some might suggest that Facebook excels at choosing news feeds, since people are only offered content they care about. It is true that Facebook employs top engineers and psychologists, who adopt a ‘people-oriented’ approach in their selection and compilation of highly tailored news feeds. A ‘people-oriented’ approach sounds infallible. However, since people tend to become set in their ways and to develop confirmation bias, catering to their preferences and excluding all contradictory information may not be in their best interest. Moreover, an individual is part of a wider society, and it is dangerous to place too much emphasis on personal interests or beliefs. Facebook has succeeded in invoking positive feelings by continuously feeding users self-affirming posts, but there is a consequence: people are trapped in their comfort zones, completely oblivious to different communities or alternative viewpoints. In other words, they find themselves in an echo chamber.

Facebook alone has already done significant damage. When we bring in other standalone ecosystems, such as WeChat, WhatsApp and LINE, we end up with hundreds of millions of private groups that circulate tens of billions of messages daily. Every day, different people are exposed to drastically different information, sometimes with zero overlap. It is no wonder that there is little common ground in their resulting worldviews.

Bringing like-minded people together serves to connect them; but locking them in an echo chamber serves to disconnect them from all ‘others’.

Opinion leaders are incentivised to spread disinformation

Along with ‘people-oriented’ algorithms, we have opinion leaders. Of course, opinion leaders existed before the advent of social media. They were eminent scholars of different disciplines, pioneers in various sectors and experts on a range of topics in discussion forums. Discussion forums, however, were not ‘people-oriented’ but ‘topic-based’: those participating in political debates would be exposed to views that spanned the political spectrum, and had to engage with opinions far less congenial than those on Facebook.

While opinion leaders of the past had to possess specific expertise, today’s age of social media also demands an in-depth understanding of algorithms. The ‘viral effect’ lies at the heart of the game. Some opinion leaders are well versed in increasing engagement and maximising reach. They agonise over whether a picture should be square or rectangular, whether a text should be long or short and whether a post should be scheduled for midday or early morning. They choose emotional wording; they make opportune comments to boost traffic. Opinion leaders seem to thrive on social media platforms, attracting followers and exerting influence. At the same time, they can also be hijacked by algorithms and end up pandering to the system, lost in an exhausting, endless pursuit of reach.

The bad news is that opinion leaders are not the only ones empowered by algorithms. Disinformation has also taken hold of the online world. In this post-truth era, any disinformation can find its believers, no matter how implausible the story. On top of that, algorithms have helped recruit supporters of conspiracy theories. Even when a piece of disinformation is debunked, opinion leaders are often reluctant to correct their statements. Even if the error has been rectified, algorithms will make sure that the corrected information reaches far fewer people than the original false information. When the ‘returns’ of disinformation are so much higher than the ‘cost’, the continuous dissemination of false information becomes a ‘rational’ choice. This is the anomaly of the echo chamber.

When echo chambers dominate the online world, there is no overlap between different communities, no common ground in society, no interaction among divergent opinions, and it becomes increasingly difficult to hold a public discussion. If people’s opinions are polar opposites, the best-case scenario is that they stop conversing.

Using fact checks to mitigate the echo chamber effect

The combination of the echo chamber and misinformation has given rise to serious social problems. However, technology is evolving much more quickly than ethics; new media is also developing much faster than online etiquette. Mass media, traditionally viewed as bearing social responsibility as the ‘fourth estate’, has struggled to adapt to the new platforms of social media and independent online voices. In the past, editors and reporters were responsible for setting agendas, highlighting and presenting important issues to stimulate public discussion. In contrast, today’s algorithms place opinion leaders centre stage, with the mission of attracting eyeballs and reinforcing users’ opinions.

Following the scandal of Cambridge Analytica’s improper acquisition and use of personal data from Facebook, Western society has become increasingly aware of the problem and applied pressure on social media. These platforms have started to take action in recent years. For example, Facebook is working with fact-checkers in different countries certified by the International Fact-Checking Network (IFCN). Once posted content is identified as misinformation, Facebook will significantly reduce its reach. It will also attach a label to warn viewers of its misleading nature, followed by a link to the verification report. This policy is undoubtedly welcome, but the next step is to improve the user interface and algorithms to further assist the fact-checkers in combatting the echo chamber effect.
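As a rough illustration of the workflow described above, the hypothetical sketch below shows how a flagged post might be demoted and labelled once a fact-checker issues a verdict. The field names, demotion factor and rating values are assumptions made for the example, not Facebook's actual implementation.

```python
# Hypothetical sketch of the fact-checking workflow described above: a post flagged
# by a certified fact-checker has its reach reduced and receives a warning label
# linking to the verification report. All names and numbers are assumptions.

def apply_fact_check(post: dict, rating: str, report_url: str) -> dict:
    """Demote and label a post according to a fact-checker's verdict."""
    if rating in ("false", "partly false"):
        post["distribution_score"] *= 0.2          # significantly reduce reach
        post["label"] = f"Fact-checked: {rating}"  # warn viewers of misleading content
        post["more_info"] = report_url             # link to the verification report
    return post

post = {"id": 42, "distribution_score": 1.0}
apply_fact_check(post, "false", "https://example.org/fact-check/42")
print(post["distribution_score"], post["label"])  # 0.2 Fact-checked: false
```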

Assistive intelligence vs artificial intelligence

Instead of passively reacting to disinformation, a more effective approach would be to tackle the causes of the echo chamber effect. Artificial intelligence has undeniably boosted productivity and improved our lives in some ways. However, society needs to be aware of its potential pitfalls, especially when allowing it to decide what information the public receives. Bearing in mind the indecipherable processes behind machine learning, some ethicists are advocating Explainable Artificial Intelligence (XAI), an approach that attempts to ensure that AI results can be understood by humans and comply with ethical standards. Some developed countries have already incorporated such initiatives into their laws. For example, the General Data Protection Regulation (GDPR), which came into force in the European Union in 2018, protects users’ ‘right to explanation’ as well as their personal data, but details need to be finalised before these provisions can be fully applied to AI.

In the context of social media, the right to explanation refers to the user’s right to know how posts are ordered in a news feed — whether the sequence is based on priority settings, engagement with the author or advertising deals. As algorithms dictate what information users receive, the public is no longer convinced by claims of ‘technological neutrality’. Algorithms are, after all, shaped by advertisers, by business partners such as Cambridge Analytica and even by governments; technology has never been neutral.
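A minimal sketch of what such a right to explanation could look like in practice is given below: each post's ranking score is returned together with the contribution of every factor behind it. The factors and weights are invented for illustration and do not correspond to any real platform's signals.

```python
# A minimal sketch of a 'right to explanation' for news feed ranking: every score
# is returned together with the contribution of each factor. The factors and
# weights are invented for illustration, not any platform's real signals.

WEIGHTS = {"user_priority": 0.5, "engagement_with_author": 0.3, "advertising_deal": 0.2}

def score_with_explanation(signals: dict) -> tuple:
    """Return a post's ranking score and a breakdown of how it was computed."""
    contributions = {name: WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS}
    return sum(contributions.values()), contributions

score, explanation = score_with_explanation(
    {"user_priority": 0.0, "engagement_with_author": 0.4, "advertising_deal": 1.0}
)
print(round(score, 2))  # 0.32
print(explanation)      # shows how much each factor pushed the post up the feed
```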

Along with automation, a place should be reserved for human judgment, especially when it comes to key decisions. The computer can propose solutions and explain their pros and cons, leaving the final decision to the user. Also abbreviated as AI, programs based on ‘assistive intelligence’ undertake to assist and empower humans, whereas artificial intelligence functions to replace human input and render it redundant. Perhaps, instead of pursuing artificial intelligence, society should focus more on developing assistive intelligence.
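The hypothetical sketch below illustrates the division of labour this implies: the program proposes options and spells out their trade-offs, while the final decision remains with the human. The options and trade-offs are made up for the example.

```python
# A sketch of the 'assistive intelligence' idea: the program proposes options and
# explains their trade-offs, but the final choice stays with the human. The
# options and trade-offs are made up for the example.

def propose_options() -> list:
    return [
        {"option": "chronological feed", "pros": "transparent, user-controlled",
         "cons": "may bury posts the user cares about"},
        {"option": "ranked feed", "pros": "surfaces posts the user is likely to engage with",
         "cons": "opaque ordering, risk of an echo chamber"},
    ]

def assist(choose) -> dict:
    """Present options with pros and cons; delegate the final decision to a human."""
    options = propose_options()
    for i, o in enumerate(options):
        print(f"[{i}] {o['option']} | pros: {o['pros']} | cons: {o['cons']}")
    return options[choose(options)]  # the human, not the algorithm, decides

# Example: the human (simulated here by a callback) picks option 0 after weighing both.
chosen = assist(lambda options: 0)
print("Final decision:", chosen["option"])
```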


Cover image: Gerd Altmann from Pixabay
(Translated by Tse Kwun-Tung.)
Originally published on HEINRICH-BÖLL-STIFTUNG


