Valuing truth in the age of fake

RLUK Deputy Executive Director Fiona Bradley was invited to deliver a plenary at the London Info International conference on 5 December 2017. A copy of her remarks is below:

Today I’m going to talk with you about ‘fake news’ and why using that term is problematic, but also why we must pay attention to the ways our norms about how we access and use information are changing, and what we can do about it.

Although it’s been in the headlines for more than a year, no one has been able to definitively define ‘fake news’, although two dictionaries have declared it their ‘word of the year’:

Collins dictionary: “false, often sensational, information disseminated under the guise of news reporting”

Macquarie dictionary: “disinformation and hoaxes published on websites for political purposes or to drive web traffic” and “the incorrect information being passed along by social media”

Earlier this year, the Department for Digital, Culture, Media, and Sport (DCMS) launched an inquiry into ‘fake news’. The fact that DCMS put the term in quotes signified, to me, the lack of consensus about what it means. Research Libraries UK was one of the organisations that responded to the inquiry. In our response, we stated that attempting to define ‘fake news’ should be avoided because there could be unintended consequences, or a chilling effect on the media, libraries, publishers, and others that provide access to information. Trying to define fake news is like trying to nail jelly to a wall: whatever it means changes depending on who is speaking.

What we need to do instead is to think about what fake news represents. We need to distinguish between clickbait and misinformation.

Clickbait

If you’ve ever read a celebrity news site, scrolled through listicles on Buzzfeed, or filled out a quiz on Facebook, you know all about clickbait. There’s so much clickbait now that the satirical newspaper The Onion has branched out and created a satirical clickbait website, ClickHole. Psychologists have found that the reason many of us love these sites so much is that they play to our emotions and our curiosity, triggering a dopamine response.

A lot of the time, these sites are harmless fun. But clickbait can be harmful too. Entrepreneurs in Eastern Europe, for example, are making huge amounts of money from sites that churn out political clickbait stories. It doesn’t matter to them whether the stories are true or false. Some of these started appearing around the time of the 2016 US election. This is where clickbait can potentially turn into something that we should worry about – misinformation, and disinformation.

Misinformation and disinformation

So misinformation, too, can initially seem harmless. You might have seen a picture circulating on Twitter in September claiming to show a shark swimming down a freeway after Hurricane Harvey in the US. Yet Buzzfeed reporter Jane Lytvynenko found that the same shark has turned up during many storms over the years. That shark sure can swim! But why do some tweets about a shark matter? On social media, it’s incredibly easy to manipulate or misattribute photos, making them appear as something other than what they are. Sometimes this is completely innocent, for fun or the result of a mistake, and other times it can be very harmful.

When misinformation – or disinformation – is deliberate, it can have real impact on our institutions, our trust in government, and the media. We should all be worried that press freedom is declining around the world.

In recent months, there have been many reports about attempts by individuals and organisations, with a range of motivations, to use social media to influence the referendum on exiting the European Union, elections around the world, the independence referendum in Catalonia, and opinions about Rohingya Muslims fleeing Myanmar. When such efforts are state-sponsored, they cross a line and can become a form of psychological warfare. Rick Stengel, a former managing editor of Time magazine, commented in December last year that the information and infrastructure developed by the US played an important role in the Berlin Wall coming down, and that this is a lesson other countries are now learning from. He concluded by observing, “you don’t have to invade a country if you control its information space.” Access to knowledge has been transformative for our societies and our lives, but we must also remember that it represents significant power as well.

The challenge of regulation

Recognising these challenges, governments are looking for ways to regulate the internet and internet companies. In the DCMS inquiry on ‘fake news’, the Department wanted to know whether and how the major internet intermediaries, including Facebook, Google, and Twitter, should be regulated. Because the general election was called, DCMS closed the inquiry, but it has continued to talk regularly with Facebook and others in the months since. Some of these discussions have included calls to ban encryption and to reclassify Facebook and Google as publishers. These discussions have also been greatly influenced by the horrific terrorist attacks in the UK this year and the need to remove extremist material quickly. However, outlawing encryption for everyone would break the internet, making it impossible to shop online or do internet banking. There are also troubling examples where freedom of speech has been curtailed in other countries. Courtney Radsch from the Committee to Protect Journalists said, and I quote:

“Many authoritarian countries criminalise the publication of what they commonly call false news, censoring content, shuttering news outlets, and jailing journalists on the charge, which is often levied against information critical of or unwanted by those in power.” – Courtney Radsch, CPJ Advocacy Director

Regulation is a very challenging balance: on one hand, excessive regulation should be resisted; on the other, technology is moving so quickly that the law cannot keep up. In the meantime, intermediaries are self-regulating, relying on community editors, closed handbooks, and private algorithms, and therefore making decisions that can have wide-ranging and immediate effects:

Facebook has been criticised for the worrying impact on democracy of its “downright Orwellian” decision to run an experiment seeing professional media removed from the main news feed in six countries.

The experiment, which began 19 October and is still ongoing, involves limiting the core element of Facebook’s social network to only personal posts and paid adverts.

So-called public posts, such as those from media organisation Facebook pages, are being moved to a separate “explore” feed timeline. As a result, media organisations in the six countries containing 1% of the world’s population – Sri Lanka, Guatemala, Bolivia, Cambodia, Serbia and Slovakia – have had one of their most important publishing platforms removed overnight.

Again you might ask, why does this matter? It matters because in many countries, Facebook IS the internet, because internet users don’t have to buy data to access Facebook. This is called ‘zero-rating’, and Facebook’s version of it is called “Free Basics”. Millions of people never access any internet content outside of the platform. And in every country, more and more of us get our news from Facebook. How Facebook and other platforms are regulated is therefore critically important.

Furthermore, the technological approaches increasingly being used to decide what information to show us, such as algorithms, risk replicating human biases and errors, are not open to scrutiny, and don’t help people to critically evaluate the information they use.

What can we do about it?

It is a truism that a functioning democracy relies on an educated and well-informed populace (Kuklinski, Quirk, Jerit, Schwieder, & Rich, 2000).

What we need are ways to help people understand and use what they see. Being able to critically evaluate information is a key skill for everyone, and something libraries have supported for many years. Information literacy, and increasingly digital literacy, are ways to put information in context. Libraries are trusted and valued by the public, in all age groups, as places where they can get training and advice.

But we must do more. Techniques that judge the reliability of information based on whether a webpage has an author’s name, a date, or a particular domain (e.g. does the URL end in .org or .gov) may no longer be sufficient to help us critically evaluate the credibility of information. We must know who is behind the content: who funds it, and why they created it. The need to increase investment in ensuring everyone has information literacy skills has never been greater.

Libraries have professional standards and guidelines to draw from, including the UNESCO Media and Information Literacy (MIL) Global Assessment Framework, which encompasses learning, critical thinking, and interpretative skills across educational and societal boundaries, and all types of media. In the UK, organisations like SCONUL have developed the Seven Pillars of Information Literacy, which defines core abilities in higher education. We can also look to national frameworks such as the Welsh National Literacy Framework. The UK government has also identified the need for digital skills: a green paper from the Department for Business, Energy and Industrial Strategy released in January 2017 noted the need for everyone to have literacy, digital, and lifelong learning skills, including those who don’t have the opportunity to go to university. Libraries have a large reach in helping people to gain these skills – RLUK’s member libraries alone delivered 17,446 hours of instruction on these skills to users in 2015[1].

This brings me to scholarly publishing. We now acknowledge funding sources in journal articles, so that any potential biases from author affiliations can be identified. We have developed technologies and standards to link together identities and works, such as ORCiD and institutional repositories. We encourage researchers to share their expertise by building profiles, and they do so, including on networking sites like ResearchGate and Academia.edu. Codes of practice are in place to deal with fraudulent research. But there is more we need to do: in my opinion, ‘fake news’ is another motivation to support open access to research outputs and data. Open access includes permissive licences from publishers that allow authors to make open copies of their work available, and the creation of new, quality open access journals and books.

Why does open access matter in this context? Consider this: making research available open access makes scientific discoveries, facts, and new knowledge discoverable by anyone. It makes more research available to be cited, to be reviewed, and to be replicated. This speeds up and strengthens science and research. It also means that when academic research is cited in the media, or someone wants to learn more, they can access the research. While having access to research alone won’t solve the challenges of misinformation or ‘fake news’, it will help when people want to look behind the headlines or an article they find on Wikipedia.

We must also ensure that the record of contemporary history, science, and culture is preserved for current and future generations to ensure accurate use, citation, verification, and reproducibility. Libraries have an essential stewardship role through our research and national libraries, and archives. This issue is especially urgent in the case of digital content, where we must guard against the ephemeral nature of the internet, which can result in the deliberate or accidental revision or deletion of materials. Here too, there are many initiatives I could point to, including web archiving at the British Library, digital preservation standards, programmes to preserve access to subscribed content in libraries, and the UNESCO PERSIST programme, a collaboration with government and industry to secure ongoing access to digital information. Digital collections preserve and share the story of major events in the UK, such as the National Library of Scotland’s Scottish Independence Referendum 2014 Collection. Looking ahead, what will the story of the referendum on the EU look like, when we include all the social media, campaigns, and the official reporting?

We can’t get rid of all misinformation, because technology is constantly changing, and because we must constantly strive to protect freedom of expression and access to information. There is no easy way to solve ‘fake news’, but with a combination of skills, openness, and preservation we can make more informed decisions about the effect it has on us.

Thank you.



[1] Reader instruction: user hours 2014-2015, mean of RLUK institutions, 26 institutions responding. SCONUL Annual Library Statistics 2014-2015 https://www.sconul.ac.uk/tags/sconul-statistics

Comments:

  • Fiona

An excellent piece, but for me it lets the so-called 'mainstream media' off the hook.

Your misinformation examples use social media but provide no examples from the 'mainstream media', despite there being numerous examples.

You mention psychological warfare, but your example of the Berlin Wall is not only old, it also does not provide examples of clickbait, misinformation, or disinformation.

A clear example of mis/disinformation would be the Iraq war: disinformation = the US/UK governments peddling the WMD angle; misinformation = newspapers publishing the disinformation. (In this case, many media outlets were aware that this was probably misinformation, but spiked stories that raised doubts.)

    More recently, the reporting of Russian penetration / manipulation of US elections also fits this pattern.

Where media outlets/journalists consistently publish misinformation from the same sources without cross-checking, or do not give the same prominence to corrections as to the original stories, they are in reality publishing disinformation. The tendency to reprint press releases from governments, enterprises, etc. as reporting is a sad step in this direction.

The reasons media outlets behave like this are many: financial, political, and human reasons among them. Financial = chasing reader numbers and being first to post a story (not so far from 'clickbait'), plus cuts to reporting staff; political = arguments regarding national security, etc. (though this should have worn thin by now); human = being too close to sources (golfing partners, etc.).

You are correct that techniques based on 'webpage reliability' ARE no longer sufficient, but I would argue that 'who is behind the content' is not either, if it is not expanded beyond who published it to an analysis of where the publisher got their information in the first place.

In reality, I do not believe that social media has created these problems; they have always existed ('the first victim in war is the truth'), as has the need for information literacy and analytical skills. Social media has just made these problems explicit and clear.

    Mark Perkins

    03rd Jan, 2018 at 10:48pm
