A study released last month by the Pew Research Center found that more Americans now believe the government and tech giants should be proactive in restricting misinformation online than did in 2018.
Of the more than 11,000 adults interviewed, 48% said the government should take steps to restrict false information, even if it means losing some freedom to access and publish content, up from 39% in 2018. Even more adults, 59%, said technology companies should take steps to restrict misinformation online, even if it places some restrictions on Americans’ ability to access and publish content; this figure is largely unchanged since 2018.
Earlier this month, the now renowned former Facebook employee Frances Haugen appeared before a US Senate subcommittee after exposing, via thousands of internal documents, that Facebook knew its algorithms, which tailor the content a user sees on Instagram and Facebook, were damaging the mental health of teenage girls and inciting violence and hate. Haugen stated: “The company’s leadership knows how to make Facebook and Instagram safer, but won’t make the necessary changes because they have put their astronomical profits before people”. Haugen’s complaint has been filed with the US financial watchdog, the Securities and Exchange Commission.
This study and Frances Haugen’s recent whistleblowing are just two in a long string of examples worldwide of increasing pressure on big tech companies to take a more responsible position on how they distribute content to their users and moderate content published on their platforms.
In the USA alone, two other challenges to Facebook are currently afoot. Reforms are underway to Section 230 of the Communications Decency Act, which exempts social media platforms from liability for what is posted on their networks. And the Federal Trade Commission is suing to break up Facebook via a new antitrust complaint that seeks to force it to sell off Instagram and WhatsApp.
In the UK, a May 2021 draft of the Online Safety Bill places a duty of care on social media companies to restrict harmful content, stating explicitly that websites “have a responsibility to make sure you do not expose your users to harm if you own or manage an online platform or service” that enables user-generated content or interaction between users, or that aggregates content.
In Australia in February, Facebook and Google signed up to a new voluntary code of practice designed to reduce the risk of online misinformation and disinformation, joining other big tech companies such as Microsoft and Twitter. The efficacy of this code is yet to be demonstrated.
It is clear that the public is more conscious than ever of the prevalence of misinformation and manipulation online, but navigating the legislative and regulatory path is proving tricky and will continue to do so. Myriad issues are at play: the scale and pace at which the networks grow, and with them the responsibilities of moderation; regulators’ limited understanding of the algorithms used for content distribution; and the inherent problem that big tech’s profits rely on mining personal data, so the companies cannot be expected to self-regulate, as the Haugen case shows.
In an article in The Guardian earlier this month, Christopher Wylie, the former Cambridge Analytica employee who blew the whistle on the illegal harvesting of millions of US voters’ Facebook profiles, says: “One of the failures of public discourse around all of the problems with big tech and algorithms is that we fail to understand that these are products of engineering. We do not apply to tech companies the existing regulatory principles that apply in other fields of engineering, we do not require safety testing of algorithms,” indicating that big tech’s platforms are not sufficiently checked by regulators before being released to the public.
In the same article, Harvard professor Shoshana Zuboff discusses how big tech companies mine personal experience and turn it into behavioural predictions for sale to businesses: they “rely on surveillance to invade our once ‘private’ experience with operations designed to bypass individual awareness… They sell human futures – predictions of what we will do next and later”. She notes that it is not just big tech but multiple sectors looking to profit from people’s data, and echoes Wylie’s sentiment that neither the public nor policymakers have grasped this as deeply as they need to.
So how do tech companies maintain integrity and revenue, while users maintain their wellbeing as well as their social media accounts? Truescope CEO John Croll says:
“There is a constant war of improvement by the tech companies and social platforms on the one hand, while bad actors, businesses or individuals, manipulate their content to use the algorithms to promote false or misleading ‘information’. A global solution might be as clear as mud at this stage. In any case, it will continue to be Truescope's role to inform clients of what is being said about them in the public domain, true or false, to give them the opportunity to respond.”
It remains to be seen where responsibility for misinformation will ultimately lie, but a battle with big tech is seemingly underway, and the wheels are in motion for global change.