Industry Update: Draft Code of Practice for disinformation protects the platforms too

What has happened?

On 19 October 2020, the Digital Industry Group Inc. (DIGI), whose membership includes Google, Facebook and Twitter, released a draft Code of Practice on Disinformation and an associated Discussion Paper.

The ACCC’s ground-breaking inquiry into the dominance of digital platforms in the media industry has led to a number of new regulatory processes. The most publicised has been the news media bargaining code, which the Government has decided to make mandatory.

However, less attention has been paid to a separate voluntary Code of Practice dealing with disinformation circulated on platforms, which the ACMA has been overseeing. Nevertheless, it has significant implications. Once a platform accedes to the voluntary Code, its provisions become enforceable on that platform by the ACMA. The draft Code makes provision for a complaints process, an annual report, and a review two years after commencement. The disinformation Code is expected to come into force in 2021.

Getting a clear definition of disinformation is critical. The draft Code defines disinformation as, cumulatively:

 
  • inauthentic behaviour (e.g. spam) by users of a digital platform;
  • that propagates digital content via that platform;
  • for the purpose of economic gain, or to mislead or deceive the public;
  • that may cause harm (i.e. threats to political and policy processes or to public goods); and
  • that is not otherwise unlawful.

The draft Code specifically excludes misleading advertising, reporting errors, satire and parody, and clearly identified partisan news and commentary. Strictly, disinformation requires an intent to cause harm, in contrast to misinformation, where false information is circulated without intent to cause harm (e.g. innocently retweeting a fake news story). But the boundary is blurred, and it is likely that the effect of the information will matter more than the intent behind it.

The draft Code focusses on outcomes rather than prescriptive rules. It will require platforms that opt in (the Code is voluntary) to:

 
  • implement policies, processes and technologies to reduce the risk that users will be exposed to disinformation
  • disrupt advertising and monetisation incentives for disinformation (e.g. by improving authentication and de-promoting advertising services known for disinformation)
  • work to ensure the public benefit of digital platforms (e.g. by managing users to eliminate bots and other bad actors)
  • empower consumers to make better informed choices of digital content.

The draft Code identifies several specific measures that platforms might adopt to achieve these outcomes, including but not limited to:

 
  • complaints handling systems and processes
  • flagging, demoting or ranking of content
  • removal of content
  • user communication
  • security systems
  • account suspension or disabling
  • technological and algorithmic review of content and/or accounts
  • notifying users who have engaged with relevant content
  • fact checking
  • exposing metadata about the source of content to users
  • partnerships with third-party organisations
  • preventing monetisation of disinformation
  • prioritising credible and trusted news sources
  • editorial and curation processes.

DIGI’s Discussion Paper also discusses some existing measures such as Twitter’s policy to manage “inauthentic content”, and Facebook’s policy on “coordinated inauthentic behaviour” (CIB) that seeks to manipulate the public.

Submissions on the draft Code close on 24 November 2020.

Our take

Disinformation has been a problem on social media for a long time, but came to prominence around the time of the Brexit debate and the 2016 US elections. Though the role of disinformation in these particular events has been downplayed by later investigations, it is undoubtedly true that disinformation spread on social media platforms is causing harm. For example, 5G towers and telecommunications technicians in Europe have been attacked over false claims that 5G causes COVID-19 infections, and false information about the safety or otherwise of vaccines has circulated for many years. The costs of disinformation are real, even if these “facts” are not.

Clear procedures for the handling of disinformation are designed to help society, but they help the platforms too. Just last week, Twitter blocked the sharing of a New York Post article alleging that one of the Presidential candidates was embroiled in corrupt practices. This led, amongst other things, to the suspension of White House press secretary Kayleigh McEnany’s personal Twitter account, attracting severe criticism from White House allies.

Coming in the last weeks of an ill-tempered campaign, this action has shone an even stronger spotlight on disinformation and how to deal with it. If Twitter had exempted White House Twitter accounts as “partisan news and commentary”, as the draft Australian Code allows, then it would have been spared the embarrassment of admitting the issue was mishandled (despite that admission, it currently retains a ban on the New York Post’s Twitter account).

It would also have helped avoid the growing calls for the reform or repeal of section 230 of the US Communications Decency Act, which shields platforms from the responsibilities of publishers. These calls argue that a platform which interferes with media reporting or political speech is in effect a publisher exercising editorial power, and should be regulated as one. But actions taken under a recognised Australian Code would not threaten platforms’ exemption from editorial responsibility in Australia.

The controversy is an inevitable result of the gatekeeper power that digital platforms have. No Code of Practice can change this underlying power, and the latest controversy has revived suggestions in the US that digital platforms need to be broken up (using AT&T as the historical example). But it is difficult to see how this can work, given that (for example) Google’s business generates almost all of its revenue from search advertising, and the other parts of its business are unlikely to be viable if separated.

In our view, a more productive approach will be to accept platform dominance as a fact, and look to other concepts such as the common carrier idea in telecommunications, market power in competition regulation, and community standards in broadcasting as models for regulation of this dominance. The disinformation Code of Practice is a good example of this line of thought, and is more likely to yield progress than industry structural reform.