
Summary:

A proposed Communications Legislation Amendment (Combating Misinformation and Disinformation) Bill in Australia aims to combat online misinformation and disinformation by giving the Australian Communications and Media Authority (ACMA) increased powers. However, Human Rights Commissioner Lorraine Finlay argues that the bill risks unduly affecting freedom of expression. Her concerns include overly broad definitions of key terms, a low harm threshold, and the blanket exclusion of government content from the bill's scope. The bill also grants digital platform providers and the ACMA powers to regulate digital content, which risks restricting public debate and censoring unpopular opinions. To combat misinformation and disinformation effectively, the bill needs strong transparency and scrutiny safeguards that protect freedom of expression and avoid undermining democracy and freedoms.

Why Misinformation Bill Risks Freedoms It Aims to Protect

This opinion piece by Human Rights Commissioner Lorraine Finlay appeared in The Australian on Thursday 24 August 2023.

Despite being labelled the “word of the decade” in 2021, fake news is not a modern phenomenon. Misinformation has been spread for political gain since Octavian used fake news to discredit Mark Antony in ancient Rome.

What is different today is the way modern technology makes it easier to spread fake news around the world but harder to distinguish fact from fiction. Misinformation and disinformation can have devastating effects on human rights, social cohesion and democratic processes.

Australia needs to address these risks. But this needs to be balanced with ensuring we don’t unduly affect freedom of expression.

This is the key problem with the federal government’s proposed Communications Legislation Amendment (Combating Misinformation and Disinformation) Bill. The draft bill aims to give the Australian Communications and Media Authority increased powers to combat online misinformation and disinformation, but in a way that fails to strike a balance between censoring objectively untrue content and protecting freedom of expression.

Concerns about whether the draft bill strikes the right balance have been expressed by a range of groups, including social media companies such as Meta, legal experts such as the Victorian Bar Council, and the Media, Entertainment and Arts Alliance (which represents more than 15,000 workers in media and cultural sectors). The full extent of feedback about the draft bill is not yet known, with public submissions to the government’s consultation process closing last week but publication of those submissions being delayed until next month.

The Australian Human Rights Commission submission, which already has been made public, highlights four key concerns about the draft bill.

The first issue is the overly broad and vague way key terms – such as misinformation, disinformation and harm – are defined. Laws targeting misinformation and disinformation require clear and precise definitions.

Drawing a clear line between truth and falsehood is not always simple, and there may be legitimate differences in opinion as to how content should be characterised. The broad definitions used here risk enabling unpopular or controversial opinions or beliefs to be subjectively labelled as misinformation or disinformation, and censored as a result.

The second key problem is the low harm threshold established by the proposed law. Content that is “reasonably likely to cause or contribute to serious harm” risks being labelled as misinformation or disinformation. The categories of harm are themselves extremely broad, including things like “harm to the health of Australians” and “harm to the Australian environment”. Reasonable people may have very different views about what constitutes harm under these categories. The definitions also provide no guidance about how harm is meant to be judged.

It is true that what is required under the bill is not just harm but serious harm. The effect of this, however, is uncertain given the proposed law does not go on to define serious harm. It further requires only that content be “reasonably likely to cause or contribute to serious harm”. Content can be labelled as misinformation even if it does not actually cause harm – it only has to be reasonably likely to do so.

Further, the harm threshold is not limited to causation but requires only contribution, and no minimum level of contribution is stated. This leaves open the possibility that even a minor or tangential contribution will be sufficient. The harm threshold established under this draft bill is extremely low, which risks allowing an extremely broad range of content potentially to be restricted.

The third concern highlighted by the commission is the way the proposed law defines excluded content, which is content that is protected from being labelled as misinformation or disinformation.

One key example here is that the draft bill defines any content that is authorised by the government as being excluded content. This means government information cannot, by definition, be misinformation or disinformation under the law.

This fails to acknowledge the reality that misinformation and disinformation can come from government. Indeed, government misinformation and disinformation raises particular concerns given the enhanced legitimacy and authority that many people attach to information received from official government sources.

This specific exclusion privileges government content but fails to accord the same status to content authorised by the opposition, minor parties or independents.

The result is that government content can never be misinformation but content critical of the government produced by political opponents might be. Any law censoring online information to counter misinformation and disinformation must be scrupulously impartial and apolitical.

The fourth concern relates to the powers to regulate digital content that are granted under the draft bill to digital platform providers and (indirectly) the ACMA.

There are inherent dangers in allowing any one body – whether it be a government department or social media platform – to determine what is and is not censored content. The risk here is that efforts to combat misinformation and disinformation could be used to legitimise attempts to restrict public debate and censor unpopular opinions.

Striking the right balance between combating misinformation or disinformation and protecting freedom of expression is a challenge with no easy answer.

While we need to respond to the risks posed by misinformation and disinformation (which realistically will involve some degree of proscription about what kind of content can appear online), this draft bill does not strike the right balance. However future efforts to combat misinformation and disinformation may better balance these competing interests, there need to be strong transparency and scrutiny safeguards to protect freedom of expression. It is these mechanisms that are sorely missing from the draft bill in its current form.

If we fail to ensure robust safeguards for freedom of expression online, then the measures taken to combat misinformation and disinformation could themselves risk undermining Australia’s democracy and freedoms.

Lorraine Finlay


Australian Human Rights Commission
