Australia’s Digital Platform Regulators Release Working Papers on Risks and Harms Posed by Algorithms and Large Language Models
To mark the launch of its website, Australia’s Digital Platform Regulators Forum (DP-REG) has recently released two working papers relevant to developing AI policy on the global stage: Literature summary: Harms and risks of algorithms (Algorithms WP) and Examination of technology: Large language models used in generative artificial intelligence (LLM WP) (together, the Working Papers). The DP-REG, which comprises several prominent Australian regulators spanning multiple regulatory domains, was established to ensure a collaborative and cohesive approach to the regulation of digital platform technologies in Australia. The Working Papers focus on understanding the risks and harms, as well as evaluating the benefits, of algorithms and generative artificial intelligence, and provide recommendations on the Australian Federal Government’s response to AI. The Working Papers also serve as a useful resource for Australian industry and the public as these technologies are increasingly adopted and engaged with in the Australian market. Interestingly, the recommendations set out in the Working Papers broadly align with the requirements of the EU’s Artificial Intelligence Act, on which political agreement was reached on 8 December 2023, suggesting that Australia’s proposed approach to regulating AI may be inspired, at least in part, by Europe’s AI regulatory framework.
Summarised below are the key discussion points from the Working Papers:
Algorithms WP
- Content Moderation: The Algorithms WP highlights the importance of algorithmic content moderation for managing the large volumes of harmful and illegal content on online platforms. Whilst this tool has many benefits, the DP-REG acknowledges that algorithmic content moderation has the potential to pose various harms to individuals and society. According to the Algorithms WP, a core issue with algorithmic content moderation is the rate of error. These errors are commonly referred to as false positives (i.e., the removal of harmless content) and false negatives (i.e., the failure to remove harmful or misleading information); a simple illustration of how these error rates can be measured appears in the first sketch following this list. Errors can occur for reasons such as human error, AI misjudgement of context and language, and the limitations of certain tools’ “robustness” (i.e., algorithms’ ability to “manage circumvention efforts or unexpected inputs that occur when a tool is used in the real world”).
The Algorithms WP also highlights the risk that algorithms trained on existing content that exhibits bias may perpetuate that bias, marginalise certain communities, and suppress a diversity of viewpoints. The DP-REG calls for Australian lawmakers to ensure potential harms to individuals are mitigated, and recommends enhanced transparency and user feedback mechanisms when such tools are used.
- Recommender Systems: Recommender systems – also known as content curation systems or ranking algorithms – are used by digital platforms to prioritise and personalise content, helping users find relevant and desirable material. These algorithms analyse user data, such as interactions on a platform, to predict how users will react to different types of content based on factors such as past engagement with similar content (a simple illustration of this kind of scoring appears in the second sketch following this list). The Algorithms WP notes the risk that recommender systems may also inherit bias from biased training data, leading to discrimination and increased exposure to harmful content. The DP-REG stresses the importance of transparency around the design and operation of recommender systems and the data on which they are trained, and emphasises the risks arising from highly personalised recommendations.
- Targeted Advertising: The DP-REG highlights the potential for targeted advertising to manipulate consumer preferences and exploit vulnerable consumers, such as advertising diet or cosmetic items to individuals with low self-esteem. According to the Algorithms WP, algorithmic transparency is critical to reducing these and similar risks of misinformation and manipulation.
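To make the false positive/false negative distinction above more concrete, the sketch below shows how those error rates might be computed for a hypothetical moderation classifier. The data, labels, and function name are illustrative assumptions only and are not drawn from the Algorithms WP or from any particular platform’s tooling.

```python
# A minimal, hypothetical sketch of measuring moderation error rates.
def moderation_error_rates(true_labels, predicted_labels):
    """Compute false-positive and false-negative rates.

    true_labels / predicted_labels: sequences of booleans where True
    means the content is genuinely harmful / was flagged for removal.
    """
    false_positives = sum(
        1 for truth, flagged in zip(true_labels, predicted_labels)
        if flagged and not truth  # harmless content removed in error
    )
    false_negatives = sum(
        1 for truth, flagged in zip(true_labels, predicted_labels)
        if truth and not flagged  # harmful content left online
    )
    harmless_total = sum(1 for truth in true_labels if not truth)
    harmful_total = sum(1 for truth in true_labels if truth)
    fpr = false_positives / harmless_total if harmless_total else 0.0
    fnr = false_negatives / harmful_total if harmful_total else 0.0
    return fpr, fnr


if __name__ == "__main__":
    # Toy data for five posts reviewed by a hypothetical moderation model.
    actually_harmful = [True, False, False, True, False]
    flagged_by_model = [True, True, False, False, False]
    fpr, fnr = moderation_error_rates(actually_harmful, flagged_by_model)
    print(f"False-positive rate: {fpr:.0%}")  # share of harmless posts removed
    print(f"False-negative rate: {fnr:.0%}")  # share of harmful posts missed
```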
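In the same spirit, the second sketch gives a minimal, hypothetical picture of the kind of scoring a recommender system might perform: ranking candidate items by how closely their topics overlap with a user’s past engagement. The topics, weights, and scoring rule are illustrative assumptions, not details taken from the Algorithms WP.

```python
from collections import Counter


def rank_by_past_engagement(past_engagement, candidates):
    """Rank candidate items by overlap with a user's engagement history.

    past_engagement: list of topic tags the user has interacted with.
    candidates: dict mapping item name -> list of topic tags for that item.
    """
    # Topics the user engaged with more often receive higher weights.
    topic_weights = Counter(past_engagement)
    scored = {
        item: sum(topic_weights[tag] for tag in tags)
        for item, tags in candidates.items()
    }
    # Highest-scoring (most "relevant") items are recommended first.
    return sorted(scored.items(), key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    history = ["fitness", "fitness", "cooking", "travel"]
    items = {
        "protein recipes": ["fitness", "cooking"],
        "budget flights": ["travel"],
        "home workouts": ["fitness"],
    }
    for item, score in rank_by_past_engagement(history, items):
        print(item, score)
```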
LLM WP
- Limitations of LLMs: According to the LLM WP, text generated by an LLM “lacks a true understanding of communication, the world, or the reader’s state of mind.” Issues arise when users assume that LLMs understand language as humans do, owing to “our predisposition to interpret communicative acts as conveying coherent meaning and intent, whether or not they do.” As the LLM WP notes, this can lead to LLM chatbots producing information that may be inaccurate or harmful while appearing authoritative. The LLM WP highlights the risk that LLMs could engage in self-preferencing or make it difficult for consumers to identify sponsored recommendations or content. It also highlights risks relating to bias and quality control.
- Webscraping Considerations: The LLM WP also highlights that, because LLMs require very large volumes of training data, they are often trained on publicly available data, which incentivises the scraping of information from public websites without the knowledge or consent of the content creators or data subjects; this may in turn present regulatory challenges.
The DP-REG acknowledges the difficulty it faces in “balancing innovation and regulation, enforcement, and keeping pace with evolving technology and business models.” A common recommendation featured throughout the Working Papers is the need for transparency. In terms of next steps, the Working Papers note that DP-REG members will take part in Australian Government-wide discussions to plan Australia’s response to these technological developments. Relatedly, on 4 December 2023, the Australian Federal Government announced its intention to establish a new copyright and AI reference group to better prepare for any future copyright challenges arising from AI use in Australia. These pre-emptive steps towards regulating AI are promising signs that Australia is moving closer to its goal of becoming “a leader in responsible AI”.
This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.