Australian Government Commences Public Consultation on National Regulatory Framework for the “Safe and Responsible” Use of AI
On 1 June 2023, the Australian Government published the Safe and Responsible AI in Australia: Discussion Paper (“Discussion Paper”) to seek public feedback on potential gaps in the existing domestic governance landscape and on possible additional AI governance mechanisms to support the “safe and responsible” development of AI. As noted in the Discussion Paper, although AI has been identified as a “critical technology in Australia’s national interest”, AI adoption rates across Australia remain relatively low. A key aim of the Discussion Paper is to inform the Australian Government on the steps that should be taken on AI regulation in order to increase “community trust and confidence in AI”. The Discussion Paper addresses a broad range of AI technologies and techniques, such as self-driving cars and generative pre-trained transformers (also known as GPT), and notes that any AI regulatory framework would need to consider existing as well as possible future uses of AI and any ensuing risks. The Discussion Paper has an eight-week consultation period ending on 26 July 2023.
Potential risks of the use of AI
The Discussion Paper warns that AI can be put to a range of “potentially harmful purposes”, such as “generating deepfakes to influence democratic processes or cause other deceit, creating misinformation and disinformation, [and] encouraging people to self-harm.” The Discussion Paper also highlights concerns over algorithmic bias and over discrimination against individuals based on race, sex, or other protected categories. Any regulatory framework adopted by the Australian Government would need to address unwanted bias as well as the speed and scale at which AI can be deployed, which may “generate benefits and cause potential harm.” The Discussion Paper recognises that, in determining the appropriate governance response, mitigation of the risks posed by AI technologies would need to be carefully balanced with “fostering innovation and adoption.”
Leveraging existing governance landscape
As the Discussion Paper notes, the use of different types of AI applications must comply with existing applicable Australian laws and regulations. Many existing regulatory regimes (whether general or sector-specific) can be, and currently are being, applied to address potential issues stemming from the use of AI. There may also be scope to extend the Australian Consumer Law to AI, for example where algorithmic decision making results in misleading or deceptive conduct. Further, any personal information processed in AI applications would need to be handled in accordance with Australia’s Privacy Act 1988.
In addition to existing laws and guidelines, the Discussion Paper suggests the potential development of new AI-specific laws, such as legislation giving the Australian Communications and Media Authority powers to combat online misinformation and disinformation, including in the context of AI technologies.
Draft AI regulatory framework
To assist in identifying the most appropriate AI regulatory model, the Discussion Paper highlights the different approaches taken by international jurisdictions active in this space, including the European Union (“EU”), the United States, Canada, and New Zealand. The model favoured in the Discussion Paper, and on which the Australian Government is seeking feedback, is the risk-based approach to the regulation of AI taken by the proposed EU AI Act, with regulatory requirements commensurate with the risk profile of an AI application, as summarised below:
- Low risk: These AI applications have “minor impacts that are limited, reversible or brief”. The Discussion Paper places in this category AI applications that enable personalised online shopping recommendations or automate business processes, as well as algorithm-based spam filters. Interestingly, the low risk category also covers AI-enabled chatbots that direct consumers to service options according to existing processes, although such technology could become high risk where, for example, the service operates in a medical context and the “existing processes” relate to vulnerable individuals in a way that may lead to discrimination. Low risk applications would be subject to basic self-assessment, training for users, and internal monitoring and documentation requirements.
- Medium risk: These AI applications have “high impacts that are ongoing and difficult to reverse”. Among the AI applications in this category are chatbots directing individuals to essential or emergency services, applications assessing an applicant’s creditworthiness for a loan, and the use of generative AI in education and employment settings. This category attracts more stringent requirements, including a comprehensive self-assessment, “meaningful points of human involvement”, recurring training for users, and frequent monitoring and documentation.
- High risk: These AI applications have “very high impacts that are systemic, irreversible or perpetual” and include AI-enabled robots for medical surgery and self-driving cars. AI applications falling within this category would trigger the most onerous requirements, including external audits.
The timing of the Australian Government’s response to feedback received on the Discussion Paper following public consultation is unknown. While Australia has previously taken various voluntary, pre-emptive steps towards regulating AI, including being one of the first countries to adopt an AI Ethics Framework, the rapid developments and advances we are currently seeing in AI globally mean that Australia will need to act quickly to fulfil its goal of becoming “a leader in responsible AI”.
This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.