Asia-Pacific Regulations Keep Pace With Rapid Evolution of Artificial Intelligence Technology

Regulation of artificial intelligence (AI) technology in the Asia-Pacific region (APAC) is developing rapidly, with at least 16 jurisdictions having adopted some form of AI guidance or regulation. Some countries are implementing AI-specific laws and regulations, while others take a “soft” law approach, relying on nonbinding principles and standards. While regulatory approaches in the region differ, policy drivers share common principles, including responsible use, data security, end-user protection, and human autonomy.

Highlighted below are selected recent AI regulatory developments in APAC. Further developments are expected imminently, particularly following the entry into force of the European Union Artificial Intelligence Act (EU AI Act) on August 1, 2024. The EU AI Act is the first comprehensive “hard” law on AI globally and governs the full lifecycle of manufacturing, distribution, and use of AI systems. It has expansive extraterritorial reach, extending to (1) any person who places an AI system on the market in the EU and (2) any provider or deployer (regardless of where based) of an AI system whose outputs are used in the EU. As a result, the EU AI Act is expected to influence the direction and nature of the various regulations being developed across APAC countries.

Businesses are seeing immediate impacts from these AI-related developments. First, APAC businesses will need to review the extent to which they use or are affected by AI technology and develop AI governance frameworks to enable compliance with applicable legal and regulatory requirements. Further, the proliferation of AI systems across nearly all industries has cross-sectoral implications for business decision-making, including in the context of mergers and acquisitions, investment transactions, joint ventures, and the procurement or outsourcing of material services and supplies. There is therefore an increasing need to carefully consider AI governance and compliance as part of transaction management, including by undertaking risk assessments and due diligence and by ensuring that AI-related issues (and associated risks) are appropriately addressed in relevant business arrangements.

Recent AI Regulatory Developments in APAC

  • India was expected to include AI regulation as part of its proposed Digital India Act, although a draft of this proposed legislation has yet to be released. However, a new AI advisory group has reportedly been formed, tasked with (1) developing a framework to promote innovation in AI (including through India-specific guidelines promoting the development of trustworthy, fair, and inclusive AI) and (2) minimizing the misuse of AI. In March 2024, the government also released an Advisory on Due Diligence by Intermediaries/Platforms, which advises platforms and intermediaries to ensure that unlawful content is not hosted or published through the use of AI software or algorithms, and requires them to identify AI-generated content and explicitly inform users about the fallibility of such outputs.
  • Indonesia’s Deputy Minister of Communications and Informatics announced in March 2024 that preparations were underway for AI regulations targeted for implementation by the end of 2024. The regulations are expected to focus on sanctions for misuse of AI technology, including misuse that breaches existing laws relating to personal data protection, copyright, and electronic information.
  • Japan is in the preliminary stages of preparing its AI law, known as the Basic Law for the Promotion of Responsible AI. The government aims to finalize and propose the bill by the end of 2024. The bill looks likely to target only so-called “specific AI foundational models” with significant social impact, and it touches on aspects such as accuracy and reliability (e.g., via safety verification and testing), cybersecurity of AI models and systems, and disclosure to users of AI capabilities and limitations. The framework also proposes collaboration with the private sector in implementing specific standards for these measures.
  • Malaysia is developing an AI code of ethics for users, policymakers, and developers of AI-based technology. The code outlines seven principles of responsible AI, which primarily focus on transparency in AI algorithms, preventing bias and discrimination by inclusion of diverse data sets during training, and evaluation of automated decisions to identify and correct harmful outcomes. There are presently no indications that the government is contemplating the implementation of AI-specific laws.
  • Singapore has similarly not announced plans to develop AI-specific laws. However, the government introduced the Model AI Governance Framework for Generative AI in May 2024, which sets out best-practice principles on how businesses across the AI supply chain can responsibly develop, deploy, and use AI technology. Relatedly, the government-backed AI Verify Foundation has released AI Verify, a testing toolkit that developers and owners can use to assess and benchmark their AI systems against internationally recognized AI governance principles. The government also recently revealed plans to introduce safety guidelines for generative AI model developers and app deployers, which aim to promote end users’ rights by encouraging transparency about how AI applications work (including what data is used, the results of testing, and any limitations of the AI model) and which outline safety and trustworthiness attributes that should be tested prior to deployment.
  • South Korea’s AI law, the Act on Promotion of the AI Industry and Framework for Establishing Trustworthy AI, has passed the final stage of voting and is now under review by the National Assembly. Following an “allow first, regulate later” principle, it aims to promote the growth of the domestic AI industry but nevertheless imposes stringent notification requirements for certain “high-risk AI” (AI that has a significant impact on public health and fundamental rights).
  • Taiwan has published a draft AI law entitled the Basic Law on Artificial Intelligence. The draft bill is open for public consultation until September 13, 2024. It outlines a series of principles for the research, development, and application of AI and proposes certain mandatory standards aimed at protecting user privacy and security, such as specific AI security standards, disclosure requirements, and accountability frameworks.
  • Thailand has been developing draft AI legislation, notably the Draft Act on Promotion and Support for Artificial Intelligence (which creates an AI regulatory sandbox) and the Draft Royal Decree on Business Operations That Use Artificial Intelligence Systems (which outlines a risk-based approach to AI regulation by setting out differentiated obligations and penalties for categories of AI used by businesses). Like the approach taken in the EU AI Act, the Draft Royal Decree groups AI systems into three categories: unacceptable risk, high risk, and limited risk. The progress of both pieces of legislation is unclear, however, with no major developments reported in 2024.
  • Vietnam’s draft AI law, the Digital Technology Industry Law, is under public consultation until September 2, 2024. The draft sets out policies aimed at advancing the country’s digital technology industry, including government financial support for companies participating in or organizing programs aimed at improving their research and development capacity, as well as a regulatory sandbox framework. It also outlines prohibited AI practices, including the use of AI to classify individuals based on biometric data or social behavior. If enacted, it will apply to businesses operating in the digital technology industry (which includes information technology, AI systems, and big data companies).

Please visit Sidley’s AI Monitor, our centralized resource of AI content including Sidley thought leadership, the latest laws and regulations, and access to our AI lawyers. If you would like to receive AI-related news from Sidley, please click here to subscribe to our AI mailing list.

This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.