Singapore PDPC Consultation on New Guidance for Use of Personal Data in AI Systems
On July 18, 2023, Singapore’s data protection authority published proposed guidelines on the use of personal data in artificial intelligence (AI) systems. The guidelines are open for public consultation until August 31, 2023, and aim to address how Singapore’s privacy laws will apply to organizations that develop or deploy AI systems. The draft guidelines underscore the significance placed by the privacy regulator on the need to ensure personal data protection, without discouraging organizations from responsibly using AI systems in their businesses. Accordingly, organizations interested in using AI can look to the guidelines for insight into what privacy expectations lie in store once the guidelines are finalized.
On July 18, 2023, Singapore’s data protection authority (the Personal Data Protection Commission or PDPC) published its proposed advisory guidelines on the use of personal data in artificial intelligence (AI) recommendation and decision systems (the Proposed Guidelines, accessible here) for public consultation. The Proposed Guidelines aim to 1) clarify how the Singapore Personal Data Protection Act (PDPA), which regulates the collection and use of personal data in Singapore, applies to organizations that develop or deploy AI systems that use machine learning (ML) models to make decisions or to assist a human in making decisions (AI Systems) and 2) provide guidance and best practices on ensuring that such AI Systems are operated in compliance with the PDPA.
The Proposed Guidelines come on the heels of growing global concerns over the collection and use of personal data by AI Systems. The public consultation on the Proposed Guidelines highlights the PDPC’s priority to ensure personal data protection without discouraging organizations from the responsible use of AI Systems in their business.
The Proposed Guidelines cover three implementation stages of AI Systems, namely 1) the development, testing and monitoring of an AI System (referred to in the Proposed Guidelines as “Development, Testing and Monitoring”), 2) deployment of an AI System by an organization in its products or services (referred to in the Proposed Guidelines as “Business Deployment”), and 3) the provision of development and deployment support services for AI Systems by service providers (referred to in the Proposed Guidelines as “Service Providers”).
Development, Testing, and Monitoring
The Proposed Guidelines note that where organizations are considering using personal data to develop, test, or monitor an AI System, they may be able to rely on one of two statutory exceptions in the PDPA instead of obtaining consent for the use of personal data for this purpose.
- The first exception is the “Business Improvement” exception, which permits the use of personal data without consent if the personal data is used for certain business improvement purposes such as the improvement of existing goods and services or the development of new goods or services. The Proposed Guidelines set out relevant considerations for organizations seeking to rely on this exception in connection with the development, testing, or monitoring of an AI System, such as whether the use of personal data contributes to improving the effectiveness or quality of the AI System as well as “common industry practices or standards on how to develop, test and monitor AI Systems.”
- The second exception is the “Research” exception, which allows organizations to use personal data without consent if the personal data is used to conduct research or development (including joint research projects with other organizations), where there is a clear public benefit to using the personal data for the research purpose and if any published results of the research do not identify any individual user. As with the Business Improvement exception, the Proposed Guidelines set out a number of relevant considerations for organizations assessing whether the Research exception is applicable, including whether the development of the AI System will improve the understanding of science or engineering or will benefit society by improving the quality of life.
Regardless of whether an organization intends to rely on one of the two exceptions, the Proposed Guidelines remind organizations to adopt appropriate technical, process, and/or legal controls for data protection in the context of developing, testing, or monitoring an AI System, including data minimization and the anonymization of personal data. With respect to the latter, it is interesting that the PDPC has adopted a practical stance in recognizing the tradeoffs of using anonymized data to develop or train AI Systems as opposed to personal data, and accordingly the Proposed Guidelines provide that organizations can use personal data instead of anonymized data so long as an evaluation of the “pros and cons” of using either type of data is conducted and documented internally.
Business Deployment
The Proposed Guidelines confirm that the PDPA’s consent, notification, and accountability obligations apply to the collection and use of personal data in the Business Deployment of AI Systems and provide helpful guidance on compliance with these obligations.
With respect to consent, the Proposed Guidelines note that users must be able to provide “meaningful” consent, namely that users are notified of the purposes of the collection and use of their personal data when consent is sought and that they consent to those purposes. In this regard, the Proposed Guidelines advise organizations involved in the Business Deployment of AI Systems to provide users with sufficient information on 1) the product feature that requires personal data to be collected and processed, 2) the types of personal data that will be collected and processed, 3) how the processing of personal data is relevant to the product feature, and 4) the specific features of personal data that are more likely to influence the product feature. This information can be provided to users through notification pop-ups that may include links to more detailed written public policies. Similar to the practical approach adopted for the use of personal data to train AI Systems noted above, the PDPC has recognized the need to protect commercially sensitive and/or proprietary information as well as the security of the AI System. The Proposed Guidelines accordingly note that organizations may, if appropriate, provide high-level information to users when notifying them of the purpose of the collection and use of their personal data, so long as the decision to provide limited information is justified and clearly documented internally.
The accountability obligation under the PDPA requires organizations to take responsibility for the personal data in their possession. The Proposed Guidelines advise organizations involved in the Business Deployment of AI Systems to develop policies and practices to ensure the responsible use of personal data. Such policies could include “behind-the-scenes” measures such as 1) measures taken to ensure the AI Systems provide fair and reasonable recommendations, predictions, and decisions such as measures relating to bias assessment; 2) safeguards and technical measures to protect personal data during development and testing such as pseudonymization and data minimization; and 3) if useful, information on the safety and/or robustness of the AI System or ML model. These policies should then be made available to users.
Service Providers
The Proposed Guidelines confirm that Service Providers may be treated as data intermediaries under the PDPA where they process personal data on behalf of another organization and would then have to discharge the PDPA obligations applicable to data intermediaries, that is, obligations to 1) implement reasonable security arrangements to protect personal data from unauthorized access, disclosure, and use; 2) cease to retain personal data when retention is no longer necessary for legal or business purposes; and 3) notify the other organization of data breaches. The Proposed Guidelines also encourage Service Providers to support organizations in developing policies and practices that satisfy the consent, notification, and accountability obligations noted above, particularly where those organizations are relying on the technical expertise of Service Providers to comply with these obligations. However, the Proposed Guidelines propose that the primary responsibility for ensuring that the AI System complies with the PDPA will still rest with the organization and not the Service Providers.
The consultation on the Proposed Guidelines remains open until August 31, 2023. The extent to which the final version of the Proposed Guidelines will differ from the current draft remains to be seen and will depend on feedback received during the public consultation. However, the preliminary draft indicates a willingness by the PDPC to support the responsible development and implementation of AI Systems and gives organizations interested in using AI early insight into the privacy expectations that lie in store.
This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.