Artificial intelligence has been hailed for its promise of breakthrough innovations but has also drawn concern from such notable voices as Bill Gates, Stephen Hawking, and Elon Musk. To explore the issues presented, the White House conducted a review of the opportunities, risks, and regulatory implications of artificial intelligence. Last week, the White House released a comprehensive report, Preparing for the Future of Artificial Intelligence, reflecting the culmination of its review, including public comment and several public workshops co-hosted by the White House Office of Science and Technology Policy with the National Economic Council, as well as non-profit and academic organizations.
On September 23, 2016, the European Data Protection Supervisor (“EDPS”) published an Opinion on the coherent enforcement of fundamental rights in the age of big data (the “Opinion”). Building upon the preliminary opinion it published in 2014, the EDPS sought to emphasise the importance of protecting personal data rights in light of the rise of data “monopolies.” With the expansion of the big data economy and the Digital Single Market Strategy, the EDPS suggested that the interface between competition and privacy should be a long-term concern for all data protection authorities.
This month, the White House announced a series of workshops and a working group to address the “benefits and risks” of artificial intelligence. The workshops, which are to be held in Seattle; Washington, D.C.; Pittsburgh; and New York City, will take place between May 24 and July 7, and are expected to result in a public report issued by the end of the year. The workshops and report are expected to address familiar themes – “privacy, security, regulation, law, and research and development to be taken into account when effectively integrating this technology into both government and private-sector activities.” Participation by all stakeholders – academia, industry, the research community, civil society, and others – will be key to shaping a report that is likely to provide an initial roadmap for regulatory and policy initiatives in the next administration.
Building upon its 2012 Consumer Protection Report, its 2014 report on Data Brokers, and a public workshop held on September 15, 2014, the FTC issued a new report on January 6, 2016, with recommendations to businesses on the growing use of big data: Big Data: A Tool for Inclusion or Exclusion? Understanding the Issues (“2016 Big Data Report”). Rather than focusing on prior themes of notice, choice, and security, the 2016 Big Data Report addresses only the commercial use of big data consisting of consumer information, and focuses on impacts of such big data uses on low-income and underserved populations.
*This post originally appeared in Law360 on January 7, 2016.
While 2015 was a big year in data, 2016 may prove to be even bigger. Many hot-button and game-changing topics are being debated in legislative bodies and on campaign trails, regulators are focused, and privacy-related litigation continues to rise. Below, we count down the top ten cybersecurity, data protection and privacy issues to watch in 2016.
*Based on Remarks at the Big Data East Big Data Innovation Conference, September 9, 2015
I believe in the enormous potential of big data. Erik Brynjolfsson and Andrew McAfee, authors of The Second Machine Age and leading scholars of the digital economy, have compared the power and granularity of computational science to the transformation in the understanding of nature that occurred when Antonie van Leeuwenhoek first peered at samples through his newly invented microscope. We are seeing new advances in medicine, in social science, and new ways of teasing out causation from correlation.
Data Protection Law & Policy
“Data is the new oil” – This statement by Neelie Kroes in 2011 has been on everyone’s mind ever since. With the constant development of new technologies, the importance of data has grown dramatically over the past few years, and in recognition of this it seems that we have now entered a new era: the era of Big Data. William Long and Geraldine Scali, Partner and Associate respectively at Sidley Austin LLP, explore the potential data protection issues that may arise.
The new year will ring in significant privacy, data protection and cybersecurity changes in the U.S., Europe, Asia and elsewhere around the world. Below are some key developments and possible concrete action items for General Counsels, Chief Privacy Officers and Chief Information Officers:
In November 2012, the UK Information Commissioner’s Office (ICO) published a Code of Practice on managing data protection risks related to anonymization. This Code provides a framework for organisations considering using anonymization and explains what the ICO expects from organisations using such processes.
One of the benefits of anonymization is that the onerous data protection obligations under EU data protection laws, including the UK’s Data Protection Act 1998, will not apply to data rendered anonymous such that individuals are no longer identifiable.
As the Code notes, anonymization can allow organisations to make information derived from personal data available in a form that is rich and usable whilst protecting individuals.
The main good practices and recommendations provided in the Code are summarised below:
- Personal data, anonymization and identification: the Code highlights that the concept of “identifying” an individual, and therefore of “anonymized” data, is not straightforward because individuals can be identified in numerous ways and re-identification by a third party can also take place. It is therefore crucial for businesses to assess the risk of identification when they decide to disclose anonymized data.
- Ensuring effectiveness of anonymization: the ICO recommends the use of the “motivated intruder” test to assess the risk of re-identification. This test involves determining whether a “motivated intruder” – a person who starts with no prior knowledge but wishes to identify the individual from whose personal data the anonymized data has been derived – would be successful. The test can be carried out by (i) conducting a web search to verify whether a date of birth and postcode can lead to the identification of a specific individual; or (ii) using social networks to establish whether anonymized data can lead to an individual’s profile.
- Consent: importantly, the Code provides that consent is generally not needed to legitimize an anonymization process, as obtaining such consent could be logistically onerous or even impossible.
- Governance: organisations using anonymization should have in place an effective and comprehensive governance structure that should include (i) a Senior Information Risk Owner (SIRO) with the technical and legal understanding to manage the process, (ii) staff trained to have a clear understanding of anonymization techniques, the risks involved and the means to mitigate them, (iii) procedures for identifying cases where anonymization may be problematic or difficult to achieve in practice, (iv) knowledge management regarding any new guidance or case law that clarifies the legal framework surrounding anonymization, (v) a joint approach with other organisations in their sector or those doing similar work, (vi) use of a privacy impact assessment, (vii) clear information on the organisation’s approach to anonymization, including how personal data is anonymized and the purpose of the anonymization, the techniques used and whether or not the individual has a choice over the anonymization of his or her personal data, (viii) review of the consequences of the anonymization programme, and (ix) a disaster recovery procedure should re-identification take place and individual privacy be compromised.
- Trusted Third Party: a Trusted Third Party is an organisation which can be used to convert personal data into anonymized data. The Code highlights the value of using a Trusted Third Party arrangement, especially where a number of organisations each want to anonymize personal data they hold for use as part of a collaborative project. Use of Trusted Third Party arrangements can facilitate large-scale research using data collected by a number of organisations without the organisations involved ever having to access each other’s personal data. It also allows researchers to use anonymized data when the use of personal data is not necessary or appropriate, and can be used to link datasets from separate organisations to create anonymized records for researchers.
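The re-identification risk that the “motivated intruder” test probes – combining quasi-identifiers such as postcode and date of birth to single out a person – can be illustrated with a minimal k-anonymity check. This is a hypothetical sketch with invented data; k-anonymity is one common formal measure of re-identification risk, not a test the Code itself mandates, and the function and field names are ours.

```python
# Hypothetical sketch: measuring re-identification risk with k-anonymity.
# If any combination of quasi-identifier values matches only one record
# (k = 1), that record is unique and a motivated intruder could single
# the individual out. All records below are invented.
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the smallest group size when records are grouped by the
    given quasi-identifier fields. k == 1 flags at least one unique,
    re-identifiable record."""
    groups = Counter(
        tuple(r[field] for field in quasi_identifiers) for r in records
    )
    return min(groups.values())

records = [
    {"postcode": "SW1A 1AA", "dob": "1980-01-01", "diagnosis": "A"},
    {"postcode": "SW1A 1AA", "dob": "1980-01-01", "diagnosis": "B"},
    {"postcode": "EC2N 2DB", "dob": "1975-06-30", "diagnosis": "C"},
]

# The third record's postcode/date-of-birth pair is unique, so k = 1:
# stripping names alone did not make this dataset safely anonymous.
print(k_anonymity(records, ["postcode", "dob"]))  # -> 1
```

In practice an organisation would run such a check (alongside the broader motivated-intruder assessment) before releasing a dataset, and generalise or suppress quasi-identifier values until k rises above an acceptable threshold.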
The Code also clarifies when the research exemption under the UK Data Protection Act can be relied upon to process personal data for research purposes and concludes with explanations of key anonymization techniques and various case studies such as one on the use of anonymization in clinical studies.
The Code, which also sets out other good practices and recommendations, is welcome, having been published at a time when anonymization techniques and the status of anonymized data are key issues for many industries, including digital media, financial services and life sciences. Anonymization and the ability to use data will also remain key issues in the current discussions on the proposed EU Data Protection Regulation, and clarity on these issues at an EU level would also be welcome.
Sidley Austin provides this information as a service to clients and other friends for educational purposes only. It should not be construed or relied on as legal advice or to create a lawyer-client relationship.
Attorney Advertising – For purposes of compliance with New York State Bar rules, our headquarters are Sidley Austin LLP, 787 Seventh Avenue, New York, NY 10019, 212.839.5300; One South Dearborn, Chicago, IL 60603, 312.853.7000; and 1501 K Street, N.W., Washington, D.C. 20005, 202.736.8000.