FTC Issues Report (and Warning Shot) on Big Data Use

Building upon its 2012 Consumer Protection Report, its 2014 report on Data Brokers, and a public workshop held on September 15, 2014, the FTC issued a new report on January 6, 2016, with recommendations to businesses on the growing use of big data:  Big Data: A Tool for Inclusion or Exclusion? Understanding the Issues (“2016 Big Data Report”).  Rather than focusing on prior themes of notice, choice, and security, the 2016 Big Data Report addresses only the commercial use of big data consisting of consumer information, and focuses on impacts of such big data uses on low-income and underserved populations.

Continuing the familiar theme of ambivalence about the potential in big data, the FTC acknowledges both benefits and risks from big data analytics:

Big data analytics can provide numerous opportunities for improvements in society. In addition to more effectively matching products and services to consumers, big data can create opportunities for low-income and underserved communities. For example, … big data is helping target educational, credit, healthcare, and employment opportunities to low-income and underserved populations. At the same time, …potential inaccuracies and biases might lead to detrimental effects for low-income and underserved populations. For example, … companies could use big data to exclude low-income and underserved communities from credit and employment opportunities.

2016 Big Data Report at i.

No single law comprehensively addresses big data analytics in the market.  Lest businesses mistakenly believe that big data analytics operate without legal restraint, the Report is intended to alert businesses to the potential implications of existing law applicable to big data.  The agency is clearly hoping to send the message that companies are responsible for anticipating and monitoring the impacts of their creative data analytics on consumers, and that they may be held accountable even if the initial data inputs seemed benign to the data scientists and the adverse impacts were neither intended nor readily foreseeable. In other words, the FTC’s big data law enforcement will follow the “disparate impact” track of civil rights law.

Accordingly, companies should think ahead about the range of potential consequences of their big data analytics, identify the legitimate business needs for those analytics, and tailor the analytics to survive future regulatory scrutiny. In addition, companies may wish to consider establishing internal data use guidelines and review boards. Companies may also want to start thinking about designing data use impact assessments and means to classify and digitally “tag” data to reflect any relevant regulatory or contractual provenance. Taking account of the FTC’s concerns, and planning ahead for the new EU General Data Protection Regulation, companies may also wish to consider possible policy disclosures about analytic and other compatible secondary uses of data collected from or about individuals.
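The provenance-tagging idea above can be sketched very simply. The tag names, record fields, and gating logic below are hypothetical illustrations of one possible design, not anything prescribed by the FTC or the Report.

```python
from dataclasses import dataclass, field

# Hypothetical provenance tags; a real taxonomy would come from a company's
# own regulatory and contractual review.
ALLOWED_TAGS = {"fcra_restricted", "contract_no_resale", "consent_marketing"}

@dataclass
class TaggedRecord:
    """A consumer data record carrying its regulatory/contractual provenance."""
    payload: dict
    tags: set = field(default_factory=set)

    def add_tag(self, tag: str) -> None:
        if tag not in ALLOWED_TAGS:
            raise ValueError(f"unknown provenance tag: {tag}")
        self.tags.add(tag)

    def usable_for(self, forbidden_tags: set) -> bool:
        # A record is usable for a purpose only if none of its tags
        # forbid that purpose.
        return not (self.tags & forbidden_tags)

record = TaggedRecord(payload={"zip": "60601", "purchases": 12})
record.add_tag("fcra_restricted")
# In this sketch, eligibility uses are off-limits for FCRA-restricted data.
print(record.usable_for({"fcra_restricted"}))  # False
```

Carrying tags with the data, rather than in a separate registry, makes the restriction travel with each record into downstream analytics pipelines.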

The FTC seeks to guide development of big data initiatives by incorporating fairness and privacy principles in the design of big data applications, to minimize the risk of violating existing laws.  In particular, one prominent section of the report reminds those in the big data market that the Fair Credit Reporting Act (“FCRA”) could apply to the compilation and sale of data on consumers, specifically when used for credit, employment, insurance, housing, or other eligibility determinations.  Although the FCRA applies primarily to consumer reporting agencies, consumer reports, and entities that contribute to or otherwise use consumer reports, it can often be implicated in big data initiatives.  Key to such analysis will be a fact-intensive review of whether a particular big data initiative or product is intended to be, or could reasonably be anticipated to be, used for an FCRA-specified purpose.

The FTC report asserts that FCRA requirements could be triggered any time a company makes a credit decision based on a consumer report, regardless of how the company obtains the report. For example, if a company uses information about a consumer, such as his or her zip code combined with shopping behavior and other characteristics, to make creditworthiness decisions about consumers who share some of those characteristics, the Report suggests the FTC would likely view that information as a consumer report, triggering FCRA requirements.  But the FCRA would not be triggered by a similar report used only to inform general policies and not to make decisions about individual consumers. As the FTC emphasized, “[o]nly a fact-specific analysis will ultimately determine whether a practice is subject to or violates the FCRA, and as such, companies should be mindful of the law when using big data analytics to make FCRA-covered eligibility determinations.”  Id. at ii.

The report emphasizes that problems with data quality, accuracy, and representativeness, as well as imperceptible biases, can lead to mistaken inferences that could hurt consumers.  To counteract such unintended harms, the Report focuses on considerations to mitigate negative impacts of big data uses on low-income and underserved populations.

The FTC also reminds companies to consider civil rights laws, such as the Equal Credit Opportunity Act, Title VII of the Civil Rights Act, the Americans with Disabilities Act, and the Genetic Information Nondiscrimination Act, when applying big data principles, although the report is rather vague about their specific application.  Under those laws, companies should ensure their use of big data does not create “disparate treatment.” For example, companies should not disadvantage certain protected groups based on big data analytics that suggest such groups are less likely to repay loans. Companies should also be careful to avoid creating a “disparate impact” through a facially neutral practice that results in unfair treatment of a protected group, unless the practice furthers a legitimate business need that cannot reasonably be achieved another way. In one example highlighted by the FTC, data analytics that screen applicants based on zip code could have a disparate impact on a protected group if a large percentage of that group lives in a screened zip code. Accordingly, the FTC is warning companies to be careful when making inferences based on zip code, even if such practices would be allowed under the FCRA.  Id. at 17 n.86.  Interestingly, the FTC also specifically warned that even advertising practices could violate the equal opportunity laws by creating disparate treatment or a disparate impact based on a protected category.
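The zip-code example can be made concrete with the “four-fifths rule,” a heuristic from U.S. employment-discrimination analysis under which a protected group’s selection rate below 80% of the highest group’s rate is treated as evidence of possible disparate impact. The sketch below applies that heuristic to invented screening counts; the group labels and numbers are illustrative only.

```python
def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the 'four-fifths' heuristic). Returns the
    flagged groups mapped to their rate ratio."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items() if r / top < threshold}

# Invented example: applicants screened by zip code, grouped after the fact.
outcomes = {
    "group_a": (90, 100),  # 90% pass the screen
    "group_b": (60, 100),  # 60% pass: ratio 0.60/0.90 is below 0.8
}
print(four_fifths_check(outcomes))
```

A flagged ratio is a signal to investigate, not a legal conclusion; the text above notes that a practice may still be defensible if it furthers a legitimate business need that cannot reasonably be achieved another way.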

As in prior FTC statements, the 2016 Big Data Report primarily rests FTC authority over big data issues on Section 5 of the Federal Trade Commission Act. Here, the FTC is focused particularly on unfair (e.g., discriminatory) and deceptive uses of big data analytics.  Deceptive statements are those likely to mislead consumers, and include violations of promises not to share data or to keep consumers’ personal information safe.  Deception can also arise from a failure to disclose important information. For example, the Report cited the CompuCredit case, in which the company failed to tell consumers that their credit lines would be reduced if they used their cards for cash advances or for certain types of transactions, such as those at bars, massage parlors, and marriage counselors.  Id. at 22.

If the FTC were to allege that any given big data use was discriminatory and thus “unfair” under Section 5 of the FTC Act, the statute would require it to prove that the injury to consumers was substantial, not outweighed by countervailing benefits to consumers or competition, and not reasonably avoidable by consumers.  To the extent the agency claims that alleged discriminatory advertising or offers, or differential pricing, were unfair, the harm in question would have to satisfy that statutory cost-benefit test.  As examples of possible unfair big data practices, the Report cites the failure to secure data in a manner commensurate with its sensitivity, or the provision of data or data analytics to third parties who are likely to use them for fraudulent purposes.

The 2016 Big Data Report includes several recommendations for protecting consumer information and evaluating a big data initiative.  It also includes some explicit prohibitions.  In particular, the FTC stresses that, for purposes of potential Section 5 enforcement, “at a minimum, companies must not sell their big data analytics products to customers if they know or have reason to know that those customers will use the products for fraudulent or discriminatory purposes.  The inquiry will be fact-specific, and in every case, the test will be whether the company is offering or using big data analytics in a deceptive or unfair way.”   Id. at iv.

The agency offered a number of noteworthy examples of potentially problematic analytic projects. For example, the FTC explained that employers using big data analysis to synthesize employee information and make employment decisions could risk incorporating old discrimination into new employment decisions. Id. at 28. In another, even more aggressive, example, the FTC explained that companies whose hiring algorithms favor alumni of “top tier” colleges may be incorporating old biases from the college admissions process. Id. at 29.  Alleging prejudicial bias based on remote and fairly standard factors like academic pedigree would seem quite strained, however. It may be that the FTC intends to send a signal that no consequential use of data analytics will be free from regulatory scrutiny.

The FTC recommends that companies consider four critical topics in the development of any big data analytics process:  representation in data sets, bias, accuracy, and fairness.  Specifically, the FTC stated that:

[C]ompanies already using or considering big data analytics should:

  • Consider whether your data sets are missing information from particular populations and, if they are, take appropriate steps to address this problem.
  • Review your data sets and algorithms to ensure that hidden biases are not having an unintended impact on certain populations.
  • Remember that just because big data found a correlation, it does not necessarily mean that the correlation is meaningful. As such, you should balance the risks of using those results, especially where your policies could negatively affect certain populations. It may be worthwhile to have human oversight of data and algorithms when big data tools are used to make important decisions, such as those implicating health, credit, and employment.
  • Consider whether fairness and ethical considerations advise against using big data in certain circumstances. Consider further whether you can use big data in ways that advance opportunities for previously underrepresented populations.

2016 Big Data Report at 32.
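The first of the quoted recommendations, checking whether data sets are missing information from particular populations, can be sketched as a simple representativeness audit that compares each group’s share of a data set against an external reference such as census figures. The group names, counts, reference shares, and tolerance below are invented for illustration.

```python
def representation_gaps(sample_counts, reference_shares, tolerance=0.05):
    """Compare each group's share of a data set against a reference share
    (e.g., from census data); return groups under-represented by more
    than `tolerance` in absolute share terms."""
    total = sum(sample_counts.values())
    gaps = {}
    for group, ref in reference_shares.items():
        share = sample_counts.get(group, 0) / total
        if ref - share > tolerance:
            gaps[group] = {"sample_share": round(share, 3),
                           "reference_share": ref}
    return gaps

# Invented counts and reference shares for illustration.
sample = {"urban": 700, "suburban": 250, "rural": 50}
reference = {"urban": 0.55, "suburban": 0.25, "rural": 0.20}
print(representation_gaps(sample, reference))
# Here the rural group holds 5% of the sample against a 20% reference share.
```

Running such a check before training or scoring is one concrete way to implement the FTC’s suggested “human oversight of data and algorithms” for consequential decisions.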

The FTC concludes its Report by noting: “Big data will continue to grow in importance, and it is undoubtedly improving the lives of underserved communities in areas such as education, health, local and state services, and employment. Our collective challenge is to make sure that big data analytics continue to provide benefits and opportunities to consumers while adhering to core consumer protection values and principles. For its part, the Commission will continue to monitor areas where big data practices could violate existing laws, including the FTC Act, the FCRA, and ECOA, and will bring enforcement actions where appropriate.”

In light of this clear message, it would be prudent for companies to include legal compliance considerations and regulatory risk analysis in the development of their big data programs.