U.S. FTC’s New Rule on Fake and AI-Generated Reviews and Social Media Bots

On August 14, 2024, the United States Federal Trade Commission (FTC) announced a final rule that prohibits fake and artificial intelligence-generated consumer reviews, consumer testimonials, and celebrity testimonials, along with other types of unfair or deceptive practices involving reviews and testimonials. The new rule, which takes effect on October 21, 2024, is the latest development in the FTC’s expanded rulemaking efforts and its growing focus on AI.

The final rule addresses and prohibits:

Fake or False Consumer Reviews, Consumer Testimonials, or Celebrity Testimonials

  • Businesses are banned from creating or promoting fake consumer reviews, consumer testimonials, or celebrity testimonials that misrepresent the identity, experience, or existence of the reviewer or testimonialist. This includes, for example, fake celebrity endorsements and AI-generated reviews and testimonials. Purchasing or disseminating such misleading content is also prohibited. Businesses also cannot use reviews from their own employees or their relatives on third-party platforms if those reviews are deceptive. With respect to fake reviews attributed to real people, this provision echoes the conduct at the heart of the privacy torts of misappropriation and the right of publicity. But while those torts are not recognized in every state, this new regulation will have national reach.

Buying Positive or Negative Consumer Reviews

  • Businesses are prohibited from providing compensation or other incentives in exchange for reviews that express a specific sentiment, whether positive or negative. For example, contacting customers and saying “Tell us how much you loved [product] for 10% off your next purchase!” would likely violate this section because consumers receiving the message could reasonably interpret it to mean that their reviews must be positive and enthusiastic in order to obtain the reward.
  • This section does not apply to testimonials, and applies only to reviews that appear on a website or portion of a website dedicated to receiving and displaying such reviews. For example, a blogger’s “review” is not considered a consumer review; if such a review were incentivized, it would be considered a testimonial.

Misuse of Fake Indicators of Social Media Influence

  • The rule prohibits selling, purchasing, procuring, or distributing “fake indicators of social media influence” that a person knew or should have known were fake, when they are used to deceive others about a business’s or individual’s importance for commercial gain. “Fake indicators of social media influence” are defined as those “generated by bots, purported individual accounts not associated with a real individual, accounts created with a real individual’s personal information without their consent, hijacked accounts, or accounts that otherwise do not reflect a real individual’s or entity’s activities, opinions, findings, or experiences.”

Insider Consumer Reviews and Consumer Testimonials

  • Officers or managers of a business, or their immediate relatives, must clearly and conspicuously disclose their relationship to the business when posting reviews or testimonials. This section is intended to address certain inherently biased reviews and testimonials, not merely those that are fake or false. If a business fails to ensure these disclosures, or encourages others to hide their connections, that failure is considered an unfair or deceptive practice. These restrictions do not cover general solicitations for customers to post testimonials or the mere hosting of consumer reviews.

Company-Controlled Review Websites or Entities

  • Businesses cannot materially misrepresent that a website, organization, or entity they control provides independent reviews or opinions, other than consumer reviews, about a category of products or services that includes their own products or services. For example, a business cannot create purportedly independent seals or badges that it then awards to its own products.

Review Suppression

  • Businesses cannot mislead consumers into believing that all submitted reviews are displayed if negative reviews are being hidden. Nor can businesses suppress consumer reviews through threats, intimidation, or false accusations. However, certain types of content, such as defamatory or false reviews, may be legitimately withheld.

This latest rulemaking effort comes as no surprise. The FTC has significantly increased its rulemaking efforts since the Supreme Court’s decision in AMG Capital Management, LLC v. FTC, 141 S. Ct. 1341 (2021), which held that the FTC cannot obtain equitable monetary relief, including consumer redress, under Section 13(b) of the FTC Act. Prior to AMG, Section 13(b) was the FTC’s favored statutory vehicle for seeking monetary relief. Indeed, the commentary to the rule explains that the AMG ruling “has made it significantly more difficult” for the FTC to obtain monetary relief. Thus, as the commentary explains, “[a]lthough such unfair or deceptive acts or practices are already unlawful under Section 5 of the FTC Act, the rule will increase deterrence of such conduct by allowing courts to impose civil penalties against the violations” and “will allow the Commission to seek court orders requiring violators to compensate consumers for the harms caused by their unlawful conduct.”

The FTC is also increasingly focused on the role of artificial intelligence in consumer protection. The FTC issued an Advance Notice of Proposed Rulemaking in the fall of 2022 that included, in part, a focus on concerns about AI and algorithmic processing. And as explained in the commentary to this fake and AI-generated reviews final rule, “AI tools make it easier for bad actors to pollute the review ecosystem by generating, quickly and cheaply, large numbers of realistic but fake reviews that can then be distributed widely across multiple platforms.” Thus, “AI-generated reviews are covered by the final rule, which the Commission hopes will deter the use of AI for that illicit purpose.” The agency is concerned with how AI can be exploited to mislead consumers and, in line with President Biden’s October 2023 AI Executive Order, is actively working to enforce regulations and promulgate rules that ensure transparency and fairness, holding businesses accountable for using AI in ways that harm or deceive the public. This focus is part of the FTC’s broader effort to adapt consumer protection laws to the evolving digital landscape.

This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.