Dechert Cyber Bits
Issue 63 - October 10, 2024
FTC Staff Report on Social Media Platforms’ Privacy and Security Practices
On September 19, 2024, the Federal Trade Commission (“FTC” or the “Commission”) announced the release of its staff report, “A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services” (the “Report”).
The 129-page Report examined certain social media and video streaming companies (the “Companies”) and their platforms (the “Platforms”) and found, among other things, that:
- The Companies collected and retained “troves of data” from both users and nonusers, yet failed to adequately control and handle it;
- The Companies engaged in extensive targeted advertising as one of their main sources of revenue;
- Both users and non-users of the Platforms had their data collected and fed into algorithms and artificial intelligence (“AI”) systems; and
- The Companies treated teens on the Platforms like adults.
Based on its assertions, the Commission made several recommendations, which included calls that: (i) Congress pass comprehensive federal legislation to limit surveillance, address baseline protections, and grant consumers data rights; (ii) companies implement appropriate data collection and retention policies; (iii) companies not use tracking technologies to collect sensitive information; and (iv) companies enforce greater protections for teenage users. While all five FTC Commissioners voted to issue the Report, four Commissioners issued separate statements. Notably, Commissioners Holyoak and Ferguson both issued partial dissenting statements expressing concerns about suppressing online free speech and misclassifying advertising AI systems as harmful.
Takeaway: While the FTC used its 6(b) authority to investigate these technology companies, the FTC could in the future bring enforcement actions under Section 5 of the FTC Act against any company that does not implement the recommendations from the Report. Prudent companies should consider conducting a gap analysis of their practices as compared to the Report’s findings to determine their risk level and implement fixes as needed.
California Legislature Passes Several New AI Laws, But Governor Vetoes the Most Controversial Measure
California’s legislature was active in the AI space this year, advancing four measures to Governor Gavin Newsom’s desk for signature this fall. Governor Newsom signed three of these measures into law but vetoed the most sweeping—and controversial—of them, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (the “Bill” or “SB 1047”).
On September 19, 2024, Governor Newsom signed three narrow bills aimed at addressing ethical concerns surrounding AI and protecting individuals from the misuse of digital content. The new laws require AI-generated content to be watermarked for transparency (SB 942), criminalize the creation and distribution of AI-generated sexually explicit images intended to cause serious emotional distress (SB 926), and require social media platforms to address reports of such content, leading to its blocking and deletion where necessary (SB 981). SB 942 will go into effect on January 1, 2026, and SB 926 and SB 981 will go into effect on January 1, 2025.
While the Governor signed these three bills, he did not sign SB 1047, a far-reaching AI safety measure. The Bill, first introduced by Senator Scott Wiener (D-San Francisco) and passed by the California Senate 37-1 with strong bipartisan support, sought to impose various requirements on developers of AI models, including “implementing the capability to promptly enact a full shutdown” of an AI model and “implement[ing] a written and separate safety and security protocol” regarding the model. The Bill also would have required companies to take “reasonable care” to prevent their AI models from causing catastrophic harm. It likewise sought to create a new state agency, the Board of Frontier Models within the Government Operations Agency, which would have been tasked with developing a framework to “advance the development and deployment of artificial intelligence that is safe, ethical, equitable, and sustainable[.]”
SB 1047 divided tech companies and legislators alike. For example, while Elon Musk and AI developer Anthropic voiced support for the Bill—with some caveats—numerous other technology giants, such as OpenAI and Meta, penned letters urging Governor Newsom to veto the measure. Though Governor Newsom indicated that SB 1047 was “well-intentioned,” he did not think that the Bill was “the best approach to protecting the public[.]” Governor Newsom expressed support for measures regulating the AI space but cautioned that such measures “must be based on empirical evidence and science.” He also outlined various measures in furtherance of developing such evidence and science, but concluded that, at this juncture, he could not sign SB 1047 into law.
Takeaway: The AI bills that were passed by the California legislature and signed by the Governor were narrow in scope and use-case specific. SB 1047, on the other hand, was an attempt to regulate how the AI industry should build its technology, starting with its more powerful models. Ultimately, the Governor found the Bill’s approach to be incompatible with the kind of regulation he believes the AI market needs at this moment—namely, laws capable of “protecting against actual threats without unnecessarily thwarting the promise of this technology to advance the public good.” California’s experience suggests that although state legislatures appear eager to draft their own legislation governing AI, such regulations may be further away than most expect. But with 32 of the 50 leading AI companies based in California, expect California’s government to continue to be a leading voice in determining the future shape of AI regulation.
“Operation AI Comply”: The FTC Targets 5 Companies for “Deceptive AI” Practices In Massive Sweep
On September 25, 2024, the Federal Trade Commission (“FTC”) announced five enforcement actions against companies alleged to have engaged in deceptive and unfair consumer practices through the use or sale of artificial intelligence (“AI”)—an enforcement sweep that the FTC has entitled “Operation AI Comply.” Those included in this sweep were: (i) DoNotPay; (ii) Ascend Ecom; (iii) Ecommerce Empire Builders; (iv) Rytr; and (v) FBA Machine. The allegations in each action are briefly summarized below.
- DoNotPay. DoNotPay uses AI technology to offer legal services, claiming to provide “the world’s first robot lawyer.” In its complaint, the FTC alleged that DoNotPay overstated the capabilities of its services because it neither tested the quality of the AI’s output nor retained attorneys in connection with its business. The parties have agreed to a proposed order in which DoNotPay must, among other things: (i) pay $193,000; and (ii) cease making misrepresentations regarding the AI’s abilities. DoNotPay does not admit to any wrongdoing in connection with this matter.
- Ascend Ecom. The FTC alleged that Ascend Ecom represented to consumers that its AI technology could earn them passive income through online storefronts. In its complaint, the FTC further alleged, among other things, that Ascend defrauded consumers out of $25 million through unsubstantiated claims regarding its AI-powered tools and pressured consumers to withhold negative reviews. The U.S. District Court for the Central District of California has issued a temporary restraining order prohibiting Ascend Ecom from continuing the alleged scheme. Ascend Ecom disputed the allegations.
- Ecommerce Empire Builders (“EEB”). EEB offers customers “AI-powered Ecommerce Empire[s],” which they can build by enrolling in training programs or purchasing ready-made storefronts. In its complaint, the FTC alleged, among other things, that EEB lacked evidence to substantiate its money-making claims regarding its AI technology and that customers made little to no money on their storefronts. The company disputed the allegations. The U.S. District Court for the Eastern District of Pennsylvania has issued a temporary restraining order prohibiting EEB from continuing the alleged scheme.
- Rytr. Rytr offers customers an AI “writing assistant” that can be used for, among other things, creating testimonials and reviews based upon a generic input. In its complaint, the FTC alleged that the reviews were false because they were not related to the users’ input and that customers would use the product to mass-produce false reviews. The parties agreed to a proposed order which, among other things, prohibits Rytr from marketing or selling any generated testimonial service. Commissioners Holyoak and Ferguson issued dissenting statements, each taking the position that the action is inconsistent with the FTC’s Section 5 authority and bad for innovation. Rytr did not admit to any wrongdoing in connection with the matter.
- FBA Machine. FBA Machine represented to customers that its storefronts utilizing AI technology would generate guaranteed income akin to a “7-figure business” and marketed the scheme as risk-free. In its complaint, the FTC alleged that FBA Machine defrauded its customers out of more than $15.9 million through unsubstantiated claims regarding its AI-powered tools. FBA Machine disputed the allegations. The U.S. District Court for the District of New Jersey issued a temporary restraining order prohibiting FBA Machine from continuing its alleged scheme.
Takeaway: The FTC will continue to scrutinize the marketing claims that companies make about their respective AI products, solutions, and services. Simply put, companies should have a reasonable basis for the claims they make about their AI technologies; feel-good or hyperbolic marketing speak (e.g., “our solution can replace a live human!”) can turn into a costly enforcement action. Operation AI Comply builds on the FTC’s earlier enforcement actions involving AI, beginning with Rite Aid in late 2023, which we covered in a prior issue and which sets forth what the FTC considers the baseline for a “comprehensive algorithmic fairness program.” Companies may want to revisit their AI marketing claims to confirm they meet the FTC’s expectations and that their statements regarding their uses of AI are accurate.
Texas Attorney General and Pieces Technology Reach First-Of-Its-Kind Generative AI Settlement
On September 18, 2024, Texas Attorney General Ken Paxton (“Texas AG”) announced a first-of-its-kind settlement with Pieces Technology (“Pieces”), a healthcare artificial intelligence (“AI”) company that creates generative AI technology to assist providers with charting and drafting clinical notes in inpatient medical facilities and hospitals. According to the Texas AG, Pieces violated the Texas Deceptive Trade Practices-Consumer Protection Act (“DTPA”) by misrepresenting the precision of its AI through its statement that the AI had a hallucination rate of less than 1 per 100,000, thereby deceiving hospitals “about the accuracy and safety of the company’s products.” After investigating, the Texas AG alleged that Pieces’ metrics regarding the hallucination rate were likely inaccurate but did not go into detail on this point in its settlement. Pieces “vigorously denies” wrongdoing in connection with the matter and, in a public comment, stated that it “accurately set forth and represented its hallucination rate.”
Under the Assurance of Voluntary Compliance (“Assurance”), Pieces is required to, among other things: (i) clearly and conspicuously disclose the meaning and definition of any metrics used in marketing and the method or procedure used to calculate those metrics; (ii) cease making misrepresentations regarding its products; and (iii) clearly and conspicuously disclose any harmful uses or misuses of its products to current and future customers. No monetary penalty was imposed.
Takeaway: The Texas AG’s action and settlement, in conjunction with the FTC’s recent Operation AI Comply, discussed above, make clear that companies should carefully vet their marketing claims concerning AI. This action follows the Texas AG’s recent comments regarding how to protect Texas residents against the misuse of AI. Those operating in Texas should tread carefully, as the Texas AG’s office has proven to be one of the most active and aggressive enforcers of state consumer protection laws in the technology space in recent months, as we have previously covered.
Irish Data Regulator Fines Meta €91 Million For GDPR Security Violations
The Irish Data Protection Commission (“DPC”) announced a final decision after an inquiry into Meta Platforms Ireland Limited’s (“MPIL”) GDPR compliance that was initiated in 2019. The inquiry began after MPIL reported that it had inadvertently stored user passwords without cryptographic protection or encryption. The decision, which emphasized the principles of integrity and confidentiality, resulted in a reprimand and a €91 million fine for MPIL’s failure to implement appropriate security measures and properly document and report personal data breaches.
EU AI Pact Gains Over 100 Signatories
The European Commission announced that over 100 companies, including multinationals and SMEs from various sectors, have signed the EU AI Pact and its voluntary pledges. The Pact encourages early adoption of the AI Act’s principles, with a focus on AI governance, mapping of high-risk AI systems, and enhancing AI literacy. Additional commitments include human oversight, risk mitigation, and transparent labeling of AI-generated content. The European Commission also launched the AI Factories initiative to drive AI innovation in key sectors such as healthcare, energy, defence, and aerospace.
We are honored to have been recognized in The Legal 500 2023 and Chambers USA 2023, nominated by The American Lawyer for the Best Client-Law Firm Team award with our client Flo Health, Inc., and named Law360 Cybersecurity & Privacy Practice Group of the Year! Thank you to our clients for entrusting us with the types of matters that led to these recognitions.
Recent News and Publications
- Brantley et al. v. Prisma Labs, Inc. (Global Legal Chronicle published August 31, 2024)
- Law360's Legal Lions of The Week (Law360 published August 9, 2024)
- Lensa AI App Creator Shakes Ill. Biometric Privacy Suit (Law360 published August 6, 2024)
- Prisma Labs Skirts BIPA Suit Over Training of Its AI Photo App (Bloomberg Law published August 6, 2024)
- A New UK Labour Government: A Fresh Approach to AI Regulation (Dechert OnPoint published July 9, 2024)
- The EU AI Act: An Overview (Dechert OnPoint published May 13, 2024)
- Tribunal Overturns UK ICO’s Enforcement Action Against Clearview AI (Dechert OnPoint published November 8, 2023)
- 5 Takeaways from ICO's Biometric Recognition Guidance (Published in Law360, October 18, 2023)
- Bridge Over Troubled Data Flows: UK-US Data Bridge Approved (Dechert OnPoint published September 22, 2023)
- US-EU Plan On AI Illustrates Differing Opinions On Regulation (Published in Law360, August 2, 2023)
- SEC Final Rule Exempts ABS Issuers from New Cybersecurity Disclosure and Reporting Requirements (Dechert OnPoint published August 16, 2023)
- SEC Finalizes Cybersecurity Disclosure Rules for Public Companies (Dechert OnPoint published August 7, 2023)
- Ready. Set. Flow: Green Light from the Commission for EU-U.S. Data Privacy Framework (Dechert OnPoint published July 11, 2023)
- EU General Court Examines Data Anonymisation and Pseudonymisation (Dechert OnPoint published May 25, 2023)
- SEC Proposes New Cybersecurity Risk Management Rule for Various Market Entities (Dechert OnPoint published May 10, 2023)
- Artificial Intelligence: Legal and Regulatory Issues for Financial Institutions (Dechert OnPoint published April 26, 2023)
- Visit Dechert's California Consumer Privacy Act Resource Center
- BioDech | A Global Life Sciences Broadcast Series - What Every Life Sciences Company Needs to Know About Cybersecurity
- The group was named 2022 Law360 Practice Group of the Year.
- Winner of the International Association of Privacy Professionals (“IAPP”) Legal Innovation Award for the Americas for 2022, for its work with client Flo Health, Inc., the world’s leading women’s health app, on its “Anonymous Mode” feature in the wake of the U.S. Supreme Court’s Dobbs decision.
- Recognized as a 2022 “Standout” by London’s Financial Times in a legal innovation award for the Americas in the category of “Innovation in Enabling Business Resilience.”
- Exploiting Public Health Data for R&D: UK Progresses Secure Data Environments (Dechert OnPoint published July 20, 2023)
- EU Data and Digital Drive: 10 Things to Know About the Digital Services Act (Dechert OnPoint published February 17, 2023) By: Paul Kavanagh, Dr. Olaf Fasshauer, and Madeleine White.
- Your Company’s Data Is for Sale on the Dark Web. Should you Buy it Back? (Published in the Harvard Business Review January 4, 2023) By: Brenda Sharton.
- Brenda Sharton and Steven Rabitz quoted in Plan Sponsors Have Myriad Responsibilities to Protect Against Cyberthreats (Published in PLANSPONSOR December 22, 2022).
- English High Court Maintains Claimant’s Anonymity in Cyberattack Case (Dechert OnPoint published December 19, 2022) By: Paul Kavanagh, Brenda Sharton, Dylan Balbirnie, and Anita Hodea.
- The entry into force of the Digital Markets Act kicks off new era of digital regulation in Europe (Dechert OnPoint published October 25, 2022), by members of the Dechert antitrust practice.
- Brenda Sharton was named a 2022 Law360 MVP for Cybersecurity & Privacy.
- Brenda Sharton was recognized as one of Massachusetts Lawyers Weekly's Go To Cybersecurity/Data Privacy Lawyers for 2022 (Published in Mass. Lawyers Weekly October 31st issue)
- Practice leaders Brenda Sharton and Karen Neuman are discussed in Litigation Leaders: Dechert’s Cathy Botticelli and Jonathan Streeter on Counseling Clients With an Eye Toward Avoiding Litigation (Published in Law.com August 15, 2022).
- Brenda Sharton quoted in Why hackers are able to steal billions of dollars worth of cryptocurrency (Published in the Washington Post August 11, 2022).
- FDA Medical Device Cyber Guidance Protects Patients, Cos. (Published in Law360 June 9, 2022) By: Brenda Sharton, Emily Van Tuyl, and Kathleen Fay
- Olaf Fasshauer was ranked in the 2022 edition of Germany’s daily newspaper Handelsblatt (in cooperation with Best Lawyers) as one of the best lawyers in Germany for Data Security and Privacy Law.
- Brenda Sharton presented at the WSJ Pro Cyber Forum (June 1, 2022).
- Brenda Sharton was a moderator on the panel, "The Digital Transformation of Customer Experience" at the LendIt Fintech Conference (May 25, 2022).
- Ranked by The Legal 500 US – Media, Technology and Telecoms: Cyber Law (including Data Privacy and Data Protection). Brenda Sharton was named a Leading Lawyer and Hilary Bonaccorsi was named a Rising Star.
- Brenda Sharton named to Cybersecurity Docket’s Incident Response 40 2021 list.
- Dubai data protection authority plans to launch international privacy risk index and update international data transfer mechanisms (Dechert OnPoint published May 5, 2022) By: Paul Kavanagh and Dylan Balbirnie.
- Brenda Sharton quoted in Global Data Review article, "SEC proposes 4-day breach reporting rule" (April 26, 2022).
- CJEU rules on private copying exception to storage in the cloud (Dechert OnPoint published April 11, 2022) By: Paul Kavanagh and Nathan Smith.
- SEC Proposes New and Amended Cybersecurity Rules for Public Companies (Dechert OnPoint published March 17, 2022) By: Timothy Blank, Kevin Cahill, Brenda Sharton and Daniel Murdock.
- Brenda Sharton was quoted in the Law360 article, “Congress Seizes On Incident Reports In Fighting Cyberattacks” (March 16, 2022).
- 4 Takeaways For Asset Managers From SEC's Cyber Rule Plan (Published in Law360 on March 10, 2022) By: Kevin Cahill and Hilary Bonaccorsi.
- California Privacy Protection Agency Signals Delay for Final CPRA Rules & California AG Conducts CCPA Investigative Sweep (Dechert Newsflash published February 25, 2022) By: Karen Neuman, Hilary Bonaccorsi, Bailey E. Dervishi.
- SEC Proposes New Cybersecurity Rules for SEC Registered Advisers and Funds (Dechert OnPoint published February 23, 2022) By: Kevin Cahill, Timothy Blank, Brenda Sharton, Hilary Bonaccorsi, Colleen Hespeler and Bailey Dervishi.
Content Editors
Dylan Balbirnie, Maëlle Chausse, Anita Hodea, Julie Jones, Allie Ozurovich, and James Smith
Production Editors
Dylan Balbirnie and Hilary Bonaccorsi
Dechert Cyber Bits Partner Committee
Brenda R. Sharton
Partner, Chair, Cyber, Privacy and AI
Boston
brenda.sharton@dechert.com
Timothy C. Blank
Senior Counsel
Boston
timothy.blank@dechert.com
Kevin F. Cahill
Partner
Los Angeles
kevin.cahill@dechert.com
Dr. Olaf Fasshauer
National Partner
Munich
olaf.fasshauer@dechert.com
Vernon L. Francis
Partner, Senior Editor
Philadelphia
vernon.francis@dechert.com
Paul Kavanagh
Partner
London
paul.kavanagh@dechert.com
Laura Rossi
Partner
Luxembourg
laura.rossi@dechert.com
Benjamin Sadun
Partner
Los Angeles
benjamin.sadun@dechert.com
"Dechert has assembled a truly global team of privacy and data security lawyers. The cross-practice specialization ensures that clients have access to lawyers dedicated to solving a range of client’s legal issues both proactively and reactively during a data security related crisis or a litigation."
"The privacy and security team collaborates seamlessly across the globe when advising clients."
- Quotes from The Legal 500, 2023
Dechert’s global Cyber, Privacy and AI practice provides a multidisciplinary, integrated approach to clients’ privacy and cybersecurity needs. Our practice is top-ranked by The Legal 500, and our partners are well-known thought leaders and sought-after advisors in the space with unparalleled expertise and experience. Our litigation team provides pre-breach counseling and handles all aspects of data breach investigations, as well as the defense of government regulatory enforcement actions and class action litigation, for clients across a broad spectrum of industries. We have handled over a thousand data breach investigations of all types, including nation-state attacks, ransom/cyber extortion, vendor/supply chain compromises, and DDoS, perpetrated by threat actors ranging from nation states to organized crime to insiders. We also represent clients holistically through the entire life cycle of issues, providing sophisticated, solution-oriented advice and counseling on cutting-edge data-driven products and services (including trend forecasting, personalized content, and targeted advertising) across sectors, under such key laws as the CCPA, CPRA, and other state consumer privacy laws; Section 5 of the FTC Act; the EU/UK GDPR; the e-Privacy Directive; and cross-border data transfer rules. We also conduct privacy and cybersecurity diligence for mergers and acquisitions, financings, corporate transactions, and securities offerings.
- Issue 62 - September 26, 2024
- Issue 61 - September 12, 2024
- Issue 60 - August 15, 2024
- Issue 59 - August 1, 2024
- Issue 58 - July 18, 2024
- Issue 57 - June 27, 2024
- Issue 56 - June 13, 2024
- Issue 55 - May 23, 2024
- Issue 54 - May 2, 2024
- Issue 53 - April 18, 2024
- Issue 52 - March 28, 2024
- Issue 51 - March 14, 2024
- Issue 50 - February 29, 2024
- Issue 49 - February 19, 2024
- Issue 48 - February 1, 2024
- Issue 47 - January 18, 2024
- 2024 Crystal Ball Edition - January 5, 2024
- Issue 46 - December 14, 2023
- Issue 45 - November 16, 2023
- Issue 44 - November 2, 2023
- Issue 43 - October 19, 2023
- Issue 42 - October 5, 2023
- Issue 41 - September 21, 2023
- Issue 40 - August 31, 2023
- Issue 39 - August 17, 2023
- Issue 38 - August 3, 2023
- Issue 37 - July 20, 2023
- Issue 36 - June 29, 2023
- Issue 35 - June 15, 2023
- Issue 34 - May 25, 2023
- Issue 33 - May 11, 2023
- Issue 32 - April 27, 2023
- Issue 31 - March 30, 2023
- Issue 30 - March 16, 2023
- Issue 29 - March 2, 2023
- Issue 28 - February 16, 2023
- Issue 27 - February 2, 2023
- Issue 26 - January 19, 2023
- Issue 25 - December 15, 2022
- Issue 24 - November 10, 2022
- Issue 23 - October 27, 2022
- Issue 22 - October 12, 2022
- Issue 21 - September 29, 2022
- Issue 20 - September 15, 2022
- Issue 19 - August 18, 2022
- Issue 18 - August 3, 2022
- Issue 17 - July 21, 2022
- Issue 16 - June 23, 2022
- Issue 15 - June 10, 2022
- Issue 14 - May 26, 2022
- Issue 13 - May 12, 2022
- Issue 12 - April 28, 2022
- Issue 11 - April 7, 2022
- Issue 10 - March 24, 2022
- Issue 9 - March 10, 2022
- Issue 8 - February 24, 2022
- Issue 7 - February 10, 2022
- Issue 6 - January 27, 2022
- Issue 5 - January 13, 2022
- Issue 4 - December 9, 2021
- Issue 3 - November 18, 2021
- Issue 2 - November 4, 2021
- Issue 1 - October 21, 2021