Dechert Cyber Bits

Issue 94 - April 23, 2026


It was great to see so many of you at the IAPP Global Conference in Washington, D.C.!

Thanks so much to all who visited Dechert's Cyber, Privacy & AI team at our IAPP booth last week. We especially appreciated all of the great feedback on Cyber Bits!


Colorado AI Policy Work Group Unveils Revised AI Framework

On March 17, 2026, the Colorado Artificial Intelligence Policy Work Group (“Work Group”) announced its unanimous support for a proposed legal framework concerning artificial intelligence (“AI”), entitled “Concerning the Use of Automated Decision Making Technology in Consequential Decisions” (“Proposed Framework”). The Work Group, whose members represent hospitals, schools, and technology companies, among others, was assembled by Governor Jared Polis in the fall of 2025 to craft AI policy for Colorado that balances consumer protection with innovation. If the Work Group’s Proposed Framework is enacted, it will repeal and replace the existing “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” Act (“Colorado AI Act”). The Colorado AI Act is set to take effect on June 30, 2026; if taken up by the Colorado legislature and passed before the legislative session ends in May, the Proposed Framework would go into effect on January 1, 2027.

The Proposed Framework would move the Colorado AI Act away from “duties of care” and the regulation of “high-risk artificial intelligence systems,” instead focusing on notice and choice around “covered ADMT”—automated decision-making technologies. “Covered ADMT” is defined in the Proposed Framework as ADMT “that is used to materially influence a consequential decision.” Consequential decisions are those related to employment, education, housing, and financial services, among others. If enacted, the Proposed Framework would bring Colorado law more closely in line with existing state privacy laws that similarly regulate automated decision-making technologies.

Takeaway: Colorado’s Proposed Framework reflects the latest push in a broader deregulatory effort designed to enable innovation in the artificial intelligence industry. It follows legislation last year that extended the effective date of the Colorado AI Act and paved the way for the law to be amended. Businesses should note the potential relaxation of AI legal requirements in Colorado and elsewhere and plan accordingly. However, with the widespread legislative interest in AI technologies and numerous regulatory developments in other states and jurisdictions, we caution against assuming these deregulatory efforts will be broadly successful.   


OkCupid, Match Settlement: Data Sharing and FTC Oversight

On March 30, 2026, the Federal Trade Commission (“FTC”) announced a proposed settlement with Humor Rainbow, Inc. (“Humor Rainbow”)—operating as OkCupid—and its affiliate Match Group Americas (“Match”) (together, “the Companies”) to resolve allegations that the Companies had violated Section 5 of the FTC Act. After successfully enforcing a Civil Investigative Demand against OkCupid, the FTC filed a complaint in the Northern District of Texas alleging, among other things, that: (i) in September 2014, Humor Rainbow gave an unrelated third-party facial recognition technology company access to nearly three million OkCupid user photos, along with demographic and location information, because Humor Rainbow's founders were financially invested in the third party; (ii) the third party did not qualify under any category of recipients permitted by OkCupid's privacy policy, and Humor Rainbow neither informed users of the data sharing nor gave them an opportunity to opt out, in violation of its privacy policy; and (iii) since 2014, the Companies engaged in extensive efforts to conceal and deny the data sharing, including by issuing statements to the media and to users disavowing any involvement with the third party. The Companies admitted no wrongdoing in connection with the settlement. In response, OkCupid stated that over the years it has “strengthened [its] privacy practices and data governance to ensure we meet the expectations of our users,” and that the alleged conduct “does not reflect how OkCupid operates today.”

Under the proposed settlement, the Companies are, among other things: (i) permanently prohibited from misrepresenting their privacy controls or the purposes for which, or the extent to which, they collect, maintain, use, disclose, delete, or protect Covered Information and (ii) required, for ten years, to create certain accounting records, records of user complaints, and records demonstrating compliance with the proposed settlement. In announcing the proposed settlement, Christopher Mufarrige, Director of the FTC's Bureau of Consumer Protection, remarked that “[t]he FTC enforces the privacy promises that companies make,” and it “will investigate, and where appropriate, take action against companies that promise to safeguard [consumers’] data but fail to follow through—even if that means [the FTC has] to enforce [its] Civil Investigative Demands in court.”

Takeaway: This settlement is notable not only for what it includes, but for what it does not. The proposed order contains no monetary penalty, and its terms are limited to forward-looking prohibitions on misrepresentations, with no requirement for corrective measures, user notification, or restitution. This outcome is consistent with the FTC’s current enforcement posture, and the settlement was announced the same day FTC Commissioner Mark Meador spoke at the IAPP Global Summit in Washington, D.C., where he emphasized that the agency is “not looking to step in and tell companies how to run their business.” The settlement and the FTC’s public statements suggest that the agency is prioritizing deterrence through enforcement actions that serve as “notices to the marketplace” over aggressive penalties—even in a case involving allegations of decade-old misconduct and concealment.


EU Court of Justice Strengthens Boundaries of Data Subjects’ Right of Access to Personal Data

In a ruling of March 19, 2026, the Court of Justice of the EU (“CJEU”) clarified the circumstances in which a controller may refuse a data subject access request as “excessive.” The Court held that even a first access request may be refused as excessive where it is made for the abusive purpose of artificially creating the conditions for obtaining compensation, rather than for the proper purpose of verifying the lawfulness of processing.

The case concerned a data subject who subscribed to the newsletter of a family-run firm of opticians, Brillen Rottler, by entering his personal data in the registration form on their website. A few days later he submitted a request under the GDPR for access to his data. Brillen Rottler refused his request on the basis that it was “abusive.” According to Brillen Rottler, various online publications showed that the data subject systematically subscribed to newsletters of various companies before making a request for access and then claiming compensation. The data subject maintained that his request for access was legitimate and claimed compensation of at least €1,000 for alleged non-material damage suffered as a result of the refusal.

Whilst the CJEU did not determine whether the data subject’s conduct was in fact abusive, it confirmed that, as a matter of principle, a first request for access to personal data may be “excessive” where the request is made not for the purpose of becoming aware of the processing of the data or verifying the lawfulness of that processing, but to artificially create the conditions for obtaining compensation under the GDPR—conduct the CJEU considered may be characterized as abusive. It also confirmed that a controller may rely on publicly available evidence to establish abusive intent.

Takeaway: The ruling makes an inroad into the proposition that subject access requests are “purpose blind” and will have an immediate impact on professional data subject trolls who use subject access requests for the sole purpose of extracting payments from organizations. These trolls may shift to other low-hanging fruit (such as cookies compliance) to sustain their activities. More generally, the decision may indicate a more flexible and business-friendly future for handling subject access requests in the EU.



The Cost of Carrying Scam Calls: FCC Seeks to Fine Voxbeam $4.5 Million

On April 2, 2026, the Federal Communications Commission (“FCC”) announced a Notice of Apparent Liability for Forfeiture (“NAL”) against Voxbeam Telecommunications (“Voxbeam”) for alleged violations of FCC rules. In the NAL, the FCC alleged that, between March 31 and April 2, 2025, Voxbeam allowed 2,250 calls from Axfone, LLC (“Axfone”)—a foreign provider—to be transmitted into the United States in violation of FCC regulations, despite Axfone’s absence from the Robocall Mitigation Database (“RMD”). The NAL alleged that of the 60,873 calls transmitted by Voxbeam on behalf of Axfone, at least 2,250 calls contained United States area codes in the caller identification field and were delivered to recipients in the United States, and that in these calls, Axfone spoofed United States financial institutions’ customer service and fraud prevention phone numbers.

Pursuant to the NAL, the FCC proposed a $4.5 million penalty against Voxbeam—an amount determined by multiplying the 2,250 verified violating calls by the $2,500 base forfeiture amount for each call, combined with a downward adjustment for Voxbeam’s “prompt action” in blocking Axfone’s calls within 72 hours of their initiation. Voxbeam still has the opportunity to submit a response before the FCC finalizes the allegations and proposed penalty. In response to the NAL, FCC Chairman Brendan Carr stated: “Companies like Voxbeam must ensure they are not accepting traffic from sketchy operators. These gateway providers are the on-ramps to American phone networks and with that business model comes significant responsibility. As we saw in this case, failure to follow the FCC’s robocall mitigation rules can result in tens of thousands of scam calls reaching U.S. customers. The FCC is committed to protecting consumers from robocall scams like these.”

Takeaway: The FCC’s action is a reminder that compliance obligations often extend beyond a company's own practices to the partners and providers it does business with. Businesses would be well served to periodically reassess their third-party relationships and ensure that internal escalation procedures are robust enough to address problems before they escalate. This is particularly critical for companies operating in heavily regulated industries like telecommunications: after all, Voxbeam faces a proposed $4.5 million penalty despite its prompt remedial actions.


Dechert Tidbits

Penguin Sues OpenAI over AI “Copycat” of a Popular Children’s Book

Penguin Random House filed a lawsuit against OpenAI, alleging that ChatGPT violated copyright laws by reproducing content from the popular German children’s book series Coconut the Little Dragon. When prompted to write a children's story featuring the dragon character, ChatGPT generated text, artwork, and a blurb that Penguin described as “virtually indistinguishable” from the original works and evidence that OpenAI's model had unlawfully “memorised” the series. The case, which could set a precedent for other publishers, follows a Munich court ruling last November that found ChatGPT had violated German copyright laws by harvesting lyrics to train its models.

UK Regulators Look to Strengthen Data Practices for Supporting Vulnerable Customers

The UK’s Financial Conduct Authority (“FCA”) and Information Commissioner’s Office (“ICO”) issued a joint statement outlining expectations for how financial firms should identify and support vulnerable customers, share relevant data appropriately across distribution chains, and monitor outcomes. It emphasizes that firms using automated decision-making or profiling must meet additional requirements under the UK GDPR, including providing consumers with the ability to request human intervention.

Judge Approves Google Privacy Settlement, Limits Plaintiffs’ Attorney Fees

On March 26, 2026, Judge Yvonne Gonzalez Rogers granted final approval of a class settlement with Google LLC (“Google”), resolving the class members’ claims against Google for Google’s alleged sharing of personal information through its real-time bidding program. In addition, Judge Gonzalez Rogers granted plaintiffs’ attorneys approximately $21.9 million in legal fees—a small portion of the over $128 million the attorneys had requested. The settlement agreement requires Google to implement new privacy controls related to its real-time bidding program; it does not require Google to pay class member damages. Google admitted to no wrongdoing in connection with the settlement.


In 2025, Dechert’s Cyber, Privacy & AI team achieved top individual and group rankings in The Legal 500 and Chambers USA. Global Chair and Partner Brenda Sharton, a Law360 MVP, and Partner Ben Sadun, a Law360 Rising Star, were recognized for their leadership and contributions to the team’s achievements. The team was also recognized in Law.com’s “Litigators of the Week” column for its recent victory for Flo Health, a matter that showcased the team’s strategic excellence. Thank you to our clients for entrusting us with the types of matters that led to these recognitions.



Content Editors

Dylan Balbirnie, Nafeesa Hussain, Julie Jones, Lydia Speight

Production Editors

Austin Mooney and Madeleine White

Partner Committee Editors

Kevin Cahill, J.J. Jones, and Paul Kavanagh


Dechert Cyber Bits Partner Committee


Dechert’s global Cyber, Privacy and AI practice provides a multidisciplinary, integrated approach to clients’ privacy and cybersecurity needs. Our practice is top ranked by The Legal 500, and our partners are well-known thought leaders and sought-after advisors in the space with unparalleled expertise and experience. Our litigation team provides pre-breach counseling and handles all aspects of data breach investigations, as well as the defense of government regulatory enforcement actions and class action litigation, for clients across a broad spectrum of industries. We have handled over a thousand data breach investigations of all types (including ransom/cyber extortion, vendor/supply chain, and DDoS attacks) brought by threat actors of all kinds, from nation states to organized crime to insiders. We also represent clients holistically through the entire life cycle of issues, providing sophisticated, solution-oriented advice and counseling on cutting-edge data-driven products and services (including trend forecasting, personalized content, and targeted advertising) across sectors, under such key laws as the CCPA, CPRA, and other state consumer privacy laws; Section 5 of the FTC Act; the EU/UK GDPR and e-Privacy Directive; and cross-border data transfer rules. We also conduct privacy and cybersecurity diligence for mergers and acquisitions, financings, corporate transactions, and securities offerings.

View Previous Issues