Dechert Cyber Bits

Issue 92 - March 12, 2026

We've added to our team!

We are thrilled to welcome two new Cyber, Privacy & AI partners, J.J. Jones and Austin Mooney. J.J. Jones is a partner based in San Francisco and joins Dechert from Microsoft, where she recently served as Assistant General Counsel for Cybersecurity. As a top cyber & AI lawyer for Microsoft, she advised the Global Chief Information Security Officer and security operations and threat intelligence teams on high-risk cybersecurity and privacy incidents and supported enterprise-wide cybersecurity initiatives, including global regulatory compliance programs. Prior to Microsoft, J.J. was an in-house counsel at Google, having started her career in private practice at another global law firm. 

See Ms. Jones' full bio here.

Austin Mooney is a partner in our Washington, D.C. office and joins us from McDermott Will & Schulte. Austin provides strategic counseling for cybersecurity and privacy programs, including negotiating data sharing terms and advising on AI privacy and security risks. His work spans the panoply of global laws in privacy counseling, cookie compliance, cross-border data transfers, cyber investigations and assessments, and federal and state surveillance laws.

See Mr. Mooney's full bio here.


See you at the IAPP Global Privacy Summit!

Come meet J.J., Austin, and the rest of Dechert's top-ranked Cyber, Privacy & AI team at the IAPP Global Privacy Summit in Washington, D.C. on March 30-31. Stop by and see the Dechert team at Booth #149.


UK Court Confirms Security Measures Required Regardless of Identifiability to Cyber-Attackers 

On February 19, 2026, the UK Court of Appeal allowed the UK Information Commissioner’s Office’s (“ICO”) appeal against an Upper Tribunal decision in the long-running DSG Retail Ltd v Information Commissioner case. The Court ruled that whether data controllers must safeguard personal data from unauthorized processing turns on whether the controller, rather than a third party such as an attacker, can identify the individuals concerned.

The case arose from a large cyber-attack on the payment systems of DSG Retail (which operated a large chain of consumer electronics stores) between 2017 and 2018. Malware was installed on store tills and remained undetected for nine months, allowing attackers to capture payment card data during transactions. The stolen information consisted only of the card number and expiry date, without names or other identifying details. The ICO fined DSG £500,000 (the maximum penalty under the data protection law in force at the time) for failing to implement appropriate technical and organizational measures to protect personal data. DSG argued that because the attackers could not identify individuals from the data obtained, the information was not “personal data” in the attackers’ hands, and therefore the security obligation did not apply. While this argument failed at the First-tier Tribunal, it was accepted by the Upper Tribunal on DSG’s appeal.

The Court overturned the Upper Tribunal decision and allowed the ICO’s appeal, holding that data protection law defines personal data from the controller’s perspective, not that of the attacker. As such, the duty to implement appropriate security measures applies to all personal data that the controller holds, even against threats where the attacker cannot identify the individual. The Court also warned that adopting the third-party perspective would create gaps in the protection of personal data, undermining the objectives of data protection law. The case was remitted to the First-tier Tribunal for reconsideration of the monetary penalty.

Takeaway: The Court’s decision provides a welcome clarification that it is the controller’s perspective which matters in determining whether the duty to implement security measures applies. This also aligns with the recent EU decision in SRB v EDPS, which held that a controller’s obligations must be assessed from the controller’s perspective at the relevant time. The Court pointed out that it would be an odd result if a controller could avoid having to implement security measures to protect data on the basis that the data would not be identifiable to an attacker. However, the ability of an attacker to identify the relevant individuals remains a critical factor in assessing what security measures are needed and, in the aftermath of a cyber-attack, in assessing whether a third party could take malicious action (e.g., for purposes of breach notification requirements). What is the real-world risk caused by disclosure of “personal data” if it cannot be tied back to any individual?


FTC Limits Enforcement Actions for Age Verification Tools

In an effort to encourage businesses to employ age verification mechanisms, on February 25, 2026, the FTC released a policy statement indicating that the Commission will limit its enforcement against businesses that collect and use age verification information (“Policy Statement”). This policy shift comes on the heels of a recent age verification workshop where participants questioned whether the use of age verification technologies could result in potential violations of the Children’s Online Privacy Protection Act and its implementing regulations (“COPPA”).

Under COPPA and its implementing regulations, commercial websites and online services directed at or known to collect personal information from children are required to obtain parental consent before collecting personal information from children under the age of 13. The Commission clarified in the Policy Statement that it will not require “general audience” and “mixed audience” sites and services to first obtain parental consent from users under the age of 13 before they process such users’ information for age verification purposes. The FTC will, however, still require mixed and general audience sites and services to comply with certain conditions when using age verification tools. For example, personal information must be collected for age verification purposes only, deleted after that purpose has been fulfilled, and protected by reasonable security safeguards. Businesses must also comply with all other applicable COPPA requirements. The FTC expects to conduct a formal review of COPPA and issue regulatory amendments reflecting its policy change regarding age verification information.

Takeaway: Many businesses have been concerned about COPPA compliance risks as innovative age verification processes continue to take shape and as states pass legislation requiring businesses to use age verification tools. With an increasing number of states moving forward with age-verification requirements, it makes sense for the FTC to reconsider COPPA requirements as they relate to the collection of age verification data. Companies should consider whether they can make use of the exceptions set out in the FTC’s Policy Statement and be sure to adhere to other COPPA requirements, as the overall risk of COPPA enforcement remains high and the conditions businesses must meet remain stringent.


UK Data Regulator Fines Reddit £14.47 Million for Alleged Children’s Privacy Failures

On February 24, 2026, the ICO announced that it had fined Reddit, Inc. (“Reddit”) £14.47m after finding that the platform did not adequately protect the personal data of children under 13 and did not properly verify users’ ages.

The ICO said that its investigation found that Reddit:

  • processed the personal data of children under 13 without a lawful basis as it lacked robust age-verification systems, even though its terms of service prohibited children under 13 from using the platform; and
  • failed to carry out a data protection impact assessment to assess and mitigate risks to children when required, even though children between the ages of 13 and 18 were allowed to use the platform.

In July 2025, Reddit implemented age verification for mature content and required users to declare their age during account creation. However, the ICO warned that relying on self-declared ages is easily bypassed and does not sufficiently protect children online. The ICO provides further guidance on potential methods in its age assurance opinion.

Information Commissioner, John Edwards, made clear that the ICO’s view is that self-declared age measures are “not enough when children may be at risk and we are focusing now on companies that are primarily using this method. I therefore strongly encourage industry to take note, reflect on their practices and urgently make any necessary improvements to their platforms”.

Takeaway: The robust fine highlights the ICO’s continued focus on children’s privacy. Earlier in February, the ICO announced that it had fined MediaLab for failing to use children’s personal data lawfully. Organizations operating online services that are likely to be accessed by children will want to review their age assurance policies and practices and take appropriate steps given the ICO’s commitment to enforcement in this area. Failure to do so could result in significant financial penalties, greater regulatory scrutiny and reputational consequences.



Texas AG Intensifies Consumer Data Enforcement

Texas Attorney General Ken Paxton (“TX AG”) has been active over the past year, filing numerous lawsuits against companies alleging privacy and data-use violations through the unpermitted collection, use, and sharing of consumer data.

For example, in December 2025, the TX AG filed suit against five smart TV companies claiming their use of ACR technology violated Texas law by illegally capturing video and audio data from users. Samsung, LG, Sony, Hisense, and TCL Technology Group are among the companies accused of engaging in these practices, and in its suit against Samsung, the TX AG alleged the company failed to obtain informed consent from customers before collecting and selling viewer data through ACR technology. Samsung recently reached a settlement with the TX AG, agreeing to halt its use of ACR technology until receiving express customer consent and to provide “clear and conspicuous” privacy disclosures on its smart TVs. In its statement, Samsung affirmed that its TVs “do not spy on consumers” and that “Samsung allows [consumers] to control [their] privacy.” Samsung is the first of these companies to resolve its claims with the state; we previously covered these suits in Cyber Bits Issue 88.

The TX AG has also filed several lawsuits under the Texas Deceptive Trade Practices Act against companies that have alleged connections with China. For example, the TX AG has alleged that Temu engages in false pricing and misrepresentation of goods and its data practices in violation of Texas law. Specifically, the TX AG asserted that the online marketplace stores customer data on Chinese servers through its Chinese-affiliated holding company, thereby allowing the Chinese Communist Party access to such customer data.

Takeaway: True to our Crystal Ball edition prediction, the Texas AG has launched an aggressive enforcement campaign over the past year, targeting companies across sectors for alleged unauthorized data collection, privacy violations, and misrepresentations under the Texas Deceptive Trade Practices Act, among other things. This aggressive enforcement approach is expected to continue, particularly in the lead-up to upcoming US elections, as Texas tries to position itself as a leader in this space. Businesses, especially those with non-US connections, may want to consider audits of their data privacy practices, including customer disclosures and data sharing practices.



Southern District of New York Judge Finds Communications Exchanged by Client with AI Platform Not Protected by Attorney-Client Privilege

Ruling on a matter of first impression, Judge Jed S. Rakoff of the Southern District of New York recently held that communications between a non-lawyer user and Anthropic PBC’s AI chat platform Claude (“Claude”) are not protected by the attorney-client privilege or the work product doctrine. United States v. Heppner, 1:25-cr-00503-JSR, ECF No. 27, at 2 (S.D.N.Y. 2026).

The defendant (Heppner) was indicted for securities fraud and related charges. After receiving a grand jury subpoena and becoming aware that he was the target of a criminal investigation, Heppner, without the direction of counsel, discussed his case with Claude and used it to prepare documents outlining a defense strategy. After the FBI seized 31 documents memorializing these conversations during a search of his home, Heppner asserted the attorney-client privilege and work product protection. Heppner argued chiefly that his chats with Claude incorporated information learned from counsel and that he used the platform to create documents for the purpose of obtaining legal advice, which he subsequently shared with his lawyers.

Judge Rakoff rejected both arguments and ruled the documents were not privileged. The court found the attorney-client privilege did not apply primarily because: (1) the Claude chat platform is not an attorney; and (2) the communications were not confidential because Anthropic’s privacy policy allows it to review, use, and disclose the contents of users’ chats with Claude. Id. at 4-7. The court also noted that whether the communications were made for the purpose of obtaining legal advice was “a closer call” but ultimately found that element lacking as well because Heppner had no intention to obtain legal advice “from Claude”. Id. at 7-8 (emphasis in original). Judge Rakoff found that the attorney work-product doctrine did not apply because the documents were not prepared by or at the behest of counsel. Nor did the chats reflect counsel’s strategy, because Heppner created them on his own and only presented them to his lawyers after the fact. Thus, while they might have “affected” counsel’s strategy after being created, “they did not ‘reflect’ counsel’s strategy at the time that Heppner created them”. Id. at 9-10.

Takeaway: Heppner serves as a stark reminder that non-lawyers who discuss legal matters with AI platforms without explicit direction from counsel, even in anticipation of litigation, run the risk that those discussions may not be privileged. The ruling in Heppner is fact-specific and leaves open the possibility that conversations regarding legal strategy between AI chatbots and lawyers, or clients acting at the direction of counsel, may be protected work product. The ruling also serves as a reminder that conversations with many public-facing tools should not be considered confidential without a careful review of their terms of service and privacy policies.


Dechert Tidbits

International Data Protection Authorities Issue Joint Statement on Privacy Risks of AI-Generated Imagery

The ICO, along with 60 other international data protection authorities, signed a joint statement highlighting the privacy risks of AI-generated images and videos of identifiable people, especially children, created without their knowledge or consent. The statement calls on organizations developing or using AI content-generation systems to implement robust safeguards preventing misuse of personal information and generation of non-consensual imagery, to provide transparency with respect to system capabilities and risks, to provide mechanisms for removing harmful content quickly, and to verify that enhanced safeguards are in place for child-specific risks.

 

FTC and Kochava Reach Potential Resolution in Four-Year Geolocation Privacy Suit

On February 26, 2026, the FTC and Kochava provided notice that they had reached a proposed settlement, signaling a potential end to the nearly four-year litigation between the parties, which we have previously covered in Cyber Bits Issue 33 and Issue 49. Approval by a majority of the FTC Commissioners is required before the proposed settlement can be finalized, and no details have been released regarding the specifics of the proposed settlement.


In 2025, Dechert’s Cyber, Privacy & AI team achieved top individual and group rankings in The Legal 500 and Chambers USA. Global Chair and Partner Brenda Sharton, a Law360 MVP, and Partner Ben Sadun, a Law360 Rising Star, were recognized for their leadership and contributions to the team’s achievements. The team was also recognized in Law.com’s “Litigators of the Week” column for its recent victory for Flo Health, a matter that showcased the team’s strategic excellence. Thank you to our clients for entrusting us with the types of matters that led to these recognitions.




Dechert Cyber Bits Partner Committee


Dechert’s global Cyber, Privacy and AI practice provides a multidisciplinary, integrated approach to clients’ privacy and cybersecurity needs. Our practice is top-ranked by The Legal 500, and our partners are well-known thought leaders and sought-after advisors in the space with unparalleled expertise and experience. Our litigation team provides pre-breach counseling and handles all aspects of data breach investigations, as well as the defense of government regulatory enforcement actions and class action litigation, for clients across a broad spectrum of industries. We have handled over a thousand data breach investigations of all types, including ransom/cyber extortion, vendor/supply chain compromises, and DDoS attacks, brought by threat actors of all types, from nation states to organized crime to insiders. We also represent clients holistically through the entire life cycle of issues, providing sophisticated, solution-oriented advice and counseling on cutting-edge data-driven products and services, including trend forecasting, personalized content, and targeted advertising across sectors, on such key laws as the CCPA, CPRA and other state consumer privacy laws, Section 5 of the FTC Act, the EU/UK GDPR, the e-Privacy Directive, and cross-border data transfer requirements. We also conduct privacy and cybersecurity diligence for mergers and acquisitions, financings, corporate transactions, and securities offerings.

View Previous Issues