Ctrl+Alt+Deceit: SKAT v Solo Capital and the Defrauding of Automated and AI Systems
Key Takeaways
- The inadequacy of an automated or AI system can be fatal to establishing fraudulent misrepresentation under English law. To succeed in such a claim, a claimant must show that a false representation was made, that it was made fraudulently, and that the claimant actually relied upon it.
- Where businesses cannot explain their system design or prove that the system was built to rely on specific information being true, they will struggle to prove fraudulent misrepresentation if those systems are deceived by fraudsters.
- Organisations relying on opaque or poorly understood automated or AI systems face increased litigation and regulatory risks where they cannot demonstrate appropriate oversight and understanding of how their systems operate.
Firms using AI systems for their operations can make claims for fraudulent misrepresentation where those systems are deceived, but evidencing reliance on the representations through the design of the system will be key. In a recent English court case,1 Denmark’s tax authority Skatteforvaltningen (“SKAT”) lost a £1.4 billion fraud claim, with the Court ruling that SKAT’s automated system had not been misled into paying out dividend refunds. This case illustrates how traditional legal principles are being applied to constantly evolving automated and AI technologies.
The Facts
Between August 2012 and July 2015, certain market participants exploited SKAT’s automated system through 4,170 dividend tax refund claims totalling just under DKK 12.1 billion (approximately £1.4 billion).
Traders exploited cum-ex trades in an attempt to defraud SKAT. Cum-ex trades involve shares being sold by one investor immediately before dividends are paid and delivered to a buyer afterwards, creating confusion over who owned the shares at the moment at which the dividend was paid. This enabled both parties to claim rebates on withholding tax, a levy which had only been paid once, when the dividend was issued.
SKAT sought the recovery of the tax refunds it had wrongly paid out. The trial addressed multiple issues, including whether the refund claims contained representations regarding ownership and tax entitlement, and whether SKAT was induced by any such representations into paying out the claims.
The Decision
SKAT’s claim was dismissed, although it was granted permission to appeal to the Court of Appeal in January 2026. The appeal will be heard in March 2027.
A key issue was whether SKAT (and in particular any humans involved in the process) relied on the alleged representations in determining and paying out each claim, and whether it was thereby induced to make the payments: reliance and inducement are key ingredients of a claim in fraudulent misrepresentation. However, SKAT’s system was largely automated: once a refund claim was submitted in the correct form, it was paid out, with no meaningful check on whether the claim or the statements made within the documents were in fact valid.
The Court found that the role of the SKAT employee who processed the claims did not require any analysis of the alleged representations, so no human reliance could be established. But what about the automated system itself?
English law recognises that a computer system can itself “rely” on information even where no human actively considers it at the point of processing. This concept is known as “systemic reliance.”2 An automated system may be programmed so that its outputs are necessarily determined by its inputs, because it is designed on the basis that whoever provides the inputs is making a true representation by doing so. However, SKAT did not produce any evidence that its system had been designed in this way. The Court also rejected SKAT’s arguments that it operated on a “trust-based approach”, namely an assumption that applicants would do the right thing, and that it received and relied on confirmation from a financial institution that the applicant was entitled to such a refund.
Instead, there were weaknesses in SKAT’s automated system that left it open to exploitation by fraudsters. The Court found that SKAT’s controls were “so flimsy as to be almost non-existent” and that those lax controls, rather than reliance on any alleged misrepresentations, caused the wrongful payouts. SKAT could not prove that it had been induced by misrepresentations, as the system was programmed in a binary manner to make the payments once a form was completed and submitted. Neither an individual at SKAT nor the automated system was induced into believing that the claims were valid.
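The design distinction the Court drew can be sketched in Python. This is a hypothetical illustration only, not a description of SKAT’s actual system: the claim fields and function names are invented for the purpose of contrasting a binary, form-triggered payout with a design whose output is determined by the applicant’s representations (the “systemic reliance” model).

```python
from dataclasses import dataclass

@dataclass
class RefundClaim:
    form_complete: bool        # all required fields filled in correctly
    asserts_ownership: bool    # applicant's representation of share ownership
    asserts_tax_withheld: bool # applicant's representation that tax was withheld

def binary_payout(claim: RefundClaim) -> bool:
    """SKAT-style processing: pay once the form is complete.
    The representations are never consulted, so the system cannot be
    shown to have relied on them, however false they were."""
    return claim.form_complete

def systemic_reliance_payout(claim: RefundClaim) -> bool:
    """A design whose output is necessarily determined by the inputs:
    payment is conditioned on the representations being made, so a
    false representation demonstrably induces a wrongful payment."""
    return (claim.form_complete
            and claim.asserts_ownership
            and claim.asserts_tax_withheld)

# A claim that is formally complete but omits a key representation:
claim = RefundClaim(form_complete=True,
                    asserts_ownership=False,
                    asserts_tax_withheld=True)

print(binary_payout(claim))             # True: pays out regardless
print(systemic_reliance_payout(claim))  # False: payout turns on the representations
```

Under the first design, exploitation of a lax control causes the payment; under the second, the representation itself does, which is what a claimant must be able to evidence.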
The Court made clear that SKAT’s system being exploited is not the same as SKAT being deceived.
Conclusions
The SKAT ruling clarifies that to succeed in a fraudulent misrepresentation claim, a claimant must prove genuine inducement by, and actual reliance on, a false representation. This can occur with automated systems, but it will not be sufficient merely to show that they were exploited.
More broadly, this decision highlights the importance to businesses of ensuring they understand how their systems operate, not least given the litigation and regulatory risks, including the threat of being the victim of wide-scale fraud.
This issue may be heightened when businesses use automated systems, including AI and other systems characterised as “black box” systems (i.e. those lacking observable and testable workings), without understanding the technical details underlying them or the data upon which those systems were trained. There may be limitations with the use of such systems of which businesses are unaware, but which ought to inform how they use and rely on them. Being able to explain how the system is designed and how that design establishes reliance will be pivotal to claims that an AI tool has been deceived by a fraudulent misrepresentation.
As highlighted by the July 2025 Law Commission’s discussion paper on AI and the Law,3 the opacity of AI systems can make it exceedingly difficult to determine why such a system produced a particular output, complicating questions in English law of reliance, causation and liability. However, as we have seen with disputes concerning digital assets, the English Courts have shown themselves willing to adapt traditional legal concepts to innovative technologies, and we expect this to continue when it comes to determining the legal liability involved in the use of AI systems.
Contributors
The authors would like to thank Lauren Johncock and William Peet, trainee solicitors, for their contributions to this article.