When AI gets it wrong, lawyers pay the price

The use of artificial intelligence (AI) to conduct legal research has been scrutinised by the High Court in Ayinde, R (On the Application Of) v Qatar National Bank QPSC & Anor [2025] EWHC 1383 (Admin), after solicitors and barristers were found to have bolstered legal arguments by citing fake cases generated by AI.
AI has been increasingly recognised within the legal sector as a tool for enhancing efficiency, with many lawyers adopting AI tools to review contracts, conduct legal research and automate routine tasks. However, following two recent cases in which lawyers cited fake cases generated by AI, the President of the King’s Bench Division, Dame Victoria Sharp DBE, has warned against the use (and misuse) of AI in litigation, stating that whilst AI is likely to have a continuing and important role in litigation, its misuse carries “serious implications for the administration of justice and public confidence in the justice system”.
In the case of Hamad Al-Haroun v Qatar National Bank QPSC and QNB Capital LLC (unreported), in which the claimant sought £8.9 million in damages from Qatar National Bank, the judge discovered that, of the 45 cases cited in litigation correspondence and witness statements, 18 did not exist and many of the others contained fictitious quotes. The claimant had used AI to carry out the research, and the instructed solicitor had relied upon it without verifying the authorities.
Similarly, in the case of R (on the Application of Ayinde) v London Borough of Haringey [2025] EWHC 1167 (Admin), a judicial review case, the claimant’s barrister was found to have cited five fake authorities in her written submissions. Although the cited authorities looked legitimate on their face, with plausible citations and proper names, none was a real case: all were entirely fictitious. Whilst the barrister denied using AI, the court referred her to the Bar Standards Board for investigation, stating: “on the material before us, there seem to be two possible scenarios. One is that [the barrister] deliberately included fake citations in her written work. That would be contempt of court. The other is that she did use generative AI tools to produce her list of cases and/or to prepare parts of the grounds of claim. In that event, her denial (in a witness statement supported by a statement of truth) is untruthful. Again, that would amount to a contempt.”
Both these cases were then referred to the Divisional Court under the court’s Hamid jurisdiction, which allows the court to regulate its own procedures and enforce the duties lawyers owe to the court. In its ruling in the resulting case (Ayinde, R (On the Application Of) v Qatar National Bank QPSC & Anor [2025] EWHC 1383 (Admin)), Dame Victoria Sharp DBE highlighted that:
- Law firms must adopt oversight procedures and internal guidelines for the use of AI to ensure “compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained”.
- “Freely available generative AI tools trained on large language models, such as ChatGPT, are not capable of conducting reliable legal research”. Whilst generative AI may be able to produce plausible responses, those responses can turn out to be incorrect, untrue or simply fictional.
- Whilst AI may be able to assist in legal work, “it is not a replacement for human responsibility and oversight”. Therefore, any legal professionals using AI have a professional duty to “check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work”.
Dame Victoria Sharp DBE then called on the Bar Council and the Law Society of England and Wales to “consider as a matter of urgency what further steps they should now take in light of this judgment”. Legal practitioners should keep a close eye out for the guidance that will no doubt follow, and should bear it in mind when developing their own internal AI policies and procedures.
In the meantime, it is clear from this ruling that, as the legal profession continues to embrace the significant potential of AI, law firms should ensure they have clear internal policies and procedures in place governing the use of AI tools, particularly for legal research. Firms should also train staff on the limitations of AI, reminding lawyers that AI is prone to “hallucinations” (fabricating information that appears plausible) and can only be used effectively where its output is rigorously verified against authoritative sources.
If you would like any further information or advice on these issues, please contact James Milliken from the Commercial team.
*This information is for guidance purposes only and does not constitute, nor should it be regarded as, a substitute for taking legal advice that is tailored to your circumstances.