The £89 Million AI Hallucination

Imagine preparing a legal case, citing precedent after precedent, only to discover that 18 of your 45 authorities do not exist. They were never written. No judge ever delivered them. They were invented — entirely — by artificial intelligence.
That is exactly what happened in the case of Hamad Al-Haroun v Qatar National Bank. And when the High Court discovered it, the reaction was swift.
The Case
Mr Al-Haroun was suing Qatar National Bank and its subsidiary for £89.4 million. He alleged breaches of a complex financing agreement. His solicitor was Abid Hussain of Primus Solicitors, a firm regulated by the Solicitors Regulation Authority.
In early 2024, Mr Al-Haroun applied to set aside an earlier court order. To support his application, he filed a witness statement. It cited 45 legal authorities. The problem? At least 18 of them were completely fabricated.
How It Happened
The court noticed something odd. Many of the cited cases did not appear in any law report. Some bore names that sounded implausible. One of the supposed authorities was even attributed to the very judge who was hearing the case, a "decision" that judge had never made.
When questioned, Mr Al-Haroun admitted the truth. He had conducted the legal research himself using publicly available AI tools, including ChatGPT. He had asked the AI to find supporting authorities. The AI had obliged — by inventing them.
Mr Al-Haroun said he had “complete, but misplaced, confidence” in the material. He did not intend to mislead the court. He genuinely believed the AI had given him real cases.
What struck the court even more was the role of the solicitor. Mr Hussain admitted that he had incorporated his client's research into the legal filings without independently verifying a single citation. The lawyer had trusted his client, who in turn had trusted the AI.
The Court’s Reaction
The case was referred to the Divisional Court under its "Hamid jurisdiction", the court's inherent power to enforce the duties that lawyers owe to the court, including the duty not to mislead it.
The judgment was scathing. The court noted that “freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT, are not capable of conducting reliable legal research.” It described the solicitor’s failure to check the material as “a lamentable failure to comply with the basic requirement to verify the accuracy of material put before the court.”
The court decided not to initiate contempt proceedings. But it made clear that this restraint should not be treated as a precedent for future cases. It referred Mr Hussain to the SRA for investigation. And it issued a stark warning to the entire legal profession.
A Warning for the Profession
The Divisional Court’s judgment went beyond the individual case. It called for urgent action from heads of chambers, managing partners, and regulators.
The message was clear. AI can be a useful tool, but it is not a substitute for professional judgement. The court warned that lawyers who put unverified AI-generated research before the court in future "risk severe sanction."
The court also stated that those who use AI for legal research "have a professional duty to check the accuracy of such research by reference to authoritative sources."
What This Means for Businesses
The Al-Haroun case is not just about lawyers. It is about anyone who relies on AI for professional work.
AI tools are designed to be helpful. They produce confident, plausible answers. But they do not know when they are wrong. They do not know what they do not know. They generate text that looks correct — but may be entirely invented.
For businesses, the lesson is simple. AI is a powerful assistant. But it requires human oversight. Any information produced by an AI must be verified before it is used in a professional context. This applies to legal research, financial analysis, customer communications, and internal reports.
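The verify-before-use discipline described above can be sketched as a simple gate: nothing AI-generated passes through until it is confirmed against an authoritative source. This is an illustrative Python sketch only; the citation strings and the `authoritative_db` lookup are hypothetical stand-ins for a real service such as an official law report database.

```python
# Illustrative sketch: treat every AI-suggested citation as unverified
# until it is confirmed against an authoritative source.
# "authoritative_db" is a hypothetical stand-in for a real law report service.

def verify_citations(ai_citations, authoritative_db):
    """Split AI-suggested citations into verified and unverified lists."""
    verified, unverified = [], []
    for citation in ai_citations:
        if citation in authoritative_db:
            verified.append(citation)
        else:
            # Flagged items need human review and must never be filed as-is.
            unverified.append(citation)
    return verified, unverified

# Example with made-up citation strings:
known_reports = {"Smith v Jones [2001] UKHL 1"}
suggested = [
    "Smith v Jones [2001] UKHL 1",
    "Invented v Fabricated [2023] EWHC 999",
]
ok, flagged = verify_citations(suggested, known_reports)
```

The point of the gate is not the lookup itself but the workflow it enforces: anything in the `flagged` list goes back to a human, exactly the step that was skipped in Al-Haroun.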
The cost of not verifying can be enormous. In this case, it was professional reputations and regulatory scrutiny. In your business, it could be worse.
The Bottom Line
The Al-Haroun case will be studied for years. It is a textbook example of what happens when trust in technology replaces professional diligence.
AI did not intend to mislead the court. It simply did what AI does: it generated plausible text. The fault lay with the humans who failed to check it.
The judgment ends with a call to action. The court hopes that “practical and effective measures” will be taken to ensure that everyone providing legal services in this jurisdiction understands their obligations when using AI.
The message applies more broadly. Whether you are a lawyer, an accountant, a consultant, or a business owner: verify your sources. The AI will not do it for you.