The Next Generation of AI Hallucinations: A Case Comment on Kapahi Real Estate Inc. v. Elite Real Estate Club of Toronto Inc., 2026 ONSC 1438

Overview

In Kapahi Real Estate Inc. v. Elite Real Estate Club of Toronto Inc., Justice Myers of the Ontario Superior Court confronted what he described as a possible "next generation of AI hallucinations." Unlike earlier cases in which counsel cited entirely fictitious authorities, the factum in question cited real cases with correct neutral citations to CanLII. The quotations attributed to those cases, however, were fabricated. Nothing like them appeared in the decisions from which they were said to be drawn. There were seven such quotations.

The decision raises pointed questions about the adequacy of counsel's explanation for the errors, the limits of judicial investigation in such circumstances, and the appropriate institutional response when the court cannot determine whether it is dealing with undisclosed AI use or deliberate falsification.

A note before diving in: practising in Nova Scotia, I have not encountered opposing counsel misusing AI in court filings. I have, however, on at least two occasions encountered self-represented litigants using — and misusing — AI to prepare not only briefs but also originating notices, pleadings, and affidavit evidence. One such litigant admitted under cross-examination to using ChatGPT, Copilot, and Gemini — all three — to prepare his court documents. The phenomenon is no longer hypothetical or confined to other jurisdictions. It is here, and practitioners and judges in every province need to be alive to it.

I. The Facts

The underlying proceeding was unremarkable. Justice Steele had enforced an arbitral award against the respondents and awarded costs. The respondents moved to vary the order. Justice Myers dismissed the motion, finding it wholly without merit, and noted that the respondents' counsel, Mr. Parvaiz, had made baseless allegations of sharp practice against opposing counsel.

The applicants then delivered costs submissions identifying what they said were AI hallucinations in Mr. Parvaiz's Reply Factum. Justice Myers confirmed the concern: the Reply Factum contained quotations that did not originate from the cases to which they were attributed. The court put two questions to Mr. Parvaiz directly — whether he had used generative AI to draft the factum, and why the certification of authenticity required by Ontario’s Rule 4.06.1(2.1) was absent.

The matter settled before costs submissions were completed. The respondents agreed to pay costs on a substantial indemnity basis, fixed at $32,747.40. The applicants abandoned their motion for costs against Mr. Parvaiz personally. But Justice Myers was not content to let the matter rest there.

II. The Seven Fabricated Quotations

Justice Myers reproduced the seven impugned passages from Mr. Parvaiz's Reply Factum. In each instance, the factum cited a real case with a correct neutral citation. The quotations were presented as direct extracts — indented, placed in quotation marks, and attributed to specific judges. But none of the quoted language appeared in the cited decisions. As Myers J. put it with characteristic directness after each passage: "Nothing like this quotation appears in the case. It is wholly made up."

The pattern was consistent. The citations were real. The judges' names were sometimes wrong — one passage attributed to "Justice Mosley" was from a decision written by Perell J. The quotations themselves were plausible-sounding propositions of law on topics like the standard of review for arbitral awards and piercing the corporate veil. They read, in other words, exactly like AI-generated legal text: substantively reasonable, stylistically generic, and entirely untethered from any actual judicial decision.

A telling forensic detail: none of the citations included pinpoint paragraph references. The absence of pinpoints across all seven quotations is consistent with AI-generated output, which typically produces case names and neutral citations but fails to direct the reader to a passage that actually exists.

III. Mr. Parvaiz's Explanation

Mr. Parvaiz wrote to the court acknowledging that five paragraphs of his Reply Factum were "not accurate" — though the actual count was seven. He attributed the errors to "a lack of due care," "human errors," "misreading of the cases cited," "carelessness," and "inadvertence." He denied having used artificial intelligence or any similar tool. He expressed remorse, noted he was a sole practitioner called to the bar only in 2022, and took full responsibility.

IV. The Analysis

Justice Myers did not accept the explanation — nor did he formally reject it. He could not, because he had not had the benefit of full submissions or cross-examination. But he made his scepticism plain.

The core difficulty with Mr. Parvaiz's account, as Myers J. identified it, is this: if he did not use AI, how did he fabricate seven quotations and present them as direct extracts from real cases? One might charitably explain a single garbled quotation as a transposition error or a faulty paraphrase that accidentally acquired quotation marks. But seven distinct fabrications, each formatted as a direct quotation from an identified authority, cannot be the product of "misreading" or "carelessness." As the court observed, it is difficult to conceive of how anyone could "make up a quotation that supports the argument in a factum by misreading a case or being careless."

Myers J. framed the dilemma starkly: either Mr. Parvaiz used AI and was untruthful about it, or he did not use AI and instead personally fabricated seven quotations and attributed them to real cases. Neither explanation reflects well on counsel. And if the denial was untruthful, the court noted, the cover-up may be worse than the initial error.

V. The Court’s Response

Justice Myers considered but declined to initiate contempt of court proceedings, citing the absence of full submissions, the practical limitations of a judicial investigation into questions of computer metadata and online history, and the fact that Mr. Parvaiz had already been exposed to costs consequences. The court also referenced its earlier experience with AI hallucinations in the matter of Ko v. Li, where a show cause order ultimately led to a referral to the Attorney General for prosecution — but only after counsel had admitted to being untruthful at a prior hearing.

Instead, Myers J. referred the matter to the Law Society of Ontario for investigation, noting that the Law Society and the Toronto Police Service are better equipped to determine whether the fabrications resulted from undisclosed AI use or deliberate falsification.

VI. Implications for Practitioners

Kapahi is significant for at least three reasons.

First, it illustrates the evolution of AI hallucination in legal practice. The early cases involved fabricated case names and invented citations — errors that were relatively easy to detect. Kapahi represents a more sophisticated failure: real cases, correct citations, fake quotations. This pattern is harder to catch on a cursory review and places a correspondingly greater burden on counsel to verify not just that a cited case exists, but that the quoted passage actually appears in it. Any lawyer using AI for legal research — and many are, whether they admit it or not — must read the cases themselves.

Second, the decision underscores the significance of Ontario’s Rule 4.06.1(2.1) certification, which requires counsel to certify that the factum does not contain AI-generated content that has not been verified for accuracy. Mr. Parvaiz's Reply Factum did not include this certification. The omission is not merely a procedural technicality; it exists precisely to guard against the kind of problem that materialized here. It will be interesting to see how other provinces, including Nova Scotia, respond to these same concerns.

Third, Kapahi exposes the limits of judicial self-help in addressing AI misuse. A judge can identify the problem, flag it, ask questions, and make referrals. But a judge cannot compel production of browser histories, examine computer metadata, or cross-examine counsel on the process by which a factum was prepared — at least not without converting the proceeding into something resembling a prosecution. Myers J. was candid about these limitations, and his decision to refer the matter to the Law Society reflects a pragmatic recognition that some questions are better answered by bodies with investigative authority.

VII. Conclusion

This is a cautionary decision. It demonstrates that the AI hallucination problem in legal practice is not receding — it is evolving. The fabrications are becoming more subtle, and the explanations less convincing, while the institutional tools for addressing them remain imperfect.

For practitioners, the lesson is straightforward: if you use AI in any aspect of legal research or drafting, you must verify every citation and read every case you cite. The certification requirements now appearing in rules of civil procedure across the country are not bureaucratic formalities — they are the profession's first line of defence against precisely this kind of failure. And if you do not use AI, your factum should not contain seven fabricated quotations!
