Published on 04/13/2026 at 12:10 pm EDT
These decisions suggest a more flexible approach to confidentiality than other recent US and UK cases.
In two recent cases, US district courts in Colorado and Michigan have held that a self-represented party bringing civil proceedings was entitled to claim privilege, in the form of the US work product doctrine, over their interactions with a public AI tool: Morgan v V2X Inc, 30 March 2026 and Warner v Gilbarco Inc, 10 February 2026.
The decisions apply US law and are therefore not directly relevant to proceedings in the English courts, which invariably apply English law to questions of privilege. It is interesting, however, that in both cases the US courts were willing to find that privilege was not waived despite the use of a public AI tool. This contrasts with the UK Upper Tribunal's recent observation in UK v Secretary of State for the Home Department [2026] UKUT 81 (considered here) that uploading confidential documents into a public AI tool breached client confidentiality and waived privilege. It also contrasts with the approach of the Southern District of New York in US v Heppner (considered here), which found that privilege did not apply to the defendant's exchanges with a public AI chatbot because he had no reasonable expectation of privacy in those exchanges.
Despite these decisions, however, parties should exercise caution in their use of AI tools for confidential and/or privileged information – in particular ensuring that only private AI tools with strict confidentiality protections are used. The decisions arguably turn, at least in part, on specific aspects of waiver in the context of the work product doctrine, and in any event the whole question of privilege and AI is a developing area: there can be no guarantee that another court would find that privilege was maintained despite the use of public AI tools.
The recent US decisions are also interesting in suggesting that there may be a gap opening up between the approach to work product protection under US law for materials created by self-represented litigants as opposed to those who are legally represented. In Heppner, where the defendant was legally represented, the court held that work product protection did not apply to his chatbot exchanges, even if they were created in anticipation of litigation, because they were not prepared at the direction of counsel. In contrast, Warner and Morgan suggest that work product protection may apply to a self-represented litigant's AI interactions. As commented in our previous post on Heppner, for litigation privilege under English law there is no requirement that materials are created by or at the direction of a lawyer, and so a similar distinction seems unlikely to arise here.
The Morgan decision is also noteworthy in ordering restrictions on the parties' AI use in respect of confidential information disclosed by an opposing party – essentially barring the use of public AI tools for such information – and in requiring the party who had used a public AI tool to disclose the name of the tool used.
Warner v Gilbarco Inc
In Warner, the defendant sought production of all documents and information relating to the plaintiff's use of third‑party AI tools in connection with the litigation.
A Magistrate Judge in the Michigan district court denied the request, finding that the material was not discoverable and, in any event, was protected by the work product doctrine on the basis that it had been prepared in anticipation of litigation.
The court rejected the defendant's argument that the plaintiff had waived the protection by using ChatGPT, a public AI tool, noting that a waiver of work product protection must be a waiver "to an adversary or in a way likely to get in an adversary's hand" – in contrast to attorney-client privilege, which (under US law) may be waived by voluntary disclosure to a third party.
The court noted that ChatGPT and other GenAI programs "are tools, not persons, even if they may have administrators somewhere in the background" and commented that the defendant's arguments "would nullify work-product protection in nearly every modern drafting environment, a result no court has endorsed".
Morgan v V2X Inc
In Morgan, the defendant sought an order compelling the plaintiff, who was not legally represented, to disclose the AI tool he had used in the litigation, and restricting the use of AI in respect of confidential documents disclosed in the litigation.
A Magistrate Judge in the Colorado district court noted that "AI is forcing litigants and courts to confront difficult questions about how and to what extent longstanding protections will apply when parties use AI to assist them in the litigation process" – in particular relating to confidentiality, work product, and privilege.
The court noted that Federal Rule of Civil Procedure 26(b)(3)(A) protects "documents and tangible things that are prepared in anticipation of litigation or for trial by or for another party or its representative". In the court's view, this language included material created by a party before retaining a lawyer as well as material created by a self-represented party, and included not just litigation preparation materials, but also the "mental impressions, opinions, and theories" of parties – in addition to the separate protection given to mental impressions and opinions of legal representatives under Rule 26(b)(3)(B).
In the court's view, the present case could be distinguished from the decision in Heppner, which was not binding on it in any event, because: that was a criminal case which was not governed by the Federal Rules of Civil Procedure; and in Heppner there was a "gap" between the party and the attorney because the defendant acted entirely apart from his lawyer, whereas there is no such gap in the case of a self-represented litigant, who is "simultaneously the party and the advocate".
The court recognised that public AI systems collect user data for training and other purposes, but found that this "does not eliminate all expectations of privacy or automatically waive protections". It noted that nearly all electronic interaction passes through third-party systems, but the Supreme Court had previously held that intermediary access alone does not automatically extinguish a reasonable expectation of privacy.
In the court's view, it is "entirely reasonable for a person to expect some privacy and confidentiality" when interacting with AI tools, even if a third party is behind the tool collecting and storing their information. In any event, the court noted (as the court did in Warner) that work product protections are typically waived by disclosure to an adversary, or in circumstances that substantially increase the likelihood that an adversary will obtain the materials. It stated that, even though AI use technically "discloses" information to a third party, it is "highly unlikely the information will fall into the hands of an adversary absent some legal process to compel it".
However, the court held that work product protection did not extend to protection for the name of the AI tool used by the plaintiff, as disclosing this information would not reveal the plaintiff's mental impressions or case strategy. Accordingly, the plaintiff was ordered to disclose the name of the tool.
The court also granted the defendant's request to amend the existing protective order in the litigation to restrict the parties' AI use in respect of confidential information disclosed by the opposing party. The amended order essentially prohibited inputting confidential information into an AI platform unless sufficient contractual protections were in place. While the court recognised that this type of restriction might disadvantage a self-represented party, as it would bar the use of most (if not all) mainstream low-to-no-cost AI tools to process confidential information, those parties would remain free to use the tools in ways that did not involve uploading confidential information.
Julian Copeman
Herbert Smith Freehills Kramer LLP
Exchange House, Primrose Street
London EC2A 2EG
UNITED KINGDOM
Tel: 212 715 9100
Fax: 212 715 8000
E-mail: [email protected]
URL: www.hsfkramer.com
© Mondaq Ltd, 2026 - Tel. +44 (0)20 8544 8300 - http://www.mondaq.com, source Business Briefing