Case of the Week Blog Series

AI, Work Product, and the Protective Order Problem: What Morgan v. V2X, Inc. Means for Every Litigator


[Editor’s Note: This article has been republished with permission. It was originally published April 9, 2026 on the Minerva26 Blog]

On March 30, 2026, Magistrate Judge Maritza Dominguez Braswell of the District of Colorado issued a ruling in Morgan v. V2X, Inc. that is the most consequential AI-in-litigation decision we have seen yet. It is only the third federal ruling on AI and privilege or work product, but it is by far the most thorough, and the one with the most direct implications for how you advise clients and draft protective orders right now.

A word about Judge Braswell before we get into it. She co-chairs the District of Colorado’s AI Committee, co-founded the Judicial AI Consortium, is an active member of The Sedona Conference Working Groups 1 and 13, and authors The AI Brief. This is not a judge who stumbled into these questions. That context matters for understanding why this opinion is as comprehensive as it is.

The case is an employment discrimination dispute. Plaintiff Archie Morgan is pro se — representing himself — against corporate defendant V2X. Both parties are using AI in connection with the litigation. The dispute arose when V2X moved to amend the existing protective order to restrict Morgan’s AI use. Morgan’s response was direct: V2X was conditioning production of a four-month-overdue insurance policy on getting concessions about his AI use. He called it holding a routine disclosure “hostage” and called out the obvious imbalance — a pro se plaintiff who can only afford free tools being restricted while V2X’s counsel maintains its own enterprise AI infrastructure. The court framed it around two questions: how does work product apply to a pro se litigant’s AI use, and what should a protective order say about AI?

Issue One: Work Product Applies — With Real Limits

Federal Rule of Civil Procedure 26(b)(3) protects documents prepared in anticipation of litigation by or for a party or its representative. Judge Braswell holds that this applies to Morgan’s AI-generated work — and her reasoning goes further than a textual analysis. The 1970 amendments extended protection to parties themselves, not just attorneys. Courts have applied it to pro se litigants for decades. And she writes that extending these protections to pro se litigants is “magnified in the context of AI — one of the most powerful knowledge tools ever to become available to the masses.” A pro se litigant must be both party and advocate. AI may be what makes that dual role survivable. Conditioning protection on the involvement of counsel finds no support in the text of Rule 26(b)(3).

V2X’s core argument was waiver: voluntary disclosure to a third-party AI platform destroys confidentiality. Judge Braswell takes it seriously and answers it with a question that should reframe how every litigator thinks about this issue:

Today, nearly all electronic interaction passes through third-party systems. Google, for example, hosts millions of accounts, and by extension, has access to millions of messages, emails, documents, videos, and more. Does that mean that anyone with a Gmail account has forfeited all rights to confidentiality and privacy?

If you accept V2X’s premise, the answer has to be yes — and that’s not a sustainable legal rule. She also draws on Carpenter v. United States and the Sixth Circuit’s United States v. Warshak for the principle that routing data through a third-party system does not automatically extinguish privacy expectations. And she notes something backed by actual social science: people disclose more sensitive information to AI chatbots than to any other digital tool. AI platforms are specifically designed to simulate empathy and invite candid disclosure. If anything, the case for confidentiality is stronger here than with email.

She also distinguishes Heppner — the criminal case where a represented defendant used Claude entirely on his own, without counsel’s direction, and lost. Her first distinction is clean: Heppner was criminal, this is civil, governed by Rule 26(b)(3)’s plain text. But the second distinction is the one that should keep outside counsel awake. 

In Heppner, there was a structural gap between the defendant and his lawyers. He used AI on his own initiative with no direction from counsel. Judge Braswell doesn’t disturb that outcome — she says it doesn’t apply here because Morgan IS his own counsel. But the implication runs directly in the other direction: a represented party who uses AI independently of their lawyer — not at counsel’s direction, not as counsel’s agent — may be in exactly the same position as Heppner. No privilege. No work product. This is a client counseling conversation that needs to happen before the subpoena arrives.

V2X also sought to have Morgan disclose the name of the AI tool he was using. Morgan argued that identifying which AI platform he used would itself reveal work product: tool selection reflects strategy and analytical approach. He lost, and the court's language echoes one of our main themes on Case of the Week, that you must have a factual basis to win, not just conclusory statements: "You have not demonstrated that identifying the tool itself will reveal your mental impressions or legal strategy."

Conclusory allegations never carry a discovery burden. A stronger argument from Morgan might have invoked Sporck v. Peil, 759 F.2d 312 (3d Cir. 1985), where counsel's selection of documents from a larger production was protected because the selection itself reflected mental impressions, and then built a specific factual record about what the platform can do and how its capabilities relate to the litigation strategy. Morgan didn't do that. It's worth noting that we are asking a lot from a pro se litigant navigating questions here that experienced counsel haven't figured out yet.

Issue Two: The Protective Order Standard That Will Be Copied

The second issue revolved around the proposed amendment to the protective order requiring disclosure of the use of generative AI. This is where Morgan goes somewhere Heppner and Gilbarco never had to. Judge Braswell analyzed both parties' proposed language, rejected both, and wrote her own. V2X's language named specific platforms — ChatGPT, Harvey.AI, Anthropic's Claude — and was clearly drafted around V2X's own enterprise contracts rather than the needs of this case. It still referenced Google's Bard, which was rebranded as Gemini more than a year ago (a detail Judge Braswell did not appreciate). Morgan's proposed "closed-circuit environment" language was too narrow: it addressed unauthorized bad actors, not what the platform itself does with data in the ordinary course.

The Court's language requires that, before a party uploads Confidential Information to any AI platform, the provider must be contractually prohibited from (1) storing or using inputs to train or improve the model and (2) disclosing inputs to any third party except as essential to service delivery, and (3) any such third party must be bound by obligations no less protective than the order itself. The party must also have the contractual right to delete all Confidential Information upon request and must retain written documentation of those protections.

She is candid about what that means:

The Court recognizes that practically speaking, and in light of the current state of AI, this provision will (at least for now) bar the parties from using most, if not all, mainstream low-to-no-cost AI to process Confidential Information. This type of restriction disadvantages pro se litigants. Enterprise-tier AI accounts that satisfy these requirements may be available only through organizational procurement processes, or at costs that a pro se litigant is unlikely to bear.

There are at least two problems that this very thoughtful opinion doesn’t fully resolve. First, the vector embedding problem: when you upload data to an AI platform, it is processed through vector embeddings — mathematical representations stored in a database. The data doesn’t persist as a file you can cleanly extract. Whether a “right to delete” satisfies a court-ordered protective obligation when the underlying data exists as embeddings rather than a document is an open question this opinion doesn’t answer. 
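To make the deletion problem concrete, here is a minimal, purely illustrative sketch of why embedded data behaves differently from a stored file. It uses a toy hash-based "embedding" as a stand-in for a real model; the class and function names are hypothetical, not any actual platform's API. The point it demonstrates: once text is reduced to a vector, the vector can be deleted, but there was never a file of the original text sitting in the store to begin with.

```python
# Illustrative sketch only (hypothetical names, toy hash "embedding"):
# why "right to delete" is murky when data persists as vectors, not files.
import hashlib


def toy_embed(text: str, dims: int = 8) -> list[float]:
    """Crude stand-in for a real embedding model: text -> fixed-length vector."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dims]]


class ToyVectorStore:
    """In-memory stand-in for a vector database."""

    def __init__(self) -> None:
        self._vectors: dict[str, list[float]] = {}

    def upload(self, doc_id: str, text: str) -> None:
        # Only the numeric vector is retained; the original text is discarded.
        self._vectors[doc_id] = toy_embed(text)

    def delete(self, doc_id: str) -> None:
        # "Deletion" removes the vector, but the text was never stored as a file.
        self._vectors.pop(doc_id, None)

    def has(self, doc_id: str) -> bool:
        return doc_id in self._vectors


store = ToyVectorStore()
store.upload("policy-001", "Confidential insurance policy text")
vec = store._vectors["policy-001"]  # a list of numbers, not readable text
store.delete("policy-001")
```

In a real platform the picture is messier still: embeddings may be replicated, cached, or folded into model weights, which is exactly the gap between a contractual "right to delete" and what the opinion's protective order seems to assume.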

Second, the breadth problem: the order covers “any modern artificial intelligence platform, including any generative, analytical, or large language model-based tool.” That is Westlaw. Relativity. Microsoft 365 Copilot baked into Word and Outlook. AI is infrastructure now, not a separate product category. Read strictly, this standard applies to all of it. We have never had to audit and disclose our litigation technology stack as a discovery obligation, and the Federal Rules don’t require it. If courts adopt this language wholesale, the compliance burden is significant — and falls hardest on parties who can least afford it.

Takeaways for Litigators and their Teams

Before you amend a protective order, know your own exposure. AI restrictions bind both sides, so think carefully before you decide to include them in your protective order. Know what AI your client is using, what tier, and whether it can meet this standard before you file that motion. Enterprise accounts with genuine data processing agreements — ChatGPT Enterprise, Claude for Enterprise, Microsoft 365 Copilot with a DPA — are built for institutional procurement. If you can’t clear the standard yourself, don’t write language that imposes it on everyone.

The over-designation problem just got bigger. Every document designated Confidential is now a document the other side cannot run through consumer AI. That’s leverage, and it will be used, so read those draft protective orders carefully. Parties already over-designate constantly. Under a Morgan-style order, the incentive is stronger. Expect designation fights to become AI-use fights.

Know what platforms your clients are using — now. Not whether they use AI: which platforms, which tier, and what the terms say about training use and deletion. A client using a free account with Confidential Information under an existing protective order may already have a problem. The litigation team and the data team need to be on the same page on this — don't allow the eDiscovery disconnect to put you at risk.

Your prompts are your most protected materials. The directions you give an AI about your evidence — those are mental impressions articulated in real time. Near-absolute opinion work product if done at counsel’s direction. The outputs are qualified work product. The tool name is the hardest case, and Morgan shows you need a specific factual record to protect it — not conclusions.

The full text of Morgan v. V2X, Inc. is available in Minerva26, along with U.S. v. Heppner and Warner v. Gilbarco. Use the platform to set up a notification to receive all cases tagged with Generative AI and stay on top of this developing area for your sake and your client’s.

Podcast | Transcript

Kelly Twigger
CEO at Minerva26
Kelly Twigger is a practicing attorney, software developer, consultant, writer, and speaker on issues in electronic discovery, the development and implementation of legal technology, and how to effectively use data in planning for and during litigation.

She is a co-author of Electronic Discovery and Records and Information Management, and host of Case of the Week at Minerva26. As Principal at ESI Attorneys, Kelly manages the boutique eDiscovery and information law firm that acts as operational business partners with its clients to advise law firms, corporations, and municipalities on all areas of electronic information including eDiscovery, privacy, cybersecurity, and information governance.

Kelly is also the CEO of Minerva26, a SaaS-based practical resource for litigators handling eDiscovery that curates discovery decisions, rules, and additional content. She is developing an online academy to provide on-demand education for lawyers and legal support professionals to stay abreast of changes in the law and technology that affect litigation and clients' obligations to respond.

You can reach Kelly at [email protected], join her Facebook community group at Let’s Talk eDiscovery, or connect with her on Twitter @kellytwigger.
