Artificial intelligence is transforming how legal professionals work—and now, it’s shaping how judges approach their role in the courtroom. With the release of Navigating AI in the Judiciary: New Guidelines for Judges and Their Chambers from The Sedona Conference, we have a clearer understanding of how AI should be used responsibly in judicial settings.
At ACEDS, we’ve been deeply involved in the AI and legal tech conversation. I had the honor of being a panelist last year at the DC Superior Court Judicial and Manager Training, moderated by Hon. Herbert B. Dixon, Jr., where we explored AI for All: Bridging the Divide for Greater Access to Justice. These discussions reinforced the importance of AI education across the legal industry. And if you haven’t seen it yet, my interview with Hon. Xavier Rodriguez on #WeAreACEDS tackled many of the same themes—watch it here.
One of the authors of these guidelines, Maura Grossman, is not only a leading expert in AI and law but also an ACEDS Advisory Board member. Maura frequently participates in our Advisory Board webinars, where she shares insights on TAR (Technology-Assisted Review) and AI. Her expertise has been instrumental in shaping how legal professionals think about AI adoption, and she continues to be a key voice in these discussions.
These guidelines aren’t just for judges. They offer valuable insight into how AI-generated research, filings, and evidence will be evaluated by courts—insight that every eDiscovery professional, legal technologist, and attorney should be paying attention to.
Now, with these new guidelines, the conversation is shifting from AI’s potential to its practical application in the judiciary.
Judges Must Remain in Control of AI—And So Must We
One of the strongest messages from the guidelines is that AI can assist, but it cannot replace human judgment. Judicial authority belongs to judges—not AI. AI can help with research, document summarization, and administrative tasks, but ultimately, judges must verify and validate everything it produces.
For legal professionals, this reinforces an important point: AI is a tool, not an autonomous decision-maker. Whether you’re using AI in eDiscovery workflows, contract analysis, or litigation strategy, you are responsible for ensuring accuracy and ethical compliance. AI should enhance our work, not dictate it.
AI-Generated Content Will Face Scrutiny
The guidelines make clear that judges must not take AI-generated content at face value. They warn against automation bias, the tendency to trust AI outputs without verification. Courts may also begin asking for transparency around AI's role in legal submissions, meaning legal professionals should be prepared to disclose when and how AI has been used.
This means that if you’re leveraging AI for research, drafting, or discovery, you should be able to explain your process and verify the accuracy of AI-generated results. Courts will expect nothing less.
The guidelines outline acceptable AI use cases in judicial chambers, including:
- Assisting with legal research (with human oversight)
- Summarizing depositions, motions, and pleadings
- Drafting routine administrative orders
- Helping with court scheduling and workflow management
- Analyzing court data for efficiency improvements
If judges are encouraged to use AI in these ways, legal professionals should expect AI-generated legal documents, filings, and arguments to become more common. Staying informed on these developments will be key to navigating the evolving legal landscape.
Judge Schlegel’s Perspective on AI in the Courts
For an inside look at how these guidelines came to be, check out Judge Scott Schlegel’s Substack post. I recently had the chance to see Judge Schlegel in person at the 12th Annual UF Law E-Discovery Conference, where AI’s role in legal proceedings was a major discussion point. His perspective reinforces what many of us in legal tech already know: AI is here to stay, but we need to use it responsibly.
AI in Law is Moving Fast—Are You Ready?
These new guidelines mark a significant step toward responsible AI adoption in the judiciary. Courts are being urged to embrace AI carefully and ethically, and as legal professionals, we need to do the same.
Whether you’re in eDiscovery, legal operations, or litigation support, now is the time to stay informed, ask the right questions, and ensure AI is working for us—not against us.
How do you see these guidelines shaping your work? Let’s continue the conversation—share your thoughts as we navigate AI in legal tech together.