Meeting the Moment

Preparing Students to Use and Challenge AI
March 17, 2026
Illustration of the GW law school facade and quad with students walking and riding bikes throughout. Semi-transparent icons for education, law, reading, and more hover above their heads.

Written by Sarah Kellogg
Illustrations by Wanda Nance

Artificial intelligence hasn’t just arrived in the legal profession; it has taken root quickly and decisively. Memos that once required hours now come together in minutes. Mountains of discovery can be sifted and organized in seconds.

Across corporate legal departments, law firms, and public-sector offices, tasks that once defined the daily work of junior associates and staff attorneys are increasingly being automated.

The question confronting the profession today is no longer whether lawyers will use AI. It is whether they understand enough about these tools to use them responsibly, and to challenge them when needed.

Concurrently, law schools, historically built on precedent, face the unprecedented: designing an education for a profession being reengineered in real time.

“We’re living through a structural shift in how legal work gets done,” said Dayna Bowen Matthew, Dean and Harold H. Greene Professor of Law. “Understanding AI is no longer an elective skill for young lawyers. It’s a professional necessity. Our responsibility is to ensure our students graduate not just fluent in the law, but fluent in the technologies shaping it.”

What makes the challenge more complex for law schools is that the technology itself is still unsettled, morphing in both its engineering and its uses. New versions roll out every few months. Novel use cases arise daily. Workflows in firms and agencies evolve constantly. Ethical debates and regulatory frameworks are far from resolved.

This uncertainty is precisely what makes AI literacy non-negotiable for today’s students. Few practitioners articulate the stakes more clearly for law students than Candida “Candi” Wolff, JD ’89, managing director and global head of government affairs at Citi. For Wolff, AI tools are already woven into the fabric of high-stakes legal and policy work.

“AI is going to be part of their lives whether they like it or not,” said Wolff. “Therefore, I think they should embrace, engage, and learn it, and law schools must teach how you use it, effectively, because ignoring it is at your peril.”

That conviction frames the challenge ahead for GW Law and other schools: as AI accelerates, legal education must evolve just as quickly to ensure future lawyers lead and not follow the technology.

AI Across the Curriculum

As GW Law addresses AI’s opportunities and potential challenges, the goal is not mastery of a specific tool. It is mastery of thinking—teaching students how to ask the right questions, craft the right prompts, identify missing information, and understand the interplay between technology and law.

GW Law is taking a comprehensive approach to preparing students for an AI-driven legal landscape. The law school has launched a school-wide initiative, guided by the Office of the Dean, designed to introduce students to both generative and discriminative AI and the ethical rules governing their use. A series of ad hoc committees of faculty and staff has considered how best to maintain and update the GW Law curriculum, recognizing that the AI landscape shifts faster than a traditional academic calendar.

Rather than silo AI in a single class, the consensus is that GW Law will build a curriculum that threads technical capability, ethical reasoning, and doctrinal expertise throughout a law student’s education. Within the Fundamentals of Lawyering Program, faculty members have already embedded AI training directly into the first-year experience.

“We are creating an environment in which faculty have the freedom to teach in ways that work best, and students gain exposure to AI in multiple contexts.”

Adrienne Fowler
Bernard Assistant Dean of the Privacy and Technology Law Program; Deputy Director, The Bernard Center


First-year students learn to craft effective prompts for legal research and writing; evaluate the accuracy and limitations of generative tools; compare human-generated and AI-generated research; identify biases, hallucinations, and reasoning errors; and apply professional ethics rules governing responsible use of AI.

The GW Law approach intentionally blends the school’s own course offerings with vetted third-party training. The effort invites faculty and staff to rethink traditional methods, experiment with new teaching strategies, and share insights about how best to introduce emerging technologies into legal education.

“We are creating an environment in which faculty have the freedom to teach in ways that work best, and students gain exposure to AI in multiple contexts, from some traditional legal courses to foundational skills classes to advanced, practice-focused offerings,” said Adrienne Fowler, the inaugural Bernard assistant dean of the Privacy and Technology Law Program and the deputy director of the GW Center for Law and Technology: The Bernard Center.

Across upper-level courses and experiential learning opportunities, students gain access to the same tools reshaping legal workplaces: Lexis+ AI, Westlaw AI, and Harvey for research and drafting; compliance-automation platforms used by corporations and government agencies; document-review AI that mimics law firm workflows; and AI-assisted due diligence and transactional drafting systems.

Clinical programs and workshops walk students through real-world scenarios: drafting motions with AI assistance, testing multiple versions of arguments, and using generative systems to analyze complex regulatory frameworks. And webinars and conferences offer opportunities to bring in real-world voices to discuss AI’s impact on the profession and the law.

In November, the GW Law Animal Law Program hosted the Artificial Intelligence, Animals, and the Law Conference, bringing together attorneys, scholars, technologists, and advocates to examine how emerging technologies are reshaping the field of animal law. A prime learning experience for students, the conference explored the potential of solving longstanding challenges affecting humans, animals, and the environment, as well as the risks and regulatory gaps that demand careful scrutiny.

At the same time, students are finding their own ways of educating themselves about AI and its legal implications. The student-run Journal of Law & Technology (JOLT) has held a series of symposia looking at AI-related issues, and its latest one is dedicated to the thorny question of authorship and creativity in the age of AI.

These conversations at conferences and webinars are not abstract. Students work through concrete scenarios involving AI-generated text, images, and code, debating how the law should treat each contribution. These offerings reflect the school’s belief that students should not only learn existing doctrine but also help shape the future rules governing new technologies.

Recognizing AI’s growing influence across the legal profession, GW Law has launched a new certificate program that underscores the technology’s importance in modern practice. The program focuses on developing practical, hands-on skills: students draft and revise documents, create summaries, and work across multiple AI platforms while also exploring issues such as platform moderation, privacy, and data security. Those who complete the four-credit program earn a dean’s notation on their transcript, signaling their emerging expertise to future employers.

AI and Fundamentals of Lawyering

When first-year law students walk into Professor Catlin Meade’s Fundamentals of Lawyering class, few expect to confront generative AI as directly as they do legal citations or case briefings. Many arrive uneasy, unsure whether using ChatGPT could violate an academic integrity code or accidentally expose private information. Others have experimented with the tools but remain wary—trained by undergraduate policies to assume that AI is either forbidden or academically dangerous. Meade quickly dismantles both extremes.

“I reject the idea that every student walking in our building is already using AI and relying on it,” said Meade, associate professor of Fundamentals of Lawyering. “Instead, I often see fear, confusion, and a need for guidance.”

She begins not by warning them away from AI, but by teaching them how to think about it. In early fall, after students analyze their first set of cases, she also asks various AI models—Lexis+ AI, ChatGPT, and Microsoft Copilot—to summarize the same cases. The results are “pretty good” but never perfect, and the students quickly realize the catch: they would never have spotted the errors or missing analyses if they hadn’t first done the work themselves. The lesson lands immediately. Lawyers get in trouble not only because AI is inherently flawed, but because they trust it blindly.

“I reject the idea that every student walking in our building is already using AI and relying on it. Instead, I often see fear, confusion, and a need for guidance.”

Catlin Meade
Associate Professor of Fundamentals of Lawyering

From there, Meade reframes AI not as a shortcut, but as a risk environment. Meade believes students cannot develop sound judgment around the ethical use of AI without a foundational understanding of what generative AI is, so she talks to students about how generative AI models work and are trained. During the discussion, she introduces specific Model Rules of Professional Conduct implicated by a lawyer’s use of AI, underscored by real examples of lawyers and judges who have run afoul of the rules.

Yet she doesn’t stop there. She shows them how AI can genuinely improve their writing and analysis: identifying passive voice, creating revision tables, or generating structural options without altering substantive analysis. Students may use AI for their rewritten assignments but must file an AI certification outlining exactly which tools they used and how, and affirming that they independently validated all information, a requirement that mirrors practice in many courts today.

Meade sees these lessons as indispensable for students in their careers. Clients increasingly expect lawyers to use AI to reduce costs, and firms are adopting internal large language model (LLM) offerings like Harvey. Employers even ask about AI proficiency in employment interviews.

“A student who can talk about the ethical, practical use of AI will stand out,” she said, noting her goal is to teach students to be skeptical, competent, and capable risk managers.

The Practice of Law

When Fowler talks about the future of legal education, she doesn’t start with technology. She starts with fundamentals. She is adamant that students must first understand how to think like lawyers before they can appropriately decide if, when, and how to employ the assistance of machines.

“While there is some focus on the use of AI in GW’s Fundamentals of Lawyering, the primary focus of that program is rightfully learning how to do key legal tasks yourself,” she explained, emphasizing that the goal is not to let tools eclipse the development of core skills.

In January, Fowler began teaching one of the school’s newest offerings: The Use of AI and Emerging Technologies in Legal Practice, a one-credit reading and workshop course designed to ground students in the realities rather than the hype surrounding rapidly evolving tools.

“Generative AI is a new tool, but it involves a lot of the same issues as other technologies from a lawyer’s perspective,” she said. “Understand what the technology is doing, understand what might be done with the information you provide, understand its limitations… and understand how to properly query and double-check things.”

A central theme of the course is balance. Students must neither reflexively fear AI nor treat it as a miracle solution. “Just because you have a hammer doesn’t make everything a nail,” Fowler wryly noted.

In her view, the work ahead isn’t about teaching law students to rely on AI. It’s about preparing them to interrogate it, challenge it, and selectively integrate it into the profession with clarity and judgment. “We need to look at it like we do any other legal tool, with a critical eye and an educated mind,” she said.

A New Professional Landscape

Professor Christopher Cotropia's classroom hums with the energy of a profession in transition. In his courses on intellectual property and professional responsibility, he sees the same quiet anxiety on students’ faces that has begun to ripple across the legal world: What does it mean to be an ethical lawyer when machines can draft a paragraph of legalese in seconds? What does competence look like when AI can analyze a case or invent one just as quickly?


“Filing briefs that have misstatements of law, or even made-up statements of the law, is certainly not competence.”

Christopher Cotropia
David Weaver Research Professor of Law

Over the past several years, Cotropia, the David Weaver Research Professor of Law, has reshaped his curriculum to meet that moment. What began as informal conversations in his professional responsibility class about emerging tools has grown into a full-class session devoted entirely to the ethical use of AI. The shift, he explains, was prompted by a steady drumbeat of headlines: lawyers sanctioned for submitting AI-hallucinated cases, courts issuing stern reprimands, and firms scrambling to create guardrails.

“Filing briefs that have misstatements of law, or even made-up statements of the law, is certainly not competence,” he tells students, pointing them to the American Bar Association’s recent guidance on AI and attorney duties.

But Cotropia doesn’t teach AI as a threat. Instead, he frames it as a new professional landscape requiring fluency, judgment, and adaptability. Lawyers cannot simply refuse to engage with these tools, he insists. “Being a competent attorney nowadays means you need to know what it is and how people are using it… You can use it to help your client, but also be aware of the downsides.”

In class, that philosophy becomes hands-on. Students in his intellectual property course receive an AI-generated analysis of a non-compete agreement. Cotropia jokingly calls it a “B-minus” solution. It conflates legal requirements, misses key facts, and papers over its flaws with polished prose. The students’ task is to edit it. The exercise quickly reveals the core lesson: you cannot be a good editor if you don’t know the law yourself. “You can’t just offload the substantive knowledge,” he said, noting that “de-skilling” is possible with an over-reliance on generative AI models.

Cotropia also carves out “safe spaces” for students to experiment. He said many students arrive nervous about violating either classroom expectations or the academic integrity code. Yet experimentation is essential. “If you’re not willing to experiment,” he tells them, “you’re going to get left behind.”

For Cotropia, the goal is not to teach students the AI of today—it will soon be outdated—but to prepare them for decades of technological evolution. The core duties of the profession remain constant, he said. But the lawyers who thrive in the future will be those who can meet those duties with curiosity, critical thinking, and a deep understanding of both the promise and the limits of the tools now shaping the world.

The Next Generation of Attorneys

In the years ahead, algorithms will write, sort, summarize, analyze, and help decide aspects of legal work. But human lawyers will still be the ones ensuring accuracy, fairness, and justice. The most advanced AI cannot replicate the “relationship business” of the law. It cannot build trust. It cannot mentor. It cannot navigate the interpersonal nuances that shape careers and outcomes. AI may accelerate work, but it does not negate the importance of human judgment.

“We don’t want lawyers to lose their critical thinking and let AI sort of take over. Lawyers still have to do the thinking and make the decisions,” said Citi’s Wolff. “You’re still going to have to know the law. You still need people who truly understand the substance to write an [AI] prompt.”

The next generation must be capable of using AI as a tool, challenging it as an adversary, and understanding it as a force reshaping society. GW Law intends to make sure its graduates are ready to do all three, said Dean Matthew.

“A modern legal education must meet students where the profession is going,” said Matthew. “Our investment in AI literacy is an investment in our students’ lifelong competitiveness and their capacity to serve clients with insight, precision, and integrity.”


GW Law Adds New AI Credential

GW Law is launching a certificate program to prepare students to use artificial intelligence (AI) in legal practice.

The Dean’s Recognition for Training on AI and Legal Technology in the Practice of Law is designed to help students apply AI in legal research and drafting. It will also expose them to the ethical, regulatory, and practical implications of using AI tools.

“Failing to teach our students to use AI technology is not serving them as they prepare for careers in the law. The danger is not that our students will be replaced by AI, but that they will be replaced by other people who know how to use AI,” said Laurie Kohn, the Jeffrey and Martha Kohn Senior Associate Dean for Academic Affairs. “I believe this certificate will distinguish us from other schools because it will allow us to breed synergies between our top-rate doctrinal education in cutting-edge issues related to the law of AI and legal technologies and a skill-based education that will develop students’ aptitude in the appropriate, ethical, and effective use of AI.”

Kohn said students will be able to complete the requirements for the certificate in a semester.

Dean Dayna Bowen Matthew said the recognition will demonstrate that “our students are graduating with the ability to use AI effectively, ethically, and with insight into the potential for its use and its misuse.”