Catlin Meade, Associate Professor of Fundamentals of Lawyering, talks about how she integrates AI into her classroom. Check out the video or read the conversation below to learn all about how GW Law ethically uses AI in the Fundamentals of Lawyering Program.
Why is it important for students to learn how to use AI?
It's important for 1L students to learn how to use AI because the core of being a good lawyer is exercising sound judgment. AI is here, and it is not going anywhere. Pretending it doesn't exist by prohibiting its use does not help our students grow. It's also closely tied to all of our ethical constraints as lawyers. It involves several professional responsibility tenets, including confidentiality, diligence, and competence. Teaching AI in conjunction with those ethical responsibilities gives our students the information they need to make good choices when using AI. Furthermore, teaching them more about the AI itself — How does it work? What can it do? More importantly, what can it not do? — helps them make better decisions when they're out in the world, starting as early as their first year after 1L. It all wraps back to making sound judgments. Teaching them and giving them the tools to make those sound judgments is really important.
How do you incorporate AI lessons in your 1L classes?
I incorporate AI into my classroom in a number of different ways. I started as early as class three of the fall this year. Our first assignment is always a closed universe of cases, so we give our students five cases to answer the question they've been presented. In the class after I assigned those cases for the students to read, I shared the results from several different LLMs (large language models), or generative AI models, that I had asked to summarize the cases. Some of them did an OK job. Most of them made something up that was entirely incorrect. I showed those to the students to highlight very early on that they cannot rely on AI to do that type of work for them, at least not where those models are right now.
That was really eye-opening for them because it showed them, "I can't just type a case name into this AI and get results that are going to help me do the task at hand." Even the ones that had decent summaries were so high level that the students realized they weren't talking about the reasoning, which is the most important part of the analysis. So we do that really early to level-set where we are with AI and its uses.
Fast forward to the spring semester: I gave a lecture on what generative AI is, generally speaking — how it works, what it can and cannot do, and the pros and cons of using it — and then I also lectured on how to prompt an AI, specifically the new LexisNexis and Westlaw AI tools for legal research. They were able to use those particular AIs to do the legal research for a trial brief assignment.
After that, we did a policy advocacy exercise loosely related to the problem from our trial brief. The students were assigned to different advocacy groups and had to make statements before a task force. I taught them how to prompt the AI to help specifically with writing and drafting. We used several different models — ChatGPT, Copilot, Claude, and others — to see how they compare. The students were then able to use press releases from the organizations they had been assigned to in order to train the LLM and ask it to create a press release based on the advocacy they had just done. So they used the LLM to draft the press release, and then they had to go through and edit it before they turned it in to me.
How do you teach students to use AI ethically?
It's important that we teach both sides of AI. There are a lot of benefits that can come from AI as it develops, particularly in our field, but there are also a lot of drawbacks, mainly around the mistakes it makes and the potential to expose confidential client data and other sensitive data.
We're really focused on phasing it in. We don't start on day one with AI. I started early in the fall semester by just highlighting the ways in which AI makes mistakes doing some of the routine tasks you have to do as an attorney, and then we had conversations about how, had they not done the task themselves, they wouldn't have known the AI had made a mistake. So they won't know how to catch mistakes made by an AI if they don't have the underlying skill set. We took the same approach with research: they learned how to do research in the first semester, and then in the second semester we integrated AI-powered research on our databases for our spring projects.
Additionally, we as professors keep abreast of how AI is changing, how it's being used in the legal field, which organizations are allowing or prohibiting it, and what types of AI they're using, so that we can make sure our students have the most up-to-date information and that we're allowing the appropriate amount of use for their future. But the bottom line is that they learn the skills first, so they know how to do the work without AI, and then we show them ways AI is actually effective at helping with their work without them becoming reliant on it. While doing this, we refer to and remind them of the ethics interlaced with the use of AI.