Shaping Artificial Intelligence and the Law

March 13, 2026
Illustration of a brain downloading many streams of information

Written by Mary A. Dempsey
Illustration by Donald Clark

 

'In my opinion, no other law school faculty is covering the breadth of AI issues and bringing the variety of opinion, perspective, and real world impact as the GW Law faculty.'

Dean Dayna Bowen Matthew

 

GW Law faculty sit at the forefront of America's discourse about artificial intelligence (AI). These preeminent legal scholars are helping to shape the law surrounding AI and its implications for future lawyers. Their groundbreaking doctrinal scholarship in top legal journals examines AI’s disruption of privacy law and intellectual property rights, police surveillance, contract law, and even democracy itself.

“In my opinion, no other law school faculty is covering the breadth of AI issues and bringing the variety of opinion, perspective, and real world impact as the GW Law faculty. In the brave new world of artificial intelligence, we are making good on our vision and promise to be the school that informs lawmakers, policymakers, and decisionmakers around the world,” said Dean Dayna Bowen Matthew.

The public face for much of this work is the GW Center for Law and Technology: The Bernard Center. Last year, the new center launched the online GW Journal of Law and Technology (JOLT); content in its inaugural issue ranged from AI governance to privacy regulation. The center’s faculty co-directors are Robert Brauneis, the Michael J. McKeon Professor of Intellectual Property Law, and Daniel J. Solove, the Eugene L. and Barbara A. Bernard Professor of Intellectual Property and Technology Law. Both are producing front-line scholarship on AI’s threats to the law, while the Bernard Center’s two subject matter deans—Adrienne Fowler and John Whealan—forge partnerships between GW Law and the public and private sectors to train students and influence the law.

Together, these intellectual leaders place GW Law at the forefront of finding solutions to AI’s thorniest legal issues. “I teach copyright law, and I used to tell my students that the most interesting time for copyright law was the late 1990s when the internet was becoming the big thing. But if you ask me today, I have to say that now is the most interesting time,” said Brauneis.

In addressing copyright and AI training, he focuses on areas where technology may leave human authors, artists, and other creatives unprotected by copyright law. His latest article, “Copyright and the Training of Human Authors and Generative Machines,” appears in The Columbia Journal of Law & the Arts. Brauneis says fair use has emerged as a central conflict for generative AI as tech companies scoop up copyrighted material and content creators cry foul.

“The peril is that copyright law could cease to give adequate incentives for human authors to create things. There is also the argument that if much of human creativity is replaced by AI works … we will lose what is an essential part of our humanity, the exploration of the world through creation,” Brauneis said. “I happen to hold the position that the training of generative AI models is not fair use. And yet, the two courts that have approached this issue are seeing it the other way.”

Brauneis manages the Database of AI Litigation, which currently tracks about 260 cases, of which about 40 involve uses of works under copyright to train generative AI models.

“This fair use issue is going to play out over the next four or five years,” he said. “If I had to guess, I’d say it will eventually end up before the Supreme Court.”

 
Image
Robert Brauneis

'There is also the argument that if much of human creativity is replaced by AI works … we will lose what is an essential part of our humanity, the exploration of the world through creation.'

Robert Brauneis
McKeon Professor of Intellectual Property Law; Faculty Co-Director, The Bernard Center; Co-Director of the Intellectual Property Law Program

 

In addition to generating scholarship, the Bernard Center underpins GW Law’s ability to convene prominent thinkers on issues related to AI. In 2025, it held a symposium on digital surveillance. This year, it hosted a February symposium to examine authorship and inventorship with AI. And on October 2 and 3, it will bring together some of the country’s most prominent copyright scholars for discussions focused on “The Future of the Copyright Act.” The gathering, 50 years after the enactment of the Copyright Act of 1976, includes presentations and papers that will be published in the Journal of the Copyright Society.


The Next Generation of Lawyers

GW Law's deep bench of AI-related expertise is amplified by the school’s collaborative culture. Cooperation across branches of law not only positions the school to effectively impact policies and law; it also provides the next generation of lawyers with exemplary opportunities for learning. AI discussion is woven into courses, seminars, and—beginning this spring—a new certificate program in AI and the law.

“GW has a wonderful collection of scholars who are thinking about the hardest issues around AI,” said Professor of Law Andrew Guthrie Ferguson, a national expert on police surveillance technologies and their chilling effect. “The fun part about being here is that there are intersections of AI across legal domains, and that is helpful in generating new scholarship ideas and new thoughts. You can go across the hall and talk to someone who is looking at a different area of AI—and that is very valuable.”

In her seminar on AI bias and discrimination, Visiting Associate Professor of Law and Privacy and Technology Law Fellow Christina Lee pushes students to look at the pervasive role AI plays in people’s lives.

Image
Christina Lee

“AI is going to be a consequential technology that touches every aspect of our lives—and it will interact with and stretch the law in interesting ways,” Lee said. “The way it touches our lives is different from what has come before it. The challenge is figuring out which legal frameworks still work, where they can be made to work with stretching, and where we need new legal frameworks.

“That is going to be the work of today’s and tomorrow’s lawyers. As we think about educating lawyers of tomorrow, it is important for our students to understand what is going on to bridge the gaps in this new technology.”

Associate Dean for Academic Affairs and Associate Professor of Law Aram Gavoor is experimenting with AI applications in the Administrative Law, Issues and Appeals Clinic that he directs. He notes that there is currently no commercially available AI model that is sophisticated enough to replicate human research of complex U.S. Supreme Court briefs. But he has authorized the eight students in his appellate clinic, which specializes in administrative law and public law issues on nationally significant questions, to use AI for general research that can be supplemented with individual non-AI research.

He says the students are careful to ensure that privileged information on clients, concepts, and legal issues is not exposed, even in a Mosaic AI-type setting. That means AI prompts must be general.

“It is working well but, ultimately, we are also exploring different domains for privileged legal research using AI. That vetting process is an intentional and thoughtful one,” Gavoor said. “The students appreciate that the clinic is tech-forward while also recognizing that in the legal profession we need very careful professional responsibility guardrails.”

Image
Aram Gavoor

Outside the clinic, Gavoor is engaged in research that addresses AI in the face of energetic federal government tech deregulation.

“My research asks whether and how the marketplace, which desires stability, can engage in self-regulatory behavior for AI. I am looking at how participants in the AI industry can coalesce under self-regulatory standards based on market principles, so as to mitigate its general public use for cyberattacks as well as novel security vulnerabilities,” he explained.

He is especially interested in nonpartisan alignments aimed at restricting the deployment of powerful AI systems capable of cyberattacks or chemical, biological, radiological, nuclear, or explosive weapons.

“It makes sense for the industry to make some norms to reduce—or mitigate altogether—those very unfavorable outcomes,” he said.

Similar to Gavoor’s law clinic, experiential learning also anchors an AI-focused project involving three students working under the supervision of Adrienne Fowler, the deputy director of the Bernard Center. The students are collaborating with the Lawyers’ Committee for Civil Rights Under Law to develop Freedom of Information Act requests focused on the use of AI facial recognition technology by housing agencies and airports across the country.

“Our students leave law school not only understanding what the law does but also what it will be shaped in the future to do—and how they can be the next generation of leaders who construct legal regimes for new technology,” said Dean Matthew.


AI's Regulatory Quagmire

A jumble of laws—old and new—has begun to address the vast array of unanswered legal questions presented by AI’s new capabilities. But these laws leave many gaps in our understanding. One of those novel issues is how to protect individuals’ privacy in the face of artificial intelligence. Professor Dan Solove, the Bernard Center’s other faculty co-director, is a preeminent voice working to unravel this regulatory quagmire and fill the existing void. He is one of the world’s most-cited scholars on privacy law. Solove’s latest book, On Privacy and Technology, was published last year.

 

'Overall, AI is not an unexpected upheaval for privacy; it is, in many ways, the future that has long been predicted.'

Daniel J. Solove
Bernard Professor of Intellectual Property and Technology Law; Faculty Co-Director, The Bernard Center

Image
Daniel J. Solove
 

Like his recent Florida Law Review article “Artificial Intelligence and Privacy,” the book outlines the problems that AI poses to privacy and suggests regulatory frameworks to mitigate that conflict. Solove believes new privacy laws, to be effective, must make fundamental changes in the way companies do business.

“Overall, AI is not an unexpected upheaval for privacy; it is, in many ways, the future that has long been predicted,” Solove wrote in the Florida Law Review. “But AI glaringly exposes the longstanding shortcomings, infirmities, and wrong approaches of existing privacy laws.”

Solove says AI poses substantial threats to privacy, and the dearth of a comprehensive privacy law makes any remedies a patchwork response. In his recent paper in the California Law Review, “The Great Scrape: The Clash Between Scraping and Privacy,” he contends that the automated extraction of data on the internet, which is unfolding at an unprecedented scale, violates nearly all the key principles of privacy law. What is needed, he says, is “a radical rethink of how privacy law addresses scraping.”

Protecting the Promise of AI

In contrast, Oppenheim Professor of Law Michael Abramowicz sees more promise than peril. Abramowicz says legislation is difficult precisely because of the speed with which technology is changing, and he believes premature application of the law could prevent crucial technology from emerging. Abramowicz also sees specific AI benefits in the legal field, such as bringing down the cost of legal services and making jury trials more accessible again.

“With AI, the death of the trial is going to reverse,” he said.

He also predicts that AI in the near future could enable young lawyers to open their own law firms and compete with senior lawyers.

 
Image
Michael Abramowicz

'With AI, the death of the trial is going to reverse. ... We could be entering upon a golden age of law and a golden age of lawyering.'

Michael Abramowicz
Oppenheim Professor of Law

 

“We could be entering upon a golden age of law and a golden age of lawyering,” said Abramowicz, who is also the law school’s associate dean for strategy and innovation.

His latest work touching on AI appears in the current issue of the George Washington Law Review. The article, “Major Technological Questions,” is co-authored with John F. Duffy of the University of Virginia School of Law and argues that courts should be skeptical of applying existing laws to regulate the “rush” of new AI technologies. They caution instead that lawmakers must first develop experience with new technologies before making important regulatory decisions. In their words, courts must “restrain the dead hand of the past from thoughtlessly tyrannizing the present and future.”


AI, Privacy, and Policing

Privacy rights are also central to the scholarship of Professor of Law Andrew Guthrie Ferguson, a national expert on police surveillance technologies, who joined GW Law last year. His work on predictive policing, facial recognition, and video analytics appears in top law journals. His latest book, Your Data Will Be Used Against You: Policing in the Age of Self-Surveillance, comes out this year.

“Part of what I’m teaching brings up how AI is changing privacy and surveillance, something that law students should be thinking about as consumers, as lawyers, as future legislators, as judges,” Ferguson said. “They should think about it as thought leaders because they will be confronted with these issues before many other people.”

He is especially troubled by the growing use of AI to generate police reports from the audio recordings in police cameras.

“Almost 95 percent of cases will get resolved before trial, without seeing whether AI got it right, especially with low-level felonies,” Ferguson said. “This [AI] document will go to the prosecutor and the judge. It will be the basis of a constitutional motion to suppress. It will probably be the basis of plea bargains and probation revocations.

 

'Part of what I’m teaching brings up how AI is changing privacy and surveillance, something that law students should be thinking about as consumers, as lawyers, as future legislators, as judges.'

Andrew Guthrie Ferguson
Professor of Law

Image
Andrew Guthrie Ferguson
 

“In other words, the main document of fact will be an AI-generated thing that we put so much weight on,” Ferguson continued. “I think the simple solution is that maybe we allow it as a transcript, but not a police report.”

He is also concerned about the proliferation of public cameras for law enforcement, citing privacy violations, the potential for mass surveillance, algorithmic bias, and a lack of regulation.

“As AI technology turns cameras into something new and more powerful, we are changing the balance of power between police and citizens. We are creating an opening for an authoritarian government to misuse technology,” he said. “We’re building the technology and funding the technology, and we’re not debating its risks and rewards.”


AI: A Threat to Democracy?

Among the plethora of AI-focused scholarship at GW Law—including AI’s impact on civil rights, copyright, privacy rights, and beyond—the work of Patricia Roberts Harris Research Professor Spencer Overton addresses a fundamental question: Is AI a danger to multiracial democracy?

Overton’s work is the first to comprehensively examine the extent to which AI—and the legal frameworks that regulate it—influence race and democracy. His article in the Iowa Law Review, “Overcoming Racial Harms to Democracy from Artificial Intelligence,” details how AI and related technologies are transforming the U.S. electoral system, from “deepfake” recordings and videos to racial bias in automated election administration to the potential for AI-empowered hackers to inundate local election offices.

Even without malicious intention, bias and flaws embedded in AI datasets could affect elections and policymaking well into the future, says Overton, the founder and faculty director of GW Law’s Multiracial Democracy Project. He adds that existing laws, including the Voting Rights Act, are no match for the threat.

 
Image
Spencer Overton

'We can’t just rely on technologists to make technology and policy decisions. Key decisions that are democratic determinations require input from across society.'

Spencer Overton
Faculty Director, Multiracial Democracy Project

 

“To some, racial diversity is no longer considered a public good, and I believe this approach also shapes our government’s current approach to AI governance,” he said. In “Ethnonationalism by Algorithm,” his forthcoming article that will appear in the Howard Law Journal, he argues that the current White House has intentionally used federal AI policy to advance a broader agenda focused on dismantling racial diversity. Overton believes entrenching racial inclusion into AI law at its formative stage could shape the trajectory of a U.S. democracy that is growing ever more diverse. But he sees little government or tech industry interest in doing that.

His upcoming paper in the Utah Law Review, “Analyzing the Benefits of Artificial Intelligence to Racially Inclusive Democracy,” acknowledges that certain AI tools, if applied appropriately, could help facilitate language translation, empower grassroots organizers, reduce turnout gaps, and increase government responsiveness to communities of color; however, creating tools that are linked to just a handful of tech giants gives those companies outsized influence.

“We can’t just rely on technologists to make technology and policy decisions. Key decisions that are democratic determinations require input from across society,” Overton said. “This is an urgent moment … an important moment to really envision the future we would like to see.”

Overton says community groups, civil rights organizations, and philanthropy can be deployed to ensure that emerging technologies strengthen—rather than weaken—multiracial democracy. He offers best practices for those efforts in Technology, Multiracial Democracy, Community Power, and Philanthropy, which is to be published by the Knight First Amendment Institute at Columbia University.

Overton’s work has been the focus of conferences at the nation’s most prestigious law schools, including the University of Pennsylvania, Harvard, and the University at Albany. Overton will explore AI’s influence on equality, bias, and voting at a March conference organized by his Multiracial Democracy Project in partnership with Harvard Law School and Stanford Law School. The event will be held on the Stanford campus.


Beyond the Blockchain: Teaching Law in the Age of AI and Crypto

Image
Kristin Johnson

Lyle T. Alverson Professor of Law Kristin Johnson, one of GW Law’s newest faculty members, brings an impressive résumé of work at the crossroads where cutting-edge technologies meet the global financial system.

Her scholarship examines the rise of AI in finance as well as the creation of distributed digital ledger technologies, such as blockchain, that have spurred the explosive growth of cryptocurrencies in commercial and consumer financial transactions. She is also a leading voice in the development of international standards to prevent and defend against cyberthreats that plague both traditional financial institutions and crypto markets.

“For many decades, financial services and banking have relied on predictive technology,” said Johnson, who joined the faculty in September. “Artificial intelligence is changing how the largest financial institutions operate and creating pathways to better manage risks, reduce frictions, and identify fraud. For many, AI will significantly alter regulatory and compliance programs.”

Johnson said accelerated adoption of AI could make financial systems more accessible, enabling greater financial inclusion in the United States and globally. “At the same time,” she added, “there are risks—known and emerging—that we must carefully manage to effectively protect vulnerable consumers.” Those concerns include data privacy, security, and integrity.

During the Biden administration, Johnson was a commissioner on the Commodity Futures Trading Commission and, later, served as assistant secretary for financial institutions at the Department of the Treasury. Before joining academia, she worked in the private sector, including as vice president and assistant general counsel in the Treasury Services Division at JP Morgan and as an analyst at Goldman Sachs.


AI and the Doctrinal Collapse

In addition to teaching one of the first courses nationwide on AI law and policy, Associate Professor of Law Alicia Solow-Niederman is causing a quake with her work on how the legal regimes that govern data are failing. Focusing on privacy law and copyright law, she exposes how the boundaries between these partially overlapping, but distinct, bodies of law are blurring and becoming “illegible.” This has led to a phenomenon she calls “inter-regime doctrinal collapse.”

In “AI and Doctrinal Collapse,” her forthcoming article in the Stanford Law Review, Solow-Niederman contends that AI developers are able to manipulate copyright and privacy law to their advantage, with individual and systemic costs. Big corporations are rewarded by their ability to sidestep individual privacy rights and acquire creators’ works through privacy policies and terms of service.

 
Image
Alicia Solow-Niederman

'When a leading AI developer can simultaneously argue that data is public enough to scrape ... and private enough to keep secret ... something has gone seriously awry with how law constrains power.'

Alicia Solow-Niederman
Associate Professor of Law

 

She argues that these issues become clear only when they are examined across legal regimes.

“We don’t have a strong enough information privacy law, and we don’t think enough about the political economy of data and data acquisition and how that interacts with legal regimes like copyright law,” she said. “We’re really good at looking within one issue of law—privacy law or copyright law—but what if there are two frameworks that both regulate data, and they have very different laws and associated normative goals?”

Her analysis addresses who can exploit the existing legal structures and to what ends. Solow-Niederman is most concerned when inter-regime doctrinal collapse disproportionately helps the “haves” and permits private claims that threaten the public accountability and legitimacy of law itself.

As she explains in the Stanford Law Review article: “When a leading AI developer can simultaneously argue that data is public enough to scrape—diffusing privacy and copyright controversies—and private enough to keep secret—avoiding disclosure or oversight of its training data—something has gone seriously awry with how law constrains power.”


Overseeing Federal AI Procurement

For scholars tracking AI's impact on the law, keeping pace is one of the toughest challenges. Jessica Tillipman, associate dean for government procurement law studies, often finds herself discussing Federal Acquisition Regulation updates that have been released just hours before class. These ongoing procurement reforms, significant on their own, are unfolding in tandem with fast-moving debate over how to regulate AI in federal acquisition. “You’re chasing a moving target,” said Tillipman, whose upcoming article “Buying Blind: Corruption Risk and the Erosion of Oversight in Federal AI Procurement” will appear in the winter 2026 issue of Public Contract Law Journal.

“The draft for the Public Contract Law Journal started as one article in December 2024. By April, I had to rework it once the current government began changing direction. Then it became something else entirely when the Trump administration released America’s AI Action Plan,” Tillipman said.

 

'You’re chasing a moving target. ... They’re getting the agencies to use AI without transparent governance.'

Jessica Tillipman
Associate Dean for Government Procurement Law Studies

Image
Jessica Tillipman
 

“Then, as I was writing about that, the GSA was coming out with $1 deals,” she added, referring to the General Services Administration’s agreements with several major AI companies. The companies offered their AI models to federal agencies at a discounted rate of $1 per agency for one year.

“They’re getting the agencies to use AI without transparent governance,” Tillipman said.

Tillipman argues that the rapid deployment of these technologies, layered on top of already shifting procurement rules, is dismantling many of the guardrails that have traditionally reduced integrity risks in the federal acquisition system. Policy changes are leaving agencies vulnerable to both familiar forms of corruption and new avenues of exploitation. Yet, she expects little restraint as the current White House continues to align with industry on AI development and deployment.

GW Law’s distinguished scholarship underpins its capacity to facilitate crucial conversation and policy as the country grapples with the acceleration of technology aimed at dramatically changing the way people live and work. The law school’s breadth of expertise also ensures that students are exposed to these issues and develop the intellectual skills necessary to navigate this new landscape.

“We promise students that GW Law will equip them to be problem solvers in a complex, dynamic, and sometimes polarized world. Where the fast-changing world of AI is concerned, we more than make good on that promise by delivering a staggering range of the nation’s leading legal AI scholars who are not only thinking and writing about these issues and teaching students to understand them in the classroom, but are also helping to shape the direction of AI law as the nation’s most impactful AI public intellectuals,” said Dean Matthew.