Document Type
Article
Abstract
In the summer of 2023, the misuse of ChatGPT by two New York attorneys who filed briefs citing fabricated cases made national headlines. This cautionary tale quickly had company, as reports of other lawyers whose use of artificial intelligence (AI) went horribly wrong filtered in from around the country, including incidents in Texas, Georgia, Colorado, and California. But it was not just errant legal research that was to blame: the cases involved everything from a faulty criminal habeas brief, to flawed, mass-generated eviction pleadings filed by a landlord’s law firm, to a high-profile white collar criminal case in which the convicted defendant blamed his lawyer’s use of generative AI in writing his closing argument. This Article, and its accompanying presentation, begins by discussing these cases as a way of illuminating the multiple areas of ethical risk presented by the use of generative AI. From breaches of the duty of technology competence and the duty of confidentiality to the duties of supervision and independent professional judgment, a lawyer’s use of generative AI can implicate multiple dimensions of ethical concern. Part of the problem, as this Article discusses, is that the use of AI tools has spread faster than lawyers’ grasp of the risks the technology involves. For example, in the recent Wolters Kluwer Future Ready Lawyer Report, seventy-four percent of the attorneys surveyed expected generative AI to be integrated into their practices within the next year, yet a significantly lower percentage of lawyers actually understand AI tools. This Article then examines the responses of stakeholders in the legal profession to generative AI. Multiple state and national bar associations, for example, have appointed task forces to study AI and make recommendations regarding its use. In addition, in states like Florida and California, ethics bodies have issued advisory opinions or recommendations on regulating the use of AI, tackling such unanswered questions as whether a lawyer must obtain the client’s informed consent to use generative AI in the client’s representation. Attorneys must also confront AI policies adopted by law firms and by the legal malpractice carriers that insure them. Finally, this Article analyzes the extent to which measures such as these disclosure policies are a proportional reaction to the examples of lawyer misuse of generative AI, or whether they are an overreaction, a “solution in search of a problem.” With the landscape of potentially reportable generative AI tools constantly expanding to include most search engines and word-processing applications, one must ask: can the traditionally risk-averse, technologically backward legal profession adapt? This Article, and its accompanying presentation, hopes to address this and other questions posed by attorney use of generative AI.
Recommended Citation
Hon. John G. Browning, Robot Lawyers Don’t Have Disciplinary Hearings—Real Lawyers Do: The Ethical Risks and Responses in Using Generative Artificial Intelligence, 40 Ga. St. U. L. Rev. 917 (2024).
Available at: https://readingroom.law.gsu.edu/gsulr/vol40/iss4/9