The twice-monthly “Dear Ethics Lawyer” column is part of a training regimen of the Legal Ethics Project, authored by Mark Hinderks, former managing partner and counsel to an AmLaw 200 firm.

Q: Dear Ethics Lawyer, Seemingly out of nowhere, the world is abuzz about ChatGPT and other generative AI tools capable of nearly instantaneous creation of writings that address complex questions, including briefs, memos and other legal documents. Depending upon who you listen to, this is either the end of human usefulness, an incredible tool to magnify our efforts, or a risky novelty riddled with false information. From a legal ethics standpoint, should our law firm ban it, use it or something in between?

A: Generative AI tools likely hold great promise for increasing the efficiency of lawyers through rapid access to information and construction of documents. However, these tools are still in early stages of development and present professional responsibility issues. In generating answers, they may not always distinguish between correct and incorrect information, and in some instances may “hallucinate,” i.e., simply fabricate answers and underlying information that are false. In one widely reported example, Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y.), counsel for plaintiff were ordered to show cause why they should not be sanctioned for using ChatGPT to write a brief they submitted that contained citations to judicial decisions that did not exist, but which ChatGPT created and incorporated into the brief.

In general, inclusion of false or misleading information in briefs or other legal documents would violate a lawyer’s duties of competence (Model Rule 1.1) and candor to the tribunal (Model Rule 3.3), and would likely be misconduct under Model Rule 8.4(c) (conduct involving misrepresentation) and (d) (conduct prejudicial to the administration of justice), perhaps among others. In addition, sharing information relating to a client representation with a generative AI tool may raise confidentiality issues under Model Rule 1.6 or waive the attorney-client privilege. There may also be issues concerning violation of intellectual property rights in information gathered and used by the AI tool.

These lead to three recommendations from a professional responsibility (and risk management) standpoint:

  1. Results provided by generative AI tools should not currently be relied upon by lawyers (or those supporting them) without independent verification of all information.
  2. Information relating to client representation should not be input or submitted to the generative AI tool without informed client consent and consideration of any privilege waiver issues.
  3. Care should be exercised concerning any intellectual property issues related to use of information provided.

There may be uses for generative AI, and for particular generative AI tools, that, if carefully monitored and checked, add efficiency to client representation. As the tools become more sophisticated, safeguards are built in, and practical experience develops, these uses may expand over time. But for now, these tools should be used only with great caution, or not at all.