The Ethical Use of Generative AI

This February, the Seattle IP Inn of Court devoted its monthly meeting to one of the most pressing topics facing legal practitioners today: the ethical use of generative artificial intelligence in law practice. The program was presented by Group 4, featuring Inn Mentorship Chair Aaron Weisman, and drew on guidance from the American Bar Association, the Washington State Bar Association, and emerging ethics authority to give members a practical, grounded framework for navigating this rapidly evolving landscape.

The session was substantive, timely, and — true to Inn form — deeply engaging. Here is what members covered.

The Big Seven: Duties That Govern AI Use in Law Practice

The presentation opened with what the group called the “Big Seven” — the seven duties under the Washington Rules of Professional Conduct (RPC) that every lawyer must consider when using generative AI tools. As the WSBA has noted, these are not exhaustive, but they represent the most commonly encountered ethical issues to date:

  1. Competence (RPC 1.1)
  2. Diligence (RPC 1.3)
  3. Confidentiality (RPC 1.6)
  4. Communication (RPC 1.4)
  5. Candor Toward Tribunals (RPC 3.3)
  6. Supervision of Lawyers and Nonlawyers (RPCs 5.1 and 5.3)
  7. Billing (RPC 1.5)

Each duty was examined in turn, with practical steps members can take to comply.

Competence: Know the Tool You Are Using

RPC 1.1 requires lawyers to provide competent representation, and Comment 8 makes clear that competence extends to the technology a lawyer uses in practice. Importantly, competence does not require perfection, nor does it demand that every lawyer become an early adopter of every new legal technology. It does require a baseline understanding of the AI tool at hand: its capabilities, its limitations, and its risks.

Practical steps include: understanding the limitations and potential risks of the AI tool being used; critically analyzing AI-generated output for accuracy and relevance; and exercising independent professional judgment before relying on any AI-generated content.

Diligence: Efficiency Does Not Replace Scrutiny

RPC 1.3 requires lawyers to act with reasonable diligence and promptness. The key insight here is that diligence is tethered to competence — if an AI tool promises to make a task more efficient, the lawyer still must use it with the requisite technical skill. Speed is not a substitute for scrutiny. Members were reminded to thoroughly review AI-generated output for legal and factual soundness, and to independently verify cases and legal arguments the AI presents.

Confidentiality: What You Share With AI Matters

This area generated significant discussion. RPC 1.6 prohibits a lawyer from revealing client information and requires reasonable efforts to prevent unauthorized disclosure. When it comes to AI tools, confidentiality concerns arise both from what information a lawyer inputs into a system and how that system uses it on the back end.

Practical guidance included: carefully reading end-user agreements and privacy policies before using any AI tool with client data; and, for lawyers using AI-powered chatbots for client intake, including appropriate disclaimers about the attorney-client relationship and clearly explaining to prospective clients whether their information will be kept confidential.

Communication: Clients Have a Right to Know

RPC 1.4 requires lawyers to reasonably consult with clients about the means by which their objectives will be accomplished, and to explain matters to the extent necessary for clients to make informed decisions. Applied to AI, this means disclosing the use of AI tools in legal research and drafting, obtaining informed consent before inputting confidential client information into an AI system, and explaining both the potential risks and the reasonably available alternatives.

Candor: You Are Responsible for What You File

RPC 3.3 prohibits lawyers from making false statements of fact or law to a tribunal — and it extends to AI-generated content submitted in court filings. The rise of AI “hallucinations,” where a tool confidently cites nonexistent cases, makes this duty more critical than ever. Members were reminded to verify the accuracy of all AI-generated content before submission and to alert the court immediately if material evidence is later discovered to be false.

Supervision: AI Training Is Now Part of Firm Management

RPCs 5.1 and 5.3 govern a lawyer’s duty to supervise other lawyers and nonlawyers, including independent contractors and vendors. This duty now extends to ensuring that anyone in a firm or legal department using AI tools has received adequate training to use those tools consistently with their ethical obligations. Best practices include establishing clear AI use policies, providing regular training, and carefully evaluating the contractual and technical safeguards offered by any AI product.

Billing: Transparency Is Non-Negotiable

RPC 1.5 prohibits unreasonable fees and requires transparency about the basis for billing. When AI tools are involved, lawyers must disclose how AI affects billing, charge fees that reflect the actual time spent on AI-assisted tasks, and obtain informed client consent before passing on the cost of AI tools as a matter expense. If a product carries a per-use cost, the client should know before the lawyer uses it on their matter.

A Note on the Evolving Landscape

The program drew on several authoritative sources, including ABA Formal Opinion 512 (July 2024) on generative AI tools, WSBA Advisory Opinion 2025-05 on AI-enabled tools in law practice, and the WSBA Legal Technology Task Force Report published in September 2025. Members left with a clear message: the ethical rules governing AI use are not new rules — they are existing professional duties applied to a new context. The framework is already in place. The obligation is to apply it thoughtfully.

This month’s meeting was a reminder of why programs like this matter. As generative AI becomes an increasingly standard part of legal practice, the Inn remains committed to preparing its members to use these tools responsibly, ethically, and in full compliance with their professional obligations.

Special thanks to Group 4 and Inn Mentorship Chair Aaron Weisman for an exceptional presentation.