Technology MCLE - Beyond the Hype: Real World Applications and Ethical Implications of AI in Legal Practice

Writer: Armilla Staley-Ngomo

Introduction

On February 28, 2025, over one hundred and thirty professionals logged in to attend the San Diego Federal Bar Association’s CLE webinar titled “Beyond the Hype: Real World Applications and Ethical Implications of AI in Legal Practice,” presented by U.S. Magistrate Judge Allison H. Goddard; James Cooper, Professor of Law at California Western School of Law; and Kashyap Kompella, a widely recognized thought leader on AI and emerging technologies. The panelists provided a wealth of information about how to use and apply AI tools to the legal field, as well as provided cautionary tales about the importance of confidentiality and upholding ethical standards when engaging with these tools.


Kashyap Kompella

Mr. Kompella began the presentation by explaining general concepts and terminology of artificial intelligence (AI). He explained that the term “artificial intelligence” was coined 69 years ago, and referenced recidivism prediction software as an early example of AI. According to Mr. Kompella, deep learning involves sets of artificial neurons; however, there is no real similarity between the way AI and human intelligence work. Generative AI models have the capacity to generate new content, and large language models use algorithms that learn to predict the next words in a text. Examples include the Transformer architecture, which Google introduced in 2017, and OpenAI’s ChatGPT. AI can generate text that is grammatically correct and has the semblance of elegance. Mr. Kompella warned, however, about hallucination: content generated by a large language model that seems accurate but is made up. This has been a concern since the advent of ChatGPT.



Research shows that up to 50% of legal tasks could be accomplished through the use of AI, a far higher share than in other industries such as healthcare, where the figure is closer to 30%. However, there are challenges to generative AI use in law, as legal data is largely proprietary. While large language models have limitations, there is also technology to help improve their accuracy. The technology is rapidly changing, and terms such as “professional grade” or “lawyer grade” AI are becoming more prevalent.


U.S. Magistrate Judge Allison H. Goddard

Judge Goddard discussed the need to balance AI with confidence in the judiciary, especially as courts and judges consider concerns related to confidentiality, information security, bias, and complacency. Judges can use AI to simplify verdict forms, to describe the technology in a patent case at various reading levels, and even to address an attorney who may have overstated a case in a brief.


For example, judges can upload parties’ briefs into AI tools such as Claude or Westlaw Quick Check Judicial to create a dashboard report that identifies the cases relevant to both sides and whether a case was presented by the movant, the opponent, or both. This is a great way for judges or law clerks to “amp up” their research: the “cited authority” tab can help ensure that the cases cited by the parties are still good law, and a quotation analysis can help identify whether a party may have described a case out of context. Judge Goddard also uses these AI tools to check her own work, uploading her orders to confirm that her citations are good law, her quotations are in order, and she has not missed any key authority.


Because these AI tools provide a court-issued ID and are part of a closed system, judges can use them more confidently for research, including to create databases that can be shared across designated users. Another example of how these AI tools can be used effectively is by uploading all of the transcripts and trial briefs from a trial and asking the program to create a timeline of events. The timeline contains pinpoint cites that take the judge straight to the place in the record where each event appears. Finally, Judge Goddard explained that these AI tools can be used to create a table of all the objections made during a particular witness’s testimony.

Google NotebookLM can also be used to upload a collection of documents and create a podcast of the documents, which Judge Goddard has used in social security appeals. This has made it easier for her to search her own orders when she can only recall some of the facts. She also noted that law students may begin their externships or clerkships with experience using CoCounsel 2.0, so judges should be aware of its use and significance.


James Cooper 

Professor Cooper spoke to the audience about how AI tools can be used responsibly by litigators. One example is using them to perform “predictive analytics”: the tool can predict the potential outcome of a particular case by recognizing patterns and trends in similar cases, precedent, circuit decisions, and a judge’s prior rulings. The tool can not only predict how the judge might rule in a particular case, but can also provide an initial case evaluation, forecasting whether the case will settle, win, or lose, and stating the bases for its conclusion. The tool can also provide insights about timelines, litigation costs, and case duration. Examples of these predictive AI tools include Bloomberg Law and Lex Machina.

Notably, the practice of “predictive analytics” applied to judges is illegal in France, where it can result in a conviction and a prison sentence of up to five years. France (like other civil law countries) is primarily concerned with the privacy of judges and the use of judges’ names, seeking to preserve judicial anonymity. The European Union has also passed legislation reflecting similar concerns. Common law systems, by contrast, are more open to AI technology and do not treat it as an “unsupervised machine lawyer.”


The ABA has taken the position of folding this technology into the ABA Model Rules of Professional Conduct (Rule 1.1, Comment 8), while the State Bar of California set out guiding principles on AI less than a year after ChatGPT was released, specifically addressing how attorneys may charge clients for work generated by AI. Finally, Professor Cooper posed some questions to consider regarding the future of AI for lawyers: Can AI inventions be patented or copyrighted? What counts as fair use when it comes to data scraping? What about unauthorized training?


Conclusion

Toward the end of the webinar, the panel provided some additional insights and takeaways for the audience. Judge Goddard noted that it is incumbent on lawyers to have a voice in the AI system, stating that a “human always needs to be in the loop,” and that, to build a good and accurate body of case law, “people need to use AI.” She added that attorneys also need to stay involved in the community to learn more about the ethical duties and considerations of using AI.


Professor Cooper cautioned that “AI is moving, and is moving fast. There is an impetus for us to know what is going on, and attending CLEs on the subject. Cite, but verify.” And Mr. Kompella closed out the webinar by noting that “we always need to be careful of overreliance on AI tools.”


The written materials for this program can be found here, and a recording can be viewed here.


The San Diego FBA thanks the panelists for sharing their time and expertise with our community, 2022-2024 Board Member, Sanjay Bhandari, for organizing the webinar, and Vice President of Strategic Relations, Armilla Staley-Ngomo, for providing additional programming support.
