Using GenAI as a legal tool


One of the key use cases for generative AI identified by observers and analysts is legal work. There are many different reasons for this — legal analysis is rule-based, legal texts are often well-structured, and there are well-defined corpora of texts that should be taken into account when working on a legal problem of some kind. (See for example Stokel-Walker, Chris, "Generative AI is coming for the lawyers", Wired, 2023.02.21.)

Some have argued that generative AI may fundamentally transform the law as a practice and an institution (see "Generative AI could radically alter the practice of law", The Economist, June 2023), and some official bodies, like the European Commission for the Efficiency of Justice, have started to grapple with the challenges that the use of these tools generates. In the information note "Use of Generative Artificial Intelligence (AI) by judicial professionals in a work-related context", published this year, we find some solid advice for the use of generative AI overall:

D. How should it be applied?

  1. Make sure that the tool’s use is authorised and appropriate for the desired purpose.
  2. Bear in mind that it is only a tool and try to understand how it works (be aware of human cognitive biases).
  3. Give preference to systems that have been trained on certified and official data, the list of which is known, to limit the risks of bias, hallucination, and copyright infringement.
  4. Give the tool clear instructions (prompts) about what is expected of it. It is through conversation that the machine will obtain the instructions it needs, so do not hesitate to engage it, unlike a search engine. Asking for clarification or even refining or modifying the request is possible. For example, give the machine a context (country, period of time), define the task (e.g. write a summary in xx words), specify who the output is intended for, how it is to be produced and the tone the tool should adopt, ask for a specific presentation format, check that the instructions have been properly understood (by asking the machine to rephrase them), and provide examples of the answers expected for similar questions to enable the tool to imitate their form and style.
  5. Enter only non-sensitive data and information which is already available in the public domain.
  6. Always check the correctness of the answers, even in case references are given (especially check the existence of the reference).
  7. Be transparent and always indicate if an analysis or content was generated by generative AI.
  8. Reformulate the generated text in case it shall feed into official and/or legal documents.
  9. Remain in control of your choice and the decision-making process and take a critical look at the made proposals.

These recommendations are quite good and thoughtful — the only thing missing is possibly the use of generative AI to challenge your own views and provide a second and third opinion on what you have written yourself. Adding this would make the list a solid checklist for anyone engaging with these tools in their day-to-day work. Even the list of things not to do is quite good:
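The prompting advice in point 4 above (give a context, define the task, specify the audience, tone and format, and ask the model to restate the instructions) can be sketched as a small template builder. This is a minimal illustration, assuming no particular model or API; `build_prompt` and all of its field names are hypothetical, not part of any real library.

```python
# Hypothetical sketch of the CEPEJ note's point 4: make every element
# of the instruction explicit before sending it to a generative model.

def build_prompt(context, task, audience, tone, fmt, examples=None):
    """Compose a single instruction string from the elements the
    CEPEJ information note recommends spelling out."""
    parts = [
        f"Context: {context}",
        f"Task: {task}",
        f"Audience: {audience}",
        f"Tone: {tone}",
        f"Output format: {fmt}",
    ]
    if examples:
        # Examples let the tool imitate the expected form and style.
        parts.append("Examples of the expected style:")
        parts.extend(f"- {ex}" for ex in examples)
    # Check that the instructions have been properly understood.
    parts.append(
        "Before answering, restate these instructions in your own "
        "words so I can check they were understood."
    )
    return "\n".join(parts)


prompt = build_prompt(
    context="French administrative law, decisions from 2015-2023",
    task="Write a summary in 200 words",
    audience="A judge reviewing a first draft",
    tone="Neutral and formal",
    fmt="Two paragraphs, no bullet points",
)
print(prompt)
```

The point of structuring the prompt this way is simply that each recommendation in the list becomes an explicit, checkable field rather than something left implicit in a conversational request.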

E. When should it not be applied?

  1. In case you are not aware of, do not understand or do not agree to the terms and conditions of use.
  2. In case it is forbidden/against your organisational regulations.
  3. In case you cannot assess the result for factual correctness and bias.
  4. In case you would be required to enter and thus disclose personal, confidential, copyright protected or otherwise sensitive data.
  5. In case you must know how your answer was derived.
  6. In case you are expected to produce a genuinely self-derived answer

What stands out here are the last two recommendations – and I think they are confused. The idea of a genuinely self-derived answer seems close to the medieval epistemological category of "illumination", where knowledge just appears in the writer's mind by divine intervention. It is unlikely that anything is genuinely self-derived at all; we all depend on sources, education and references of different kinds. And the trick here is to realize that these tools are not information retrieval tools — they do not derive answers, nor do they produce search results in the way that a classical search engine does — they are more like improvisation tools, suggesting possible variations on a theme, new ideas and holes in your argument. Think of them as a discussion partner of sorts.

If used well, these tools will significantly increase the robustness and speed of the legal system – but law still needs to be practiced by (human) hand. Not least – as shown by Irina Carnat in a recent paper – because of accountability issues. Carnat puts it well in her conclusion (see Carnat, Irina, "Addressing the Risks of Generative AI for the Judiciary: The Accountability Framework(s) Under the EU AI Act", available at SSRN: https://ssrn.com/abstract=4887438 or http://dx.doi.org/10.2139/ssrn.4887438):

The recent leap in the state of the art of NLP brought about by the development of generative LLMs have an intuitive appeal for the legal profession, including the prospect of using them to assist judicial decision-making. While these advancements present new frontiers for legal automation, the risks of overreliance on the algorithmic output pose significant concerns in terms of accountability.

The analysis in this paper has highlighted how the accountability gap identified in prior instances of AI-driven judicial decision-making, such as the COMPAS case, persists despite the technological shift towards more advanced generative models. Factors like model opaqueness, their proprietary nature, lack of knowledge of the system’s functioning and accuracy, continue to hinder the human decision-maker’s understanding of the system’s functioning, leading to inappropriate reliance on the AI’s outputs.

To address these risks, this paper has examined the regulatory framework established by the EU Artificial Intelligence Act. The risk-based approach of the regulation, including the specific provisions for high-risk AI systems deployed in the justice domain, provides a foundation for developing an accountability framework. Key elements of this framework include the requirements for human oversight, clear delineation of roles and responsibilities along the AI value chain, and the critical importance of AI literacy among all stakeholders involved in the deployment and use of these systems.

Ultimately, the successful and responsible integration of generative LLMs in judicial decision-making will require a comprehensive approach that goes beyond technical considerations. It must address the cognitive biases, ensuring that human decision-makers maintain active agency and control over the process, rather than becoming passive overseers of algorithmic outputs. By emphasizing the shared responsibility across the AI value chain and the imperative of AI literacy, the regulatory framework can help mitigate the risks of automation bias and preserve the fundamental principles of due process and the rule of law.

The one issue I have with Carnat's conclusion is that she assumes that this accountability is robust today. The real question about the use of any new tool in legal practice should be whether it – relative to the existing state of affairs – improves things or not. There are always risks associated with any tool, but are they greater or smaller than the risks in the system as it works today? The ability to use generative AI to challenge early drafts of judicial decisions might well suffer from a deficit in accountability, but what accountability do we have in today's system?

This is why proposals such as Orly Lobel's about a right to automated decision-making (see Lobel, Orly, "The Law of AI for Good", January 26, 2023, San Diego Legal Studies Paper No. 23-001, available at SSRN: https://ssrn.com/abstract=4338862 or http://dx.doi.org/10.2139/ssrn.4338862), even in judicial matters, are so intriguing. It is not that the machine will not be biased, make mistakes or sometimes even hallucinate — it is more about the frequency with which it does that, and the comparable frequency of similar mistakes made by today's system.

My prediction for this would be that in 30 years, not testing an opinion, a contract or a legal document in an LLM or its descendant AI will be considered malpractice.


2 responses to “Using GenAI as a legal tool”

  1. Nicklas, your article raises insightful points about the profound impact AI is likely to have on the legal profession, but I think you overstate society's need for accountability and I think you understate the way that AI will completely change the practice of law. Here's my take on it and I'd love to work out a taxonomy with you on this.

    AI already outperforms any human lawyer on pretty much any given knowledge-based legal task (e.g., what's the relevant statute or principle, how does it apply), and attempts to control or limit its use may ultimately prove futile as consumers gravitate towards its speed, accessibility, and iterative capabilities. There's really no competition between the ability of a lawyer vs. AI to handle general legal questions. The potential for AI to serve as a default first point of legal consultation for many people is already the base case. Absurd attempts like North Carolina's law to declare all AI illegal unless it's Lexis, Westlaw or Bloomberg serve to prove the point that control is futile. Under no circumstances is the North Carolina order practically enforceable.

    I think we need a vision of how legal education and the structure of the profession may evolve in response to AI. In my mind, society needs to move towards shorter law programs (e.g., a 1 year degree, not 3 years) focused on training lawyers to leverage AI and prompts. The high horse of a 3-yr Juris Doctorate in the U.S. or a 5 year BA/MA in Europe is over. Instead, the profession should move into a split between transactional/general practice (a one-year degree) and litigation specialists (one year, but with practicum), similar to the way that the UK currently splits barristers and solicitors, but with much more appropriate training times.

    I think a taxonomy to map out these changes is valuable. Changing the thinking here requires undoing centuries of what has essentially been a guild. Here’s my first cut on the taxonomy:

    I. AI Capabilities in Legal Practice today
    A. Current and near-future abilities
    B. Consumer usage patterns (regardless of law, what’s today’s practice)

    II. Legal Education Restructuring
    A. One-year AI-focused programs, a new kind of JD
    B. Transition from traditional education models to solicitor/barrister

    III. Evolution of Legal Professions
    A. Transactional/General Practice (a 1-yr general law degree)
    B. Litigation Specialists (will be practicum-based and different)

    IV. Accountability and Quality Assurance
    A. Malpractice in AI-assisted practice (holding litigators accountable, but changing the bar for transactional)
    B. Mitigating AI hallucinations (duties of parties to verify and validate statements of law).

    V. Judicial and Regulatory Adaptation
    A. Court system reforms that assume people use AI, not prohibit it.
    B. Regulatory challenges (e.g., North Carolina case) and convincing legislators of wholesale reform of the profession.

    The future lawyers are not lawyers. They're you, me, Claude, Gemini, ChatGPT and all of us. We will need to sue and litigate and that's important, but it's wholly different from the day-to-day practice of law, which is not going to persist in the guild that's represented the profession for centuries.

    Let’s write this up. ❤️

    1. Thank you! Yes, let's! This is a good idea.
