Code the Law Weekly #4

Image by Adam Ziegler, using Adobe Firefly (prompt: "a giant crowd of lawyers locked in jail"; style filters: Photo, Clay)

Unpopular Opinion

Casetext deserves immense credit for convincing Thomson Reuters to buy them for $650,000,000. But Casetext also deserves a dash of criticism for abandoning their stated mission to "make the world's laws free and understandable." The reality is that noble mission - much like the ubiquitous legal tech startup aspiration to "increase access to justice" - is very helpful when getting off the ground but difficult to align with marketplace demands.

Missing the Point

A court in Canada says:

[W]hen artificial intelligence has been used in the preparation of materials filed with the court, the materials must indicate how artificial intelligence was used.

A court in Massachusetts asks the advocates, at the end of an awful oral argument:

[B]efore you leave ... did either of you use – I'm just going to call it "AI," I know it has a longer name of some sort – to assist in the preparation of any of your filings with the court? We're trying to consider whether we need to adopt any rule changes.

I'll say it again: judges have absolutely no business encouraging, discouraging, influencing or questioning the means and methods used by lawyers to serve their clients. This is true for AI and any other technology, tool or approach.

Judges have no idea what they're doing in this area, and they're only going to mess things up. It's impossible to separate "AI" from the technology that already exists and is used every day, including by the judges themselves. It's also an improper intrusion into the attorney's work and relationship with clients – akin, if not identical, to an unwarranted breach of work product protection.

Judges should police - with extreme prejudice - the accuracy and integrity of arguments and authorities submitted to them, but they already have all the procedural tools they need to do this. They do not need to fumble around in the dark with AI, or whatever it's called.

Makers & Doers

Sam Harden experiments with combining legislation, legislative staff analyses and open-source AI into a "Chat with the Law" demonstration app.

Docsum wants to help you "close deals faster with the magic of AI."

Marc Andreessen was surprised to learn how generative AI could contribute to the more creative parts of lawyering.

Filevine tries to transform the drudgery of threatening to sue people.

Allison Morell shares her learning-by-doing experiment with retrieval-based Q&A.

Keeping An Eye On ... Functions

About two weeks ago, OpenAI announced that its chat APIs can now determine which external tool to use to solve a problem beyond the model's inherent capabilities, and can supply the structured arguments needed to invoke that tool. OpenAI calls this "function calling." The capability is very powerful, and it only scratches the surface of what's to come.

It works basically like this (see the code sketch after the list):

  1. You describe a function (aka tool) to the AI in a structured manner
  2. You ask the AI for an answer or output that is beyond its inherent capability
  3. The AI tells you which function it needs to use to provide the requested output, and it provides the data you need to trigger the function
  4. You use the response to trigger the specified function, which produces the output; you can then feed that output back to the AI so it can compose a final answer
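
Here is a minimal sketch of steps 1 through 3 in Python, assuming the openai package as it worked when function calling launched (the "0613" chat models, with OPENAI_API_KEY set in your environment). The lookup_citation function and its schema are hypothetical, invented purely for illustration:

    import json
    import openai  # assumes the openai Python library circa June 2023

    # Step 1: describe a function (tool) to the AI in a structured manner.
    # "lookup_citation" is hypothetical: a stand-in for any external tool.
    functions = [
        {
            "name": "lookup_citation",
            "description": "Check whether a legal citation refers to a real case",
            "parameters": {
                "type": "object",
                "properties": {
                    "citation": {
                        "type": "string",
                        "description": "A legal citation, e.g. '410 U.S. 113'",
                    }
                },
                "required": ["citation"],
            },
        }
    ]

    # Step 2: ask for an answer beyond the AI's inherent capability.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": "Is '410 U.S. 113' a real case?"}],
        functions=functions,
        function_call="auto",  # let the model decide whether it needs a function
    )

    # Step 3: the AI names the function it needs and supplies JSON arguments.
    message = response["choices"][0]["message"]
    if message.get("function_call"):
        name = message["function_call"]["name"]  # "lookup_citation"
        args = json.loads(message["function_call"]["arguments"])
        print(f"The model wants {name} called with {args}")

Note that the model never runs anything itself; it only hands you a function name and arguments, and your own code decides whether and how to act on them.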

If this sounds like a complicated mess with a seemingly trivial payoff, you're right. For now. But it will quickly get more powerful and easier to implement. In the meantime, give it a try using this example code from OpenAI. Perhaps you can build a tool that checks whether citations in a text are real ;)
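
If you want to close the loop yourself, step 4 looks roughly like this, continuing the sketch above. The lookup_citation implementation here is a stub; a real citation checker would query a case-law source such as the CourtListener API:

    import json
    import openai  # same assumptions as the sketch above

    def lookup_citation(citation: str) -> dict:
        # Hypothetical stub: pretend we checked a case-law database.
        return {"citation": citation, "exists": True, "title": "Roe v. Wade"}

    # Parsed from the model's function_call in the previous step.
    args = {"citation": "410 U.S. 113"}
    result = lookup_citation(**args)

    # Step 4: run the function, then hand its output back to the AI
    # (as a "function" role message) so it can compose a grounded answer.
    messages = [
        {"role": "user", "content": "Is '410 U.S. 113' a real case?"},
        {"role": "assistant", "content": None,
         "function_call": {"name": "lookup_citation",
                           "arguments": json.dumps(args)}},
        {"role": "function", "name": "lookup_citation",
         "content": json.dumps(result)},
    ]
    final = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613", messages=messages
    )
    print(final["choices"][0]["message"]["content"])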