
In Contracts, Everything Looks Like a Need for Expertise

Updated: Sep 19, 2022


“Confirmation bias” is a term for the tendency to interpret new evidence as confirming your existing beliefs or theories. The notion is expressed in the old saw that if all you have is a hammer, everything looks like a nail.


My hammer is that it’s important to get right what you say in a contract and how you say it, and that expertise is required to do so. And I see a lot of nails that could do with being hammered. Am I in the throes of confirmation bias?


That came to mind when I saw this item on law.com noting that LawGeex plans to lay off employees as it pivots to serve smaller businesses. Like LegalSifter, LawGeex has applied artificial intelligence to help with review of contracts. This most recent pivot follows one I wrote about in this 2021 blog post, in which I say that “it appears that LawGeex has chosen not to invest in experts. Instead, they’re relying on nonexpert lawyers on staff.”


Now let’s consider a different context. In this 2019 blog post and this 2021 blog post, I describe the shortcomings of relying on artificial intelligence to learn the patterns in a stash of signed contracts and then use that, together with a menu of preferences, to create, in an instant, a markup of the other side’s draft. The result is that you replicate dysfunction. Instead, I see—surprise, surprise—a role for expertise. This is from the 2019 post:


By contrast, LegalSifter relies on expertise. We build “sifters”—algorithms that look for specific contract concepts—and bundle them in document types targeted at different transactions (leases, sponsored research agreements, services agreements, hotel agreements, and so on) and different users (buyers, sellers, landlords, tenants, and so on). The expertise comes into play in deciding what issues to look for, in determining how those issues are expressed in contracts (so we can instruct the technology accordingly), and in deciding what to tell users about those issues.
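
To make that description a bit more concrete, here’s a minimal sketch, in Python, of how concept-detecting sifters might be bundled into document types aimed at particular transactions and users. It’s purely illustrative: the class names, the patterns, and the advice text are assumptions of mine, not how LegalSifter Review is actually built.

```python
import re
from dataclasses import dataclass, field

# Illustrative only: the names, patterns, and advice text below are
# assumptions, not LegalSifter's actual implementation.

@dataclass
class Sifter:
    """Looks for one contract concept using simple pattern matching."""
    concept: str     # the issue to look for
    patterns: list   # regexes that suggest the concept is present
    advice: str      # what to tell the user about the issue

    def found_in(self, text: str) -> bool:
        return any(re.search(p, text, re.IGNORECASE) for p in self.patterns)

@dataclass
class DocumentType:
    """A bundle of sifters targeted at a transaction type and a user role."""
    name: str        # e.g., "services agreement"
    role: str        # e.g., "buyer" or "seller"
    sifters: list = field(default_factory=list)

    def review(self, contract_text: str) -> list:
        """Report, for each bundled sifter, whether its concept was found."""
        return [
            (s.concept,
             "found" if s.found_in(contract_text) else "not found",
             s.advice)
            for s in self.sifters
        ]

# A hypothetical bundle for reviewing a services agreement as the buyer.
services_buyer = DocumentType(
    name="services agreement",
    role="buyer",
    sifters=[
        Sifter(
            concept="exclusion of consequential damages",
            patterns=[r"consequential damages"],
            advice="Check whether it's clear which losses the exclusion covers.",
        ),
        Sifter(
            concept="indemnification",
            patterns=[r"indemnif(y|ies|ication)"],
            advice="Check who indemnifies whom, and for what.",
        ),
    ],
)

draft = "Neither party is liable for consequential damages under this agreement."
for concept, status, advice in services_buyer.review(draft):
    print(f"{concept}: {status}. {advice}")
```

The point of the sketch is just the division of labor: the expertise lives in deciding which concepts to look for, how those concepts tend to be expressed, and what to tell the user, while the technology does the looking.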


And here’s a third context: I’m prone to saying that legaltech applied to contracts is generally a grand exercise in garbage in, garbage out. Take contract-lifecycle-management software. CLM products usher contracts through the entire process, but generally they don’t meddle much with the content. Hence GIGO. Ideally, you’d bring expertise to bear at some stage.


So yes, everywhere I look I see nails for expertise to hammer. But I believe I’m not exhibiting confirmation bias.


I start with the assumption that contracts matter, and that a lot is at stake. Yet experience suggests, and my own research and writing show in excruciating detail, that mainstream contract drafting is dysfunctional in terms of what contracts say and how they say it.


Consider just one recent example: my new analysis of the phrase consequential damages (available via this blog post). Excluding consequential damages is the most common component in limitation-of-liability provisions, yet it’s a source of great confusion. What does that say about transactional practice?


Because of the extent of the general dysfunction, contract parties waste inordinate amounts of time and money in drafting, reviewing, and negotiating contracts and monitoring compliance, and they’re exposed to the risk of dispute and other suboptimal outcomes. It’s currently impossible to quantify the dysfunction, but that doesn’t make it any less real.


How do we fix the dysfunction? By coming up with comprehensive guidelines for clear and concise contract language. By figuring out what works and doesn’t work in addressing particular issues or particular kinds of transactions—what I’ve been doing for more than 20 years. In other words, by applying expertise. The next step is building that expertise into products like LegalSifter Review, to make the expertise more accessible.

