For Contract Review, Don’t Believe the AI-Markup Hype
Updated: Jan 24
Author: Ken Adams, Chief Content Officer
[See also Ken Adams's more recent post, More on the Shortcomings of AI Markups.]
When we speak with potential clients, some remark to us, with a sage nod, that, well, LegalSifter doesn’t mark up the draft being reviewed.
To make sure we understood what they were referring to, we looked into it. We quickly determined that they were alluding to something I'd seen already. I described it in this recent blog post about BlackBoiler, another artificial-intelligence-and-contract-review vendor:
A client provides BlackBoiler with a corpus of its negotiated and signed high-volume contracts. BlackBoiler’s technology learns what’s in the corpus, so when the client sends BlackBoiler a new contract, in two minutes BlackBoiler marks up that contract to reflect the client’s preferred position.
Because other companies are doing the same as BlackBoiler, I thought it might help to describe again, in greater detail, the implications of creating a markup based on a company’s negotiation record and how that compares to LegalSifter’s expertise-based approach.
Good: You Build on Past Decisions
There’s a clear benefit to having technology mark up a draft to reflect how you’ve responded to similar drafts in the past—you get to build on your negotiation record when reviewing new drafts. You don’t have to constantly revisit the same decisions each time you encounter them; instead, the technology makes those decisions for you.
But this approach comes with shortcomings.
Bad: Your Negotiation Record Might Send Mixed Signals
If, in reviewing a new draft, all you bring to bear is your prior negotiation record, you're limiting yourself.
Your negotiation record might be very mixed. How you treated, for example, services agreements might have varied greatly, depending on the services being performed, the industry, and the negotiation leverage of the parties. If your negotiation record has enough of that variety, it might send mixed signals for purposes of reviewing a new draft.
Bad: Your Negotiation Record Likely Incorporates Poor Decisions
AI review based on your negotiation record won’t be capable of identifying shortcomings in that negotiation record.
Given the dysfunction of traditional contract drafting (as chronicled in my book A Manual of Style for Contract Drafting and my articles and blog posts), it’s safe to assume that some proportion of the decisions in your negotiation record were misguided. For example, when you see in an AI-assisted markup the word commercially inserted before the phrase reasonable efforts, you know that beneath the gleaming AI façade is addled conventional wisdom.
Bad: You Ignore What Isn’t in Your Negotiation Record
If your AI review is based entirely on your negotiation record, any issues not reflected in the negotiation record will be invisible to it, no matter how relevant they might be. And you won’t have the opportunity to learn about those issues.
Bad: Review Based on Your Negotiation Record Requires Up-Front Work
AI review based on your negotiation record requires an implementation phase. That means delay and extra cost.
Bad: You Might Not Have a Negotiation Record
AI review based on your negotiation record won’t work if you don’t have a negotiation record. Perhaps you’ve just started operations or are new to a particular kind of transaction. Or perhaps you’re dealing with a one-off transaction, like a hotel agreement.
LegalSifter’s expertise-based approach circumvents these problems:
Instead of relying on whatever expertise is encoded in your negotiation record, LegalSifter offers expertise in terms of the issues it looks for and the advice it offers.
The advice offered is either our out-of-the-box advice or advice developed by the client based on its own needs and experience. That advice can incorporate lessons learned from your negotiation record.
The expertise built into our Sifters—our algorithms that look for a particular issue—allows our clients to sidestep many misconceptions of traditional contract drafting.
We have hundreds of Sifters. Collectively, they cover a much broader range of issues than whatever happens to be reflected in your negotiation record.
You can start “sifting” documents with LegalSifter immediately, with no implementation phase and related costs.
What contracts you can sift is limited only by the kinds of contracts LegalSifter is equipped to address. We have the basics covered, and we’re expanding our inventory, for example by adding sponsored research agreements.
We have access to industry-leading expertise, spearheaded by, well, me.
So the attraction of AI markup of draft contracts is more than offset by its shortcomings. We're comfortable that we've made the right bet in opting instead for our expertise-based approach.
AI might well have a role to play when it comes to incorporating information from your negotiation record. But that should be only part of the story, not the whole story.
The Real Concern
We suspect that when people scratch their chin reflectively over the idea of AI markups of draft contracts, it’s not because that’s what they’re actually looking for. Instead, AI markups point to an obvious challenge posed by AI-assisted review of contracts, namely how to revise the draft to address issues flagged in the review.
Ideally, you wouldn’t have to revert to the haphazard traditional contract-drafting process. Instead, it should be feasible to build in an extra step that allows the user to access, or build, the necessary language.
We already go some way toward addressing that challenge with the advice built into our product. We expect to explore other processes too, but as an add-on to our expertise-based approach, instead of prioritizing AI markups over all other considerations.