The Human Factor: Why Our Mapping of Contracts Won’t Always Go As Planned
Updated: Feb 12, 2021
Since joining LegalSifter as a consultant in 2018, I’ve helped design the specifications for our “Sifters”—algorithms intended to spot a given issue expressed in contracts. Although I’m now LegalSifter’s chief content officer and have an array of responsibilities to match, I still create sifter specs. It’s work that’s conducive to humility, for reasons I’ll now explain.
The issue we want a given sifter to look for might be expressed in several markedly different ways, with each of those ways offering further variants of its own. The effect is one of choices upon choices. After all, the world of contracts is a vast, teeming hive of activity involving every country, every industry, and people from every kind of background and experience. And we have to look for the good, the not-so-good, and the downright bad ways that a given concept might be expressed.
So designing specs requires varying amounts of deal experience, subject-matter expertise, semantic acuity, and imagination. It’s all very human, so things don’t always go exactly as planned.
For example, in 2019 I worked on specs for the sifter “Amendment: Unilateral,” which looks for anything in a contract saying that a party may amend the contract without the consent of the other party. I came up with a pattern, we refined the specs, and the sifter went into production.
Then a year later, one of the new sifters I decided to build was … “Amendment: Unilateral”! I put together a new set of specs, then one of my colleagues gently pointed out that we now had two sets of specs for the same sifter. But what could have been annoying ended up being a happy accident: the second set of specs had two patterns that didn’t overlap with the first, so we now had a much more reliable sifter.
What explains this mishap? Well, one result of constantly working on specs is that my short-term memory purges itself regularly, so a few weeks after working on a given sifter I might have forgotten about it entirely. So now it’s check first, then build.
OK, but what explains my doing two entirely different sets of specs to look for the same concept? I think that’s a function of the sprawling, chaotic nature of contracts and the imagination required to come to grips with the sheer variety you encounter. Any work that requires insight will likely be prone to sporadic idling, reversing direction, and stumbling. And you can’t always expect to get the full picture on the first attempt.
But I’m OK with that. It resembles the course of my 25 years of researching and writing on contract language. Looked at up close, I was hardly a model of efficiency, but I kept building, I kept refining, I kept fixing my mistakes. So now when I step back and look at what I’ve built, I see something substantial.
That’s how I see things playing out with LegalSifter. We’re at the start of a mammoth undertaking. And because we have to make sense of a human construct—the world of contracts—our own work will necessarily be human too. But we’ll keep at it, and we’ll build something.