Part of the Solution

Idealistic musings about eDiscovery

Monthly Archives: July 2015

On TAR 1.0, TAR 2.0, TAR 1.5 and … TAR 3.0?

The problem with technology-assisted review is that the best practices to bring about the most accurate, defensible review are, quite frankly, too onerous for most attorneys to accept.

In “TAR 1.0”, the initial iteration of computer-aided document analysis, as many documents as possible from the total corpus had to be loaded into the TAR system and, from this nebulous blob of relevant data, non-relevant data, fantasy football updates and cat memes, a statistically valid sample was drawn at random. It then fell to a senior attorney on the litigation team to manually review and code this “seed set”, after which the computer would identify what the documents sharing each tag had in common and try to extrapolate those characteristics to the entire document corpus.
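To make that concrete, here’s a minimal sketch of the TAR 1.0 idea, assuming a scikit-learn-style classifier; the documents, tags, and two-document “seed set” are toy placeholders, not anyone’s actual workflow:

```python
# Minimal sketch of the TAR 1.0 workflow: a senior attorney codes a random
# seed set, a classifier learns from it, and its predictions are extrapolated
# to the rest of the corpus. Texts and labels below are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_docs = [
    "Q3 energy trading schedule and counterparty positions",  # coded relevant
    "fantasy football waiver wire update for the league",     # coded not relevant
]
seed_labels = [1, 0]  # the senior attorney's relevance calls

vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)

model = LogisticRegression()
model.fit(X_seed, seed_labels)

# Extrapolate the seed-set coding to the unreviewed corpus.
corpus_docs = ["plant outage memo for unit 4", "cat memes, nothing of interest"]
X_corpus = vectorizer.transform(corpus_docs)
relevance_scores = model.predict_proba(X_corpus)[:, 1]
```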

Several of TAR 1.0’s requirements aren’t practical for modern document review – generating the seed set from unculled data, assuming you have most of the corpus documents in hand at the outset – but the most glaring impracticality is also the most critical requirement of TAR 1.0:

Senior attorneys, as a rule, HATE to review documents.

It’s why they hire junior attorneys or contract reviewers: a senior attorney’s time is generally better spent on tasks that are more overtly significant to the client, which in turn justifies billing a lot more per hour than the reviewers do. And if a statistically valid seed set contains some 2,400 randomly selected documents (assuming a 95 percent confidence level and a margin of error of +/- two percent), that’s the better part of an entire workweek the senior attorney would have to devote to the review.
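For the curious, that 2,400 figure falls out of the textbook sample-size formula; here’s a quick back-of-the-envelope check, using the usual worst-case assumption about prevalence:

```python
# Back-of-the-envelope check on the seed-set size quoted above:
# n = z^2 * p * (1 - p) / e^2, the textbook sample-size formula.
z = 1.96   # z-score for a 95 percent confidence level
p = 0.5    # assumed prevalence; 0.5 is the worst case and maximizes n
e = 0.02   # margin of error of +/- two percent

n = (z ** 2) * p * (1 - p) / (e ** 2)
print(round(n))  # 2401 -- i.e., "some 2,400 documents"
```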

No wonder TAR 1.0 never caught on. It was designed by technologists – and brilliantly so – but completely ignored the realities of modern law practice.

Now we’re up to “TAR 2.0”, the “continuous active learning” method, which has received less attention but is nonetheless a push in the right direction toward legal industry-wide acceptance. In TAR 2.0, the computer constantly re-trains itself and refines its notion of which documents do and do not meet each tag criterion, so the initial seed set can be smaller and focused more on documents that are likely to be responsive, rather than scattershooting randomly across the entire document corpus. As more documents are loaded into the system, the tag criteria can be applied automatically during document processing (meaning the new documents are classified as they enter the system), and refinements crafted as humans review the newly loaded docs are then re-applied to the earlier-predicted docs.
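Here’s a minimal sketch of what that loop might look like, again assuming a scikit-learn-style classifier; the human_review() stub and the four-document “corpus” are hypothetical stand-ins for the review team and the real collection:

```python
# Minimal sketch of a continuous active learning loop in the TAR 2.0 spirit.
# human_review() is a hypothetical stand-in for the contract reviewers;
# the four-document "corpus" is a toy example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def human_review(doc):
    # Placeholder for a coding decision that really comes from the review team.
    return 1 if "trading" in doc else 0

corpus = [
    "energy trading memo to the plant manager",
    "plant outage report for unit 4",
    "fantasy football update for the office league",
    "trading desk schedule for next quarter",
]
labels = {}  # document index -> human relevance call

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)

# Seed with a small, focused batch instead of a large random sample.
for idx in (0, 2):
    labels[idx] = human_review(corpus[idx])

while len(labels) < len(corpus):
    reviewed = list(labels)
    model = LogisticRegression()
    model.fit(X[reviewed], [labels[i] for i in reviewed])

    # Re-score the unreviewed documents and route the likeliest-responsive
    # one to a reviewer next; that call refines the next round of training.
    unreviewed = [i for i in range(len(corpus)) if i not in labels]
    scores = model.predict_proba(X[unreviewed])[:, 1]
    next_doc = unreviewed[int(scores.argmax())]
    labels[next_doc] = human_review(corpus[next_doc])
```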

Now, that description of TAR 2.0 makes perfect sense to me. The fact that, despite my editing and revisions, it would still read as confusing to the average non-techie is one of the big problems with TAR 2.0: those of us who work with it get it, but explaining it to those who don’t is a challenge. But the biggest problem I see with TAR 2.0 once again must be laid at the feet of the attorneys.

Specifically, most of the training and re-training in a TAR 2.0 system will come courtesy of the manual document reviewers themselves. Ignoring for a moment the likelihood that review instructions to an outsourced document review bullpen tend to be somewhat less than precise anyway, several reviewers can look at the same document and draw very different conclusions. Let’s say you have a non-practicing JD with a liberal arts background, a former corporate attorney with engineering and IP experience, an inactive plaintiff’s trial lawyer, and a paralegal who was formerly a nurse. Drop the same document – let’s say, a communiqué from an energy trader to a power plant manager – in front of all four, and ask them to tag for relevance, privilege, and relevant issues. You’re likely to get four different results.

Which of these results would a TAR 2.0 system use to refine its predictive capabilities? All of them. And TAR has not yet advanced to the sophistication required to analyze four different tagging responses to the same document and distill from them the single most useful combination of criteria. Instead, the conflicting input is more likely to cloud up the computer’s “understanding” of what made this document relevant or not relevant.

The IT industry uses the acronym GIGO: garbage in, garbage out. Blair and Maron demonstrated back in 1985* that human reviewers are not only inaccurate in their review determinations but also overconfident in their ability to find the documents that meet their criteria. In TAR 2.0, ultimately, the success or failure of the computer’s ability to accurately tag documents may rest in the hands of reviewers whose only stake in the litigation is a paycheck.

Until last week, I was strongly in favor of a “TAR 1.5” approach: start with a smaller seed set reviewed and tagged by a more-senior attorney, let the TAR system make its initial definitions and determinations, use those determinations to cull and prioritize the document corpus, then let the document reviewers take it from there and use “continuous active learning” to further iterate and refine the results. It seemed to me that this combined the best practices from both versions of the process: start with the wisdom and craftsmanship of an experienced litigator and apply it to all the available documents, then leave the document-level detail to contract reviewers using the TAR-suggested predictions as guidance.
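In code terms, the hand-off I had in mind looks something like the sketch below; the scores and cutoff are hypothetical, and in practice any cutoff would be validated by sampling:

```python
# Minimal sketch of the culling and prioritization step in the "TAR 1.5" idea:
# take relevance scores from the senior attorney's seed-set model (as in the
# TAR 1.0 sketch above) and split the corpus into a prioritized review queue
# and a set-aside pile. Scores and the cutoff here are hypothetical.
corpus_scores = {"doc_001": 0.92, "doc_002": 0.41, "doc_003": 0.07}
cull_threshold = 0.15  # hypothetical cutoff; validated by sampling in practice

ranked = sorted(corpus_scores.items(), key=lambda kv: kv[1], reverse=True)
review_queue = [doc for doc, score in ranked if score >= cull_threshold]
set_aside = [doc for doc, score in ranked if score < cull_threshold]

# Reviewers work the queue top-down; their coding then feeds the continuous
# active learning loop sketched under TAR 2.0.
```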

But last week, I interviewed with the founders of a small company who have a different approach. Not wanting to put any pressure on the company, or to inadvertently divulge any trade secrets that might have been shared, I won’t identify them or talk about their processes, other than to say that perhaps they’ve come up with a “TAR 3.0” approach: make automatic TAR determinations based on statistical similarity of aspects of the documents, rather than on the entire content of each document. It’s a lawyerly, rather than a technical, approach to the TAR problem, which to me is what makes it brilliant (and brilliantly simple).

Whether I become part of this company or not, the people who run it have given me a lot to think about, and I’ll be sharing my thoughts on these new possibilities in the near future.

*David C. Blair & M.E. Maron, An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System, 28 COMMC’NS ACM 289 (1985).

H-P Is Out, iManage Is Back In

On May 15, I was laid off from Hewlett-Packard as they prepared for their big corporate meiosis* in November. I found out in short order that about three-fourths of the remaining eDiscovery experts company-wide were also let go. My private opinion was that this likely signaled HP’s intent to get out of the eDiscovery software business.

Looks like my hunch is at least partially right. My friends at iManage (née Interwoven), formerly a part of Autonomy and later assimilated by HP, have bought their company back.

The press release is here. Former-and-new-CEO Neil Araujo’s first blog post on the buyout is here. Neil writes:

For the iManage leadership, this transaction is about much more than a product: it’s about a community that spans people, partners and hundreds of thousands of users, many of whom have used this solution for more than a decade. iManage also represents a set of values, based on our history of listening, innovating and delivering great products and support. Our buyout enables the team to continue to innovate with a community of thought leaders that share this passion.

My heartiest congratulations to my old colleagues in Chicago. (Hmmm … wonder if they need an eDiscovery expert?)

*After all these years, I finally found a use for that word from high-school biology! HP is splitting into two distinct companies, HP and HP Enterprise, on November 1.

Information Governance vs. eDiscovery

I have a new post up on Greg Buckles’ eDJ Blog on the intersection of information governance and eDiscovery.

Thanks, Greg, for the forum!

Jack Halprin

We lost Jack Halprin yesterday. Greg Buckles has a great tribute to Jack on his site, but I want to add a couple of words of my own.

When I applied to join Autonomy in 2010, the company was not looking for an eDiscovery expert. Because I presented myself as one, however, the Powers That Were asked their VP of eDiscovery and Compliance – a well-established eDiscovery expert – to evaluate my candidacy. Yep, it was Jack.

Knowing I was from Houston, Jack called his friend Greg to check me out. Fortunately, Greg and I had met socially a few times and the feedback was positive. So positive, in fact, that I was able to collaborate on a couple of projects with Jack before he left for his dream job at Google. I never had the privilege of meeting Jack in person, but we spent plenty of time on the phone with each other.

Jack died of cancer Thursday morning. He was 46. As Greg wrote:

If you are up to it, raise Jack’s favorite Jägermeister shot in remembrance. If you really do remember what that tastes like from your college days, try a memorial donation to Lymphoma Research Foundation or Larkin Street Youth Services in Jack’s name.

Thanks for your support, Jack. We’ll miss you.