Part of the Solution

Idealistic musings about eDiscovery


On TAR 1.0, TAR 2.0, TAR 1.5 and … TAR 3.0?

The problem with technology-assisted review is that the practices that produce the most accurate, defensible review are, quite frankly, too onerous for most attorneys to accept.

In “TAR 1.0”, the initial iteration of computer-aided document analysis, as many documents as possible from the total corpus had to be loaded into the TAR system and, from this nebulous blob of relevant data, non-relevant data, fantasy football updates and cat memes, a statistically valid sample was drawn at random. It then fell to a senior attorney on the litigation team to manually review and code this “seed set”, after which the computer would identify the features shared by documents bearing the same tags and extrapolate those features across the entire document corpus.

Several aspects of this scenario are impractical for modern document review – generating the seed set from unculled data, assuming you have most of the corpus documents in hand at the outset – but the most glaring impracticality is also the most critical requirement of TAR 1.0:

Senior attorneys, as a rule, HATE to review documents.

It’s why they hire junior attorneys and contract reviewers: a senior attorney’s time is generally better spent on tasks that are more overtly significant to the client, which in turn justifies billing a lot more per hour than the reviewers do. And if a statistically valid seed set contains some 2,400 randomly selected documents (assuming a 95 percent confidence level and a margin of error of +/- two percent), that’s the better part of an entire workweek the senior attorney would have to devote to the review.
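For anyone who wants to check that arithmetic, the 2,400 figure falls straight out of the standard sample-size formula. A quick sketch (the review pace is my own assumption, not a figure from any study):

```python
# Sample size for estimating a proportion: n = z^2 * p * (1 - p) / e^2
z = 1.96   # z-score for a 95 percent confidence level
p = 0.5    # most conservative assumed prevalence
e = 0.02   # margin of error of +/- 2 percent

n = (z ** 2) * p * (1 - p) / e ** 2
print(round(n))       # 2401 documents to review

# At a hypothetical pace of 60 documents per hour:
print(round(n / 60))  # ~40 hours -- a senior attorney's entire workweek
```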

No wonder TAR 1.0 never caught on. It was designed by technologists – and brilliantly so – but completely ignored the realities of modern law practice.

Now we’re up to “TAR 2.0”, the “continuous active learning” method, which has received less attention but is nonetheless a push in the right direction toward acceptance across the legal industry. In TAR 2.0, the computer constantly re-trains itself and refines its notion of which documents do and do not meet each tag criterion, so the initial seed set can be smaller and focused on documents that are likely to be responsive, rather than drawn scattershot from the entire document corpus. As more documents are loaded into the system, the tag criteria can be applied automatically during processing (meaning new documents are classified as they enter the system), and the refinements crafted as humans review the newly loaded documents are in turn re-applied to the earlier-predicted ones.
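For the technically inclined, here is that loop sketched in code – a toy illustration of continuous active learning, not any vendor’s actual algorithm; the classifier, batch size, and round count are all my own assumptions:

```python
# A toy continuous-active-learning loop. The model choice, batch size,
# and stopping rule are illustrative assumptions only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def cal_review(documents, human_review, seed_labels, batch_size=10, rounds=5):
    """documents: list of document texts; human_review: callable that
    returns a 0/1 relevance tag for a document index; seed_labels:
    {index: 0/1} from the initial focused seed set (needs both tags)."""
    X = TfidfVectorizer().fit_transform(documents)
    labels = dict(seed_labels)
    for _ in range(rounds):
        reviewed = list(labels)
        model = LogisticRegression(max_iter=1000)
        model.fit(X[reviewed], [labels[i] for i in reviewed])  # re-train on every tag so far
        unreviewed = [i for i in range(len(documents)) if i not in labels]
        if not unreviewed:
            break
        # Queue the documents the model now thinks are likeliest responsive.
        scores = model.predict_proba(X[unreviewed])[:, 1]
        for j in np.argsort(scores)[::-1][:batch_size]:
            labels[unreviewed[j]] = human_review(unreviewed[j])  # new tags feed the next round
    return model, labels
```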

Now, that explanation makes perfect sense to me. The fact that, despite my editing and revisions, it would still appear confusing to the average non-techie is one of the big problems with TAR 2.0: those of us who work with it get it, but explaining it to those who don’t is a challenge. But the biggest problem I see with TAR 2.0 once again must be laid at the feet of the attorneys.

Specifically, most of the training and re-training in a TAR 2.0 system comes courtesy of the manual document reviewers themselves. Setting aside for a moment that review instructions to an outsourced document review bullpen tend to be somewhat less than precise, several reviewers can look at the same document and draw very different conclusions. Let’s say you have a non-practicing JD with a liberal arts background, a former corporate attorney with engineering and IP experience, an inactive plaintiff’s trial lawyer, and a paralegal who was formerly a nurse. Drop the same document – say, a communiqué from an energy trader to a power plant manager – in front of all four, and ask them to tag it for relevance, privilege, and relevant issues. You’re likely to get four different results.

Which of these results would a TAR 2.0 system use to refine its predictive capabilities? All of them. And TAR has not yet advanced to the sophistication required to analyze four different tagging responses to the same document and refine from them the single most useful combination of criteria. Instead, it’s more likely to cloud up the computer’s “understanding” of what made this document relevant or not relevant.

The IT industry has an acronym for this: GIGO – garbage in, garbage out. Blair and Maron showed back in 1985* that human reviewers are not only inaccurate in their review determinations but also overconfident in their ability to find the documents that meet their criteria: the attorneys in their study believed they had retrieved about 75 percent of the relevant documents when they had actually found roughly 20 percent. In TAR 2.0, ultimately, the success or failure of the computer’s ability to accurately tag documents may rest in the hands of reviewers whose only stake in the litigation is a paycheck.
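One way to see the problem before it poisons the training data is to measure how much the reviewers actually agree with one another. A sketch, with hypothetical reviewers and tags (Cohen’s kappa of 1.0 means perfect agreement; values near zero mean agreement no better than chance):

```python
# Pairwise inter-reviewer agreement on the same documents.
# Reviewer names and tags below are hypothetical.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# 1 = relevant, 0 = not relevant, for the same ten documents
tags = {
    "JD, liberal arts": [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    "IP attorney":      [1, 1, 1, 0, 0, 1, 0, 1, 1, 0],
    "trial lawyer":     [0, 0, 1, 1, 0, 1, 1, 0, 1, 1],
    "paralegal/nurse":  [1, 0, 0, 1, 1, 1, 0, 0, 0, 1],
}

for (a, ta), (b, tb) in combinations(tags.items(), 2):
    print(f"{a} vs. {b}: kappa = {cohen_kappa_score(ta, tb):.2f}")
```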

Until last week, I was strongly in favor of a “TAR 1.5” approach: start with a smaller seed set reviewed and tagged by a more senior attorney; let the TAR system make its initial definitions and determinations; use those determinations to cull and prioritize the document corpus; then let the document reviewers take it from there, using “continuous active learning” to further iterate and refine the results. This seemed to me to combine the best practices of both versions of the process: start with the wisdom and craftsmanship of an experienced litigator applied to all the available documents, then leave the document-level detail to contract reviewers using the TAR-suggested predictions as guidance.
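The hand-off step, sketched in the same toy terms as the earlier example (the cutoff value is my own assumption, not an industry standard):

```python
# "TAR 1.5" hand-off: the model trained on the senior attorney's seed
# set culls and ranks the corpus before contract reviewers take over.
def cull_and_prioritize(model, X, already_reviewed, cutoff=0.05):
    """model and X are a trained classifier and document-feature
    matrix shaped like those in the CAL sketch above."""
    scores = model.predict_proba(X)[:, 1]
    candidates = [i for i in range(X.shape[0]) if i not in already_reviewed]
    keep = [i for i in candidates if scores[i] >= cutoff]       # cull the unlikely
    return sorted(keep, key=lambda i: scores[i], reverse=True)  # best prospects first
```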

But last week, I interviewed with the founders of a small company who take a different approach. Not wanting to put any pressure on the company, or to inadvertently divulge any trade secrets that may have been shared with me, I won’t identify them and won’t discuss their processes, other than to say that perhaps they’ve come up with a “TAR 3.0” approach: make automatic TAR determinations based on statistical similarity among aspects of each document, rather than on its entire content. It’s a lawyerly, rather than a technical, approach to the TAR problem, which to me is what makes it brilliant (and brilliantly simple).

Whether I become part of this company or not, the people who run it have given me a lot to think about, and I’ll be sharing my thoughts on these new possibilities in the near future.

*David C. Blair & M.E. Maron, An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System, 28 COMMC’NS ACM 289 (1985).

Information Governance vs. eDiscovery

I have a new post up on Greg Buckles’ eDJ Blog on the intersection of information governance and eDiscovery.

Thanks, Greg, for the forum!

Why Hasn’t TAR Caught On? Look In The Mirror.

Oh, this is good. If you haven’t already signed up for the ALM Network (it’s free, as is most of their content), it’s worth doing so just to read this post (the first of a two-part series) from Geoffrey Vance on Legaltech News. It pins the blame for the slow acceptance of technology-assisted review (TAR) right where it belongs: on attorneys who refuse to get with the program.

As I headed home, I asked myself, how is it—in a world in which we rely on predictive technology to book our travel plans, decide which songs to download and even determine who might be the most compatible on a date—that most legal professionals do not use predictive technology in our everyday client-serving lives?

I’ve been to dozens of panel discussions and CLE events specifically focused on using technology to assist and improve the discovery and litigation processes.  How can it possibly be—after what must be millions of hours of talk, including discussions about a next generation of TAR—that we haven’t really even walked the first-generation TAR walk?

Geoffrey asks why attorneys won’t get with the program. In a comment to the post, John Tredennick of Catalyst lays out the somewhat embarrassing answer:

Aside from the fact that it is new (which is tough for our profession), there is the point that TAR 2.0 can cut reviews by 90% or more (TAR 1.0 isn’t as effective). That means a lot of billable work goes out the window. The legal industry (lawyers and review companies) live and die by the billable hour. When new technology threatens to reduce review billables by a substantial amount, are we surprised that it isn’t embraced? This technology is driven by the corporate counsel, who are paying the discovery bills. As they catch on, and more systems move toward TAR 2.0 simplicity and flexibility, you will see the practice become standard for every review.

Especially with respect to his last sentence, I hope John is right.

Can You Be a “Salesman” and Still Be Part of the Solution?

I had a job interview by telephone last week. The position’s job posting read as though it had been lifted from my career bucket list; everything I want my career to be, and all the experience I have obtained, meshed perfectly with the contents of the job description.

I knew, however, that there might be more here than meets the eye when, upon initial contact, the interviewer mentioned that in addition to everything listed on the job posting, this would be “a true sales position”. I love to evangelize and identify solutions. I HATE to “sell”.

I thought the interview went fairly well (at least, for purposes of demonstrating my expertise). The interviewer disagreed; he even told me so during the call, saying that he didn’t hear me steering the conversation forcefully enough to specific solutions that could be presented. (Never mind the fact that the list of solutions this company represents is outdated and incomplete on their website, so I wasn’t sure what to recommend. The message was clear: I wasn’t SELLING hard enough.)

This brings me to a recent post on LinkedIn by Damian A. Durrant of Catalyst, entitled “More solving, less ‘selling’”. He believes as I do: don’t sell, SOLVE.

Sales is push, it says I am ramming something, anything, down your throat lubricated with lunch whether you need it or not. Unpleasant. Consulting is pull, it says I believe I have something that will help you, let’s talk about it. Better.

I have been a salesman. I have been a consultant. I much prefer the latter, as I am working to provide solutions. A salesman will make his numbers for the month. A solution provider will be someone the client goes back to again and again, because the provider makes the client’s job easier and less expensive. It’s the difference between making a one-time sale, and building a true relationship.

The e-discovery industry needs to shed itself of its copying and scanning “salesy” origins and start behaving more like the advisory firms, albeit more creatively, more nimbly and without the hefty billing rates.

Nicely said, Damian. Nicely said indeed.

I highly recommend you read his message.

It’s Worth The Reminder

If you have done one of these published “Q&A” things before, as I have, you know that the author not only provides the A, but also the Q. The author gets to emphasize exactly what she wants to emphasize, in exactly the way she wants to emphasize it. That being said, Gabriela Baron reminds us of some important ethical points on the subject of technology-assisted review that need emphasizing: specifically, that the ethical attorney must develop at least some competence with the technology:

Comment 8 to ABA Model Rule of Professional Conduct 1.1 requires lawyers to ‘keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.’ Lawyers need not become statisticians to meet this duty, but they must understand the technology well enough to oversee its proper use.

Her blog post is a good, succinct summary, and one worth revisiting whenever our memories need refreshing.

Chain Chain Chain …

Here’s a worthy reminder from Amy Bowser-Rollins of the need to maintain chain of custody logs while collecting eDiscovery. With all the emphasis these days on TAR, it’s nice to be reminded of the fundamentals every once in a while.
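The core of any chain of custody log is simple: who had the evidence, when, and proof that it hasn’t changed. A minimal sketch (the field names and CSV format are my own assumptions, not a forensic standard):

```python
# Append a chain-of-custody entry for an evidence file. The hash is
# the crucial part: it proves the bytes are unchanged between events.
import csv
import hashlib
from datetime import datetime, timezone

def log_custody_event(evidence_path, custodian, action, log_path="custody_log.csv"):
    sha256 = hashlib.sha256()
    with open(evidence_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # hash without loading the whole file
            sha256.update(chunk)
    with open(log_path, "a", newline="") as log:
        csv.writer(log).writerow([
            datetime.now(timezone.utc).isoformat(),  # when
            evidence_path,                           # what
            custodian,                               # who
            action,                                  # e.g. "collected", "transferred"
            sha256.hexdigest(),                      # integrity check
        ])
```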

“The man who complains about the way the ball bounces is likely the one who dropped it.” – Lou Holtz

Craig Ball, Predictive Coding, and Wordsmithing

Boy, I wish I could write like Craig Ball does.

I have written many articles and blog posts on technology-assisted review, but all my thousands of words cannot communicate my beliefs on the subject as gracefully, powerfully, and concisely as Craig recently put it:

Indeed, there is some cause to believe that the best trained reviewers on the best managed review teams get very close to the performance of technology-assisted review. …

But so what?  Even if you are that good, you can only achieve the same result by reviewing all of the documents in the collection, instead of the 2%-5% of the collection needed to be reviewed using predictive coding.  Thus, even the most inept, ill-managed reviewers cost more than predictive coding; and the best trained and best managed reviewers cost much more than predictive coding.  If human review isn’t better (and it appears to generally be far worse) and predictive coding costs much less and takes less time, where’s the rational argument for human review?

So, um … yeah, what he said.
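To hang rough numbers on Craig’s argument – every figure below is my own hypothetical, not from his post:

```python
# Hypothetical cost comparison: full manual review vs. predictive
# coding. All counts and rates are illustrative assumptions.
corpus = 100_000   # documents collected
pace = 50          # documents reviewed per hour (assumed)
hourly = 40        # contract reviewer billing rate, $/hour (assumed)

full_review = corpus / pace * hourly           # review everything
tar_review = (corpus * 0.05) / pace * hourly   # review only ~5% of the corpus
print(f"Full manual review: ${full_review:,.0f}")                      # $80,000
print(f"Predictive coding:  ${tar_review:,.0f} plus technology fees")  # $4,000
```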

The Terminal Legal Hold: Pippins v. KPMG

I came up with the term “terminal legal hold” to describe the situation faced by an enterprise that can’t bring itself to delete obsolete data, lest some of that data prove responsive in future litigation while the organization’s document destruction policy fails the Zubulake v. UBS Warburg “systematic and repeatable” test. The enterprise fears sanctions so greatly that it never deletes anything. For obvious reasons, this is not a best practice.

A New York federal court, however, has now tacitly approved of — indeed, ordered — the “terminal legal hold”.

Pippins v. KPMG is being litigated before Magistrate Judge James L. Cott, in the Southern District of New York. KPMG is being sued by two as-yet-uncertified classes of audit associates who claim that they were misclassified as exempt employees under the Fair Labor Standards Act, and therefore are owed overtime pay. There are as many as 9,000 potential class members, and thus as many as 9,000 hard drives which they may have used. Counsel for the two parties could not agree on the sampling criteria or the number of drives to include in the sample. KPMG asserted that the cost to preserve the more than 2,500 drives currently in its possession was more than $1.5 million, and proposed that for the sake of proportionality, one hundred randomly-selected hard drives should be preserved as the sample set.

On October 11, Judge Cott ruled that KPMG has to preserve the hard drive of every potential class member. Because the district judge had not yet ruled on class certification, every auditor was a potential plaintiff and therefore a “key player” as defined in Zubulake v. UBS Warburg. “With so many unknowns involved at this stage in the litigation,” Judge Cott wrote, “permitting KPMG to destroy the hard drives is simply not appropriate at this time.”

KPMG filed an objection brief to the district judge on October 28, writing, “The ‘key player’ analysis has never been extended to require the preservation of ESI of every potential member of a putative class or proposed FLSA collective action.” Also:

[N]ever has it been held that an employer on notice of a putative class action or proposed collective must impose a ‘litigation hold’ and preserve ESI (among other materials) for every current or former employee who theoretically could bring an individual action in the future. If companies were required to retain documents whenever there is a mere possibility that they could be sued, they effectively would face a perpetual duty to preserve and thus would be unable to implement document-retention policies.

In other words: a terminal legal hold. Leonard Deutchman referred to this today as “the perfect e-discovery storm”:

At virtually the earliest moment in the litigation, the plaintiffs require the defendant to spend a remarkable amount of money simply on preservation — the cost to search, review and produce e-discovery has not yet even been discussed. … If the legal claims are insufficient or the class uncertifiable, millions will have been wasted in preservation; if, however, the allegations are shown to be strong and the class intact but the drives are not preserved, the defendant may then have been allowed to destroy, or let be destroyed, the mythical smoking gun ESI. Because the cost of preservation is so high, the issue of cost has arisen earlier than it usually does (when calculating the costs of processing, searching and production) — so early that neither side has the facts to support its position. Thus, the potential for gross injustice lies in taking either position.

This led to the filing of an amicus brief by the United States Chamber of Commerce on November 8, arguing that the magistrate judge got it wrong. “’Key players’ … could not, and does not, embrace every member of a putative class of thousands. … Put bluntly: no absent member of a properly certified class or non-party to a properly certified collective action should be a ‘key player.'”

Judge Cott may have gotten the “key player” analysis wrong, but Deutchman argues that the judge otherwise made the right call:

As a legal matter, and as a way of governing e-discovery practice, the court was wise to enforce the rules as they are by denying both sides’ motions, advising the defendant to allow the plaintiffs to examine the sample drives and letting the parties then act in their enlightened self-interest. In so doing, the court instructs those who follow to act as the defendant should have rather than as it did. Cooperation generally works when the parties act in their enlightened self-interest. By interpreting the rules properly, the court “enlightened” the defendant as to what its self-interest truly was. Presumably it, and those reading the opinion, will now know how to act.

My take on this: Judges are typically referees, and should not take it upon themselves to rescue parties from their own mistakes. However, every rule has an exception, and this strikes me as a valid one.

KPMG said preserving each hard drive would cost $600; multiply by 9,000 hard drives, and they will have spent $5.4 million before processing a single file. Even if KPMG should have made stronger efforts to cooperate on sampling of the preserved hard drives (and, in my opinion, they should have), Judge Cott’s decision sets a dangerous precedent in favor of dilatory plaintiffs who would rather win their cases through expense and attrition than on the merits.

While as a commentator I’d like to be less cynical and believe that most plaintiffs want their cases litigated fairly, my experience as a defense litigator has taught me otherwise. If a savvy plaintiff’s lawyer sees an opportunity to make a case so expensive that the defendant will gladly settle regardless of culpability, the lawyer will seize it. The higher the potential expense, the greater the settlement. If Judge Cott’s order is allowed to stand, the mere threat of class certification would be enough to send large defendants reaching for their checkbooks rather than beginning the expensive task of preserving hard drives that might contain evidence that might be of use in some unspecified, unfiled, and unthreatened future litigation. The net result? Cases won’t be tried on the merits, and no enterprise will ever delete anything again.

I would have preferred that Judge Cott force the parties to agree on a sampling protocol, appointing a special master if need be, and allow KPMG to manage its own preservation of hard drives upon pain of sanctions if they mess it up (the cost of which, in all likelihood, would be far less than the cost of preserving all 9,000 hard drives).

(Update 1/9/12: Law.com’s Evan Koblentz reports this morning that the parties may reach a resolution on this issue.)