Part of the Solution

Idealistic musings about eDiscovery


Why Hasn’t TAR Caught On? Look In The Mirror.

Oh, this is good. If you haven’t already signed up for the ALM Network (it’s free, as is most of their content), it’s worth doing so just to read this post (the first of a two-part series) from Geoffrey Vance on Legaltech News. It pins the failure of technology-assisted review (TAR) to catch on right where it belongs: on the attorneys who refuse to get with the program.

As I headed home, I asked myself, how is it—in a world in which we rely on predictive technology to book our travel plans, decide which songs to download and even determine who might be the most compatible on a date—that most legal professionals do not use predictive technology in our everyday client-serving lives?

I’ve been to dozens of panel discussions and CLE events specifically focused on using technology to assist and improve the discovery and litigation processes.  How can it possibly be—after what must be millions of hours of talk, including discussions about a next generation of TAR—that we haven’t really even walked the first-generation TAR walk?

Geoffrey asks why attorneys won’t get with the program. In a comment to the post, John Tredennick of Catalyst lays out the somewhat embarrassing answer:

Aside from the fact that it is new (which is tough for our profession), there is the point that TAR 2.0 can cut reviews by 90% or more (TAR 1.0 isn’t as effective). That means a lot of billable work goes out the window. The legal industry (lawyers and review companies) live and die by the billable hour. When new technology threatens to reduce review billables by a substantial amount, are we surprised that it isn’t embraced? This technology is driven by the corporate counsel, who are paying the discovery bills. As they catch on, and more systems move toward TAR 2.0 simplicity and flexibility, you will see the practice become standard for every review.

Especially with respect to his last sentence, I hope John is right.

Can You Be a “Salesman” and Still Be Part of the Solution?

I had a job interview by telephone last week. The job posting read as though it had been lifted from my career bucket list; everything I want my career to be, and all the experience I have obtained, meshed perfectly with the job description.

I knew, however, that there might be more here than meets the eye when, upon initial contact, the interviewer mentioned that in addition to everything listed on the job posting, this would be “a true sales position.” I love to evangelize and identify solutions. I HATE to “sell.”

I thought the interview went fairly well (at least for purposes of demonstrating my expertise). The interviewer disagreed; he even told me so during the call, saying that he didn’t hear me steering the conversation forcefully enough toward specific solutions that could be presented. (Never mind that the list of solutions this company represents is outdated and incomplete on its website, so I wasn’t sure what to recommend.) The message was clear: I wasn’t SELLING hard enough.

This brings me to a recent post on LinkedIn by Damian A. Durrant of Catalyst, entitled “More solving, less ‘selling’.” He believes as I do: don’t sell, SOLVE.

Sales is push, it says I am ramming something, anything, down your throat lubricated with lunch whether you need it or not. Unpleasant. Consulting is pull, it says I believe I have something that will help you, let’s talk about it. Better.

I have been a salesman. I have been a consultant. I much prefer the latter, because I am working to provide solutions. A salesman will make his numbers for the month. A solution provider is someone the client comes back to again and again, because the provider makes the client’s job easier and less expensive. It’s the difference between making a one-time sale and building a true relationship.

The e-discovery industry needs to shed itself of its copying and scanning “salesy” origins and start behaving more like the advisory firms, albeit more creatively, more nimbly and without the hefty billing rates.

Nicely said, Damian. Nicely said indeed.

I highly recommend you read his message.

It’s Worth The Reminder

If you have done one of these published “Q&A” pieces before, as I have, you know that the author provides not only the A but also the Q. The author gets to emphasize exactly what she wants to emphasize, in exactly the way she wants to emphasize it. That being said, Gabriela Baron reminds us of some important ethical points on the subject of technology-assisted review that need emphasizing: specifically, that the ethical attorney must develop at least some competence with the technology:

Comment 8 to ABA Model Rule of Professional Conduct 1.1 requires lawyers to ‘keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.’ Lawyers need not become statisticians to meet this duty, but they must understand the technology well enough to oversee its proper use.

Her blog post is a good, succinct summary, and one worth revisiting whenever our memories need refreshing.

Proportionality in Discovery: Example #243

Courtesy of K&L Gates comes this recent opinion from a U.S. District Court in California, in which the judge points out that you can’t very well conduct discovery with any sense of proportionality if you don’t know what the damages in question are:

[T]he court indicated that Plaintiff’s “tight-lipped” disclosures regarding damages, including indicating its desire for the defendant to wait for Plaintiff’s expert report, were “plainly insufficient.”  The court went on to reason that “[e]ven if [Defendant] were willing to wait to find out what this case is worth—which it is not—the court still needs to know as it resolves the parties’ various discovery-related disputes.  Proportionality is part and parcel of just about every discovery dispute.” (Emphasis added.)

Moral of the story: modern discovery is not compatible with a plaintiff mindset of “We won’t specify an amount of damages sought, because doing so might shortchange our potential recovery.”

Why Manual Review Doesn’t Work

I’ve had the occasional conversation with Greg Buckles in which we take opposing views on the validity of the 1985 Blair-Maron study. Herb Roitblat now weighs in with a quite scientific, yet blissfully simple, explanation of why manual review should never be considered the “gold standard” for document review accuracy.

It may seem that we have effective access to all of the information in a document, but the available evidence suggests that we do not. We may be confident in our reading ability, but at best we do a reasonable job with the subset of information that we do have.

Read through to Herb’s conclusion to see what (besides the obvious) this has to do with technology-assisted review. It’s worth the read.