How lawyers can improve their use of Machine-Assisted Review.
The fact is, when it comes to electronic discovery, lawyers take a disproportionate role in the battle to solve what is largely a technological problem, i.e., how to find information that is relevant to a case in the huge vats of Electronically Stored Information (“ESI”) that are collected and preserved in many cases or government investigations. . . .
Clients would be wise to heed the recent history of environmental law, when science and law collided, and actively seek out a balance of technology and technologists working in conjunction with their lawyers. Together they are equipped to solve their problems, rather than focusing on “lawyer-centric” solutions, examples of which will be provided in the rest of this article. . . .
Unlike the key word approach, the modern “machine-assisted review” approach is based on selecting a statistically significant, random sample of documents that are coded for relevance by a team of “expert” reviewers. This seed set is then used by text analytics software — similar to that used in spam filtering, or by IBM’s Watson, the computer that played Jeopardy — to apply the experts’ relevancy decisions to the rest of the ESI. In comparison to the key word approach, machine-assisted review is much more accurate, faster, and cheaper. The reason, I am frequently told, that our profession doesn’t immediately adopt machine-assisted review is simply that it’s easier to continue with the status quo than to educate lawyers and judges on why the technology is better, faster, and more auditable, and, conversely, how inferior linear review is. . . .
The purpose of this article is to suggest that companies, outside law firms, and judges need to do a better job of embracing technology and bringing technologists to the table if we are truly going to make progress in addressing the problems caused by technology overload, over-preservation, and search and retrieval. Electronic discovery is a legal issue with which lawyers, as trusted counselors, must do a better job — not just by looking for more skill within their own ranks, but by embracing the inclusion of technologists to collectively solve these messy technical problems. That requires work from attorneys: learning about the technology, vetting the various offerings from different vendors, and creatively adopting what works.
And this piece from Magistrate Judge Andrew Peck on computer-assisted coding, or predictive coding, rather than key-word searches.
If the hot topic in 2010 conferences was proportionality, this year it is computer-assisted coding, often generically called “predictive coding.” By computer-assisted coding, I mean tools (different vendors use different names) that use sophisticated algorithms to enable the computer to determine relevance, based on interaction with (i.e., training by) a human reviewer.
Unlike manual review, where the review is done by the most junior staff, computer-assisted coding involves a senior partner (or team) who reviews and codes a “seed set” of documents. The computer identifies properties of those documents that it uses to code other documents. As the senior reviewer continues to code more sample documents, the computer predicts the reviewer’s coding. (Or, the computer codes some documents and asks the senior reviewer for feedback.)
When the system’s predictions and the reviewer’s coding sufficiently coincide, the system has learned enough to make confident predictions for the remaining documents. Typically, the senior lawyer (or team) needs to review only a few thousand documents to train the computer.
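The stopping rule Judge Peck describes — train until the system’s predictions and the reviewer’s coding sufficiently coincide — can be sketched as a simple agreement check. This is an illustration only: the 95% agreement target below is an assumed example, not a figure from the text, and real tools use more sophisticated statistical stabilization criteria.

```python
# Illustrative stopping rule for iterative training: rounds continue until
# the model's predictions agree with the senior reviewer's coding often
# enough. The 0.95 target is an assumed example, not from the article.
def sufficiently_trained(predictions, reviewer_coding, target_agreement=0.95):
    """Both arguments are lists of booleans (relevant / not relevant)
    for the same sample of documents coded in this training round."""
    matches = sum(p == r for p, r in zip(predictions, reviewer_coding))
    return matches / len(reviewer_coding) >= target_agreement
```

Once this check passes, the system is trusted to predict coding for the remaining documents, which is why only a few thousand reviewed documents are typically needed.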
Some systems produce a simple yes/no as to relevance, while others give a relevance score (say, on a 0 to 100 basis) that counsel can use to prioritize review. For example, the documents scoring above 50 may contain 97% of the relevant documents while constituting only 20% of the entire document set.
Counsel may decide, after sampling and quality control tests, that documents with a score below 15 are so highly likely to be irrelevant that no further human review is necessary. Counsel can also weigh the cost-benefit of manual review of the documents with scores of 15-50.
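The score-and-threshold workflow above can be sketched in a few lines of Python. Everything here is illustrative: the scorer is a toy term-overlap model standing in for a vendor’s proprietary text-analytics engine, and the 15/50 cutoffs are the hypothetical values from the text, which in practice would be validated by the sampling and quality control tests counsel performs.

```python
# Toy sketch of the predictive-coding triage workflow: train on a seed set
# coded by senior reviewers, score the remaining documents 0-100, and
# bucket them by the thresholds discussed above. The "model" is a naive
# term-overlap scorer, not a real vendor algorithm.
from collections import Counter

def train(seed_set):
    """seed_set: list of (text, is_relevant) pairs coded by the senior team."""
    relevant, irrelevant = Counter(), Counter()
    for text, is_relevant in seed_set:
        (relevant if is_relevant else irrelevant).update(text.lower().split())
    return relevant, irrelevant

def relevance_score(text, model):
    """0-100 score: share of the document's terms that lean 'relevant'."""
    relevant, irrelevant = model
    terms = text.lower().split()
    if not terms:
        return 0
    hits = sum(1 for t in terms if relevant[t] > irrelevant[t])
    return round(100 * hits / len(terms))

def triage(docs, model, low=15, high=50):
    """Bucket unreviewed documents by score, per the thresholds in the text:
    below `low` is presumed irrelevant, above `high` is presumed relevant,
    and the middle band goes to manual review."""
    buckets = {"discard": [], "manual_review": [], "likely_relevant": []}
    for doc in docs:
        score = relevance_score(doc, model)
        if score < low:
            buckets["discard"].append(doc)
        elif score <= high:
            buckets["manual_review"].append(doc)
        else:
            buckets["likely_relevant"].append(doc)
    return buckets
```

In a real matter the cutoffs would be defended with sampling statistics (e.g., estimating how many relevant documents fall below the discard line), which is exactly the quality-control step the excerpt describes.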
To my knowledge, no reported case (federal or state) has ruled on the use of computer-assisted coding. While anecdotally it appears that some lawyers are using predictive coding technology, it also appears that many lawyers (and their clients) are waiting for a judicial decision approving of computer-assisted review.