Comment 8 to Rule 1.1 of the Model Rules of Professional Conduct states that a lawyer “should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”
Ask any practitioner and they are likely well aware of the supposed “risks” of technology-assisted review (TAR), but ongoing discussion of its benefits suggests that these perceived risks are little more than widespread misconceptions. As TAR continues to move into mainstream case management and document review, it is plausible that, in the not-too-distant future, failing to use TAR will be perceived as a breach of an attorney’s duty to their clients, if not simply a bad business decision.
So, let’s put an end to some of these myths and further illustrate the benefits of TAR:
Myth: Technology-assisted review “fully automates” review, thereby replacing skilled legal judgment.
Reality: Technology-assisted review is a tool that leverages expert legal judgment as part of a review process. Many of the misconceptions about TAR likely stem from the belief that it automatically identifies and codes documents absent any input from the user. To date, most TAR offerings rely on rigorous legal analysis from expert trainers to produce results that identify and prioritize documents in the corpus. Those trainers must continually test the system and tweak it to achieve the desired results. Furthermore, those training the system may override the machine’s suggestions and manually review each document before production if they so desire.
Simply put, TAR is a tool that is only as effective as the humans who train and test it.
Myth: Technology-assisted review is not as accurate or consistent as keyword searches.
Reality: Processes leveraging TAR can produce more accurate results than keyword searching. Many in the legal profession reject the notion that TAR can identify the “smoking gun” and other relevant documents more effectively than keyword searches. However, there is substantial research suggesting just the opposite.
Research on keyword searches, such as Blair and Maron’s 1985 study, found that such searches produced an average recall of 20 percent, meaning approximately 80 percent of responsive documents were missed. More contemporary research, such as the TREC Legal Track, confirmed these results, with average recall for keyword searches hovering below 25 percent. Additionally, TREC found relatively low precision (the proportion of retrieved documents that are actually relevant), with more than 70 percent of identified documents deemed irrelevant upon review.
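To make these two metrics concrete, here is a minimal sketch of how recall and precision are calculated. The document counts below are hypothetical, chosen only to illustrate the arithmetic, and are not figures from either study:

```python
# Hypothetical illustration of recall and precision in document review.
# Suppose a collection contains 1,000 truly responsive documents, and a
# keyword search returns 700 documents, 200 of which are responsive.

responsive_in_collection = 1000  # all responsive documents in the corpus
retrieved = 700                  # documents returned by the search
responsive_retrieved = 200       # returned documents that are responsive

# Recall: what fraction of the responsive documents did the search find?
recall = responsive_retrieved / responsive_in_collection

# Precision: what fraction of the returned documents are responsive?
precision = responsive_retrieved / retrieved

print(f"Recall: {recall:.0%}")        # 20% found, so 80% were missed
print(f"Precision: {precision:.0%}")  # most returned documents are irrelevant
```

A 20 percent recall in this sketch means 800 of the 1,000 responsive documents were never retrieved, which is the gap the studies above describe.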
Many studies have found that processes leveraging TAR produce results at least as accurate as manual review. The 2011 TREC Legal Track found that “the [TAR] efforts of several participants achieve recall scores about as high as might reasonably be measured using current evaluation methodologies.” Furthermore, a recent study by the RAND Institute for Civil Justice found that, despite the inability to compare manual review to TAR without some degree of “unrealism or artificiality, the empirical evidence that is currently available does suggest that similar results would be achieved with either approach.”
With the above myths dispelled, naysayers may see TAR as a burden imposing a series of extra steps, or raise concerns about costs. Join us for the final installment in this series as we address those concerns.