
Document Review: MythBusters Edition

A couple of years ago, I wrote a blog post titled “Bust These 4 Myths on Your Next Document Review,” in which I looked at four common document review myths and the realities behind them. Fast forward two years: there is no better time to revisit these myths and take the pulse of document review in 2017, considering the evolution of ediscovery technology, processes, rules and case law.

Myth #1: Document review just happens; you don’t really need a plan.

Since the adoption of the FRCP amendments, we have seen courts admonish parties for:

  • Discovering new documents not in the original collection
  • Missing documents that should have been produced the first time around
  • Amassing costs for inefficient discovery methods

With document review technology at the top of its game, the misconception that document review is trivial is fading. In meeting with corporations and law firms, I hear legal teams appreciating the importance of having a review methodology. More often than not, those teams are inquiring as to how their processes can be improved.

2017 Document Review Lesson #1: Don’t procrastinate or wander aimlessly when it comes to review. Know your path from collection through production and be able to justify your methods.

Myth #2: Any attorney can conduct (or manage) a document review.

Today, document review is not the unglamorous chore of former times. With advancements in review tools, senior attorneys are increasingly finding themselves immersed in document review. The tools are easier to operate than ever before, and senior attorneys – typically subject matter experts on the case – are in the best position to review the most pertinent documents, especially if predictive coding is used.

At the same time, in order to fully leverage analytics and predictive coding features, the attorney will need advanced training or someone skilled in using these powerful features to guide them. With formidable technology at their fingertips and millions of documents to wrangle, today’s document reviewers are not only licensed and highly qualified attorneys but may also have specific training and certifications in various document review platforms. Many have expertise in a language other than English, or substantive knowledge in their practice area.

2017 Document Review Lesson #2: The days of brute force attorney review are over. Today’s document review requires subject matter experts in the case, working side-by-side with technology-minded attorneys who know how to maximize time and minimize costs.

Myth #3: All document review technology is equal.

What appears equal at face value is not equal in action. While most major review tools function in a generally similar manner, there are enhancements unique to a particular provider and its tool set. From running searches and batching documents to using predictive coding or reviewing audio files, experienced document reviewers will recognize the fine distinctions of each provider’s platform, knowing when and how those features can be helpful. If they cannot answer a question, reviewers should know how to get hold of the technology provider’s technical support team to lend a hand.

2017 Document Review Lesson #3: Get into the technology weeds. Understanding the nuances of a provider’s technology is the only way to reap the benefits of a modern document review.

Myth #4: It will be obvious when you can stop review…when you run out of documents.

Predictive coding has changed how legal teams approach document review; however, even in 2017, the adoption of this technology is marginal at best. Outmoded teams are still conducting linear reviews of every document, while progressive teams have figured out how to embrace predictive coding so that only the most vital documents are reviewed for production. But this does require a team that knows how to sample and interpret the metrics and reports generated by the technology.

2017 Document Review Lesson #4: The terminology related to predictive coding can cause one’s head to spin. Dust off your math skills (or leverage a specialist) – it’s the only way a savvy document review professional knows when a review is finished.

Leverage KrolLDiscovery for Document Review

Looking to modernize your document review methods?

KrolLDiscovery offers advanced document review services around the world, with fully managed review teams and up-to-date facilities in eight locations in four countries: Washington D.C., Chicago, Pittsburgh, Miami, Minnesota, London, Poland and Germany. KrolLDiscovery’s managed document review services teams provide you with specialized document review attorneys to meet your case needs. KrolLDiscovery review platforms are integrated with top-of-the-line technology-assisted review and predictive coding features to search, categorize, redact and annotate documents. Our review teams utilize this technology to maximize efficiency through intelligent document prioritization and categorization, automated workflow, advanced search functionality and multilingual support.

Bust these 4 Myths on Your Next Document Review

Conducting an effective and efficient legal document review requires a hybrid of rocket science, brain chemistry and hot coffee. With ediscovery technology and best practices constantly evolving, if your document review practices haven’t changed in the last five years, you are likely wasting time and money for your organization and/or your client. To ensure your practices meet today’s demands, bust these four document review myths on your next ediscovery project.

Myth #1: Document review just happens; you don’t really need a plan.

There is an outmoded notion that document review is a side-show to the main ediscovery attraction; however, that view could not be farther from the truth. Without question, document review is the most expensive aspect of discovery; accordingly, planning for the review should start at the same time the case team is thinking about preservation and collection. Failing to create a big picture plan and a thorough review manual can result in:

  • Multiple “side reviews” for special issues or newly discovered documents
  • Downtime for reviewers as they wait around for more work
  • Documents being touched two or three times

Lesson #1: Don’t let any member of your case team procrastinate when it comes to planning for review.

Myth #2: Any old attorney can conduct (or manage) a document review.

With formidable technology at their fingertips and millions of documents to wrangle, today’s document reviewers are anything but humdrum lawyers. Today’s document reviewers are not only licensed and highly qualified attorneys, but they often have specific training and certifications in various document review platforms. They understand how to employ the strictest quality control protocols to maximize time and minimize the cost of a review. Further, sometimes a review requires expertise in a language other than English or substantive knowledge in the following areas: banking and financial services, pharmaceuticals, insurance, telecommunications, technology, agriculture, transportation, oil and gas, securities, and more.

Lesson #2: Make sure you are equipped with the right people resources for your next review.

Myth #3: All document review technology is equal.

While most major review platforms function in a generally similar manner, there are significant differences in how the capabilities are executed by users – which impacts the expected results of a specific function. Rock star document review professionals understand and can articulate the features and benefits of multiple review tools and know how and when to leverage features such as workflow, predictive coding, near de-duplication, topic grouping, smart searching and machine translation (just to name a few of the vital technologies key to any modern review). Running searches, batching and assigning documents, categorizing, and performing QC should be second nature for experienced reviewers, and they should know the nuances of these functions in any specific technology platform. If they cannot answer a question, they should know how to get hold of the technology provider’s technical support team to lend a hand.

Lesson #3: You should not settle for anything less than document review technology superstars. 

Myth #4: It will be obvious when you can stop your review…when you run out of documents.

With the advent of predictive coding and other technology-assisted review tools, attorneys’ eyes no longer need to be placed on every document. As such, determining when to call the review “complete” is complex. Today’s document review professionals are skilled at interpreting metrics and reports generated by the document review technology and know when the numbers are showing a high-quality review versus an incomplete one.

Lesson #4: Dust off your math skills (or leverage a specialist) – it’s the only way a savvy document review professional knows when a review is finished.

Leverage Kroll Ontrack for Document Review

With Kroll Ontrack, your document review could not be in better hands, eyes or minds. For nearly a decade, Kroll Ontrack has offered advanced document review services in the US. Now, with a document review facility in London, England – newly opened in December 2014 – Kroll Ontrack’s document review expertise is global.

Technology Empowers Legal Review Teams and Increases Document Review Efficiency


In his recent New York Times article, John Markoff highlights how “advances in artificial intelligence” (ediscovery software in particular) have created a shift in the way law firms and corporations conduct legal discovery. While Mr. Markoff agrees that technology provides greater efficiencies and cost savings in document review, he claims that a heavier reliance on technology will ultimately result in a decreased demand for lawyers. While we agree that litigants are relying more heavily on technology during discovery, the suggestion that this will have negative repercussions for the practice of law is misguided.

Document review is, by far, the most expensive part of the legal discovery process. Today, for every $1 spent on processing data, $3 or more is spent on document review. This disparity makes traditional document review economically impractical and counter to notions of proportionality and reasonableness. Furthermore, recent court decisions and learned commentary from reputable organizations such as The Sedona Conference® have opened the door for technology to bring efficiencies to the costly practice of document review.

Kroll Ontrack has a long history of developing groundbreaking technology solutions calculated to make the discovery process less expensive and more efficient. These innovations include advanced search technologies, multilingual character recognition and early data assessment. Most recently, Intelligent Review Technology (IRT), which integrates smart technology with legal expertise, has been developed and implemented in the award-winning Ontrack® Inview™ document review tool. IRT expedites document review decisions by prioritizing and suggesting categories for yet-to-be-reviewed documents. IRT learns from lawyers while they work, strengthens the defensibility of document production decisions and ultimately empowers attorneys to focus on what they do best – devising case strategy and making tactical decisions. Most importantly, studies show that this powerful technology can reduce the expense of document review by more than 50 percent.

“Litigants and their counsel are relentlessly pursuing more efficient, cost-effective legal discovery processes,” said George May, vice president of product strategy, Kroll Ontrack. “It’s inevitable that reliance on technology will continue to increase as clients strive to reduce the total cost of litigation.”

Automated Workflow – Providing Structure to Document Review

The traditional practice of document review has a lot of moving parts. It often involves many teams of people, working under tight timeframes, searching for relevant information and striving to produce in a timely manner. As discovery practices have evolved, legal technology service providers have responded with new technologies designed to reduce data volumes, enhance search capabilities, and improve analytics and reporting. However, little time has been spent improving the process of document review.

Automated workflow has changed the document review landscape, improving this often cumbersome process. After all, administration and execution of the review process should run like clockwork. Even when all of the essentials for a document review are in place – reviewers, facilities, training and technology (including up-to-date advanced searching techniques) – the process requires significant planning, oversight and the ability to smoothly absorb hiccups along the way. Automated workflow functionality, which enables document review administrators to design, manage and monitor the document review process, implements defensible quality control mechanisms and allows reviewers to efficiently check in completed work and request more documents.

Workflow Design

Using automated workflow technology, document review administrators can create workflow components to define an automated document review process and associated rules. Rather than sketching the process on loose-leaf pages, giant whiteboard charts or even inside someone’s head, administrators can visually design the workflow directly within the review tool using an intuitive interface to view, create, copy or delete components in an on-screen display. Next, review managers can assign document sets to specific reviewer groups, define review criteria and determine how documents will flow through the review stream. To ease the setup process, each component of the workflow corresponds to a particular question to help define the state, reviewer group, document check-in criteria, document routing and routing automation. Additionally, if a review must start with limited review criteria, more complex, defined criteria can be added later. The resulting process is defensible, repeatable and efficient.

Automated Distribution

Document review administrators can also eliminate the process of manually distributing document batches to reviewers. Automated workflow features allow administrators to define and electronically distribute document sets to designated reviewer groups. During the review process, reviewers can mark documents as incomplete, flag them for a manager or check them in as complete. Documents that meet check-in criteria then automatically proceed to the next applicable stage based on the defined review stream. However, documents will fail check-in if the reviewer attempts to designate conflicting categorizations. At that point, the reviewer is notified to correct the categorization decision. This predefined process increases efficiency, productivity and accuracy. It also eliminates time otherwise wasted in manual distribution and collection and frees the administrator for other important tasks, such as quality control (QC). The result is a faster and more cost-effective review process.
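
The check-in validation described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation; the category names and the conflict rule are invented for the example.

```python
# Hypothetical sketch of automated check-in: a document passes only if its
# categorizations contain no conflicting pair; otherwise the reviewer is
# asked to correct the decision. Category names here are illustrative.
CONFLICTING_PAIRS = {
    ("responsive", "not_responsive"),
    ("privileged", "produce_as_is"),
}

def check_in(categorizations):
    """Return (ok, message); fail check-in on any conflicting pair."""
    tags = set(categorizations)
    for a, b in CONFLICTING_PAIRS:
        if a in tags and b in tags:
            return False, f"conflict: '{a}' vs '{b}' - please correct"
    return True, "checked in; routing to next review stage"

# A reviewer tags a document both privileged and produce-as-is: check-in fails.
ok, msg = check_in(["responsive", "privileged", "produce_as_is"])
```

In a real workflow the failing message would be surfaced to the reviewer for correction before the document advances to the next stage.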

Accurate Monitoring

In a manual workflow, administrators must monitor reviewers closely to minimize lag time and ensure reviewers are supplied with ongoing assignments. Status reports can be run only at the end of a workday, forcing administrators to stay late in order to track the progress of the review. Automating the process alleviates the strains associated with the often tedious spreadsheet creation. Using an intuitive interface, administrators can electronically access and manage the review, monitor progress and address problem documents as they arise. Real-time graphical status reports display high-level or detailed views of the review progress based on stages, reviewer groups or individual reviewers. Additionally, enhanced reporting capabilities enable the administrators to self-schedule project summary reports and choose to have them distributed over e-mail. With these features, the administrators can easily update team members and clients, and can more accurately project the completion date and related staffing needs.

Quality Control & Defensibility

Improved quality control is another benefit of automated workflow. Rather than running hundreds of QC searches at the end of a review, administrators can utilize automated quality control capabilities to validate the consistency of review decisions, ensure accurate categorization of documents and verify that a logical end result is reached. When a misclassification is detected, an administrator can immediately notify the reviewer to correct the issue, thereby saving time normally reserved for extensive QC in the final stages of a review.

Managing a document review project of any size can seem unwieldy, complicated and nearly endless. Taking advantage of new automated workflow capabilities to manage and execute the document review process will ease the strain on human resources and help clients drive down costs and advance to the next stages of the litigation.

Early Case Assessment and Automated Document Review Workflow Technology Results in Increased Efficiency and Cost Savings

Through its cutting-edge technologies and noted expertise, Kroll Ontrack provides effective solutions that decrease the burden of analysis and review while saving money and time.

The financial crisis that gripped the nation two years ago has spawned litigation relating to poor investments, bad banking and a myriad of consumer disputes. A leading national bank enlisted the services of Kroll Ontrack to design an efficient solution to manage significant amounts of data involved in a series of litigations. The client turned to the proven services and technologies offered by Kroll Ontrack to reduce the volume of data for processing and review, strengthen defensibility and reduce costs and risk.


Ediscovery.com Review Challenge: Test Your Genius and Win!

Do you have a small matter, a big case, or something in between? Do you have databases that are cluttered with non-critical data? Are you looking for an innovative way to focus on the most relevant documents and procure accurate results while saving time and money? Ediscovery.com Review is the tool for you.

Ediscovery.com Review by Kroll Ontrack delivers fast and accurate results. It is an effective solution to conduct initial data assessment, analysis, review, and document production within a centralized platform. It was also named “Best Predictive Coding Solution” in a recent National Law Journal survey. Bottom line: big or small, ediscovery.com Review is the go-to review tool.

If you are not familiar with the cutting-edge features offered by ediscovery.com Review, now is the time to take the plunge!  Check out our interactive challenge, which gives you a quick glimpse into the key aspects of the all-in-one ediscovery document review tool that puts you in control.

So…what are you waiting for? Take the challenge to learn something new about ediscovery or strengthen the knowledge you already have. And, if you do, you could win a Bose SoundLink Mini Bluetooth Speaker. Busy? Not a problem – we made the quiz fun and easy. So take a break from your hectic day and sit back, relax, and win. Good luck!

Case-in-Point: New Mexico Court Addresses Predictive Coding and Technology Assisted Review

It’s not too often that judges write specifically about predictive coding and technology assisted review (TAR); when they do, it is important to take notice. These opinions can reveal the do’s and don’ts of implementing predictive coding and how to be more successful when using the technology. They also show us that the judiciary is not only becoming more comfortable talking about the technology, but is also accepting it.

The recent decision in New Mexico St. Inv. Coun. v. Bland, 2014 WL 772860 (N.M. Dist. Ct. Feb. 12, 2014) demonstrated that the court was not afraid to address predictive coding and TAR and was savvy on modern document review practices. The investigation performed by Day Pitney and Kroll Ontrack utilized TAR techniques including predictive coding, concept searching, near-duplication, and email threading capabilities. The result was both a success in wading through more than 2.6 million pages and the court’s favorable response to the investigation.

To find out how technology assisted review and predictive coding protocols were implemented in this investigation and to read more about the court’s decision, check out the full case study on ediscovery.com.

The EDRM’s New Computer Assisted Review Reference Model: Explained

Last month I was invited by the EDRM to take part in their webinar on the Computer Assisted Review Reference Model (CARRM). I was joined by three esteemed Technology Assisted Review experts: George Socha (EDRM), Herbert Roitblat (OrcaTec) and Bob Rohlf (Exterro).

We took this chance to dive into the fascinating world that is predictive coding, also known as Technology Assisted Review (TAR), Computer Assisted Review (CAR), or intelligent review.  Predictive coding is the use of computer technologies to rank or categorize a collection of documents as responsive or not based on human review of a subset of the collection.

The talk started with a discussion of how we got to predictive coding today, and why the court’s blessing to use predictive coding in certain civil litigation cases is so important. Since the first blessing by Judge Peck in February of 2012, the number of cases using predictive coding has grown substantially. Without that blessing, it is unlikely predictive coding would still be growing.

The next question we addressed was simply “Why Predictive Coding?” The other experts and I discussed the ways predictive coding saves time and resources by finding the right documents as fast as possible, sorting and grouping documents more efficiently and validating the reviewer’s work before production.

After the opening discussion, we dove into an assortment of predictive coding topics, including the variety of technologies available and their differences, how to conduct effective predictive coding, and predictive coding workflow.

We closed the forum by discussing the best practices in ediscovery and predictive coding:

  • To be efficient, you must know which questions to ask your ediscovery experts – this means doing your research.
  • You need to be proactive in your firm’s ediscovery plan: create a plan and stick to it!
  • Build in quality controls so the results are respectable.
  • Finally, do not be afraid to ask for help when you do not understand the process. The field is very new and growing.

For those of you who missed the webinar and would like a closer look, check out the recording of the EDRM’s New Computer Assisted Review Reference Model (CARRM)—Beyond the Test Drive and be sure to check out Kroll Ontrack’s Slideshare account for the latest presentations and infographics.

Deploying Trainers in Technology Assisted Review (TAR) without “Spoiling the Broth”


Leaving little room for interpretation, the court in Coquina Investments v. Rothstein stated that the defendants’ litany of ediscovery project management pitfalls (which involved over 200 attorneys across two firms) culminated in a “case of too many cooks spoiling the broth.” While Coquina Investments involved format of production issues, the same rationale applies when deploying trainers in technology-assisted review (TAR) —too many trainers can lead to inconsistency and poor machine learning.

Taking Control of the Technology-Assisted Review Kitchen

Using TAR in litigation is strikingly similar to working in a professional kitchen. There are many parts moving on parallel tracks. Just like a pastry chef may begin working on dessert while a grill chef prepares the main dish, you may have reviewers allocated to train a recently found hard drive while a sub-team performs corrective training on a production set. And above all else, in either scenario, nothing leaves the kitchen without a taste test (quality control). But perhaps the most difficult task involves assigning appropriate roles to a diverse cast of employees during the stages of machine training.

  • Lead Attorney: The Chef de Cuisine—in charge of all things related to the kitchen. This role involves making executive decisions like when to stop review, how to provide additional training and who will train the machine.
  • Subject Matter Experts (SMEs): The Sous-Chefs—second-in-command to the Chef de Cuisine. These are attorneys that have a firm knowledge of the nature of the case and the issues involved. They are capable of making high-level decisions and have an expansive knowledge of the dispute.
  • Contract Attorneys: The Chefs de Partie—line cooks responsible for certain areas of production. These are attorneys who are comfortable and trained on the issue at hand, but do not have the level of knowledge possessed by Subject Matter Experts.

Choose Your Recipe

The Chef de Cuisine works closely with the Sous-Chefs to ensure that everyone clearly understands the basics of the recipe so that when the Chef de Cuisine (the Lead Attorney) is out of the “kitchen” the quality of the output remains constant.

When it comes to dedicating a team of SMEs to train the system, the adage “less is more” carries the day. As discussed in a document produced by the TREC 2008 Legal Track, determining whether a document is responsive or not responsive is a deceptively subjective process.  Lawyers “draw lines”—often at different places—across a number of determinations like “the nature of the risk posed by production, the party requesting the information” and the willingness of the production party to face a challenge for underproduction. Because the risk of inconsistencies in deciding responsiveness is exacerbated by the introduction of more trainers, rarely will you want more than five SMEs training the system. The restaurant owner mutters, “but my project is big, there is no way that I can rely on only five reviewers.”  Generally, two to five reviewers can handle the targeted review load for even a very large project. The total amount of training documents will vary depending on if you plan to “seed” the system (and how much “seeding” you plan to do), the number of documents in your data set and your desired confidence level. Ultimately, responsiveness decisions made on this fraction of documents will be extrapolated to all remaining documents in the data set; it becomes critical that the SMEs are in sync with the goals and structure of the case.
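
The "desired confidence level" mentioned above maps to standard sample-size arithmetic. Here is a rough sketch using the classic proportion formula with a finite-population correction; the z-scores and defaults are textbook values, not case-specific guidance:

```python
# Classic sample-size estimate for a proportion, with a finite-population
# correction. p=0.5 is the conservative (worst-case) prevalence assumption.
import math

Z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}  # standard normal z-scores

def sample_size(population, confidence=0.95, margin=0.02, p=0.5):
    z = Z[confidence]
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n)

# e.g., a 1,000,000-document collection at 95% confidence, +/-2% margin
n = sample_size(1_000_000, confidence=0.95, margin=0.02)
```

Note how weakly the required sample grows with collection size: the finite-population correction means a few thousand training documents can serve even a very large matter, which is why a handful of SMEs can carry the load.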

Reduce and Stir

While the ideal structure for deploying this handful of SMEs is still up for debate, there is broad consensus that there must be some process in place to arbitrate consistency when responsiveness disputes arise. I’ve seen some interesting hierarchical training structures over the years designed to handle training disputes. These are some of the most common:

[Figure: Common training structures for technology-assisted review]

Finally: Tasting the Broth

An effective document review and an efficient kitchen both rely upon QC measures to ensure quality and consistency of output. A well-designed plan for validating the automated technology-assisted review output is key to knowing when to stop training for quality and when the documents are ready for consumption at the next stage of the case. Where the Chef de Cuisine is responsible for ensuring that only quality dishes leave her kitchen, the Lead Attorney is also responsible for the quality of the data in her case. Only when quality control measurements reflect defensible levels of recall and precision will a Lead Attorney be in a position to move beyond first-pass review and plate the production for the requesting party—Bon Appetit!

To gain hands-on TAR experience, register now for the newest educational course offered by Kroll Ontrack, TAR Learning Labs.  The next Learning Lab is coming up in Minneapolis, MN, in early June.  Sign up soon, space is limited!

Math and Statistics for Ediscovery Lawyers Using Technology Assisted Review (TAR)


Legal professionals learn a lot of complicated concepts and principles through education and/or practice… but they usually don’t have much—or anything—to do with math or statistics.

Then, along came Technology-assisted Review (TAR), which promises to revolutionize ediscovery by leveraging sampling techniques and advanced algorithms to predict whether documents are responsive to particular criteria. Simply put, these revolutionary technologies rely heavily on math and statistics—and modern practitioners need to be more tech- and math-savvy than ever before (or be willing to engage the appropriate predictive coding experts or other resources) in order to understand and leverage this new methodology to its highest potential.

However, to master the TAR process, legal professionals don’t need to dust off their slide rules and graphing calculators, crack open a stats book, or flock to the nearest college or university to enroll in a math class. Rather, they simply need to focus on understanding the key concepts and metrics necessary to manage the predictive coding process.

To help put you on the path to TAR technology mastery, here is a cursory overview of the metrics and processes you need to know.

Key Metrics in Effectiveness Reporting

After the technology has run machine learning, it will generate a report with raw data metrics and calculations that should look something like this:

[Figure: Sample report of key metrics in technology-assisted review effectiveness reporting]

Understanding the metrics on this report is key to analyzing your technology’s performance and determining what to do next.

True and False Positives/Negatives
In search or review exercises, documents are classified as “responsive” or “not responsive.” When a document is suggested as “responsive” and the suggestion is correct, this is referred to as a true positive; when it is incorrect (i.e., a non-responsive document is incorrectly coded as responsive), it is a false positive. Accordingly, when a document is suggested as “not responsive” and the suggestion is correct, this is a true negative; if the suggestion is incorrect (i.e., it should have been coded “responsive”), then it is a false negative.
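
The four outcomes can be tallied directly, assuming each document carries a machine suggestion and a reviewer's ground-truth call. This is a minimal sketch; the function and variable names are invented:

```python
# Tally true/false positives and negatives from parallel lists of
# machine suggestions and reviewer decisions (True = responsive).
def confusion_counts(suggested, actual):
    tp = sum(s and a for s, a in zip(suggested, actual))        # true positives
    fp = sum(s and not a for s, a in zip(suggested, actual))    # false positives
    tn = sum(not s and not a for s, a in zip(suggested, actual))  # true negatives
    fn = sum(not s and a for s, a in zip(suggested, actual))    # false negatives
    return tp, fp, tn, fn

tp, fp, tn, fn = confusion_counts(
    [True, True, False, False, True],   # machine suggestions
    [True, False, False, True, True])   # reviewer decisions
```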

Recall, Precision, and F-measure
While these three key metrics have been discussed previously, understanding them is essential to successfully, effectively and efficiently employing predictive analytics for document review. Generally speaking, precision is the fraction of retrieved documents that are relevant—essentially a measure of exactness. Recall is the fraction of relevant documents that are retrieved—a measure of completeness. F-measure is the harmonic mean of the system’s recall and precision.

[Figure: Precision in TAR technology]

Accuracy
Accuracy incorporates how well the classifier did by identifying the fraction of correctly coded documents, essentially expressed as (True Positives + True Negatives) / (All Documents). While accuracy can be helpful, it should not drive decisions. Accuracy can be skewed upward if there is an overwhelming amount of either true positives or true negatives in the database.
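
These definitions reduce to a few lines of arithmetic over the four confusion counts. The sketch below also demonstrates the skew warned about above: flooding the denominator with easy true negatives inflates accuracy while recall is unchanged (the counts are made up for illustration):

```python
# Precision, recall, F-measure and accuracy from confusion counts.
def precision(tp, fp): return tp / (tp + fp)
def recall(tp, fn):    return tp / (tp + fn)

def f_measure(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)          # harmonic mean of precision and recall

def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)

# Balanced case: 80 TP, 20 FP, 80 TN, 20 FN
acc_balanced = accuracy(80, 20, 80, 20)        # 0.8

# Skewed case: same classifier, but 9,800 easy true negatives added
acc_skewed = accuracy(80, 20, 9800, 20)        # ~0.996, yet recall is still 0.8
```

This is why a review team should not stop training on the strength of a high accuracy number alone; recall and precision tell the real story.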

Analyzing Technology-Assisted Review Metrics with Sampling

Sampling is one of the most versatile tools in your technology-assisted review arsenal. The sampling process examines a fraction of the document population to determine characteristics of the whole, further validating what you do or don’t have and strengthening the defensibility of your review processes and procedures. Notably, it is often used to perform quality control (QC), which can take place iteratively, or at the back end of a review to assess it.

Quality control rests on a simple principle: TAR predictions are not always right. Through sampling, various sets of the data are drawn, manually reviewed by a quality control team, and evaluated. Based on these results, teams can decide whether additional training is needed, or the team might conclude that the technology is categorizing documents so effectively that they are comfortable relying wholly on machine predictions and stopping manual review.
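
A minimal sketch of that QC loop, assuming document IDs and parallel lists of machine predictions and human calls (all names and numbers are illustrative):

```python
# QC sampling: draw a random sample of predictions, have humans re-review
# it, and estimate the disagreement rate before deciding next steps.
import random

def qc_sample(doc_ids, sample_size, seed=42):
    rng = random.Random(seed)        # fixed seed for a repeatable draw
    return rng.sample(doc_ids, sample_size)

def estimated_error_rate(sampled_preds, human_calls):
    disagreements = sum(p != h for p, h in zip(sampled_preds, human_calls))
    return disagreements / len(sampled_preds)

sample = qc_sample(list(range(10_000)), 100)
# human reviewers disagree with 1 of 10 sampled machine predictions
rate = estimated_error_rate([True] * 10, [True] * 9 + [False])
```

If the estimated error rate is above the team's threshold, more training rounds follow; if it is comfortably below, the team may decide the machine's remaining predictions can stand without manual review.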

The Next Step: Mastery

While general knowledge of these predictive coding metrics is a great start, it is merely a drop in the bucket. To learn more about mastering the math behind the TAR technology, don’t miss the May 10th webinar hosted by Kroll Ontrack and ACEDS, MATH & STATS 101: What Lawyers Need to Know to (Properly) Leverage TAR.
