Conceptual Search versus Predictive Coding


In my last blog entry, titled "Successful Predictive Coding Adoption is Dependent on Effective Information Governance", a question was posted that I thought deserved wider sharing with the group: "What is the difference between predictive coding and conceptual search?" As someone not directly associated with either technology, but with some relevant background, I will attempt to explain the differences, at least as they pertain to discovery processes.

Conceptual search technologies allow a user to search on concepts…(pretty valuable insight, right?) instead of searching on a keyword such as "dog". In the case of a keyword search on "dog", the user would get back every document/file/record containing the three letters D-O-G in that specific sequence. The results could include references to "dogs" the four-legged animals, references to "frankfurters", references to movies (Dog Day Afternoon), etc., in no particular priority.
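To make that contrast concrete, here is a tiny, hypothetical illustration of literal keyword matching; the documents and matching logic are my own, not from any particular product:

```python
# A minimal sketch contrasting a literal keyword match with what the user
# probably intended. All documents are invented examples.
documents = [
    "My dog loves the park",                 # relevant: the animal
    "Dog Day Afternoon is a classic film",   # movie title
    "He dogged the suspect for three days",  # unrelated verb
    "Hot dogs and frankfurters on the grill",
    "The golden retriever was adopted from the animal shelter",  # relevant, but no 'dog'
]

keyword = "dog"
hits = [d for d in documents if keyword in d.lower()]
for d in hits:
    print(d)

# Matches the first four documents (anything containing d-o-g in sequence),
# yet misses the golden retriever document entirely.
```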

True conceptual search capability understands (based on the search criteria) that the user was looking for information on the four-legged animals, so it would return references not just to "dogs" but also to "Golden Retrievers", "Animal Shelters", "Pet Adoption", etc. Some conceptual search solutions will also cluster concepts to give the user the ability to quickly fine-tune a search; for example, creating a cluster of all dog (animal) references, a cluster of all food-related references, and so on. Many eDiscovery analytics solutions include this clustering capability.
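For illustration only, here is a rough sketch of the clustering idea using generic, off-the-shelf text tools (TF-IDF vectors plus k-means). Commercial conceptual search engines use far richer semantic models, and the sample documents below are invented:

```python
# A rough sketch of concept clustering with generic tools, not any
# particular eDiscovery product.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "Golden retriever available for pet adoption at the animal shelter",
    "Local animal shelter hosts a dog adoption event",
    "Grilled frankfurters and hot dogs served at the fair",
    "New food truck sells gourmet hot dogs downtown",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, doc in zip(labels, documents):
    print(label, doc)

# The animal/adoption documents should land in one cluster and the
# food-related documents in the other, letting a reviewer drill into
# either group quickly.
```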

Predictive coding is a process that combines automation and human interaction to produce a results set of potentially responsive documents that trained human reviewers can then check.

Predictive coding takes the conceptual search and clustering idea much further than just understanding concepts. A predictive coding solution is "trained" in a very specific manner for each case. For example, the legal team, with additional subject matter expertise, manually chooses documents/records/files that it deems responsive to the particular case and feeds them to the predictive coding system as examples of the content and format that should be found and coded as responsive. Most predictive coding processes include several iterative cycles to fine-tune these training examples. In an iterative cycle, legal professionals sample and review the records coded as responsive by the solution and determine whether they are truly responsive in the opinion of the human reviewer. If the reviewers find documents that are not in fact responsive, those documents are used in turn to train the solution not to code similar content as responsive. This cycle can be repeated several times, until the human professionals agree the system has reached the desired level of capability.

By the way, this iterative process can also be (and is) used to sample the documents deemed non-responsive, to determine whether the solution is missing potentially responsive content. This check is called "elusion" testing. Elusion is the proportion of the documents the system did not mark responsive that are in fact responsive; in other words, it estimates how much responsive material is slipping through. The responsive documents found during an elusion check can be fed back into the iterative cycle to further train the system.
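As a hypothetical illustration of an elusion check (the document counts and sample results below are made up), the arithmetic looks like this:

```python
# A hypothetical elusion check: reviewers pull a random sample from the
# documents the system coded non-responsive and count how many are
# actually responsive. All numbers here are invented.
import random

random.seed(42)

# Suppose the system coded 100,000 documents as non-responsive.
non_responsive_ids = list(range(100_000))

# Reviewers manually check a random sample of, say, 500 of them.
sample = random.sample(non_responsive_ids, 500)

# Pretend human review found 4 of the sampled documents to be responsive.
responsive_in_sample = 4

elusion = responsive_in_sample / len(sample)
print(f"Estimated elusion: {elusion:.1%}")  # ~0.8% of the 'discard pile' is responsive

# A high elusion estimate signals that the system is missing responsive
# material; the newly found documents go back in as training examples.
```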

The obvious benefit of a predictive coding solution in the eDiscovery process is that it dramatically reduces the time legal professionals spend reading each and every document to determine its responsiveness. A 2012 RAND Institute for Civil Justice report estimated a savings of 80% on the eDiscovery review process (which accounts for roughly 73% of the total cost of eDiscovery) when using a predictive coding solution.
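To put those two percentages together, here is some back-of-the-envelope arithmetic on a hypothetical $1 million eDiscovery spend (the dollar figure is my own assumption, only the percentages come from the RAND report):

```python
# Illustrative arithmetic only; the total spend is a made-up example.
total_ediscovery_cost = 1_000_000   # assume $1M total eDiscovery spend
review_share = 0.73                 # review ≈ 73% of total cost (RAND, 2012)
review_savings_rate = 0.80          # ~80% savings on review with predictive coding

review_cost = total_ediscovery_cost * review_share   # $730,000
savings = review_cost * review_savings_rate          # $584,000

print(f"Review cost: ${review_cost:,.0f}")
print(f"Estimated savings: ${savings:,.0f} "
      f"({savings / total_ediscovery_cost:.0%} of total eDiscovery spend)")
```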

So, to answer the question: conceptual search is an automated information retrieval method used to search electronically stored unstructured text for information that is conceptually similar to the information provided in a search query. In other words, the ideas expressed in the information retrieved in response to a concept search query are relevant to the ideas contained in the text of the query.

Predictive coding is a process (which can include conceptual search) that uses machine learning technologies to categorize (or code) an entire corpus of documents as responsive, non-responsive, or privileged, based on human-chosen examples used to train the system in an iterative process. These technologies typically rank the documents from most to least likely to be responsive to a specific information request. This ranking can then be used to "cut" or partition the documents into one or more categories, such as potentially responsive or not, in need of further review or not, etc.¹
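As a minimal, hypothetical sketch of that rank-and-cut step (the document IDs, scores, and cutoff are invented; in practice the scores come from the trained system and the cutoff is agreed on by the legal team):

```python
# Score each document for likely responsiveness, sort, and partition at
# a chosen cutoff. Scores here are invented for illustration.
scored_docs = [
    ("DOC-001", 0.97),
    ("DOC-002", 0.88),
    ("DOC-003", 0.41),
    ("DOC-004", 0.12),
    ("DOC-005", 0.03),
]

cutoff = 0.50  # the "cut" point chosen for this hypothetical request
ranked = sorted(scored_docs, key=lambda d: d[1], reverse=True)

likely_responsive = [doc for doc, score in ranked if score >= cutoff]
likely_non_responsive = [doc for doc, score in ranked if score < cutoff]

print("Send to review:", likely_responsive)   # DOC-001, DOC-002
print("Set aside:", likely_non_responsive)    # DOC-003, DOC-004, DOC-005
```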

¹ Partial definition from the eDiscovery Daily Blog: http://www.ediscoverydaily.com/2010/12/ediscovery-trends-what-the-heck-is-predictive-coding.html

Successful Predictive Coding Adoption is Dependent on Effective Information Governance


Predictive coding has been receiving a great deal of press lately (for good reason), especially with the ongoing case Da Silva Moore v. Publicis Groupe, No. 11 Civ. 1279 (ALC) (AJP), 2012 U.S. Dist. LEXIS 23350 (S.D.N.Y. Feb. 24, 2012). On May 21, the plaintiffs filed Rule 72(a) objections to Magistrate Judge Peck's May 7, 2012 discovery rulings related to the relevance of certain documents that comprise the seed set of the parties' ESI protocol.

This Rule 72(a) objection highlights an important point in the adoption of predictive coding technologies: the technology is only as good as the people AND processes supporting it.

To review, predictive coding is a process in which a computer (with the requisite software) does the vast majority of the work of deciding whether data is relevant, responsive, or privileged for a given case.

Beyond simple byte-for-byte keyword matching, predictive coding takes a computer self-learning approach. To accomplish this, attorneys and other legal professionals provide example responsive documents/data in a statistically sufficient quantity, which in turn "trains" the computer as to what relevant documents/content should be flagged and set aside for discovery. This is done in an iterative process in which legally trained professionals fine-tune the seed set over a period of time, to the point where it represents a statistically relevant sample that includes examples of all possible relevant content and formats. The same capability can also be used to find and secure privileged documents. Instead of legally trained people reading every document to determine whether it is relevant to a case, the computer can perform a first pass of this task in a fraction of the time, with much more repeatable results. This technology is exciting in that it can dramatically reduce the cost of the discovery/review process, by as much as 80% according to the RAND Institute for Civil Justice.
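For readers who like to see the moving parts, here is a heavily simplified, hypothetical sketch of that training loop using a generic text classifier. The documents, labels, and review step are stand-ins for actual attorney work, not any vendor's implementation:

```python
# A simplified seed-set training sketch using generic tools; real
# predictive coding platforms are far more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_docs = [
    "Q3 pricing agreement with the distributor",   # responsive example
    "Draft merger term sheet and side letter",     # responsive example
    "Office holiday party signup sheet",           # non-responsive example
    "Cafeteria menu for next week",                # non-responsive example
]
seed_labels = [1, 1, 0, 0]  # 1 = responsive, 0 = non-responsive

unreviewed = [
    "Amendment to the distributor pricing schedule",
    "Parking garage access instructions",
]

vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_seed, seed_labels)

scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in zip(unreviewed, scores):
    print(f"{score:.2f}  {doc}")

# Higher scores indicate documents more similar to the responsive examples.
# In the real process, reviewers sample these predictions, correct any
# mistakes, add the corrected examples to the seed set, and retrain,
# repeating until the team is satisfied with the system's performance.
```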

By now you may be asking yourself: what does this have to do with Information Governance?

For predictive coding to become fully adopted across the legal spectrum, all sides have to agree that (1) the technology works as advertised, and (2) the legal professionals are providing the system with the proper seed sets for it to learn from. To accomplish the second point, the seed set must include content from all possible sources of information. If the seed set trainers don't have access to all potentially responsive content to draw from, then the seed set is in question.

Knowing where all the information resides and having the ability to retrieve it quickly is imperative to an effective discovery process. Records/information management professionals should view this new technology as an opportunity to become an even more essential partner to the legal department and the entire organization, by focusing not just on "records" but on information across the entire enterprise. With full-fledged information management programs in place, the legal department will be able to fully embrace this technology and drastically reduce its cost of discovery.

Automatic Deletion…A Good Idea?


In my last blog, I discussed the concept of Defensible Disposal: getting rid of data which has no value, to lower the cost and risk of eDiscovery as well as overall storage costs (IBM has been a leader in Defensible Disposal for several years). Custodians keep data because they might need to reuse some of the content later, or because they might have to produce it later for CYA reasons. I have been guilty of this over the years, and because of that I have a huge amount of old data on external disks that I will probably never, ever look at again. For example, I have over 500 GB of saved data, spreadsheets, presentations, PDFs, .wav files, MP3s, Word docs, URLs, etc. that I have saved for whatever reason over the years. Have I ever really reused any of that data? Maybe a couple of times, but in reality it just sits there. This brings up the subject of the data lifecycle. Fred Moore, founder of Horison Information Strategies, wrote about this concept years ago, referring to the lifecycle of data and the probability that saved data will ever be reused or even looked at again. Fred created a graphic showing this lifecycle of data.

Figure 1: The Lifecycle of Data – Horison Information Strategies

The above chart shows that as data ages, the probability of reuse drops very quickly, even as the amount of saved data rises. Once data has aged 90 days, its probability of reuse approaches 1%, and after one year it is well under 1%.

You're probably asking yourself: so what? Storage is cheap, what's the big deal? I have 500 GB of storage available to me on my new company-supplied laptop. I have share drives available to me. And I have 1 TB of storage in my home office. I can buy 1 TB of external disk for approximately $100, so why not keep everything forever?

For organizations, it's partly a question of storage but, more importantly, a question of legal risk and the cost of eDiscovery. Any existing data could become subject to litigation and therefore reviewable. You may recall that in my last blog I mentioned a recent report from the RAND Institute for Civil Justice which discussed the costs of eDiscovery, including the estimate that reviewing records/files accounts for approximately 73% of every eDiscovery dollar spent. Saving everything because you might someday need to reuse or reference it drives the cost of eDiscovery way up.

The key question to ask is: how do you get employees to delete stuff instead of keeping everything? In most organizations the culture has always been one of "save whatever you want until your hard disk and share drive are full". This culture is extremely difficult to change…quickly. One way is to force new behavior with technology. I know of a couple of companies which only allow files to be saved to a specific folder on the user's desktop. For higher-level laptop users, as the user syncs to the organization's infrastructure, all files saved to that specific folder are copied to the user's share drive, where an information management application applies retention policies to the data on the share drive as well as to the laptop's data folder.

In my opinion this extreme process would not work in most organizations due to cultural expectations. So again we're left with the question: how do you get employees to delete stuff?

Organizational cultures about data handling and retention have to be changed over time. This includes specific guidance during new employee orientation, employee training, and slow technology changes. An example could be reducing the amount of storage available to an employee on the share or home drive.

Another example could be some process changes to an employee's workstation or laptop. Force the default storage target to be the "My Documents" folder. Phase 1 could require that all files be saved to the "My Documents" folder, but allow them to be moved anywhere after that.

Phase 2 could add a 90-day time limit on the "My Documents" folder, so that anything older than 90 days is automatically deleted (with litigation hold safeguards in place). Files not deemed important enough to move would then be treated as being of little value and "disposable". Phase 3 could remove the ability to move files out of the "My Documents" folder (while still allowing users to create subfolders with no time limit), thereby ensuring a single place for discoverable data.
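As a purely illustrative sketch of what a Phase 2 sweep might look like (the folder path, hold list, and 90-day threshold are assumptions, and no real retention tool is implied):

```python
# A hypothetical retention sweep: flag anything in the "My Documents"
# folder older than 90 days, skipping files under litigation hold.
# The paths and hold list are made-up placeholders.
import os
import time

DOCS_DIR = os.path.expanduser("~/Documents")       # assumed target folder
LITIGATION_HOLD = {"smith_v_acme_contract.docx"}    # files exempt from deletion
MAX_AGE_SECONDS = 90 * 24 * 60 * 60                 # 90 days

now = time.time()
for root, _dirs, files in os.walk(DOCS_DIR):
    for name in files:
        if name in LITIGATION_HOLD:
            continue                                 # never touch held documents
        path = os.path.join(root, name)
        if now - os.path.getmtime(path) > MAX_AGE_SECONDS:
            print(f"Would delete: {path}")
            # os.remove(path)  # left commented out; this is only a sketch
```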

Again, this strategy needs to be a slow progression to minimize the perceived change for the user population.

The point is that this is an end-user problem, not necessarily an IT problem. End users have to be trained, gently pushed, and eventually forced to get rid of useless data…