Visualizing Hawaii: A GC’s Perspective Pt 2


Continued from yesterday…

Scenario #2 (using the same example from yesterday, except your email retention policy is now two years and you have an Information Governance program that ensures all unstructured data is searchable and actively managed in place)

It’s 1:52 pm on the Friday before you leave on a much-anticipated two-week vacation in Hawaii…yada, yada, yada

It’s a letter from the law offices of Lewis, Gonsowski & Tolson informing you that their client, ACME Systems, is suing your company for $225 million for conspiracy to harm ACME’s reputation and future sales by spreading false information about ACME’s newest product line. You’re told that the plaintiff has documentation (an email) from an ABC Systems employee outlining the conspiracy. You also receive a copy of the “smoking gun” email…

——-

From: Ted
Date: June 2, 2012
To: Rick

Re: Acme Systems new solutions

“I would say we need to spread as much misinformation and lies about their solution’s capabilities as possible. We need to throw up as much FUD as we can when we talk to the analyst community to give us time to get our new application to market. Maybe we can make up a lie about them stealing their IP from a Chinese company.”

——-

Should I cancel the vacation? …Not yet

You call the VP of IT and ask her if she has the capability to pull an email from 13 months ago. She tells you she does have all of the emails going back two years but there are literally millions of them and it will take weeks to go through them.

You remember getting a demo from Recommind two weeks ago showing their On Demand Review and Analysis platform, with a really neat capability to visualize data relationships. So you call up Recommind and set up a quick job.

IT starts the upload of the email data set to the Recommind Cloud platform.

You call your wife and ask her to delay the vacation until Monday…she’s not happy but it could have been worse.

The next morning (Saturday) you meet your team at the office, sign into the hosted eDiscovery platform, pull up the visualization module, and run a search against the uploaded email data set for any mention of ACME Systems. Out of the 2 million emails, you get hits on 889 emails.

You then ask the system to graphically show the messages by sender and recipient. You quickly find Ted and Rick and their email and even one from Rick to David… Interesting.

Within the hour you are able to assemble the entire conversation thread:

Email #1

From: CEO
Date: May 29, 2012
To: Sandra; Steve

Subject: Acme Systems new solutions

Please give some thought to what we should do to keep momentum going with our sales force in response to ACME Systems’ latest release of their new router. I can see our sales force getting discouraged by this new announcement.

Please get back to me with some ideas early next week.

Thanks Greg

Email #2

From: Steve
Date: May 29, 2012
To: Greg; Sandra

Re: Acme Systems new solutions

Greg, I will get with Sandra and others and brainstorm this topic no later than tomorrow and get back to you. Sandra, what times are good for you to get together?

Thanks Steve

 

Email #3

From: Sandra
Date: May 30, 2012
To: Ted

Re: Acme Systems new solutions

Ted, considering ACME’s new router announcement, how do you think we should counter their PR?

Thanks Sandra

 

Email #4

From: Ted
Date: June 1, 2012
To: Sandra; Bob

Re: Acme Systems new solutions

If it weren’t illegal, I would suggest we need to spread as much misinformation about their new router as possible to the analyst community to create as much FUD as we can to give us time to get our new solution out. Maybe we can make up a lie about them stealing their IP from a Chinese company.

But obviously that’s illegal (right?). Anyway…I suggest we highlight our current differentiators and produce a roadmap showing how and when we will catch and surpass them.

Regards Ted

 

Email #5

From: Rick
Date: June 1, 2012
To: Ted

Re: Acme Systems new solutions

Ted, I heard you had a funny suggestion for what we should do about ACME’s new router… What did you say?

Thanks Rick

 

Email #6 (The incriminating email)

From: Ted
Date: June 2, 2012
To:  Rick

Re: ACME Systems new solutions

“I would say we need to spread as much misinformation and lies about their solution’s capabilities as possible. We need to throw up as much FUD as we can when we talk to the analyst community to give us time to get our new application to market. Maybe we can make up a lie about them stealing their IP from a Chinese company.”

It looks like I will make the flight Monday morning after all…

The moral of the story

Circumstances often dictate the need for additional technical capabilities and experience levels to be acquired – quickly. The combination of rising levels of litigation, skyrocketing volumes of information being stored, tight budgets, short deadlines, resource constraints, and extraordinary legal considerations can put many organizations involved in litigation at a major disadvantage.

The relentless growth of data, especially unstructured data, is swamping many organizations. Employees create and receive large amounts of data daily, much of it email – and most of it is simply kept because employees don’t have the time to decide, for each work document or email, whether it rises to the level of a record or important business document that may be needed later. The ability to visualize large data sets gives users the opportunity to get to the heart of the matter quickly instead of scanning thousands of lines of text in a table.

Predicting the Future of Information Governance


Information Anarchy

Information growth is out of control. The compound annual growth rate for digital information is estimated at 61.7%. According to a 2011 IDC study, 90% of all data created in the next decade will be of the unstructured variety. These facts make it almost impossible for organizations to capture, manage, store, share, and dispose of this data in any way that meaningfully benefits the organization.

Successful organizations run on and depend on information. But information is valuable to an organization only if you know where it is, what’s in it, and what is shareable – in other words, only if it is managed. In the past, organizations have relied on end-users to decide what should be kept, where, and for how long. In fact, 75% of data today is generated and controlled by individuals. In most cases this practice is ineffective and leads to what many refer to as “covert” or “underground” archiving – individuals keeping everything in their own unmanaged local archives. These underground archives effectively lock most of the organization’s information away, hidden from everyone else in the organization.

This growing mass of information has brought us to an inflection point: get control of your information to enable innovation, profit, and growth, or continue down your current path of information anarchy and choke on your competitors’ dust.


Choosing the Right Path

How does an organization ensure this inflection point is navigated correctly? Information Governance. You must get control of all your information by employing proven processes and technologies that allow you to create, store, find, share, and dispose of information in an automated and intelligent manner.

An effective information governance process optimizes overall information value by ensuring the right information is retained and quickly available for business, regulatory, and legal requirements. This process reduces regulatory and legal risk, ensures needed data can be found quickly and secured for litigation, reduces overall eDiscovery costs, and provides structure to unstructured information so that employees can be more productive.

Predicting the Future of Information Governance

Predictive Governance is the bridge across the inflection point. It combines machine-learning technology with human expertise and direction to automate your information governance tasks. Using this proven human-machine iterative training capability, Predictive Governance is able to accurately automate the concept-based categorization, data enrichment, and management of all your enterprise data to reduce costs, reduce risks, enable information sharing, and mitigate the strain of information overload.

Automating information governance so that all enterprise data is captured, granularly evaluated for legal requirements, regulatory compliance, or business value, and stored or disposed of in a defensible manner is the only way for organizations to move to the next level of information governance.

Finding the Cure for the Healthcare Unstructured Data Problem


Healthcare information and records continue to grow with the introduction of new devices and expanding regulatory requirements such as the Affordable Care Act, the Health Insurance Portability and Accountability Act (HIPAA), and the Health Information Technology for Economic and Clinical Health (HITECH) Act. In the past, healthcare records were made up mostly of paper forms or structured billing data – relatively easy to categorize, store, and manage. That trend has been changing as new technologies enable faster and more convenient ways to share and consume medical data.

According to an April 9, 2013 article on ZDNet.com, by 2015, 80% of new healthcare information will be composed of unstructured information; information that’s much harder to classify and manage because it doesn’t conform to the “rows & columns” format used in the past. Examples of unstructured information include clinical notes, emails & attachments, scanned lab reports, office work documents, radiology images, SMS, and instant messages.

Who or what is going to actually manage this growing mountain of unstructured information?

To ensure regulatory compliance and the confidentiality and security of this unstructured information, the healthcare industry will have to either 1) hire many more professionals to manually categorize and manage it, or 2) acquire technology to do it automatically.

Consider the first option: having people manually categorize and manage unstructured information would be prohibitively expensive, not to mention slow. It would also expose private patient data to even more individuals. That leaves the second option: information governance technology. Because of the nature of unstructured information, a technology solution would have to:

  1. Recognize and work with hundreds of data formats
  2. Communicate with the most popular healthcare applications and data repositories
  3. Draw conceptual understanding from “free-form” content so that categorization can be accomplished at an extremely high accuracy rate
  4. Enable proper access security levels based on content
  5. Accurately retain information based on regulatory requirements
  6. Securely and permanently dispose of information when required

An exciting emerging information governance technology that can address the above requirements uses the same next-generation technology the legal industry has adopted: proactive information governance based on conceptual understanding of content, machine learning, and iterative “train by example” capabilities.

The lifecycle of information


Organizations habitually over-retain information, especially unstructured electronic information, for all kinds of reasons. Many organizations simply have not addressed what to do with it, so they fall back on relying on individual employees to decide what should be kept, for how long, and what should be disposed of. At the opposite end of the spectrum, a minority of organizations have tried centralized enterprise content management systems, found them difficult to use, and watched employees work around them – keeping huge amounts of data locally on their workstations, on removable media, in cloud accounts, or on rogue SharePoint sites used as “data dumps” with little or no records management or IT supervision. Much of this information is transitory, expired, or of questionable business value. Because of this lack of management, information continues to accumulate. This build-up raises the cost of storage as well as the risk associated with eDiscovery.

In reality, as information ages, its probability of re-use, and therefore its value, shrinks quickly. Fred Moore, Founder of Horison Information Strategies, wrote about this concept years ago.

Figure 1 below shows that as data ages, its probability of reuse drops very quickly even as the amount of saved data rises. Once data has aged 10 to 15 days, the probability of its ever being looked at again approaches 1%, and as it continues to age it approaches, but never quite reaches, zero (Figure 1, red shading).

Contrast that with the likelihood that a large part of any organizational data store has little or no business, legal, or regulatory value. In fact, a 2012 survey by the Compliance, Governance and Oversight Council (CGOC) showed that, on average, 1% of organizational data is subject to litigation hold, 5% is subject to regulatory retention, and 25% has some business value (Figure 1, green shading). This means that approximately 69% of an organization’s data store has none of these values and could be disposed of without legal, regulatory, or business consequences.

The average employee conservatively creates, sends, receives, and stores 20 MB of data per business day. At that rate, after 15 days (about 11 business days) they have accumulated 220 MB of new data; after 90 days, 1.26 GB; and after three years, 15.12 GB. So how much of this accumulated data needs to be retained? Referring again to Figure 1 below, the blue-shaded area represents the information that probably has no legal, regulatory, or business value according to the 2012 CGOC survey. At the end of three years, the amount of retained data from a single employee that could be disposed of without adverse effects to the organization is 10.43 GB. Multiply that by the total number of employees and you are looking at some very large data stores.

Figure 1: The Lifecycle of data
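The accumulation arithmetic above can be sketched in a few lines of Python. This is only a back-of-the-envelope model built on the article’s own assumptions (20 MB per business day, and the CGOC survey’s roughly 69% disposable share); the constants are illustrative, not measurements.

```python
MB_PER_BUSINESS_DAY = 20        # article's conservative per-employee figure
DISPOSABLE_FRACTION = 0.69      # 100% - (1% legal hold + 5% regulatory + 25% business value)

def accumulated_gb(business_days: int) -> float:
    """Total data (GB) one employee accumulates over the given business days."""
    return business_days * MB_PER_BUSINESS_DAY / 1000

# ~11 business days in 15 calendar days, ~63 in 90 days, ~756 in three years
for label, days in [("15 days", 11), ("90 days", 63), ("3 years", 756)]:
    total = accumulated_gb(days)
    print(f"{label}: {total:.2f} GB accumulated, "
          f"{total * DISPOSABLE_FRACTION:.2f} GB likely disposable")
```

Running this reproduces the 220 MB, 1.26 GB, and 15.12 GB figures, and the roughly 10.43 GB per employee that could be defensibly disposed of after three years.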

The above lifecycle of data shows us that employees really don’t need all of the data they squirrel away (its probability of re-use drops to 1% at around 15 days), and based on the CGOC survey, approximately 69% of organizational data is not subject to legal or regulatory retention and has no business value. The difficult part of this whole process is how an organization can efficiently determine what data is not needed and dispose of it automatically…

As unstructured data volumes continue to grow, automatic categorization of data is quickly becoming the only way to get ahead of the data flood. Without accurate automated categorization, the ability to find the data you need, quickly, will never be realized. Even better, if data categorization can be based on the meaning of the content, not just a simple rule or keyword match, highly accurate categorization and therefore information governance is achievable.

Total Time & Cost to ECA


A key phase in eDiscovery is Early Case Assessment (ECA): the process of reviewing case data and evidence to estimate risk, cost, and time requirements, and to set the appropriate go-forward strategy for prosecuting or defending a legal case – should you fight the case or settle as soon as possible? Early case assessment can be expensive and time-consuming, and because of the time involved, it may not leave you enough time to properly review evidence and create case strategy. Organizations are continuously looking for ways to move into the early case assessment process as quickly as possible, with the most accurate data, while spending the least amount of money.

The early case assessment process usually involves the following steps:

  1. Determine what the case is about, who in your organization could be involved, and the timeframe in question.
  2. Determine where potentially relevant information could be residing – storage locations.
  3. Place a broad litigation hold on all potentially responsive information.
  4. Collect and protect all potentially relevant information.
  5. Review all potentially relevant information.
  6. Perform a risk-benefit analysis on reviewed information.
  7. Develop a go-forward strategy.

Every year organizations continue to amass huge amounts of electronically stored information (ESI), primarily because few of them have systematic processes to actually dispose of electronic information – it is just too easy for custodians to hit the “save” button and forget about it. This ever-growing mass of electronic information means effective early case assessment cannot be a strictly manual process anymore. Software applications that can find, cull down and prioritize responsive electronic documents quickly must be utilized to give the defense time to actually devise a case strategy.

Total Time & Cost to ECA (TT&C to ECA)

The real measure of effective ECA is the total time and cost consumed to get to the point of being able to create a go-forward strategy; total time & cost to ECA.

The most time-consuming and costly steps are the collection and review of all potentially relevant information (steps 4 and 5 above). This is because, to make the most informed decision on strategy, all responsive information should be reviewed before determining case direction.

Predictive Coding for lower TT&C to ECA

Predictive Coding is a process that combines people, technology, and workflow to find, prioritize, and tag key relevant documents quickly, irrespective of keywords, speeding the evidence review process while reducing costs. Due to its documented accuracy and efficiency gains, Predictive Coding is transforming how Early Case Assessment (ECA), analysis, and document review are done.

The same predictive coding process used in document review can be used effectively for finding responsive documents for early case assessment quickly and at a much lower cost than traditional methods.


Figure 1: The time & cost to ECA timeline graphically shows what additional time can mean in the eDiscovery process

Besides the sizable reduction in cost, using predictive coding for ECA gives you more time to actually create case strategy using the most relevant information. Many organizations find themselves with little or no time to create case strategy before trial because of the time consumed just reviewing documents. Having the complete set of relevant documents earlier in the process gives you the most relevant data and the greatest amount of time to use it effectively.

Next Generation Technologies Reduce FOIA Bottlenecks


Federal agencies are under more scrutiny to resolve issues with responding to Freedom of Information Act (FOIA) requests.

The Freedom of Information Act provides for the full disclosure of agency records and information to the public unless that information is exempted under clearly delineated statutory language. In conjunction with FOIA, the Privacy Act serves to safeguard public interest in informational privacy by delineating the duties and responsibilities of federal agencies that collect, store, and disseminate personal information about individuals. The procedures established ensure that the Department of Homeland Security fully satisfies its responsibility to the public to disclose departmental information while simultaneously safeguarding individual privacy.

In February of this year, the House Oversight and Government Reform Committee opened a congressional review of executive branch compliance with the Freedom of Information Act.

The committee sent a six-page letter to the Director of Information Policy at the Department of Justice (DOJ), Melanie Ann Pustay. In the letter, the committee questions why, based on a December 2012 survey, 62 of 99 government agencies had not updated their FOIA regulations and processes, as required by Attorney General Eric Holder in a 2009 memorandum. In fact, the Attorney General’s own agency has not updated its regulations and processes since 2003.

The committee also pointed out that there were 83,000 FOIA requests still outstanding as of the writing of the letter.

In fairness to the federal agencies, responding to a FOIA request can be time-consuming and expensive if technology and processes do not keep up with increasing demands. Electronic content can be anywhere: email systems, SharePoint servers, file systems, and individual workstations. Because content is spread around and not usually centrally indexed, enterprise-wide searches do not turn up all potentially responsive content, which forces a much more manual, time-consuming process for finding relevant content.

There must be a better way…

New technology can address the collection problem of searching for relevant content across the many storage locations where electronically stored information (ESI) can reside. For example, an enterprise-wide search capability with “connectors” into every data repository – email, SharePoint, file systems, ECM systems, records management systems – allows all content to be centrally indexed, so that an enterprise-wide keyword search will find every instance of content containing those keywords. A more powerful capability to look for is the ability to search on concepts, a far more accurate way to find specific content. Searching for conceptually comparable content can speed up the collection process and drastically reduce the number of false positives in the result set, while finding many more of the keyword-deficient but conceptually responsive records. In conjunction with concept search, automated classification/categorization of data can reduce search time and raise accuracy.
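To make the idea of ranking documents by similarity rather than exact keyword match concrete, here is a minimal pure-Python sketch using TF-IDF vectors and cosine similarity. Commercial platforms use far richer semantic models; the corpus, query, and function names here are illustrative assumptions, not any product’s API.

```python
import math
from collections import Counter

def build_index(docs):
    """Tokenize docs and compute a TF-IDF weighted term vector for each."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    df = Counter(t for doc in tokenized for t in set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # +1 keeps common terms nonzero
    vectors = [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in tokenized]
    return idf, vectors

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs):
    """Return (doc, score) pairs ranked by similarity to the query."""
    idf, vectors = build_index(docs)
    qv = {t: c * idf.get(t, 0.0) for t, c in Counter(query.lower().split()).items()}
    ranked = sorted(enumerate(vectors), key=lambda iv: cosine(qv, iv[1]), reverse=True)
    return [(docs[i], round(cosine(qv, v), 3)) for i, v in ranked]

docs = [
    "router firmware release notes and upgrade schedule",
    "quarterly sales travel expense report",
    "analyst briefing on the new router product line",
]
for doc, score in search("new router announcement", docs):
    print(score, doc)
```

Note how the analyst briefing ranks first even though it shares only some of the query’s words, and the expense report scores zero; weighting by term rarity is one small step toward the concept-level matching described above.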

The largest cost in responding to a FOIA request is the review of all potentially relevant ESI found during collection. Another technology, already used by attorneys for eDiscovery, can drastically reduce the burden of reviewing thousands, hundreds of thousands, or millions of documents for relevancy and privacy: Predictive Coding.

Predictive Coding is the process of applying machine learning and iterative supervised learning technology to automate document coding and prioritize review. This functionality dramatically expedites the actual review process while dramatically improving accuracy and reducing the risk of missing key documents. According to a RAND Institute for Civil Justice report published in 2012, document review cost savings of 80% can be expected using Predictive Coding technology.
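The iterative “train by example” loop behind predictive coding can be illustrated with a toy nearest-centroid text scorer in pure Python. Production systems use far more sophisticated machine-learning models plus statistical validation; this sketch only shows the workflow shape (reviewers code a small seed set, the system scores the unreviewed pile, and the top-ranked documents go to review next), and all document text and function names are made up for illustration.

```python
import math
from collections import Counter

def vector(text):
    """Bag-of-words term counts for one document."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(c * b.get(t, 0) for t, c in a.items())
    na = math.sqrt(sum(c * c for c in a.values()))
    nb = math.sqrt(sum(c * c for c in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(texts):
    """Combine reviewer-coded examples into a single term-count profile."""
    total = Counter()
    for t in texts:
        total.update(vector(t))
    return total

def rank_unreviewed(responsive_seed, nonresponsive_seed, unreviewed):
    """Score unreviewed docs by similarity to the responsive profile minus
    similarity to the non-responsive profile; most likely responsive first."""
    pos, neg = centroid(responsive_seed), centroid(nonresponsive_seed)
    scored = [(cosine(vector(d), pos) - cosine(vector(d), neg), d) for d in unreviewed]
    return [d for s, d in sorted(scored, reverse=True)]

responsive_seed = ["acme router misinformation analyst briefing"]
nonresponsive_seed = ["cafeteria menu for next week"]
unreviewed = [
    "notes from the analyst briefing about the acme router",
    "parking garage closed friday",
]
print(rank_unreviewed(responsive_seed, nonresponsive_seed, unreviewed))
```

In a real deployment the reviewer would code the newly surfaced top documents and the model would be retrained, repeating until the ranking stabilizes; that human-machine iteration is what the RAND-reported savings come from.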

With the increasing number of FOIA requests swamping agencies, agencies are hard pressed to catch up to their backlogs. The next generation technologies mentioned above can help agencies reduce their FOIA related costs while decreasing their response time.

Healthcare Information Governance Requires a New Urgency


From safeguarding the privacy of patient medical records to ensuring every staff member can rapidly locate emergency procedures, healthcare organizations have an ethical, legal, and commercial responsibility to protect and manage the information in their care. Inadequate information management processes can result in:

  • A breach of protected health information (PHI) costing millions of dollars and ruined reputations.
  • A situation where accreditation is jeopardized due to a team-member’s inability to demonstrate the location of a critical policy.
  • A premature release of information about a planned merger causing the deal to fail or incurring additional liability.

The benefits of effectively protecting and managing healthcare information are widely recognized but many organizations have struggled to implement effective information governance solutions. Complex technical, organizational, regulatory and cultural challenges have increased implementation risks and costs and have led to relatively high failure rates.  Ultimately, many of these challenges are related to information governance.

In January 2013, The U.S. Department of Health and Human Services published a set of modifications to the HIPAA privacy, security, enforcement and breach notification rules.  These included:

  • Making business associates directly liable for data breaches
  • Clarifying and increasing the breach notification process and penalties
  • Strengthening limitations on data usage for marketing
  • Expanding patient rights to the disclosure of data when they pay cash for care

Effective Healthcare Information Governance steps

Inadvertent or just plain sloppy non-compliance with regulatory requirements can cost your healthcare organization millions of dollars in regulatory fines and legal penalties. For those new to the healthcare information governance topic, below are some suggested steps that will help you move toward reduced risk by implementing more effective information governance processes:

  1. Map out all data and data sources within the enterprise
  2. Develop and/or refresh organization-wide information governance policies and processes
  3. Have your legal counsel review and approve all new and changed policies
  4. Educate all employees and partners, at least annually, on their specific responsibilities
  5. Limit data held exclusively by individual employees
  6. Audit all policies to ensure employee compliance
  7. Enforce penalties for non-compliance

Healthcare information is by nature heterogeneous. While administrative information systems are highly structured, some 80% of healthcare information is unstructured or free form.  Securing and managing large amounts of unstructured patient as well as business data is extremely difficult and costly without an information governance capability that allows you to recognize content immediately, classify content accurately, retain content appropriately and dispose of content defensibly.

The ROI of Information Management


Information, data, electronically stored information (ESI), records, documents, hard copy files, email, stuff—no matter what you call it; it’s all intellectual property that your organization pays individuals to produce, interpret, use and export to others. After people, it’s a company’s most valuable asset, and it has many CIOs, GCs and others responsible asking: What’s in that information; who controls it; and where is it stored?

In simplest terms, I believe that businesses exist to generate and use information to produce revenue and profit.  If you’re willing to go along with me and think of information in this way as a commodity, we must also ask: How much does it cost to generate all that information? And, what’s the return on investment (ROI) for all that information?

The vast majority of information in an organization is not managed, not indexed, not backed up and, as you probably know or could guess, rarely, if ever, accessed. Consider for a minute all the data in your company that is not centrally managed and not easily available. This data includes backup tapes, share drives, employee hard disks, external disks, USB drives, CDs, DVDs, email attachments sent outside the organization, and hardcopy documents hidden away in filing cabinets.

Here’s the bottom line: If your company can’t find information or doesn’t know what it contains, it is of little value. In fact, it’s valueless.

Now consider the amount of money the average company spends on an annual basis for the production, use and storage of information. These expenditures span:

  • Employee salaries – most employees are in one way or another hired to produce, digest, and act on information
  • Employee training and day-to-day help-desk support
  • Computers for each employee
  • Software
  • Email boxes
  • Share drives and storage
  • Backup systems
  • IT employees for data infrastructure support

In one way or another, companies exist to create and utilize information. So…do you know where all your information is and what’s in it? What’s your organization’s true ROI on the production and consumption of information across your entire organization? How much higher could it be if you had complete control of it?

As an example, I have approximately 14.5 GB of Word documents, PDFs, PowerPoint files, spreadsheets, and other types of files in different formats that I’ve either created or received from others. Until recently, I had 3.65 GB of emails in my email box both on the Exchange server and mirrored locally on my hard disk. Now that I have a 480 MB mailbox limit imposed on me, 3.45 GB of those emails are now on my local hard disk only.

How much real, valuable information is contained in the collective 18 GB on my laptop? The average number of pages of information contained in 1 GB is conservatively 10,000. So 18 GB of files equals approximately 180,000 pages of information for a single employee that is not easily accessible or searchable by my organization. Now also consider the millions of pages of hardcopy records existing in file cabinets, microfiche and long term storage all around the company.
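The page-count arithmetic above is easy to check. This snippet uses the article’s own conservative conversion of 10,000 pages per GB; the 1,000-employee scaling at the end is a hypothetical illustration, not a figure from the text.

```python
PAGES_PER_GB = 10_000  # article's conservative pages-per-gigabyte estimate

def pages(gb: float) -> int:
    """Approximate page count for a given volume of stored files."""
    return int(gb * PAGES_PER_GB)

print(pages(18))         # one employee's ~18 GB of local files
print(pages(18) * 1000)  # scaled to a hypothetical 1,000-employee company
```

One employee’s 18 GB works out to roughly 180,000 pages, which is what makes the organization-wide total so daunting.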

The main question is this: What could my organization do with quick and intelligent access to all of its employees’ information?

The more efficient your organization is in managing and using information, the higher the revenue and hopefully profit per employee will be.

Organizations need to be able to “walk the fence” between not impeding the free flow of information generation and sharing, and giving the organization as a whole a way to find and use that information. Intelligent access to all information generated by an organization is key to effective information management.

Organizations spend huge sums of money to generate information…why not get your money’s worth? This future capability is the essence of true information management and much higher ROIs for your organization.