Vanishing Point

The Case for Data-Driven Decision Making in a World of Bias

Posted by Phil Cunningham October 4, 2017 at 9:30 AM

If we take the news at face value, Big Data is our great modern panacea. But as we have learned in short order, more data does not always guarantee, or even inform, better decisions. Context, the ability to make sense of data, and the ability to put relevant insights to use are all vital to making intelligent decisions with lasting positive impact.


We know this. Yet in many organizations, data volume takes priority over data quality. Employees are incentivized to track and measure ‘the numbers,’ to gather data, and to build robust stores of information. And while those activities may have a strategy behind them, the methods by which data are collected, tracked, measured, and stored frequently expose firms to significant error. For data-based decisions to be valuable, an organization must have a way of knowing the quality of its data. And for that, it needs a clear understanding of the biases inherent in the data, so it can make decisions with confidence.


Understanding Data Bias

Organizational data biases fall into two common, though overlapping, categories – statistical and business – generated during and after data gathering and analysis:


  • In statistics, bias is likely to relate to data manipulation.
    • Non-random selection of groups or procedures
    • Changing criteria after examining data
    • Eliminating data from analysis
    • Assessment bias


  • In business, bias is likely to be introduced by inherently human cognitive and linguistic characteristics.
    • Confirmation bias
    • Failure to exclude outliers
    • Over-fitting data by testing it against multiple hypotheses
    • Assuming normality where it does not in fact hold
    • Personal emotions
    • Specific words used to convey meaning

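To make the statistical items above concrete, here is a minimal Python sketch showing how non-random selection of groups skews an estimate no matter how much data is gathered. The customer-satisfaction scenario, population size, and sample sizes are invented for illustration:

```python
import random
from statistics import mean

random.seed(42)

# Hypothetical population: 10,000 customer satisfaction scores (0-100 scale).
population = [random.gauss(60, 15) for _ in range(10_000)]

# Unbiased approach: a simple random sample.
random_sample = random.sample(population, 500)

# Biased approach: surveying only the happiest customers,
# the "non-random selection of groups" named above.
biased_sample = sorted(population, reverse=True)[:500]

print(f"Population mean:    {mean(population):.1f}")
print(f"Random sample mean: {mean(random_sample):.1f}")
print(f"Biased sample mean: {mean(biased_sample):.1f}")
```

The random sample lands close to the true mean, while the top-only sample overstates it dramatically; collecting more data the same biased way would not close that gap.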

When organizations fail to scrutinize their data gathering and analysis methods, they introduce bias and open themselves to myriad challenges and risks. Most of those pitfalls can be traced to biases, such as the desire for personal gain, cognitive predispositions, and status quo or confirmation bias, contaminating the gathering and organizing processes.


And that leaves us with some critical questions: In a world of near infinite data, how can an organization’s employees confirm that what they see in data is real? How do individual and organizational biases affect the credibility – and thus usefulness – of data? Ultimately, given inherent bias, can data be trusted?


The short answer is no; you cannot trust your data. The long answer is that you cannot trust bias either. Kodak and Blockbuster are cases in point.


In 1975, a Kodak engineer invented the digital camera, and in 1989 the company built an early DSLR. Leadership decided not to invest in the technology because they wrongly assumed people would only ever want to look at photos in physical form, and wrongly believed digital photography would cannibalize their near-monopoly on the photography value chain. Their failure was relying on their own biases rather than on consumer data to make these decisions.


Similarly, in 2000, in the midst of the dot-com collapse, Netflix offered itself for sale to Blockbuster. Blockbuster’s leadership was gripped by confirmation bias about the ineffectiveness of online business. Even as consumers’ ongoing attempts to buy personalized (rather than standard) cable packages hinted at a market for pick-and-choose entertainment, Blockbuster refused the opportunity. Its leaders made two incorrect assumptions: first, that consumers still wanted mass-produced entertainment; and second, that the Internet was not a viable marketplace.


Ultimately, both Kodak and Blockbuster shelved the opportunity in front of them. Neither firm’s leadership could get past its own biases to see the opportunity inherent in the data.


Balancing Data and Bias in the Real World

In a ‘perfect’ decision-making world, organizations would hire statisticians, econometricians, and data scientists. Those professionals would bring particular capability and discipline to data gathering and interpretation. They would have the methodologies and resources to identify statistical biases, adjust how data are gathered and assessed, and put structures in place to guard future data against bias. And, being perfect, they would lack inherent human biases.


Alas, we neither live nor operate in such a world. True, big data technology providers, predictive analytics advisors, and other “decision-aid” firms are filling the marketplace. But to date, few organizations have the wherewithal to enlist those services. Instead, most organizations are filled with smart, well-intentioned people making the best decisions they can with flawed data and individual biases. And all too often, those decisions have sweeping, real-world impact.


Confident Decision Making In Light of Data Bias

How can decision makers overcome these issues? First, be aware that bias exists. When the organization identifies the need to make a decision, begin with the end in mind. Explicitly state what you know about the decision at hand and the assumptions going into it. On that foundation, plan the process to work toward the decision, understanding that you may need to take extra steps during the decision-making process. A concerted planning, implementation, and analysis effort built before the process begins will help to control bias from the outset and can enable agile shifts should unforeseen biases emerge.


With objectives and assumptions clear, seek to identify relevant, unbiased data and variables. If possible, enlist a data scientist or impartial counsel to aid in uncovering additional variables that may impact your analysis and decision processes. Incorporating objective, professional, outside assistance at this point can help minimize the effects of internal bias; it can control for biases by providing perspective through a “fresh set of eyes.”


Immediately following the statement of facts and assumptions and the data analysis and modeling steps, introduce skepticism. Force challenges. Assign a ‘Devil’s Advocate.’ Perform alternate-futures exercises. This step requires a trusted advisor, knowledgeable about your objectives, to argue against assumptions, highlight biases and emotional decisions, and question the validity of the conclusions in different environments.


In the last step, a group of objective personnel makes the decision. This may mean purposely excluding critical stakeholders from the process, to ensure that emotion or incomplete knowledge does not bias the choice. The final choice should be made as a group, again following the ‘Devil’s Advocate’ approach, with people explicitly present to argue against implicit and explicit biases.


Once the decision is made, monitor and evaluate it and its outcome. Determine whether the process accurately predicted and produced the expected result. Assess whether the data generated by the process should be fed back into the decision-making model. Adjust the model to improve its performance in future scenarios.

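The monitoring step above can be sketched as a simple loop that compares each decision’s predicted outcome against what actually happened and flags which results warrant feeding back into the model. The decision log, uplift figures, and error threshold are all hypothetical illustrations:

```python
# Each entry records a past decision, the outcome the model predicted,
# and the outcome actually observed (all values invented for illustration).
decision_log = [
    {"decision": "launch product A", "predicted": 0.10, "actual": 0.12},
    {"decision": "expand region B",  "predicted": 0.20, "actual": 0.05},
    {"decision": "reprice plan C",   "predicted": 0.08, "actual": 0.07},
]

ERROR_THRESHOLD = 0.05  # revisit the model when a prediction misses by more than this

for entry in decision_log:
    error = abs(entry["predicted"] - entry["actual"])
    entry["feed_back"] = error > ERROR_THRESHOLD
    status = "feed back and adjust model" if entry["feed_back"] else "on track"
    print(f"{entry['decision']}: error {error:.2f} -> {status}")
```

The point of the sketch is the discipline, not the arithmetic: a decision is not finished when it is made, and a pre-agreed tolerance for prediction error decides, objectively, when the model must be revisited.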

There is no doubt that organizational decision-making can and should be informed by both data and human insight. There is also no doubt that we have yet to reach a point at which humans are capable of making purely data-driven decisions, free of bias. Balancing these realities and limiting the impact of bias takes purposeful, concerted steps before, during, and after the decision. Toffler Associates, experts at bringing cross-industry, orthogonal thinking to the possible outcomes of decisions for public and private sector organizations, can help you build data and decision processes you can trust for confident decisions on issues with lasting impact.


It’s time to recognize that bias cannot be eliminated, but it can be understood, and understanding it is what allows us to make decisions with confidence.



Phil Cunningham


Phil Cunningham is a Senior Associate focused on the integration of technological and human processes to enable successful, large-scale IT implementation, transition, and adaptation within the public and private sectors. Phil earned his MBA at George Mason University and his Bachelor of Science from The College of William and Mary. He is currently a PhD student in Experimental Economics at George Mason University.

