Managers Shouldn’t Fear Algorithm-Based Decision Making

Organizations are increasingly turning to algorithms to make decision making and judgment more rigorous, whether for setting prices or for selecting candidates to interview for a job. But many professionals remain wary of rule-based decision making, and the interaction between formulas and expert judgment remains a gray area. How empowered should employees be to alter or ignore an algorithmically generated decision? How and when should the formulas themselves be adjusted?

Tom Blaser and Linnea Gandhi, two managing directors at the consultancy The Greatest Good (TGG), are enthusiastic champions of algorithms. Their recent Harvard Business Review article, coauthored with Nobel Prize–winning behavioral economist Daniel Kahneman and TGG’s CEO Andrew M. Rosenfield, explains how formulas can, among other benefits, help remedy the hidden scourge of inconsistent human decision making. In an interview with HBR, they discuss how they came to trust algorithms so deeply and why they advise companies to embrace rule-based decision making.

How did you become so trusting of algorithms?

Blaser: When making predictions from data, algorithms tend to be superior to humans. Even for decisions that are traditionally the domain of highly trained experts, decades of studies show that simple statistical models often outperform inconsistent human judgment. We understand that our admiration for algorithms is not widely shared. There is a phenomenon termed “algorithm aversion”: people are more willing to accept flawed decision making from a human than from a formula. We give other people wide leeway and tolerate their errors, but we become harshly judgmental the moment a formula makes a mistake.
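To make that first claim concrete, here is a minimal simulation sketch in Python. Everything in it is assumed for demonstration (the cue count, the weights, and the noise levels are our inventions, not figures from any study): a fixed equal-weight model and a simulated expert see the same inputs, but the expert’s weights drift from case to case and irrelevant noise colors each judgment.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cases = 5000

# Three relevant cues and an outcome that depends on them linearly.
cues = rng.normal(size=(n_cases, 3))
true_weights = np.array([0.5, 0.3, 0.2])
outcome = cues @ true_weights + rng.normal(scale=0.5, size=n_cases)

# A simple "improper" model: equal weight on every cue, never wavering.
model_pred = cues.mean(axis=1)

# A simulated expert: roughly correct weights on average, but the weights
# drift from case to case and irrelevant noise leaks into each judgment.
drift = rng.normal(scale=0.4, size=(n_cases, 3))
expert_pred = (cues * (true_weights + drift)).sum(axis=1)
expert_pred += rng.normal(scale=0.6, size=n_cases)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"model  vs outcome: r = {corr(model_pred, outcome):.2f}")
print(f"expert vs outcome: r = {corr(expert_pred, outcome):.2f}")
```

Run it and the fixed-weight model typically tracks the outcome noticeably better than the inconsistent expert, even though its weights are not the true ones. That is the core of the decades-old finding.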

Gandhi: Our collaborations with companies can sometimes include helping them overcome this aversion. Performing an audit to measure the inconsistency of the “business-as-usual” approach is often a good first step. Once you recognize that you have a problem, the question becomes whether, and especially how, algorithms can be used to help fix it.
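As an illustration of what such an audit might compute, here is a short Python sketch. The data and the metric are hypothetical (invented underwriting quotes and a simple mean pairwise relative spread), a sketch of the idea rather than TGG’s actual methodology: give several professionals the same cases and quantify how far their independent judgments diverge.

```python
import itertools
import numpy as np

# Hypothetical audit data: rows are cases, columns are professionals.
# Values might be, say, the premium each underwriter quoted for a case.
quotes = np.array([
    [9500, 12200, 8700, 11000],
    [4100,  5300, 4800,  3900],
    [7700,  6900, 9400,  8100],
])

def relative_spread(a, b):
    """Gap between two judgments, as a fraction of their average."""
    return abs(a - b) / ((a + b) / 2)

# For each case, average the relative spread over every pair of judges.
for i, case in enumerate(quotes):
    pairs = itertools.combinations(case, 2)
    spreads = [relative_spread(a, b) for a, b in pairs]
    print(f"case {i}: mean pairwise divergence = {np.mean(spreads):.0%}")
```

If leadership expects experts to land within, say, 10% of one another and the audit shows divergence several times that, the inconsistency problem becomes hard to dismiss.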

What’s your advice for how experts should interact with algorithms?

Gandhi: The first rule of thumb is to resist the temptation to override algorithms. Professionals will sometimes find that mechanical predictions feel wrong, or at least seem in need of adjustment. That’s quite natural. After all, humans have access to as much information as they can seek out, while models are limited to a programmed set of variables. Humans can flexibly update assumptions and read into nuance, while models are rigid in their interpretations of a fixed set of data. In theory, we can see the big picture.

The evidence, however, points in the opposite direction. Our access to a seemingly infinite set of variables in any given problem often works to our disadvantage, because we are, consciously or not, swayed by irrelevant factors. For instance, judges’ parole decisions become markedly more lenient just after a lunch break, and admissions officers weight academic attributes more heavily on cloudy days.

Blaser: Another problem is that even when we make a good effort to collect all the relevant inputs for a judgment, we are generally not very good at weighting those inputs appropriately or combining them consistently. In a study of bail decisions, UK magistrates described their process as a relatively complex integration of many pieces of evidence, yet their actual judgments appeared to be driven disproportionately by a simple heuristic based on a few (often idiosyncratic) cues.
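One way to surface that gap, sketched below in Python, is to fit a simple model to a decision maker’s own past judgments and read off the implied weights. The setup is entirely hypothetical (the cue names and the simulated judge are our assumptions, and we use scikit-learn’s LogisticRegression for the fit), not the method of the magistrates study: a judge who nominally weighs five factors but in practice keys on one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Hypothetical case features a decision maker claims to integrate.
features = ["severity", "prior_record", "community_ties", "age", "employment"]
X = rng.normal(size=(n, len(features)))

# Simulated decisions: despite the claimed complexity, they hinge almost
# entirely on one cue (prior_record), plus some random inconsistency.
logits = 2.5 * X[:, 1] + rng.normal(scale=0.8, size=n)
decisions = (logits > 0).astype(int)

# Fit a simple model of the judge and read off the implied weights.
model = LogisticRegression().fit(X, decisions)
for name, weight in zip(features, model.coef_[0]):
    print(f"{name:>15}: {weight:+.2f}")
```

One dominant coefficient with the rest near zero would expose the heuristic hiding behind the professed complex integration.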
