
Self-Deception: Six Steps to CURFEW Bad Information in Analytics and AI

October 29, 2019 by Lara Zada
https://www.lone-star.com/wp-content/uploads/2019/10/CURFEW.png


Anyone who crunches numbers, develops AI, or supports “fact-based” analytics faces a dilemma: narrative is more powerful than our elegant math.

Apparently, humans are hard-wired to prefer simple, deterministic ideas and gladly accept simple, crisp concepts. We prefer them over complex truth. We persist in our preferences even when faced with evidence to the contrary, and we rarely seek evidence to change our views.

This is not news. Daniel Kahneman wrote about it in his book Thinking, Fast and Slow. He includes observations of his own odd biases, pointing out that knowing about a bias does little to extinguish it.

This unpleasant reality intruded again in a recent story, Blind Spots in the ‘Blind Audition’ Study. You might have heard of the blind audition study, which found women were more likely to be offered positions in orchestras when auditions screened performers from judges. For anyone concerned about fairness, it was appealing. The study offered an uncomplicated way to judge performers on merit alone.

It was widely cited. Malcolm Gladwell made it popular, but many of us piled on. However, most of us didn’t carefully read the study, or we read summaries by others who didn’t really read it either.

Now the tables have turned: the data and the study don’t really support the claim that blind auditions helped women. This is NOT a simple topic; many factors are at work, like changing demographics, a decline in gender bias, and others.

It was so simple and neat, so appealing.

The real caution here is not gender bias, orchestras, or blind auditions. The lesson is the power of narrative. A simple, appealing narrative will win out over truth. And once the truth has been hidden by a simple wrong idea, that wrong idea can linger for a long, long time.

A sobering example is the study of the causes that led to the stock market crash of 1929. The crash unfolded over several days. There was a narrative: “now is the time to buy.” But the decline persisted into 1932, and it took more than 20 years for valuations to recover. The causes of the crash are widely agreed on, but according to thoughtful analysis, the agreed-on causes are wrong.

Analysis doesn’t kill bad narratives – only alternative narratives do. For the crash of Black Monday, no narrative has yet emerged to debunk the myths, which are still “studied” in economics.

We’d like to think otherwise. We want to believe “in data we trust.” We want to agree with Richard Feynman when he said, “It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.” Feynman was right that facts and experiments can prove us wrong, but Kahneman is also right: we are superb at ignoring facts.

This has serious implications for AI. If we rely on humans to label data and to arbitrate what is “correct,” we must expect bias and errors. And in many cases, where a powerful false narrative is in force, we must expect consensus error.
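To make consensus error concrete, here is a minimal Python sketch. The labels, bias rate, and panel size are all hypothetical, and this is a generic majority-vote simulation, not a description of any real labeling pipeline:

```python
# Hypothetical illustration: majority-vote "ground truth" from biased labelers.
import random

random.seed(42)

TRUE_LABEL = 0          # what careful fact-checking would conclude
NARRATIVE_LABEL = 1     # the popular, simple, wrong story
SHARED_BIAS_RATE = 0.7  # assumed chance a labeler just repeats the narrative

def biased_labeler(true_label: int) -> int:
    """One human annotator: usually echoes the narrative, sometimes checks facts."""
    return NARRATIVE_LABEL if random.random() < SHARED_BIAS_RATE else true_label

def majority_vote(labels: list) -> int:
    """The usual way labeling pipelines resolve disagreement."""
    return max(set(labels), key=labels.count)

panel = [biased_labeler(TRUE_LABEL) for _ in range(9)]
print("labels:   ", panel)
print("consensus:", majority_vote(panel), "| truth:", TRUE_LABEL)
# Because the bias is shared, adding more labelers entrenches the error
# instead of averaging it away.
```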

What can we do? Here are six questions we can ask to test whether we should be more skeptical about a subject. The mnemonic spells CURFEW. We can’t stop bad narratives from emerging, and in a free society, we should not even want to. But these six questions can help us, both as individuals and as developers of analytics and AI.

Asking these six questions can help us avoid error and build narratives to replace the ones we use to deceive ourselves, and the ones that feed the self-delusion of clients.

Conflicts of Interest – Does the “expert” who owns a narrative gain by it? Some advice is unbiased, but much is not. Advisors and subject matter experts often don’t understand their own biases. When they have reason to want to believe a narrative, believing it is cognitively comfortable. We need to hear from someone who does not agree. Even if the expert with the second (or third) opinion has a conflict too, at least we are hearing another narrative.

Uncertainty – is there more than one answer? People love stories about coins with two heads. We like the idea that we know how a risk will play out. But we don’t.

The world is filled with uncertainty. Some of it offers possible rewards, and some of it we call “risk.” But our brains, and AI, are poor at accommodating it. As we’ve pointed out before, some birds are better at uncertainty than we are.

AI is bad at this for two reasons. First, the most common forms don’t handle probability very well. Second, the humans who trained the AI may be biased.

We need to ask about all the possibilities and, if possible, understand the odds or statistics; we need to understand the entire span of uncertainty. This is one reason why the software platforms Lone Star uses preserve uncertainty. With modern computing power, there is rarely a reason to do otherwise.
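As a minimal sketch of what “preserving uncertainty” can mean, the Python below propagates whole input distributions through a simple model instead of collapsing each input to a single number. The distributions and the cost model are made up for illustration; this is generic Monte Carlo, not Lone Star’s platform:

```python
# Generic Monte Carlo sketch: carry distributions through the model,
# not just point estimates. All distributions here are made up.
import random
import statistics

random.seed(7)
N = 100_000  # samples; modern computing power makes this cheap

# Two uncertain inputs, e.g. unit cost and demand.
unit_cost = [random.triangular(8.0, 14.0, 10.0) for _ in range(N)]
demand = [random.lognormvariate(6.0, 0.4) for _ in range(N)]

# Propagate uncertainty: evaluate the model on every sampled scenario.
total_cost = sorted(c * d for c, d in zip(unit_cost, demand))

point = statistics.mean(unit_cost) * statistics.mean(demand)  # the "one number" answer
p10, p90 = total_cost[int(0.10 * N)], total_cost[int(0.90 * N)]

print(f"point estimate: {point:,.0f}")
print(f"80% interval:   {p10:,.0f} to {p90:,.0f}")
# The interval is the span of uncertainty that a single point estimate hides.
```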

Reliance on Authority – does someone “important” say so? We tend to believe silly things experts say. Harry Markowitz won a Nobel prize in economics because one Sunday afternoon it dawned on him that the prevailing wisdom on portfolio valuation could not possibly be right. An expert had authored a book decades before; no one before Harry bothered to check.

We do need to “stand on the shoulders of giants,” as Newton said. And we can’t check everything. But we can ask: what is the track record of this expert?

Today the authority is not a person; it is data. When “authority” is a data set or data science, Judea Pearl, one of the great living experts on the mathematics of AI and causality, suggests the TPT test:

  • Transparency – are the analytics or the model processing the data understandable? It doesn’t have to be simple, but can we test it? Audit it? Do we understand what’s going on inside?

  • Power – is the predictive power useful? The most commonly accurate weather prediction is that tomorrow will be like today. That transparent model has predictive “accuracy,” but it is not usefully predictive; there is no useful power. (The sketch after this list makes the point concrete.)
  • Testability – can we test to see if the assumptions in the model or analytics are consistent with the data and anything else we hold to be correct?
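Here is a minimal sketch of the “Power” question, using made-up temperatures and a persistence baseline as an assumed stand-in for “tomorrow will be like today.” A transparent model can score well on accuracy while adding no useful predictive power:

```python
# Made-up daily high temperatures, for illustration only.
daily_highs = [71, 72, 72, 73, 71, 70, 70, 69, 71, 72, 73, 73]

def persistence_forecast(history):
    """The 'tomorrow will be like today' model: predict the last observation."""
    return history[-1]

# Score the baseline on every day after the first.
errors = [
    abs(persistence_forecast(daily_highs[:i]) - daily_highs[i])
    for i in range(1, len(daily_highs))
]
mae = sum(errors) / len(errors)

print(f"persistence mean absolute error: {mae:.2f} degrees")
# Low error and full transparency, yet the model can never anticipate a change.
# Any proposed model must beat this baseline before claiming useful power.
```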

Familiar Risks – do the risks seem to be known? In many settings, we have a risk narrative that’s nothing more than tribal lore. Some members of our company, family, or profession faced a risk that went wrong, and our tribe decided we needed to keep an eye on it. There are nearly always two important errors in the way we deal with these familiar risks. First, they blind us to other risks, which are often much worse; this is a placebo effect, and we feel good because we are doing risk management. The second important error is our failure to understand our options for mitigation.

To break free from the forces of familiar risk, we need to look at what others think. What risks do other tribes deal with, and how do they deal with them? Risks can be mitigated in two ways: we can try to prevent the dreadful things from happening, or we can try to moderate their impact if they do. Getting outside our own tribal lore helps with naming risks we might not have considered, finding means to avoid them, and moderating their impact. The narrative of familiar risks can’t do any of that.

Ease of Understanding – is the answer easy to grasp? A commonly taught market theory is that the 1929 crash was caused (in part) by margin loans. People were able to borrow to buy stock, and the money was too easy. This is an appealing and simple narrative, but the facts are a little different. Margin lending had become regulated, and lending was less available. The truth may be that margin loans were a bad idea, but regulating them too quickly and aggressively made things worse. Or maybe the counter-narrative is just a simple story that’s also flawed.

The point is that we need to be wary of any simple explanation. And, the more complex the topic, the less likely a simple narrative is helpful. Occam’s razor seems to teach otherwise. But the simplest explanation of a complex topic may not be simple. William of Occam probably said something like, “Entities should not be multiplied without necessity.” Sometimes it is necessary.

Work – is it hard work to grasp other explanations? Sadly, there are many important things that require effort before we can grasp them.

Quantum mechanics can make your head hurt, but then, no one really understands quantum mechanics. That’s in part an old joke, and both the joke and quantum mechanics are still true.

The easy narrative helps our lazy brains avoid work, as Kahneman explains. And, if we want to help promote truth and accuracy, we’ll need to generate alternative narratives. That’s work worth doing.

About Lone Star Analysis

Lone Star Analysis enables customers to make insightful decisions faster than their competitors.  We are a predictive guide bridging the gap between data and action.  Prescient insights support confident decisions for customers in Oil & Gas, Transportation & Logistics, Industrial Products & Services, Aerospace & Defense, and the Public Sector.

Lone Star delivers fast time to value, supporting customers’ planning and ongoing management needs. Utilizing our TruNavigator® software platform, Lone Star brings proven modeling tools and analysis that improve customers’ top line, by winning more business, and improve the bottom line, by quickly enabling operational efficiency, cost reduction, and performance improvement. Our trusted AnalyticsOS℠ software solutions support our customers’ real-time predictive analytics needs when continuous operational performance optimization, cost minimization, safety improvement, and risk reduction are important.

Headquartered in Dallas, Texas, Lone Star is found on the web at http://www.Lone-Star.com.
