[photo_box title="Fake News: Orson Welles" image="2189"]War of the Worlds radio broadcast[/photo_box]

Artificial Intelligence Experiments – Episode #2: AI Salvation from Fake News

Singularity Won’t Occur in 2017 – Those stories are Fake News.

The War of the Worlds radio broadcast is not very credible fake news… today.  You are pretty sure aliens don’t live on Mars.  You probably think of Orson Welles as a movie star, the one who made Citizen Kane.  But the radio broadcast of H.G. Wells’s story about Martians created a panic, because it was fake news.

You know all this without help.  But what about news that might be fake, or might be real?  Can AI help?

Lone Star tests provide some insight into whether a benevolent AI can block fake news. One recent test involved the web’s best-known image search, https://images.google.com/, as explained in Episode #1.

For this experiment, we submitted 46 images for classification.  The first batch we submitted were pictures of humans: ourselves.  The rest were images of non-humans, including some animals chosen because we’d been told Google was good at identifying them.

We rated Google as “Sort of right” most often (about 67%).  “Sort of right” means a human would almost certainly have chosen a different description, but Google’s choice was understandable.  One colleague submitted a picture of himself wearing tinted eyeglasses.  The result was “eyewear.”   Other silly examples are found in Episode 1.

Google never identified zebras.  It was only “sort of right” with terms like “wildlife.”

“Sort of right” is much like the problem of fake news.  Fake news is “sort of right” because it talks about real people and places. It often mentions events that did happen along with things that did not.  We don’t hold out much hope for AI classifiers blocking fake news.  Failure to notice a zebra is a pretty significant limitation of classification.  Similarly, giraffes are distinctive; that didn’t work either.

Our pessimism on fake news detection has to do with how AI classifiers work.   You need training examples to teach the AI.  While there are plenty of pictures of zebras, there are zero prior examples of the fake news story you want blocked.  Fake news is a much harder problem because what makes news interesting, or “newsworthy,” is novelty.

AI has a hard time with novelty.   This is a harder problem than zebras or giraffes.  If we built a fake news classifier, it would know “dog bites man” is probably true.  But it would fail when we need it most: “man bites dog.”
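To make the training-example problem concrete, here is a minimal sketch, not anything Lone Star built: a toy headline classifier using scikit-learn.  Every headline and label in it is hypothetical.  Because a classifier can only echo patterns present in its training data, a genuinely novel story has nothing to match against.

```python
# Minimal sketch (not Lone Star's method): a toy "fake news" classifier
# trained on a tiny, hypothetical labeled dataset. Every headline and
# label below is made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: headlines labeled 1 = real, 0 = fake.
headlines = [
    "Dog bites man in city park",            # ordinary and real
    "Local council approves road repairs",   # ordinary and real
    "Aliens from Mars invade New Jersey",    # the 1938 broadcast premise
    "Celebrity secretly replaced by clone",  # fake
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(headlines, labels)

# The classifier can only echo patterns it has already seen. A truly
# novel, newsworthy story ("man bites dog") has no prior examples, so
# this prediction is little better than a guess.
print(model.predict(["Man bites dog outside courthouse"]))
```

Even with a far larger training set, the core problem remains: no archive of past stories contains the novel fake story you need to block tomorrow.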

Fake news AI will have to wait until after zebras.

Can another kind of classifier save us?  Can we use probabilistic, or Bayesian, estimators?   These seem doubtful, too.  A real rare event (hurricane hits Alaska) sounds a lot like a fake one (hurricane hits Alaska).

Something that should happen only once in a thousand years WILL happen this year.  In fact, several of them will happen.  We just don’t know WHICH unlikely things will happen.
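Here is the back-of-the-envelope arithmetic behind that claim; the number of candidate rare events (5,000) is an assumption chosen only for illustration.

```python
# Rough arithmetic, not a model. Assume N independent candidate events,
# each with a 1-in-1000 chance of occurring in any given year.
# N = 5000 is an assumed, purely illustrative number.
p = 1 / 1000
N = 5000

expected_this_year = N * p                # expected count of rare events
prob_at_least_one = 1 - (1 - p) ** N      # chance that at least one occurs

print(f"Expected rare events this year: {expected_this_year:.1f}")  # 5.0
print(f"Chance at least one occurs: {prob_at_least_one:.1%}")       # ~99.3%
```

That is why a prior probability alone cannot separate the real once-in-a-thousand-years hurricane from the invented one.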

Thinking about real and fake Alaska hurricane stories, you might train a classifier to block both, or neither, but figuring out which one is fake is very hard.  That’s why humans fall for fake news, too.

Another doubtful (but popular) idea is that fake news classifiers might “consider the source.”  But “real news” outlets simply forward reports, both “fake” and “real.”  We can’t tell where they found their stories.   “Consider the source” won’t help, for at least four other reasons.

  1. Early, honest, but wrong reporting. A former cabinet official famously said, “first reports are always wrong.” He meant everyone: media, the CIA, and the internet.   The fog of unfolding events and the rush to report lead to errors.   Were reporters wrong when they thought there was a campus shooting in Ohio?  Was that fake news?  No.  Machines can also be honest but wrong.  In IoT, there are many ways for sensors to fib to us.

  2. Trolls. Episode #3 will deal with trolls, but for now we just point out that trolls are in all kinds of “real” media, not just on phony news websites.  They go to a lot of trouble to seem legit.   Trolls are also why we can’t expect voting to be a reliable way to vet fake news.

  3. Confirmation Bias. It is hard for humans to see objectively what is true.  We see things through our biases.   Even after we know something is untrue, we cling to what we think should be true, or what we wish were true.   This is why we fall for satire stories at sites like theonion.com.  And this is just one bias; you can read about more sources of bias here: https://techcrunch.com/2016/12/10/5-unexpected-sources-of-bias-in-artificial-intelligence/

  4. Incomplete Context. Satire helps explain this, too. Incomplete context leaves us unable to “get” jokes or detect phoniness.  If you read Indian pseudo-news sites like http://www.theunrealtimes.com/ or http://www.fakingnews.firstpost.com/, you find stories on demonetization.  That’s a big deal in India, but the rest of the world may not quite “get it.”

So, whether you long for a powerful AI to end fake news, or fear an AI singularity ushering in Orwellian control of the free press, you should probably find something else to fret about.

For the Industrial Internet of Things (IIoT), fake news is similar to some difficult problems, like false alarms. But, unlike news reporting, IIoT can use forms of machine learning and cause/effect relationships like the ones we use in AnalyticsOS.   IIoT will be able to deal with problems close to fake news long before the New York Times does.

About Lone Star Analysis

Lone Star Analysis enables customers to make insightful decisions faster than their competitors.  We are a predictive guide bridging the gap between data and action.  Prescient insights support confident decisions for customers in Oil & Gas, Transportation & Logistics, Industrial Products & Services, Aerospace & Defense, and the Public Sector.

Lone Star delivers fast time to value, supporting customers’ planning and ongoing management needs.  Utilizing our TruNavigator® software platform, Lone Star brings proven modeling tools and analysis that improve customers’ top line, by winning more business, and improve the bottom line, by quickly enabling operational efficiency, cost reduction, and performance improvement. Our trusted AnalyticsOS℠ software solutions support our customers’ real-time predictive analytics needs when continuous operational performance optimization, cost minimization, safety improvement, and risk reduction are important.

Headquartered in Dallas, Texas, Lone Star is found on the web at http://www.Lone-Star.com
