
Can AI Support Industrial Prescriptive Analytics?

There are plenty of claims about Artificial Intelligence (AI), prediction, and prescription. Some are true. AI has proven powerful in web marketing, retail product recommendations, and consumer help desks.

Will AI work in industrial applications? If it did, that would be wonderful. Many AI methods are self-taught, avoiding the need for process mapping and other groundwork that can seem tedious. So, will it work?

AI is not good at everything. We published some disappointing findings on image recognition a few months ago, and Japanese researchers recently showed similar results. Voice and image recognition are interesting examples: AI often fails at them, yet can still be useful in some cases.

Google and Microsoft both use AI voice recognition, and both claim to perform at about the 95% level (a 5% error rate). While there has been a great deal of bragging about this, it is useful to remember that performance above 92% has been available since the 1990s.

Is 95% good enough? It depends. It’s fine for some virtual assistants. It’s not good enough for 911 calls; stressed callers to emergency services generate higher error rates than someone requesting a Netflix movie.

Understanding Differences in Applications

So, it is important to understand the differences between applications. Many marketing applications are about being “less wrong.” As we’ve explained before, this can be quite valuable in applications like web retailing.

Do AI methods apply to industrial applications, like the Industrial Internet of Things (IIoT)?

A few AI methods will apply. But broadly, we think the answer is “no.” For all its promise in lowering the cost of consumer sales and services, AI is going to fail badly in many of the places people hope to use it industrially. We think there are three reasons why, and all three stem from the requirements of production and industrial processes.

Reason #1 – Complex Failure Root Causes

Lone Star has more than a decade of experience in production analytics. One lesson from all that experience is that 21st-century production is complex. We have created solutions for extremely complex production processes, where it is not hard to find thousands or even millions of tiny steps. Each step has variability, and each depends on equipment, people, and whatever is being processed.

For these complex systems, it is easy to show that current AI (neural networks, for example) simply can’t comprehend big processes. But even for simple systems, AI fails to cope with failure prediction and prescription.

The least complex industrial machine is a simple pump attached to an electric motor. There are roughly 50 ways this simple system can fail, and more than 50 causes of failure. For example, one way the motor can fail is breakdown of its winding insulation. There are several causes of that breakdown: age, operating over temperature, too much voltage, too much current, and others. Some root causes will degrade other parts of the pump/motor, too. If we think of a matrix with 50 failure modes and 50 root causes, there are 2,500 cells in that matrix. How long would we have to collect data for the AI to gain enough insight to prescribe the correct action, or even to diagnose the correct impending failure?

Of course, the answer is that we can’t wait that long. Years of data may not fill out the matrix. At this point AI promoters object: “we will learn the most common ones, the ones that count!” But modern industrial and production processes are mature. All failures should be rare.
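A back-of-envelope calculation makes the point. The numbers below are purely hypothetical, but the arithmetic is what matters: when failures are rare and spread across 2,500 failure/cause combinations, the machine-years needed to observe each combination even a handful of times become absurd.

```python
# Back-of-envelope sketch: how long to fill a 50 x 50 failure/cause matrix?
# All rates here are hypothetical illustrations, not measured values.

failures_per_machine_year = 0.1   # a mature process: one failure per decade
cells = 50 * 50                   # failure modes x root causes
examples_per_cell = 20            # a modest training target per combination

# Even if failures were spread evenly across cells (the best case for
# the AI), the total number of labeled failures needed is:
total_failures = cells * examples_per_cell

machine_years = total_failures / failures_per_machine_year
print(f"{machine_years:,.0f} machine-years of data")  # 500,000 machine-years
```

Even a fleet of a thousand identical machines would need centuries of operation, and real failures are not spread evenly, so the rare cells stay empty even longer.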

Even if we could collect the data needed to train the AI, processing time for our 50 x 50 problem would be awful. To cope, AI methods use a trick: lumping things together. The fancy name for this is “dimensionality reduction.” The AI finds things which are correlated, or which otherwise cluster together. In other words, AI needs to simplify the process, because it can’t cope with the real complexity.

In the pump and motor, for example, we notice that most of the time the rate of fluid flow through the pump, the electrical current flowing through the motor, and the output pressure of the pump are very highly correlated. So, AI lumps them together and treats them as one variable. If we train the AI with this assumption, we lose all insight into what happens when they are NOT correlated, which is to say, when something is wrong. If our simple pump and motor are too complex, what chance does AI have to address a modern factory or oil well? We think the answer is: no chance at all.
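Here is a minimal sketch of that failure mode, using simulated pump data and a single principal component standing in for a full AI pipeline:

```python
# A minimal sketch, with simulated data, of how "lumping" correlated
# sensors can hide exactly the condition we care about.
import numpy as np

rng = np.random.default_rng(0)

# Normal operation: flow, motor current, and output pressure move together.
base = rng.normal(size=(1000, 1))
normal = np.hstack([base + 0.05 * rng.normal(size=(1000, 1)) for _ in range(3)])

# The first principal component of the normal data is roughly the
# equal-weight average of the three sensors: the "lumped" variable.
_, _, vt = np.linalg.svd(normal - normal.mean(axis=0), full_matrices=False)
pc1 = vt[0]  # approximately (1, 1, 1) / sqrt(3), up to sign

# A fault that BREAKS the correlation: current spikes while flow and
# pressure sag (say, a blocked impeller). Values in standardized units.
fault = np.array([-1.0, 2.0, -1.0])  # [flow, current, pressure]

print("typical normal scores:", (normal @ pc1)[:3])
print("fault score:", fault @ pc1)  # ~0: looks perfectly normal when lumped
```

The fault projects to roughly zero along the learned component, squarely inside the normal operating range. The one reading that matters is exactly the one the simplification throws away.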

When faced with even moderate complexity, typical AI methods applied to production processes break down. We can’t find enough clean data to train them, and even if we could, we would be forced to accept slow processing or painful simplification. Those simplifications hide root causes and make true prescription impossible.

Reason #2 – Ongoing Operations Evolution

The second reason AI has little chance to support industry is the constant change in these processes.

Production managers are constantly changing the configuration of their capital assets, their workforce, and the nature of their production.

Factories have changing product demand and product mix. Oil wells change their rate of flow and the constituents of their production flows.

But AI, built on the presumption of long observation times and statistical correlations, has trouble telling the difference between a good change (more oil coming up) and a bad one.
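To make this concrete, here is a minimal sketch (with invented numbers) of the kind of baseline monitor many of these tools amount to. It flags any statistically significant shift, and it has no way to label the shift good or bad:

```python
# Hypothetical sketch: a threshold monitor trained on a stable history
# flags ANY shift, with no notion of whether the change is good or bad.
import numpy as np

rng = np.random.default_rng(1)
history = rng.normal(100.0, 5.0, size=500)  # e.g., barrels/day in training
mu, sigma = history.mean(), history.std()

def is_anomaly(reading: float) -> bool:
    """Flag any reading more than 3 sigma from the learned baseline."""
    return abs(reading - mu) > 3 * sigma

print(is_anomaly(130.0))  # True: a GOOD change (more oil) raises an alarm
print(is_anomaly(70.0))   # True: a BAD change looks identical to the model
```

Every deliberate reconfiguration by a production manager looks, to the model, like an anomaly to be investigated or, worse, retrained away.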

The shifting nature of industrial operations adds to the challenge of complexity.

Reason #3 – Poor Time to Value

Even in the few cases where AI can work, a third problem faces us: poor time to value. It turns out that self-taught AI is not “free.” It is common for the training period to take months, and that may come after months of data collection. That means waiting a long time before any value appears.

Since Net Present Value is a function of time, waiting months, or over a year, is a bad thing.
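A simple discounted cash flow sketch shows the cost of that wait. The benefit figures and discount rate below are hypothetical:

```python
# Back-of-envelope NPV sketch; all figures are hypothetical.
def npv_of_benefits(monthly_benefit: float, start_month: int,
                    horizon_months: int, annual_rate: float = 0.10) -> float:
    """Present value of a benefit stream that begins at start_month."""
    r = (1 + annual_rate) ** (1 / 12) - 1  # equivalent monthly discount rate
    return sum(monthly_benefit / (1 + r) ** m
               for m in range(start_month, horizon_months + 1))

# Same solution, same 3-year horizon; one delivers value now, the other
# after a year of data collection and training.
print(f"starts month 1:  ${npv_of_benefits(100_000, 1, 36):,.0f}")
print(f"starts month 13: ${npv_of_benefits(100_000, 13, 36):,.0f}")
```

In this illustration, roughly a third of the value disappears while the model trains, before counting the cost of the training itself.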

There is no free lunch. Self-training AI carries real costs, and in many cases it costs more than pragmatic alternatives.

About Lone Star Analysis

Lone Star Analysis enables customers to make insightful decisions faster than their competitors. We are a predictive guide bridging the gap between data and action. Prescient insights support confident decisions for customers in Oil & Gas, Transportation & Logistics, Industrial Products & Services, Aerospace & Defense, and the Public Sector.

Lone Star delivers fast time to value supporting customers’ planning and ongoing management needs. Utilizing our TruNavigator® software platform, Lone Star brings proven modeling tools and analysis that improve customers’ top line, by winning more business, and their bottom line, by quickly enabling operational efficiency, cost reduction, and performance improvement. Our trusted AnalyticsOS℠ software solutions support our customers’ real-time predictive analytics needs when continuous operational performance optimization, cost minimization, safety improvement, and risk reduction are important.

Headquartered in Dallas, Texas, Lone Star is found on the web at http://www.Lone-Star.com.
