How AI Fails and How to Fix It

by Chris Pehura, C-SUITE DATA — 2024/03/20

Email us at to see how we make your plans work.

For AI to work correctly, it needs to identify patterns correctly. Patterns can be spatial, like the location of parts on an object: identifying a car part, or a wing part on a plane. They can be patterns in data that signal a change in customer behavior or a new business opportunity. They can be patterns of movement: how something travels along the factory floor, across the road, or even on store shelves.

For AI to work, it needs to identify patterns, but it will never do so 100% correctly. We should strive to optimize AI to increase its accuracy, while accepting that it will sometimes be wrong and that there is a business impact when it is. When designing AI and picking the data sets, you have to ask two questions.

  • What does the business gain when the AI works?
  • What does the business lose in terms of costs and risk when the AI doesn’t work?

When designing AI, there is an AI part that you optimize, and a manpower part that complements the AI to keep it on track. You don't want the AI to go completely off the rails. You want a kill switch and some way to intervene. You plan for when to pull the plug. This applies no matter what kind of AI it is.
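The kill switch and intervention point can be sketched as a thin wrapper around any model. This is a minimal, hypothetical sketch (the class and method names are mine, not from any particular library), assuming a model exposes a `predict` method:

```python
# Minimal kill-switch wrapper around any model (illustrative sketch).
class GuardedAI:
    def __init__(self, model, max_errors=5):
        self.model = model           # any object with a predict(x) method
        self.max_errors = max_errors
        self.errors = 0
        self.enabled = True          # the "kill switch"

    def predict(self, x):
        if not self.enabled:
            # Pulled plug: force the business back to its manual process.
            raise RuntimeError("AI disabled -- route work to manual process")
        return self.model.predict(x)

    def report_error(self):
        # Human reviewers call this each time the AI was wrong.
        self.errors += 1
        if self.errors >= self.max_errors:
            self.enabled = False     # pull the plug automatically

    def intervene(self, enabled):
        # Manual override in either direction.
        self.enabled = enabled
```

The point of the wrapper is that the off switch lives outside the model: the business can disable the AI without retraining or redeploying anything.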

It could be client-facing AI that targets prospects, converts them to customers, and retains and services those customers. Or production AI that focuses on the factory floor, the supply chain, or moving patients through a hospital. Or financial AI that reviews expenses and budgets, manages them, spots business opportunities in the data, and then gives recommendations. All of these AIs need to identify patterns correctly to work. And the business needs to respond differently depending on when the AI works and when it doesn't.

Suppose a car dealership targets a prospect to sell them a new car, but it's the wrong prospect. What are the costs? Only the cost of targeting that prospect; there are no other costs or risks. Now suppose that prospect is a current customer who recently bought a car. The pitch could turn that customer off, leaving them unwilling to deal with the dealership later. Or consider this: the dealership's service department just charged a customer $10,000 in repairs, and the dealership then sends that same customer a message saying "we'll buy your car for $10,000." How will that customer react? These scenarios need to be considered when dealing with AI.

Suppose the production AI predicts a machine will break down, but it isn't going to. What's the cost to the business when the machine is checked? Only the cost of the check; no other costs, no other risks. But if the AI wrongly decides the machine isn't going to break down and then it does, what's the cost? Lost revenue, higher repair costs, angry customers, and further losses to revenue and corporate reputation. Those costs need to be considered when designing AI and the business processes that support it. What about hospitals, where a patient is assessed? The patient can be incorrectly assessed as having cancer, or incorrectly assessed as not having it. Each of those outcomes carries different costs, different risks, and different liabilities.

Financial AIs have similar issues. But because people like keeping their eyes on the cash, the finances, and the expenses, their costs and risks are minimal. There are always a lot of sign-offs with these AIs. People do not like it when money starts flowing out and they don't understand why.

Notice that these different types of AI form an ecosystem in the company. A failure in one can cause failures in others. To avoid problems, you need people to support the AIs, to intervene, and to follow different processes depending on how each AI is right or wrong. An AI can be wrong in two ways: a false positive or a false negative. A false positive means it thinks something is there when it's not. A false negative means it thinks something isn't there when it is. What you do to address costs and risks may differ between a false positive and a false negative. In either case, we need to keep improving the accuracy of our AI to reduce the risks and costs to the business.
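The two questions from earlier, what the business gains when the AI works and what it loses on false positives and false negatives, can be combined into one expected-value calculation. A minimal sketch, where every rate and dollar figure is an illustrative assumption (none come from the article), using the predictive-maintenance example:

```python
# Expected business impact of an AI, split by outcome type.
# All rates and dollar figures are illustrative assumptions.
def expected_value(n_cases, rates, values):
    """rates/values are dicts keyed by outcome:
    'tp' (correctly flagged), 'fp' (false positive),
    'fn' (false negative), 'tn' (correctly ignored)."""
    assert abs(sum(rates.values()) - 1.0) < 1e-9, "rates must sum to 1"
    return sum(n_cases * rates[k] * values[k] for k in rates)

# Predictive maintenance: checking a healthy machine is cheap (fp),
# missing a real breakdown is expensive (fn).
rates  = {"tp": 0.05, "fp": 0.10, "fn": 0.02, "tn": 0.83}
values = {"tp": 5000, "fp": -200, "fn": -20000, "tn": 0}
print(expected_value(1000, rates, values))  # -170000.0
```

Even with these made-up numbers, the shape of the result is the article's point: a small false-negative rate can dominate the total because its per-case cost is so much higher, so that is where accuracy improvements pay off most.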

We can make AI more accurate by optimizing its model and giving it more training data. But we have to be prepared to hit a floor where we can't make the AI more accurate no matter how much more data we give it. In many cases, adding more training data actually makes the AI less accurate at identifying patterns. To break past the floor, use multiple models.

Have your AI use multiple models. There will still be a floor, but you can push that floor downward so the AI is more accurate at identifying patterns. The models don't have to be the same kind of implementation. They can be neural nets, semantic models, or rule trees; it doesn't matter. What matters is that you balance them and have them work in tandem. There are many ways to combine models: strung together in sequence, run in parallel, or connected in a network. But eventually you reach a new floor, a point where the AI is extremely difficult to train, extremely difficult to maintain, and when it breaks you'll have no idea how to fix it. The solution? Use multiple AIs.
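The simplest "in parallel" combination is a majority vote across models. In this sketch the three toy classifiers stand in for a neural net, a rule tree, and a semantic model; the thresholds and labels are made up purely for illustration:

```python
from collections import Counter

# Majority vote over heterogeneous models (illustrative sketch).
# Each "model" is any callable returning a label.
def ensemble_predict(models, x):
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]

# Three toy classifiers that disagree near the decision boundary.
neural_net = lambda x: "defect" if x > 0.6 else "ok"
rule_tree  = lambda x: "defect" if x > 0.8 else "ok"
semantic   = lambda x: "defect" if x > 0.9 else "ok"

models = [neural_net, rule_tree, semantic]
print(ensemble_predict(models, 0.85))  # "defect" wins 2-1
print(ensemble_predict(models, 0.50))  # "ok" wins 3-0
```

Sequencing or networking the models follows the same idea: each model's weaknesses are covered by the others, which is what pushes the accuracy floor downward.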

Use multiple AIs that each specialize in a specific area. This reduces complexity and makes them easier to train and retrain. It also lets you tune each AI separately with its own environment settings, helps you load-balance your AIs, and helps you manage the risks they create. The more transparent the AI interactions are, the more potential business integration and intervention points you have. But a problem emerges: the more AIs you have, the more risks you must manage concurrently. As a risk manager, you need an AI to help with the heavy lifting.
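The specialization idea can be sketched as a simple dispatch table: one small model per business area, each with its own settings, each swappable or retrainable without touching the others. Everything here, areas, thresholds, labels, is a hypothetical illustration:

```python
# One specialist AI per business area (hypothetical sketch).
# Each entry has its own threshold setting and can be replaced
# independently of the others.
specialists = {
    "sales":   lambda score: "target" if score >= 0.70 else "skip",
    "factory": lambda score: "inspect" if score >= 0.90 else "run",
    "finance": lambda score: "flag" if score >= 0.95 else "approve",
}

def dispatch(area, score):
    return specialists[area](score)

print(dispatch("factory", 0.92))  # inspect
print(dispatch("finance", 0.92))  # approve
```

Note that the same score of 0.92 triggers action on the factory floor but not in finance, because each area carries its own tuned settings, which is the benefit the paragraph above describes.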

Use an AI that helps mitigate risk within your company. It doesn't have to be a smart AI; it just has to make sure all the other AIs don't stray too far off track and stay within their predefined parameters. Once the AI ecosystem is better understood, this risk-management AI can be optimized further to reduce risks and costs to the business.
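A risk monitor in this spirit can start as nothing more than bounds-checking on each AI's outputs. A minimal sketch, with made-up AI names and limits, where anything outside its predefined parameters raises an alert for human intervention:

```python
# A deliberately "not smart" risk monitor: it only checks that each
# AI's outputs stay inside predefined parameters (names are made up).
limits = {
    "pricing_ai": {"min": 1_000, "max": 50_000},  # dollar quotes
    "routing_ai": {"min": 1, "max": 20},          # trucks dispatched
}

def check(ai_name, value):
    lim = limits[ai_name]
    in_bounds = lim["min"] <= value <= lim["max"]
    if not in_bounds:
        # In a real system this would page a human or trip a kill switch.
        print(f"ALERT: {ai_name} produced {value}, outside {lim}")
    return in_bounds

print(check("pricing_ai", 12_000))  # True
print(check("routing_ai", 75))      # False, triggers an alert
```

Because the monitor knows nothing about how the other AIs work internally, it keeps working even as those AIs are retrained or replaced, and its limits can be tightened as the ecosystem becomes better understood.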

The key to a good AI is that it is good at identifying key patterns. That's how AI works, and that's how AI works extremely well. When designing AI, ask yourself two questions. One, what does the business gain when the AI works? Two, what does the business lose when the AI fails? It can fail in two ways, with a false positive or a false negative, and there are risks and costs from both.

AI has tics and oddities. Make it your top priority to learn to adapt to and anticipate the outcomes when AI is acting weird. The good. The bad. And the really ugly.
