RISKS from AI are the real COSTS

by Chris Pehura, C-SUITE DATA — 2024/03/18

Email us at contactus@csuitedata.com to see how we make your plans work.


AI is a real game changer and I’m happy that it’s happening. I’ve loved AI ever since I was a kid. It got me into programming. It got me into computer engineering. I’ll never forget my experiences with Prolog, Lisp, and my thesis on syntactic pattern recognition. They’re still fresh in my mind despite being 30 years behind me.

AI has been around for quite a while in the background. Even in business engineering and business analysis efforts, there were pockets of AI that I helped develop and drive. There are so many opportunities for AI that you can leverage. It helps reduce your costs. It helps reduce your employee headcount, helps you manage your talent pool and skill sets, and helps with training. The thing with AI that a lot of people don’t want to talk about is the negatives. And there’s one real big negative: when AI screws up, the damage can be severe.

Remember a few weeks ago, when Google had their Gemini incident? Gemini, Google’s AI, produced images of Chinese Nazis and black popes as part of its results. People were pretty ticked off about that. Google lost billions in market value in just a matter of days. Given the nature of how AI works, a surprise like this should be expected. We should assume it’s normal. AI has limitations, and because of these limitations, there can be very severe risks in using AI.

For the simple stuff, the simple AI, like routing calls and work, the damage is pretty minimal. People might be a little upset, but you can smooth that over. It’s when you use AI to route patients in a hospital, or have it read a patient’s medical images to identify cancer and heart problems, that the AI being wrong results in really bad things. Loss of life. Loss of reputation. Getting sued. People don’t talk about this stuff. You hear hyperbole. Dystopian futures. Machines rising up to take over. Not, how do you minimize the risk of using AI?

You can use people and their expertise to keep AI in check. But doing that just delays the inevitable. We’re going to hit a wall where, because of cost pressures, we will be forced to develop new AI to support the AI. This new AI will specialize in risk management for AI. We already see this with deeply specialized AIs being combined and complemented with other AIs to keep each other in check. Deep learning has aspects of this too.
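To make that checker pattern concrete, here is a minimal sketch in Python. Everything in it is a hypothetical placeholder: the stubbed models, the risk categories, and the review routing are assumptions for illustration, not any particular vendor’s system.

```python
# Minimal sketch of one AI keeping another in check.
# The two "models" are stubs; the categories are hypothetical placeholders.

RISKY_CATEGORIES = {"medical_advice", "legal_advice", "self_harm"}

def generate_answer(prompt: str) -> str:
    """Primary model: produces a draft answer (stubbed here)."""
    return f"Draft answer to: {prompt}"

def classify_risk(text: str) -> set:
    """Checker model: flags risk categories in the draft (stubbed here)."""
    flags = set()
    if "diagnos" in text.lower():
        flags.add("medical_advice")
    return flags

def answer_with_guardrail(prompt: str) -> str:
    draft = generate_answer(prompt)
    flags = classify_risk(draft) & RISKY_CATEGORIES
    if flags:
        # Flagged drafts go to a human reviewer instead of the user.
        return f"[Held for human review: {', '.join(sorted(flags))}]"
    return draft

print(answer_with_guardrail("Can you diagnose my chest pain?"))
```

The point of the pattern is that the checker is cheap to run on every output, so the expensive humans only see the small fraction of cases the second AI flags.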

I see a brand new discipline growing out of this. It won’t take long for risk managers who specialize in AI to appear. They will use specific tools to assess and measure the risk of AI systems.
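What might those tools measure? A minimal sketch, assuming the classic likelihood-times-impact scoring from general risk management; the use cases and numbers below are illustrative assumptions, not real assessments.

```python
# Minimal sketch of scoring AI use cases by risk.
# Likelihood and impact values are illustrative assumptions, not real data.

AI_USE_CASES = [
    # (use case, likelihood of a damaging error 0-1, impact of that error 1-5)
    ("call routing",        0.10, 1),
    ("patient routing",     0.05, 4),
    ("cancer image triage", 0.05, 5),
]

def risk_score(likelihood: float, impact: int) -> float:
    """Classic risk score: likelihood of failure times severity of impact."""
    return likelihood * impact

# Rank use cases so the riskiest gets the most oversight.
ranked = sorted(AI_USE_CASES, key=lambda c: risk_score(c[1], c[2]), reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name:20s} risk={risk_score(likelihood, impact):.2f}")
```

Even a toy ranking like this makes the earlier point visible: the medical use cases rate higher than call routing, not because the AI fails more often, but because the cost of being wrong is so much higher.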

With the way things are going and evolving, every leader in the C-Suite, every CxO, is going to need some sort of understanding of AI. And the best way to get that is to understand the limitations of AI. These limitations are what cause all the risks: these little bottlenecks, these little ticks in how AI works. Understanding the limitations of AI is like understanding someone’s personality. You can anticipate how they’re going to act and how they’re going to behave. In that sense, understanding AI is a form of expectation management rather than education. This new understanding and acceptance of how to use AI will expand, mature, and evolve into various management practices that will find their way under the executive management umbrella. We’ll have managers focused on managing mixed teams of people working side by side with AIs.

Because of cost pressures, communication chains, and risks, tech and non-tech people alike will need the same level of understanding of the limitations of AI. It is the only way we can keep the damaging consequences of AI in check.


Email us at contactus@csuitedata.com to see how we make your plans work.