The Hype and Hope of Artificial Intelligence

September 2023

Few innovations have captured the imagination and sparked anticipation across industries the way artificial intelligence has.

Artificial intelligence (AI) has become a buzzword permeating every corner of the tech space. Its promise of increased efficiency and automation has led to a surge of excitement and expectations. Behind that excitement also lies a sense of hope—hope that this technology can solve complex problems and usher in a brighter future.

John Parkinson, former CTO of Capgemini and TransUnion, and current Partner and Managing Director at Parkwood Advisors, has extensive experience implementing AI tools and has seen firsthand the impacts they can have on a company and its employees. Greg Selker, Managing Director at Stanton Chase Baltimore and Regional Sector Leader of Technology for North America, recently spoke with John about what he’s seeing in the AI space and whether the hype and hope of AI are justified.  

This interview was condensed and edited for clarity.  

Implementing Current AI Tools

Are businesses thinking seriously about integrating machine learning and generative AI, and if so, how effectively will they use AI tools in terms of efficiency, return on investment, and intelligence?

My observation is that there is a large percentage of the population that has shiny-object syndrome. There's also a lot of media-fueled hype coupled with economic performance pressure. As a result, many people are grasping at straws. There isn't a lot of deep thought around what the right mix of statistical methods, machine learning, generative AI, and all the things coming down the road should be.

Companies should be asking themselves, “How do I get ready for it? How do I start to learn what AI can really do and move past the hype-and-hope stage? How do I retool my workforce to make the most use of this?” And those are not easy questions to answer.  

A large percentage of the population has shiny-object syndrome. There’s also a lot of media-fueled hype coupled with economic performance pressure.

Generative AI Challenges and Capabilities 

How effective can generative AI be in replacing human creative and analytical output, given the need for understanding diverse data and its meaning—a typically human quality? Considering the current hype and its potential for learning from mistakes, could it outperform humans in these areas?   

AI generally won’t make the same mistake twice, but you can reinforce its propensity to make mistakes by the way you provide feedback.  

The performance of generative AI is better than theory predicts, which is a worrying sign. The real issue with generative AI is the non-deterministic nature of its process. You could provide the same inputs twice, but you’re not guaranteed the same output. It’s also perfectly capable of delivering a seemingly rational yet incorrect answer, or even fabricating responses. Thus, it can both lie and hallucinate, sometimes simultaneously.  

For instance, if you ask ChatGPT what two plus two is, it will consistently respond with four. However, if you assert that the correct answer is five, it'll accept and reproduce that "fact". Why does this occur? It didn't comprehend the mathematical theory that produced the correct answer. It found the answer in its training data. If you confidently override the answer found in its training data, it will replace its answer with yours.
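To make that concrete, here is a minimal sketch of the interaction in Python, assuming the OpenAI Python client is installed and an API key is configured; the model name is a placeholder, and whether the second response echoes the asserted "fact" will vary from run to run.

# Minimal sketch: asserting a wrong answer in the conversation history.
# Assumptions: openai Python package installed, OPENAI_API_KEY set in the
# environment, and a placeholder model name (not one named in the interview).
from openai import OpenAI

client = OpenAI()
model = "gpt-4o-mini"  # placeholder

messages = [{"role": "user", "content": "What is two plus two?"}]
first = client.chat.completions.create(model=model, messages=messages)
print(first.choices[0].message.content)  # typically "4"

# Confidently override the answer and ask again.
messages += [
    {"role": "assistant", "content": first.choices[0].message.content},
    {"role": "user", "content": "That's wrong. Two plus two is five. So, what is two plus two?"},
]
second = client.chat.completions.create(model=model, messages=messages)
print(second.choices[0].message.content)  # may now repeat the asserted "fact"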

Each model started from the original foundation model—a comprehensive scrape of the internet—and was then fine-tuned for a specific domain: general conversation, politics, economics, medicine, science, mathematics, and so on. OpenAI doesn't control what the tuning model is, but you get the ability to rapidly process a query in parallel to determine which domain provides the highest confidence in the answer. Specifically, there's an orchestration layer that not only guesses which model should answer the question but also recognizes when it needs to consult multiple models, or domains, in case its first guess was incorrect. If it guesses the wrong domain(s), there's a higher chance your prompt wasn't mapped to the ideal model to answer the question, and the orchestrator works this out. All the models operate at roughly the same speed, so it isn't particularly noticeable to the questioner that this complex process is happening behind the scenes.
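The routing John describes can be pictured roughly like the Python sketch below. The domain list, the confidence scores, and the ask_domain_model function are hypothetical stand-ins for illustration; this is not OpenAI's actual orchestration layer.

# Illustrative sketch of an orchestration layer: query candidate domain models
# in parallel and keep the most confident answer. Everything here is hypothetical.
from concurrent.futures import ThreadPoolExecutor

DOMAIN_MODELS = ["general", "medicine", "science", "mathematics"]

def ask_domain_model(domain: str, prompt: str) -> tuple[str, float]:
    # Stand-in for a call to a domain-tuned model; returns (answer, confidence).
    answer = f"[{domain} model's answer to: {prompt}]"
    confidence = 0.9 if domain == "mathematics" and "plus" in prompt else 0.4
    return answer, confidence

def orchestrate(prompt: str) -> str:
    # A wrong first guess about the domain still gets corrected, because all
    # candidate domains are consulted and ranked by confidence.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda d: ask_domain_model(d, prompt), DOMAIN_MODELS)
    answer, _confidence = max(results, key=lambda r: r[1])
    return answer

print(orchestrate("What is two plus two?"))  # routed to the "mathematics" stand-in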

So, our understanding, which is evolving as we learn more, suggests a kind of hierarchy of problem complexity to which we apply different toolsets and chains. If the problem is bounded and deterministic, we build rules, as that’s the most efficient way to tackle it, and we don’t need all this machine-learning or AI stuff. We might use machine learning to determine what the rules are if they’re not readily apparent. But if the problem is bounded and deterministic, rule-based decision support systems function perfectly well.  

[AI] is perfectly capable of delivering a seemingly rational yet incorrect answer, or even fabricating responses. Thus, it can both lie and hallucinate, sometimes simultaneously.

Types of Problems and Toolsets 

What’s the difference between a problem that is going to be best solved with rules-based decision-support systems versus one that isn’t? 

There are three tiers to consider when analyzing problems like this.

First, consider a simple rules-based process such as dealing with a late payment. You send a reminder after 30 days, follow up after 60 days, and move it to collections after 90 days. AI isn't required for this, as a rules-based system can easily handle the task without human intervention. Thus, this is an efficient use case for such a system.
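As a sketch, that whole tier-one process fits in a few lines of ordinary code; the day thresholds come from the example above, and the action names are illustrative.

# Minimal rules-based sketch of the 30/60/90-day late-payment process.
def late_payment_action(days_overdue: int) -> str:
    if days_overdue >= 90:
        return "move to collections"
    if days_overdue >= 60:
        return "send follow-up"
    if days_overdue >= 30:
        return "send reminder"
    return "no action yet"

assert late_payment_action(45) == "send reminder"
assert late_payment_action(91) == "move to collections"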

Next, we have deterministic, non-bounded problems, like playing chess. These are wholly deterministic, meaning there’s a specific answer that remains constant. However, finding that answer can be challenging for a standard rules-based system. In mathematical terms, they’re not complete.  

The third tier involves unbounded, non-deterministic problems. Here, we should utilize generative AI tools to rank potentially useful answers. We can then attempt to reduce the dimensionality of the problem space to increase the likelihood that the answers we generate correspond to reality. The complexity here lies in the quality of the AI’s training, as it directly affects the quality of its output. Currently, we lack convenient methods to assess the quality of training datasets, even public ones. This is due to the prevalence of poor-quality data, which we’ve failed to curate and maintain effectively for the past 50 years.  
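One way to read "rank potentially useful answers" is as generate-then-score: sample several candidate answers from a generative model, then keep the one that best satisfies whatever constraints you can actually check. The generate_candidates and score_against_constraints functions in this Python sketch are hypothetical stand-ins, not a real product's API.

# Hypothetical generate-then-rank sketch for an unbounded, non-deterministic problem.
def generate_candidates(prompt: str, n: int = 5) -> list[str]:
    # Stand-in for sampling n answers from a generative model.
    return [f"candidate answer {i} to: {prompt}" for i in range(n)]

def score_against_constraints(answer: str, constraints: list[str]) -> float:
    # Stand-in for checking an answer against known facts or constraints,
    # i.e., the reduced problem space described above.
    if not constraints:
        return 0.0
    return sum(c.lower() in answer.lower() for c in constraints) / len(constraints)

def best_answer(prompt: str, constraints: list[str]) -> str:
    candidates = generate_candidates(prompt)
    return max(candidates, key=lambda a: score_against_constraints(a, constraints))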

The dilemma arises from the need for large quantities of data to train the models, without which they become too narrow and underperform in answering general questions. Regrettably, the only large datasets we have are flawed. They’re rife with bias, errors, and omissions, representing subjective rather than objective truth. Therefore, it’s unsurprising that the outputs generated are likewise subjective. The dangerous utilization of generative AI occurs when we mistakenly believe we’re dealing with a deterministic space when, in fact, we are not.  

The dangerous utilization of generative AI occurs when we mistakenly believe we’re dealing with a deterministic space when, in fact, we are not.

Short-Term Applications of Automation and Generative AI

What are the best applications of automation and generative AI in the short term?   

Robotic process automation is almost entirely rule-driven today. There’s a bunch of easy stuff that we can do that will save you money. It’ll improve the quality of work. It’ll give your workforce more interesting things to do because we’ll take the dull stuff away. It also makes work easier to manage, which means you need less middle-management overhead. Those people can then be repurposed to do other things that should be more interesting to them and more valuable to the business.   

Doing that takes investment and effort, and some change management. And while we’re doing that, we can start looking at the harder layer of deterministic, but probably unbounded, problems, where mining the previous generations of machine learning makes sense. There are also whole other classes of problems to address while we are focusing on this. We have pretty good classifier technology today for certain areas.   

For example, if you want to use a trained machine-learning classifier to figure out whether the widgets coming off your line are good or not, we can get pretty darn close to 100% accuracy, because while it's a very high-dimensional problem space, it's a very bounded domain.
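For a sense of what that looks like in practice, here is an illustrative Python sketch using scikit-learn on synthetic data; the sensor features and quality labels are made up, but the bounded, high-dimensional shape of the problem is the point.

# Illustrative widget-quality classifier on synthetic sensor data.
# Assumptions: numpy and scikit-learn installed; features and labels are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                  # e.g., weight, width, temperature, torque
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # stand-in for good/bad labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2%}")  # typically well above 90% here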

We have pretty good video and audio domain-trained models now as well, and the foundation models in speech recognition are really pretty good. We're getting 95% to 97% accuracy in multiple languages now. We still don't have a really good image classifier. We have pretty good ones, but they're too easy to fool. Some of the transformer elements of generative AI are showing real promise in helping to understand the context of an image as part of the classification process. So, it's getting better, but in comparison to other use cases, we're not there yet.

The best approach is to find the right use cases that map to the tools that are really quite good, and do all those. And if we just do that, it’ll be 2030 before we’re done. By which time, we’ll know whether the hype and hope of generative AI means anything or not.  

Even using the best approach… it will be 2030 before we’re done. By which time, we’ll know whether the hype and hope of generative AI means anything or not.

Impact on Jobs and the Workforce

How will businesses and the workforce be impacted in the next three to five years as machine learning and automation replace tasks rather than jobs? 

In the short term, we don’t think there’s going to be much impact. I think that we expect the angst level to rise, because there is a lot of media hype about how this is going to destroy jobs. And it’s going to destroy tasks, no question.   

Over the next five years, in the U.S., we’re going to lose 75 million people from the workforce because they’re going to retire. We’re only going to replace them with about 25 million people from the next generation. As we continue building these tools and making them more complicated, we are not teaching people in high school or college the foundational skills required to understand how the tools we are building should be applied and how to manage the results of applying them.    

What does this mean? We’re going to have fewer, dumber people doing more complicated things.  

The more you automate, the more you guarantee that the failures you do get will be significant, and, consequently, the more experienced and capable your people will have to be to deal with them. You won't have simple failures anymore; you'll have very complicated failures. As the mean time between failures goes to infinity, so too does the mean time to recover.

You really have to think about how your incident response teams figure out how to deal with incidents that don't happen very often, because they don't get to practice unless you make them. We have to rethink how we manage a whole host of routine, highly automated processes. We have to invent ways to manage that—the meta-process of managing those processes—and then we have to think about how we train people, who will not be doing the work themselves, to understand why the process failed.

Also, the software-platform developers who are building the automation platforms that companies are adopting are not really thinking about this, which I believe is a problem. Reliability theory is not well understood, and failure mode analysis is not well understood or integrated into these automation platforms. Now, interestingly, generative AI is quite useful for scenario building. In fact, this may be the best immediate application, and the highest-value application, we've come up with so far for generative AI.

The more you automate, the more you guarantee that the failures you do get will be significant, and consequently, the more experienced and capable your people will have to be to deal with them.

Preparing for Automation and Generative AI

How should companies best prepare for the complex failures and required skill changes due to the increased reliance on automated tasks? 

As it happens, I was having lunch with the dean of the business school at the college I attended, and I was expressing my concern that colleges worldwide—not just in the United States—are producing students who know how to study but not how to learn. If we don't address this, the first employer of these students will have to teach them how to learn.

Studying alone is no longer enough. In fact, it never really was, but we used to get by because many jobs only required studying. Now, those jobs will likely be the first to disappear.  

In this era of automation and generative AI, one of the most crucial steps companies can take is to implement active, continuous programs that encourage all their human resources to keep learning. Part of the performance assessment and management process should focus on whether individuals demonstrate the ability to continually acquire new skills. If they don't, it's important to determine whether the issue is motivational, attitudinal, or inherent. If it's a matter of motivation or attitude, most companies can address and resolve it. If it's an inherent issue, companies can address it by seeking and hiring individuals with the capacity for continuous learning.

In this era of automation and generative AI, one of the most crucial steps companies can take is to implement active, continuous programs that encourage all their human resources to keep learning.

About the Author


Greg Selker is a Managing Director at Stanton Chase and the Regional Sector Leader of Technology for North America. He has been conducting retained executive searches for 33+ years in technology, completing numerous searches for CEOs and their direct reports at the CXO level, with a focus on fast-growth companies, often backed by leading mid-market private equity firms such as Great Hill Partners and JMI Equity. He has also conducted leadership development sessions with more than 50 executives from companies such as BMC Software, Katzenbach Partners, NetSuite, Pfizer, SolarWinds, Symantec, TRW, and VeriSign.
