AI Hype Distracts Businesses!

The surge in AI hype further exacerbates the issue, prompting the need to distinguish between today’s practical ML projects and cutting-edge research advances.


Hearing about big AI advancements might seem like it would drive more people to adopt machine learning (ML), but it doesn’t. Even before the recent wave of developments such as OpenAI’s ChatGPT and other generative AI tools, the narrative of a super-powerful AI was causing problems for practical ML.

The reason is that, for most ML projects, the word “AI” creates too much excitement: it inflates expectations and draws attention away from how ML can actually improve the business.

Many practical uses of ML aim to make businesses run better by innovating in simple, incremental ways.

Don’t be dazzled by the fancy technology – at its core, ML’s main job is to make useful predictions, also known as predictive analytics.

This is valuable, as long as we avoid the false hype that claims it’s always “highly accurate” like a digital crystal ball.

This capability delivers real value in a straightforward way: ML predictions guide millions of operational decisions.

For example, predicting which customers might leave allows a company to offer them incentives to stay. Similarly, predicting fraudulent credit card transactions helps a card processor block them.
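To make that concrete, here is a minimal sketch of the kind of churn-scoring model such operations rely on. Everything in it is an illustrative assumption rather than a detail from this article: the choice of scikit-learn, the invented customer features and data, and the 0.5 risk threshold.

# A minimal sketch of "useful predictions" driving an operational decision:
# score customers by churn risk, then flag the high-risk ones for a retention
# offer. The data, features, and threshold are invented for illustration.

import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: a few behavioural features plus a churn label.
history = pd.DataFrame({
    "tenure_months":   [2, 34, 5, 48, 12, 60, 3, 24],
    "monthly_spend":   [70, 40, 85, 30, 55, 25, 90, 45],
    "support_tickets": [4, 0, 3, 1, 2, 0, 5, 1],
    "churned":         [1, 0, 1, 0, 0, 0, 1, 0],
})

model = LogisticRegression(max_iter=1000).fit(
    history.drop(columns="churned"), history["churned"]
)

# Score current customers and flag the riskiest for a retention incentive.
current = pd.DataFrame({
    "tenure_months":   [4, 40],
    "monthly_spend":   [80, 35],
    "support_tickets": [3, 0],
})
current["churn_risk"] = model.predict_proba(current)[:, 1]
print(current[current["churn_risk"] > 0.5])

The point of the sketch is the workflow, not the model: each prediction feeds directly into a routine business decision about who receives an offer, which is exactly the “predictive analytics” framing above.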

It’s these practical uses that have the biggest impact on business operations, and the complex data science methods they use all come down to ML and only ML.

Here’s the issue: many people see ML as “AI.” It’s a reasonable mistake, but “AI” is a vague term that doesn’t consistently describe any specific method or benefit. Calling ML tools “AI” exaggerates what most ML deployments actually do.

In fact, there’s no bigger promise than calling something “AI.” This label brings to mind the idea of artificial general intelligence (AGI), software that can do any intellectual task humans can.

This creates a big problem for ML projects: they often lose focus on their value – how ML will actually make business processes better. As a result, many ML projects end up not delivering value.

On the other hand, ML projects that stay focused on their specific goal have a good chance of achieving it.

What Does AI Actually Mean?

“‘AI-powered’ is tech’s meaningless equivalent of ‘all natural.’”

–Devin Coldewey, TechCrunch

AI struggles to escape the concept of AGI (artificial general intelligence) for two main reasons.

Firstly, the term “AI” is often used without clarifying whether it refers to AGI or narrow AI, which represents practical and focused machine learning (ML) deployments.

Despite their significant differences, these distinctions blur in common discussions and marketing materials.

Secondly, defining AI apart from AGI is itself difficult. Attempts to pin down “AI” independently of AGI have become a research problem in their own right, with no satisfactory resolution.

If it doesn’t mean AGI, the term risks losing any meaningful definition. Defining AI, setting criteria for machine “intelligence,” and establishing performance benchmarks for true AI are all facets of the same interconnected problem.

The problem lies in the vagueness of the term “intelligence” when applied to machines, which makes for an imprecise engineering goal. Without a clear definition, there is no way to measure, build toward, or track progress on that goal.

The industry attempts to address this dilemma through what the author calls the “AI shuffle,” a dance of various AI definitions that often circle back on themselves.

Efforts to define AI as machines performing smart actions or demonstrating intelligence lead to circular definitions. Methods like ML, natural language processing, and others are mentioned, but employing these techniques doesn’t automatically qualify a system as intelligent.

Even defining AI by human-like qualities, as in the Turing Test, proves problematic due to the moving target of fooling human observers and the limited utility of such a capability.

Defining AI based on capabilities, such as performing tasks traditionally requiring human skills, also falls short. Once a computer accomplishes a task, it tends to be trivialised as a well-understood, mechanical process.

This paradox, known as The AI Effect, suggests that AI becomes artificial impossibility—getting computers to do things too difficult for computers to do.

The definition of AI remains elusive, with computer science pioneer Larry Tesler humorously suggesting that AI could be “whatever machines haven’t done yet.”

Ironically, the measurable success of ML initially fueled the hype around AI. ML’s ability to improve measurable performance, particularly in supervised learning, has proven valuable and earned it the title of “the most important general-purpose technology of our era.” The undeniable progress of ML has contributed to the overall hype surrounding AI.
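That measurability is easy to illustrate: a supervised model is scored on labelled examples it never saw during training, yielding a concrete number that can be tracked and improved. The sketch below is only an illustration; the built-in breast-cancer dataset, the pipeline, and the accuracy metric are stand-ins chosen for convenience, not anything referenced in this article.

# Measuring supervised learning performance on held-out data: train on one
# split, then score predictions against labels the model never saw.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# The benchmark is concrete: what fraction of unseen cases did the model get right?
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")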

All in with Artificial General Intelligence

“I predict we will see the third AI Winter within the next five years… When I graduated with my Ph.D. in AI and ML in ’91, AI was literally a bad word. No company would consider hiring somebody who was in AI.”

–Usama Fayyad, June 23, 2022, speaking at Machine Learning Week

To resolve the challenge of defining AI, one approach is to embrace a comprehensive definition by aligning AI with AGI—software capable of performing any intellectual task that humans can do.

While this goal might sound like science fiction, it offers a clear and measurable objective, theoretically benchmarking the system against a diverse set of tasks.

These tasks could range from handling intricate email requests as a virtual assistant, to following instructions as a warehouse robot, to acting as a CEO and running a Fortune 500 company profitably from a one-paragraph brief.

However, achieving AGI is an ambitious and uncertain endeavor, raising questions about if and when it could become a reality. This poses a problem for typical ML (machine learning) projects.

Labeling them as “AI” suggests they are on the same spectrum as AGI, creating unrealistic expectations and hindering progress. The grand narrative associated with “AI” confuses decision-makers and often leads to project dead-ends.

A more practical and exciting path forward involves focusing on running major operations more effectively. Many commercial ML projects aim to enhance organisational functions.

To increase their success rate, it’s essential to be realistic and avoid the misleading use of “AI” terminology. Instead, accurately label technologies, such as ML, for what they are.

Contrary to the exaggerated claims of human obsolescence, emphasising operational value and steering clear of hyperbolic “AI” rhetoric can prevent another era of AI disillusionment.

By differentiating ML from the broader term “AI,” the industry can insulate itself from the pitfalls of hype, avoiding the unnecessary disposal of ML’s true value proposition when the hype fades and reality sets in.

