Generative AI is everything, everywhere, all at once!
Companies want in on generative AI and will do whatever it takes to be part of it.
Is generative AI here to stay? Most indications suggest it is. But in what form? That's a decision for the technologists building it and the industry as a whole.
As Silicon Valley rides the trend, businesses both big and small are joining in.
Some are building their own large language models for general use or integrating generative AI into their products.
Others are simply talking about it, with little substance, to cash in on a popular buzzword.
Whenever a new technology comes along, there are some people who want to take advantage of those who don’t know much and make a fast buck. This is where AI washing comes into the picture.
AI washing is when a business tells people that a product uses AI when it really doesn't. Customers who don't know better end up paying a premium for something less capable than they think.
To help people avoid falling for AI scams, experts and officials point out warning signs to watch for. We'll explore how ordinary consumers can protect themselves and how businesses can make sure they don't exaggerate their AI claims.
Not very smart, but certainly artificial
Take a recent case involving the Federal Trade Commission. In August, a federal court temporarily halted a business called Automators AI (formerly Empire Ecommerce LLC) from running a deceptive scheme: selling business opportunities it claimed were powered by AI.
The people behind it, Roman Cresto, John Cresto, and Andrew Chapman, were accused of taking $22 million from consumers, in violation of the Business Opportunity Rule and the FTC Act.
In the lawsuit, the FTC alleges that Roman, John, and Chapman posed as successful millionaires who knew how to grow e-commerce businesses. They claimed their platform used AI and machine learning to make businesses more profitable. It was all a trick.
Let’s start with Empire’s advertising. According to the lawsuit, Empire’s ads made big promises about the money people could make if they invested in their so-called “automated” e-commerce packages.
The initial investment cost between $10,000 and $125,000, with additional costs of $15,000 to $80,000. The company didn't give potential customers the disclosure documents required under the FTC's Business Opportunity Rule.
The lawsuit says that most customers didn’t make the money the company promised. They ended up losing their investments. The e-commerce stores that Empire set up and managed got into trouble for breaking the rules and were eventually shut down.
Then, in November 2022, just before selling Empire to someone else, the company’s employees lost access to their software systems, and John and Roman deleted all the data and emails from Empire’s records.
But the dishonesty and scams didn’t stop after they sold the business. In January 2023, the same people used the same tricks to advertise their new venture, Automators AI.
They claimed to teach people how to use AI to find popular products on e-commerce websites and make over $10,000 in sales each month. They also said they could teach people how to use ChatGPT to create customer service scripts.
In social media ads for Automators, Roman told a rags-to-riches story about becoming a wildly successful Amazon entrepreneur. He said he dropped out of college at 20 and can now buy his mom a Tesla and travel the world in a fancy sports car.
“These scams are not new,” says Andy Thurai, a Vice President and Principal Analyst at Constellation Research. “What’s different this time is that the content made by AI can look so real.
The deepfakes and other fake content are almost like the real thing. Even experts might have a hard time telling what's real and what's fake. It will be even tougher for people who don't know much about it."
Buzzy like a bee
When a new technology comes into the spotlight, businesses are eager to stay relevant. However, their approach can range from having a clear vision and using the technology effectively to being misleading or even fraudulent.
For instance, back in 2017, during the Bitcoin craze, a soft drink company named Long Island Iced Tea Corp. changed its name to Long Blockchain Corp., causing its stock price to shoot up by 380%. The Securities and Exchange Commission later found the episode involved insider trading.
The company hopped on the Bitcoin bandwagon, promising to integrate the technology into its business, even though it had no real connection to cryptocurrency or expertise in anything other than making iced tea.
Another example comes from 2015, when a former associate dean and professor at MIT Sloan School of Business, along with his son, a graduate of Harvard Business School, misled investors by falsely claiming that their hedge fund used a "complex mathematical trading model", essentially AI, developed by the former professor. In reality, the fund used no such technology.
While cryptocurrency and blockchain turned out to be somewhat of a passing trend with limited applications, experts see generative AI as having long-lasting potential.
Many major tech companies are creating their own large language models, such as Microsoft's Bing, Google's Bard, Snapchat's AI chatbot, and Meta's new AI chatbots. Generative AI is expected to become a $1.32 trillion market by 2032, according to a Bloomberg Intelligence report.
“It’s a new technology with a lot of promise and potential,” said Olivier Toubia, a professor at Columbia Business School who studies innovation.
“No one wants to be left behind.” Google, for instance, emphasised its commitment to AI at its annual developer conference, mentioning “AI” more than 140 times during its keynote presentation.
This signals that even companies with a strong track record in AI are eager to stay at the forefront of the generative AI wave, demonstrating their seriousness about the technology despite occasional setbacks like early chatbot missteps.
There's no real substance in the AI-flavoured burger you're consuming
The Automators AI lawsuit is a classic example of AI washing, in which a company's messaging is more about appearances than substance, as Thurai describes it.
Thurai explained that many companies claim to be “AI-enhanced, AI-infused, AI-driven, AI-augmented, and AI whatever else.” However, if you look closely, most of them lack a real foundation in AI.
AI itself is not a new concept, and it’s a rather vague term, as Toubia pointed out. He mentioned that there’s a wide range of things that could be labeled as AI or machine learning, from very simple statistical methods to more advanced techniques.
Generative AI, like ChatGPT from OpenAI, has gained a lot of attention in the past year due to its capabilities. However, generative AI can be quite complex and is challenging to patent, audit, or regulate.
This complexity contributes to AI washing because companies don’t have to disclose or explain the inner workings of their AI systems. This is often considered a trade secret, making it difficult to understand what’s happening under the hood.
Regulatory agencies like the FTC are making efforts to address issues in the AI industry with warnings and reports. However, Thurai is skeptical that their stern warnings and oversight will be effective in court due to the difficulty of proving wrongdoing.
What attracts businesses to generative AI is its potential to scale and automate routine tasks, making operations more efficient. Ironically, when a company falsely claims to use generative AI but doesn’t actually implement it, even if they attract more customers, they miss out on the actual benefits of the technology, such as increased efficiency and improved customer service.
How to watch out for AI washing
As companies increasingly incorporate generative AI into their operations, the risk of AI washing and false advertising becomes more significant.
To safeguard your investments and avoid falling for misleading claims, there are several crucial questions to ask vendors and important factors to consider:
Request a Deep Dive Demo: When assessing a product, ask for a detailed demonstration. Inquire about the algorithms used, the model training process, data preparation, drift monitoring, and operationalisation. By closely examining these aspects during a demo, you can discern whether the product is genuine or merely marketing hype.
Performance Speed and Scale: If a company genuinely utilises generative AI, its operations should reflect the speed and scale associated with this technology. If the purported AI tools don’t noticeably improve the speed or efficiency of operations, it could be a sign that they lack true AI capabilities.
Experiment and Test: When evaluating a generative AI tool, conduct experiments by trying different versions, adjusting wording or tasks, and observing how the tool’s results change. This experimentation can help reveal the tool’s true capabilities.
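That last test can be sketched in a few lines of code. The sketch below is a hypothetical probe, not any vendor's real API: `call_tool` is a stand-in for whatever interface a vendor exposes, and here it simulates a "fake AI" that returns a canned answer no matter what you ask. A genuinely generative tool should produce different responses to unrelated prompts; identical answers across varied probes are a red flag.

```python
# Hypothetical sketch: probing a vendor's "AI" tool with varied prompts.
# `call_tool` stands in for the vendor's real interface; this version
# simulates a fake tool that always returns the same canned text.

def call_tool(prompt: str) -> str:
    return "Our AI has optimised your store for maximum profit!"

def looks_canned(prompts) -> bool:
    """True if the tool gives an identical response to every prompt."""
    responses = {call_tool(p) for p in prompts}
    return len(responses) == 1

# Deliberately unrelated prompts, so identical answers can't be a coincidence.
probes = [
    "Summarise my Q3 sales data.",
    "Write a refund policy for handmade candles.",
    "Translate 'thank you for your order' into French.",
]

print(looks_canned(probes))  # prints True: every probe got the same answer
```

Swapping `call_tool` for a real integration turns this into a quick smoke test; it won't prove a tool is sophisticated, but it can quickly expose one that isn't generative at all.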
For business owners aiming to make responsible AI claims, the FTC provides guidance. They suggest asking critical questions, such as whether you’re exaggerating your AI product’s capabilities or claiming that it outperforms non-AI alternatives.
As awareness of AI scams grows and companies become more focused on genuine generative AI use cases, consumers are likely to become more discerning, making it easier to avoid deceptive schemes.
While there will always be individuals looking to capitalise on the AI trend, a more sophisticated market can help reduce the prevalence of AI washing and protect consumers from misleading claims.