ChatGPT: Game-Changer with Risks Ahead

Although incredibly powerful, AI tools that generate content also pose risks for people and companies who use them.


Josh Lefkowitz is the CEO and co-founder of Flashpoint, a company that specialises in risk intelligence. A former consultant for the FBI, he has spent the last 20 years studying and analysing terrorist and cyber threat groups.

OpenAI released ChatGPT less than a year ago, and it’s already gained 173 million active users. Many of these users have found ChatGPT to be a helpful tool for improving their work productivity. They use it for various tasks, such as answering questions, brainstorming ideas, and even creating documents from scratch.

Despite their undeniable usefulness, generative AI tools like ChatGPT, Dall-E, and Midjourney also come with risks for individuals and businesses that use them. While some people have raised concerns about the potential dangers of superintelligent machines taking over the world, we should focus on more immediate threats.

One of the risks with AI tools lies in how they are trained. Programs like ChatGPT learn from vast amounts of data collected from the internet. Developers deploy bots to gather information from various sources, including social media and websites, without thoroughly verifying its accuracy.

This means that when you ask ChatGPT a factual question, it might provide a correct answer, or it might not. While there is a disclaimer that ChatGPT can sometimes produce inaccurate information, millions of people still use it for research, sometimes with serious professional consequences.

In May, a lawyer in New York turned to ChatGPT for examples of similar cases to assist a client’s lawsuit against an airline. The lawyer presented the results in court, only to discover that they were entirely fabricated. This led the judge to demand an explanation from the lawyer, and now the lawyer himself may face legal consequences.

This situation serves as a valuable lesson: when it comes to tasks demanding accurate information, relying solely on AI-generated responses is risky. While ChatGPT can offer general guidance, it’s crucial to conduct thorough verification.

Another concern is that ChatGPT, much like many other major internet companies, stores substantial amounts of customer data. While OpenAI currently doesn’t sell user data, there’s an old saying in Silicon Valley: “If you’re not paying for the product, you are the product.” It’s not far-fetched to imagine that OpenAI, which currently provides its chatbot for free, might be tempted to monetise the data it collects about its users in the future.

Even if it doesn’t, there’s the potential for a data breach, which could expose every question ever asked to a broader audience, along with information about who asked those questions. Companies and their employees should be cautious about the information they share.
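One practical precaution is to scrub obviously sensitive patterns from prompts before they ever leave the company. The sketch below is a minimal, illustrative example in Python; the patterns and placeholder format are assumptions, not an exhaustive or production-grade filter.

```python
import re

# Illustrative patterns only -- a real deployment would need a far
# more thorough list (names, account numbers, internal hostnames, etc.).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely sensitive substrings with placeholder tokens
    before the prompt is sent to any third-party AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
```

A simple pre-filter like this won't catch everything, but it reduces the chance that a data breach at the provider exposes customer details that never needed to leave the building.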

Furthermore, there’s a darker aspect to consider. Just as professionals from various fields are exploring how AI can simplify their work, so are criminals. While it’s entertaining to ask ChatGPT to compose a note in the style of a Shakespearean sonnet, inexperienced fraudsters can use it to create deceptive messages mimicking banks, government agencies like the IRS, or even specific individuals, making their fraudulent communications more convincing.

In the future, we may encounter AI-generated phishing emails that sound exactly like messages from our superiors or colleagues, going beyond the “Hello Dearest” emails from supposed “Nigerian princes” that once only fooled the most gullible individuals. This presents a new and more sophisticated challenge for security and cybersecurity.

Additionally, it’s important to acknowledge that ChatGPT can potentially assist hackers in planning cyberattacks. For instance, it can provide users with fully developed exploits, which are programs designed to take advantage of vulnerabilities in computer systems.

Cybercrime is already a significant issue: over 16 billion personal records were stolen online in 2022. With the assistance of AI, we can anticipate an increase in such thefts.

Moreover, there’s a broader threat to businesses that create copyrighted content, including designers, media organisations, and software developers. Just as AI doesn’t verify accuracy, it also doesn’t consider copyright when regurgitating material.

For instance, journalist Francesco Marconi asked ChatGPT about the sources it was trained on and found that the program drew information from at least 20 news organisations without their permission or authorisation.

Questions regarding the sources used to train AIs will likely be debated for many years, both in the public sphere and in the courts. In fact, legal disputes have already begun. In January, three artists filed a lawsuit against the companies Stability AI and Midjourney, alleging they “infringed the rights of millions of artists” by creating artworks using images obtained from the internet.

In the meantime, companies face genuine risks of having their copyrighted work stolen and used elsewhere for profit. Furthermore, individuals who use AI-generated information may inadvertently commit plagiarism by reproducing the work of others.

However, none of this implies that businesses should completely cease using generative AI. Generative AI holds immense potential across various industries, including software development and cybersecurity. The key is to navigate these risks and challenges while leveraging the benefits of this technology responsibly.

Indeed, there’s a growing number of private AIs that function as smoothly as ChatGPT but with a crucial difference: they prioritise safeguarding their users’ data.

However, to safeguard against future theft and misuse, companies must approach these emerging tools with careful consideration and due diligence. It’s essential to strike a balance between harnessing the potential of AI and taking proactive steps to protect sensitive information and ensure responsible usage.

