‘Historical’ AI Chatbots are not just inaccurate, but dangerous
Why we should think twice about AI Chatbots pretending to be Einstein or Gandhi.
In a long list of bad ideas, putting a paywall on the option to have “fun and interactive” conversations with an AI impersonating Hitler might not be the absolute worst, but it’s definitely up there.
Surprisingly, many people have actually tried it out through an app called Historical Figures, which is still available on Apple’s App Store. This app gained a lot of attention recently for offering a wide range of AI profiles, including Gandhi, Einstein, Princess Diana, and Charles Manson.
Even though it bills itself as an educational app and is ranked 76th in its category, critics have called it a rushed and often inaccurate gimmick at best. At worst, they see it as a cynical exploitation of ChatGPT, a fast-growing and already problematic technology.
Even Sidhant Chaddha, the 25-year-old Amazon software development engineer behind the app, admitted to Rolling Stone last week that the combination of ChatGPT’s confidence and inaccuracy can be risky. Users might wrongly think that the information it provides is well-sourced.
He explained, “This app relies on a small amount of data to make educated guesses about what historical conversations might have been like.”
Many historians, including Ekaterina Babintseva, an assistant professor specializing in the history of technology at Purdue University, strongly share this view. For her, the use of ChatGPT in historical education isn't just distasteful; it's potentially quite harmful.
"When ChatGPT was first created, my immediate reaction was, 'Oh, this is genuinely dangerous,'" she said during a Zoom conversation. For Babintseva, the concern isn't primarily academic plagiarism. She's more focused on AI's broader impact on society and culture.
“ChatGPT represents another step towards undermining our ability to critically assess information and understand how knowledge is built,” she remarked.
She also highlights the opacity of major AI development by private companies, which are often driven by the desire to tightly control and profit from their intellectual property.
“ChatGPT doesn’t even explain where this knowledge comes from. It blackboxes its sources,” she says.
OpenAI, the creator of ChatGPT, the model behind spin-offs like Historical Figures, has shared much of its research and foundational designs with the public. But the internet text repositories used to train its models are far harder to scrutinize. Even asking ChatGPT to cite its sources yields only vague references to "publicly available sources" like Wikipedia.
Apps like Chaddha's Historical Figures provide incomplete and sometimes incorrect narratives without explaining how they were constructed. Academic histories and journalism, by contrast, come with citations and footnotes.
Babintseva emphasizes that history contains multiple perspectives; single, unquestionable narratives are the stuff of totalitarian states.
AI research once focused on explainable systems, built by documenting how human experts make decisions. By the late 1990s, though, developers had shifted to neural networks, which can reach conclusions that even their creators can't fully explain.
Babintseva and other scholars of science and technology believe we should return to AI models that humans can understand, especially for systems that have a significant impact on people's lives.
She thinks AI should help with research and human thinking, not replace them. She hopes organizations like the National Science Foundation will support research in this direction by offering fellowships and grants.
Until that happens, apps like Historical Figures will probably keep appearing, built on unclear methods and sources while claiming to be novel and educational. Worse, programs like ChatGPT rely on people to create the knowledge they draw on, yet never credit those people.
Babintseva says this makes AI appear to speak with a single, mysterious voice, rather than one assembled from many different human experiences and understandings.
In the end, experts advise against seeing Historical Figures as anything more than a fancy digital trick. Douglas Rushkoff, the well-known futurist and author of "Survival of the Richest: Escape Fantasies of the Tech Billionaires," offered a somewhat more charitable take on the app.
In an email, he said, “Well, I’d prefer to use AI to explore ideas with historical figures who are no longer alive rather than trying to replace living people. That’s something we couldn’t do otherwise.”
“However, it seems like the choice of characters in the app is more about grabbing attention in the news than genuinely offering people a meaningful experience with historical figures like Hitler.”