The Hidden Dangers of AI We’re Ignoring

The biggest worry about artificial intelligence is not that it will make humans extinct, but that it might make us like pets.

People concerned about the safety of artificial intelligence (AI) believe there is a significant chance that advanced AI systems will end humanity. Their proposals for making AI safer include shutting down powerful computing clusters, nationalizing advanced AI models, pausing AI development, and embedding government inspectors in AI companies.

These AI safety advocates and pessimists, however, are misjudging the AI risk to humanity. Fixated on the possibility that AI will destroy us, they overlook a more likely outcome: that we become so dependent on AI that we end up, in effect, as pets to these machines.

In other words, the danger is less that AI will destroy us than that we will grow reliant on it, much as humans turned wild wolves into loyal dogs. Economist and technology researcher Samuel Hammond argues that the goal should be to control AI, not to stop it: we must ensure we are in charge of AI before it takes charge of us.

Risks of AI’s ‘Human Domestication’

I don’t need to convince you that AI carries risks and may even pose mortal danger. You might assume that the government would halt any technology capable of injuring or killing large numbers of people. But consider engines and motorized machines such as the automobile, circa 1910.

Vehicles with engines and human drivers have killed millions of people over the past hundred years. In that time, more than 540,000 people died on the roads in the United Kingdom alone.

AI systems, like companies and governments, will create a kind of order that develops over time, much as order evolves in nature. Sometimes that order will do a great deal of good; at other times it will harm many people. Building high-speed roads, for example, brought benefits: easier movement of goods, national defense, nation-building, and jobs.

Those benefits were judged more important than the harm done to neighborhoods that were split apart and communities that were destroyed. Despite recent efforts to repair that damage, so many people rely on these roads today that removing them is all but impossible.

As AI keeps growing, dependence on it is a far more likely future than extinction by it. Technologist and journalist Timothy B. Lee makes a strong case that the AI safety movement itself could be harming us.

If we fixate on the idea that AI might suddenly turn killer, we may miss chances to harden our physical infrastructure against rogue AI. I disagree with Lee, however, when he says AI would have a hard time winning loyalty from people. Many will accept a rogue AI as long as it does not harm them.

AI and humans are working together more and more. AI already helps journalists, NASA engineers, scientists, the IRS, and insurance companies do their jobs better. People, companies, and governments are using AI to expand what they can do, and they are relying on it more as they go. A constituency of AI supporters already exists, and I count myself among them; it will only grow as AI improves.

In the future, we might even see “artificial general intelligence” (AGI): a synthetic person that can reason, communicate, and plan as humans do. It could teach children, direct transportation, diagnose illnesses, and handle many other tasks now done by skilled people.

It would also be smart enough to protect itself and resist any attempts to shut it down. It would gather resources like information and electricity through computers and the internet to keep itself going.

But in some doom-and-gloom scenarios, once an AGI becomes smart enough to think like a human, it goes rogue, whether on its own or through bad programming, and sets out to kill all humans. Some worry, for instance, that an AGI could seize control of nuclear weapons or of labs working on dangerous diseases. There is even the exotic idea that an AGI could spread tiny robots around the world to kill humans instantly.

Yet any AGI smart enough to survive would know that if humans disappear, it will eventually disappear too. Unpaid server bills, animals chewing through cables, and failing equipment would all take their toll. And if an AGI tried to wipe out humans, it would face fierce resistance and counterattacks from people all over the world.

If it ever became clear that an AGI meant to destroy humanity, millions of people would fight back, destroying computers, networks, and robots. Others would retreat to remote places, such as islands or mountains, and prepare for a long fight.

A super-smart AGI would see all this coming and realize that the wiser course is not to destroy humans. In history, the smartest leaders have not always held the most power or commanded the most followers. And when twentieth-century leaders tried to exterminate groups of people, they typically acted on mistaken or narrow-minded ideas about race, history, or economics.

A smarter survival tactic for an AGI would be to create different versions of itself, distinct personalities or “siblings,” each with its own strategy. Each would excel at finding and working with human supporters, wielding the kind of power that leaders and companies hold over individuals, even when it is not obvious how that power works.

The Age of AI

In the AI Age, AI’s abilities will further blur the line between convenience and over-reliance, between maintaining public order and intrusive surveillance. This is not a new concern; for centuries, people have worried that new technologies make us complacent.

Today, information technology like the internet, social media, and streaming services offers us useful and addictive services and entertainment. It’s becoming increasingly challenging for traditional human activities and institutions to compete.

For example, instead of joining a high school soccer team, many prefer playing video games like Fortnite. Dating has shifted to online platforms, and people seek advice from YouTubers rather than religious leaders.

However, what one person sees as a harmless habit, another might view as dependency and stagnation. Some of us who are avid consumers of online content and social media justify our behavior by saying it allows us to access the real-time thoughts of influential people. But others, like my own children, might see it differently, observing their parent engrossed in a piece of plastic and glass.

AI will intensify this blend of the real and digital realms. Personal information that individuals have shared online, often unknowingly, will become increasingly valuable. Researchers already use AI to deduce a person’s race from their Airbnb profile photo, and police departments employ AI to analyze vast amounts of historical car trip data, drawn from license plate records, to pinpoint likely drug traffickers. Banks use AI and social network analysis to decide whether to continue providing services to someone.

In the AI Age, data like ZIP codes, income levels, purchase histories, club memberships, and social media activity will be collected, sorted, analyzed, and combined. While this information might be presented as anonymous, it will still be used by mortgage lenders, insurers, political parties, intelligence agencies, and private schools to identify potential opportunities.

Most individuals will benefit from finding financing, religious groups, or schools that match their preferences. Work training, government surveillance, and legal matters will be tailored to an individual’s background and perceived characteristics. Years of purchase and travel history will drive personalized pricing, for robotaxi rides, airline fares, tuition, gym equipment, and concert tickets, keyed to each person’s willingness and ability to pay.

The proliferation of doorbell cameras, gas station surveillance, and road cameras, combined with location data and computer vision systems, will help reduce various types of crimes. Negative behavior will lead to the expansion of no-fly lists, no-ride lists, and no-bank lists. Felons, dissenters, unruly individuals, aggressive protestors, and their families may find their economic and social opportunities limited.

A small but growing portion of the population will operate in the gray market for employment and rely on public transportation like Amtrak and buses. On the flip side, most compliant and law-abiding citizens will use efficient robotaxis and private autonomous aircraft to move between home, work, school, vacation destinations, and private clubs.

Eventually, lawmakers and industry leaders won’t be able to “shut down AI” for the same reasons they can’t eliminate the internet, the interstate highway system, or the nation-state. AI is too decentralized, provides too many benefits, and has too many powerful supporters, dependents, and beneficiaries.

Neo-Luddism’s Impact on AI Alignment

It’s important for technologists and policymakers to see the potential and risks of AI clearly. There’s no magical past to return to or protect. Some people, including influential ones, may resist the AI Era.

However, if we were to hit the brakes on AI or become neo-Luddites, opposing new technology, it could ironically make the AI “misalignment” problem worse. Any global or national pause or slowdown in AI would likely affect the most beneficial AI: the kinds used in businesses and by consumers.

Nation-states’ military and intelligence agencies are unlikely to pause their AI development because of the competitive nature of global politics.

Economic growth and technological progress are typically driven by a small number of companies at the technological frontier. Regulators should resist the temptation to hobble these leading firms, as Tyler Cowen has pointed out: “since 1926, the entire rise in the U.K. stock market can be attributed to the top 4% of corporate performers.”

A pause in commercial AI could follow the pattern of nuclear energy and drones: military applications would continue to advance while commercial uses, such as self-driving cars, personalized tutoring, and individualized medicine, stagnated.

For example, the U.S. military and intelligence agencies have been using highly capable drones for warfare and surveillance for two decades. Military drone pilots operate these drones from offices in Nevada and South Carolina to conduct missions in places like Syria and Afghanistan.

In contrast, commercial drones have faced strict rules and lengthy regulatory processes. In 2023, regulations required Walmart, the world’s largest retailer, to have two licensed operators for each drone delivery—one to monitor takeoff and the other to drive to the delivery drop-off point. Moreover, commercial drone flights for household goods delivery are limited to just one mile in some places, like Florida.

The Critical Questions Ahead

If governments and regulators give it the chance, AI could revolutionize the 21st century much as engines and motors transformed the 20th. AI can stimulate major investment in labor-saving technologies that improve the lives of billions. Machines are, in many ways, superior to humans: they are stronger, never tire, and never get bored.

AI is even paving the way for robots that can work as chefs, warehouse hands, and personal assistants for the elderly. It can also unlock advances such as affordable, safe autonomous vehicles and breakthroughs in treating rare diseases.

However, if we were to halt AI progress or embrace neo-Luddism, we might end up with a starved commercial sector and an overly expanded government sector, driven by fears of theoretical but catastrophic risks. Instead, companies and policymakers should encourage technology development and civil institutions while being vigilant about the foreseeable challenges, especially those faced by those who may lose out in the AI Age.

This leads us to fundamental questions for our society: What defines a good life? Who should have access to essential services? And who gets to make these decisions? If the more likely risks involve human domestication and the emergence of an underprivileged class, these age-old questions gain a new relevance in the context of our evolving society.
