Difficulties with AI

Artificial intelligence (AI) raises practical, legal, and ethical questions. Regulation is one way of managing these issues.

This article examines the difficulties involved in funding, developing, deploying, and regulating artificial intelligence (AI). It focuses on narrow AI, meaning systems designed for specific tasks, and does not address artificial general intelligence (AGI), a hypothetical form of AI that might one day match or exceed human intelligence.

Definition of AI

There is no single agreed definition of AI, but the UK's Industrial Strategy White Paper describes it as "technologies that can do tasks usually done by people."

AI systems make decisions either by following explicit sets of rules or, in the case of machine learning, by analysing large volumes of data to find patterns. Machine learning is harder to scrutinise than traditional programming because the system derives its own decision rules, which can be difficult for humans to interpret.
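
To make the distinction concrete, here is a minimal sketch in Python contrasting the two approaches. The loan-approval scenario, thresholds, and figures are illustrative assumptions, not taken from any real system.

```python
# A minimal sketch (illustrative, not from the article) contrasting an
# explicit hand-written rule with a pattern learned from example data.
# The loan scenario and all numbers are assumptions.
from sklearn.tree import DecisionTreeClassifier

def rule_based_decision(income: float, debt: float) -> bool:
    """Traditional programming: a human writes the decision rule explicitly."""
    return income > 30_000 and debt / income < 0.4

# Machine learning: the rule is inferred from past examples rather than
# written down, so the learned logic can be harder for a human to inspect.
X = [[25_000, 15_000], [60_000, 10_000], [40_000, 30_000], [80_000, 20_000]]
y = [0, 1, 0, 1]  # past outcomes: 0 = declined, 1 = approved
model = DecisionTreeClassifier().fit(X, y)

print(rule_based_decision(70_000, 10_000))   # True: the rule's logic is readable
print(model.predict([[70_000, 10_000]])[0])  # 1: approved, but via learned logic
```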

AI is now used in many areas, including online shopping, advertising, web search, virtual assistants, machine translation, smart homes, healthcare, transport, and manufacturing.

Risks & benefits of AI

AI has the potential to deliver major benefits, from improving medicine, education, food distribution, and public transport to helping tackle climate change. Used well, it could support progress towards the UN's 2030 Sustainable Development Goals and make services faster, fairer, and more efficient. It is a technology that could prove as transformative as the Industrial Revolution.

But there are also serious concerns. Will AI simply concentrate wealth among those who already have it? Will it entrench existing bias and discrimination? Could it make society less humane? Should there be limits on what AI can do autonomously, such as driving a car or operating weapons?

And when AI fails, as when a self-driving car causes an accident, who should be held responsible? Ensuring that AI is safe and fair requires robust, up-to-date regulation.

Regulation of AI

AI poses difficult regulatory questions because of how it is funded, researched, and developed.

AI development is driven largely by big business: governments depend on large technology companies to build AI software, supply AI expertise, and deliver key research breakthroughs, because those companies hold most of the funding and know-how.

Without government oversight, however, how AI's potential is used will be determined largely by commercial interests, which offer little incentive to apply it to global problems such as poverty, hunger, and climate change.

Government policy on AI

Governments are currently playing catch-up with the rapid development and deployment of AI. Although AI is a global technology, there is no common international framework for regulating it or for governing data.

Governments need to set boundaries on commercial uses of AI through sound regulation. But in the US, where much AI development takes place, and in many other jurisdictions, such rules are not yet firmly established, and this regulatory gap raises ethical and safety concerns.

Some governments fear that strict rules will deter investment and innovation within their borders, and so hesitate to regulate heavily. This risks a race to the bottom, with countries competing to offer the lightest regulation in order to attract big tech investment.

The EU and UK governments have begun discussing regulation, though it is still early days. The EU is considering a risk-based approach that would prohibit the most harmful uses of AI, such as systems that covertly manipulate or deceive people.

High-risk AI, such as systems used in critical infrastructure, credit scoring, recruitment, law enforcement, and asylum decisions, would also be subject to requirements including human oversight.

The UK, meanwhile, is exploring the creation of an AI assurance industry that would certify AI systems as safe and ethical.

Open questions remain about how risk should be assessed, what a rights-based approach to AI would look like, and how to ensure that AI is fair and inclusive of different voices.

AI ethical issues

AI raises significant ethical problems. Because AI systems learn for themselves, these problems may not surface until a system is already in use. AI's short history already offers examples: invasions of privacy, biased outputs, and decisions that cannot be challenged.

It is therefore essential to identify and address ethical problems during design and development, and to keep monitoring for them once a system is deployed.

Yet many AI developers work in a competitive commercial environment that prizes speed and efficiency, where the delays caused by regulatory and ethical review are seen as costly.

Developers may also lack the training and tools to identify and resolve ethical problems. Most come from engineering or computer science backgrounds and are not representative of society as a whole.

Shareholders and executives, for their part, rarely welcome scrutiny that could threaten profits.

Once built, an AI system is often sold to other organisations to perform a task (such as shortlisting job candidates), yet those buyers may not understand how it works or what risks it carries.

Ethical frameworks for AI

Several international bodies have attempted to set ethical standards for AI, including UNESCO's Recommendation on the Ethics of Artificial Intelligence and the IEEE's Global Initiative on Ethics of Autonomous and Intelligent Systems. Some companies have also published their own ethical frameworks.

These frameworks overlap, differ in detail, and are voluntary. They offer guidance on building AI ethically, but say nothing about the consequences when an AI system causes harm.

AI ethics could become an important profession in its own right, but it remains underfunded and undersupported. Most people agree that ethics matters; there is far less agreement on how to enforce it.

Government use of AI

It is equally important that government use of AI is transparent, commands public consent, and complies with ethical standards and human rights.

When governments deploy AI covertly, they risk fostering the perception that AI is a tool of control.

China, for example, regulates private-sector AI strictly, yet the state itself uses AI to surveil its citizens, raising serious human rights concerns.

China also exports AI technology to other countries, which may spread state surveillance internationally.

Privacy & AI

The biggest challenge facing the AI industry is balancing AI's appetite for large volumes of structured data against the human right to privacy.

AI needs large amounts of data to perform well, which sits uneasily with current privacy law and prevailing notions of privacy. In the UK and Europe, legislation restricts how data can be shared and how far decisions can be fully automated, constraints that limit what AI can do.

During the COVID-19 pandemic, for example, there were concerns that AI could not lawfully be used to decide who should be vaccinated first; these were eased by keeping clinicians involved in the decisions.

More broadly, some AI developers said they were unable to contribute to the COVID-19 response because data rules barred them from accessing large health datasets. With that data, AI might have informed better decisions on matters such as lockdowns and vaccine distribution.

It is possible to give AI easier access to data while still protecting privacy, but doing so means reforming the rules. The EU and UK are both considering how data protection law might be amended to support AI without sacrificing privacy.

Bias in AI

Bias is a major problem in AI. Facial recognition is a well-known example: many systems have been trained predominantly on images of white men, making them less accurate for everyone else.

Some image databases, including Google's, have been criticised for being overly US- and Western-centric and for reinforcing stereotypes.

Bias also affects AI used for consequential decisions. Uber, for instance, faced legal action over allegations that its driver verification software was racially biased and treated drivers unfairly; studies have shown that facial recognition performs worse on darker skin.

In 2018, Amazon admitted that its AI recruiting tool was flawed: trained on historical CVs, most of which came from men, it had learned to favour male applicants.

Eliminating bias entirely is hard: historical data will always carry biases, as will society itself, and overcorrecting can introduce new, unexpected ones. The most practical response is continuous auditing and review of AI models.
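
One simple form of such ongoing review is to compare a model's decisions across demographic groups. The sketch below is one illustrative way to do this; the groups, predictions, and the "four-fifths" threshold (borrowed from US hiring guidance) are assumptions for demonstration, not a prescribed method.

```python
# A minimal sketch of an ongoing bias audit: compare how often a model
# selects candidates from each demographic group. Group labels, predictions,
# and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(groups, predictions):
    """Return the fraction of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in zip(groups, predictions):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

groups = ["men", "men", "men", "women", "women", "women"]
preds  = [1, 1, 0, 1, 0, 0]  # 1 = shortlisted by the model

rates = selection_rates(groups, preds)
print(rates)  # {'men': 0.67, 'women': 0.33}
if min(rates.values()) / max(rates.values()) < 0.8:  # the "four-fifths rule"
    print("Warning: possible disparate impact; review the model.")
```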

AI & climate change

AI can have both good and bad effects on the environment.

On the positive side, AI can help reduce carbon emissions by making manufacturing processes more eco-friendly, creating smarter power grids, and improving infrastructure to use energy more efficiently.

But AI also generates emissions of its own, because training and running models demands substantial computing power, and current models process and store vast amounts of data, adding to carbon emissions and electricity costs.

AI practitioners are beginning to factor emissions into algorithm design, but it remains difficult to measure precisely how much carbon the full lifecycle of developing and using an AI system releases.
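
A rough estimate is still possible, though: multiply hardware power draw by training time, data-centre overhead, and the grid's carbon intensity. The sketch below uses entirely assumed figures to show the arithmetic, not to report real measurements.

```python
# A back-of-the-envelope estimate of training emissions: power draw x
# time x overhead x grid carbon intensity. All figures are assumptions.
GPU_POWER_KW = 0.3          # assumed average draw per GPU (300 W)
NUM_GPUS = 8
TRAINING_HOURS = 72
PUE = 1.5                   # assumed data-centre overhead (power usage effectiveness)
GRID_KG_CO2_PER_KWH = 0.4   # assumed grid carbon intensity

energy_kwh = GPU_POWER_KW * NUM_GPUS * TRAINING_HOURS * PUE
emissions_kg = energy_kwh * GRID_KG_CO2_PER_KWH
print(f"{energy_kwh:.0f} kWh, about {emissions_kg:.0f} kg CO2")
# 259 kWh, about 104 kg CO2 for this small illustrative run
```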

At present, environmental responsibility rarely features in AI ethics frameworks; most AI strategies focus on commercial returns rather than protecting the planet.

AI & social media

The algorithms that decide what we see on social media are themselves a form of AI, and they are typically tuned to advertisers' goals, such as maximising clicks.

Because platforms profit from engagement, they use AI to predict and even steer user behaviour, shaping how people discover products and even how they engage with politics.

The danger is that these algorithms can amplify existing biases, promote sensational and misleading content, and narrow people's exposure to differing views, with consequences for how we think and ultimately for our democracies and societies.
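
At their simplest, such feeds amount to ranking content by predicted engagement. The toy sketch below, with invented posts and click probabilities standing in for a real model's predictions, shows why sensational material tends to rise to the top; it is not any platform's actual system.

```python
# A toy engagement-ranked feed (not any real platform's algorithm): posts
# are ordered purely by predicted click probability, so sensational items
# tend to outrank measured ones. All posts and scores are assumptions.
posts = [
    {"title": "Local council publishes budget report", "p_click": 0.02},
    {"title": "SHOCKING claim about rival politician!", "p_click": 0.11},
    {"title": "Independent fact-check of viral rumour", "p_click": 0.03},
]

feed = sorted(posts, key=lambda post: post["p_click"], reverse=True)
for post in feed:
    print(f"{post['p_click']:.2f}  {post['title']}")
# The click-optimised ordering surfaces the sensational headline first.
```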

All of us therefore need to read critically online. Regulators and companies are responding with new laws and rules, while the media and civil society groups have a role to play in fact-checking and in circulating accurate, diverse news and opinion.

How to build trust in AI

Building trust in AI is difficult while there are no clear rules governing how it is funded, designed, and used, and while problems of ethics, bias, and privacy persist.

Public trust in big tech's handling of data has already been damaged. Scandals such as Cambridge Analytica's use of Facebook data showed how data can be used to manipulate people, and AI stands to make such manipulation more effective still.

Less visible threats exist too. But coordinated global action to establish sound rules and hold individuals and companies accountable can make AI safer and more ethical.

For now, AI is understood mainly by specialists. Debate about its risks and benefits is dominated by technical and legal experts, while ordinary people often do not know when they are using AI or when it is shaping their choices.

We need a genuinely inclusive public conversation, one that raises awareness of how AI is used today and decides collectively where it should and should not be applied.

At the same time, the benefits of safe, well-regulated AI deserve to be widely understood. Trust in AI will follow once clear rules are in place, shaped with public input.
