Sam Altman Worries About ‘Under-Regulation’ for ChatGPT

Speaking at a tech event in Taipei, the OpenAI chief executive said people in the tech industry often resist regulation, adding that while he is not overly worried about governments over-regulating, it remains a possibility.


Sam Altman, chief executive of the Microsoft-backed startup OpenAI, has expressed concern about the lack of rules for AI. He said: “In our industry, people often criticise having rules. We believe in having rules, mainly for the most powerful AI systems.”

“Models that are like 10,000 times the power of GPT-4, models that are like as smart as human civilization, whatever, those probably deserve some regulation.”

Mr. Altman was speaking at an AI event in Taiwan organised by the charitable foundation of Terry Gou, founder of the major Apple supplier Foxconn, where he discussed the tech industry’s tendency to resist regulation. He said that while he was not overly concerned about excessive government regulation, he acknowledged it could happen.

“Regulation has been not a pure good, but it’s been good in a lot of ways. I don’t want to have to make an opinion about every time I step on an airplane how safe it’s going to be, but I trust they’re pretty safe and I think regulation has been a positive good there,” he said.

“It is possible to get regulation wrong, but I don’t think we sit around and fear it. In fact, we think some version of it is important.”

Governments around the world are preparing to regulate AI. The UK will host a global AI safety summit in November at Bletchley Park, the site of Britain’s Second World War codebreaking operation.

Greg Clark, the Conservative MP who chairs the science and technology committee, has cautioned that the government may need to act faster to ensure its rules do not become outdated, as the US, China, and the EU are all considering AI regulations of their own.

The summit will focus on assessing the risks posed by the technology and on developing rules that work both nationally and internationally.

This follows remarks from Dr. Craig Martell, the Pentagon’s chief digital and artificial intelligence officer, who said that AI becoming a major threat is far from reality.

He explained that recent news about generative models like ChatGPT may have given people the wrong idea about their capabilities.

Dr. Martell emphasised that AI is neither a guaranteed path to success for those who have it nor an immediate danger when adversaries do.
