Biden administration mulls AI rules as ChatGPT’s popularity stokes safety fears
The Biden administration on Tuesday inched toward possible regulations targeting advanced AI systems such as ChatGPT as fears grow about the risks the technology poses to society.
In a precursor to potential formal regulation of the quickly growing field, the Commerce Department issued a request for public comment on accountability measures that would help ensure AI tools “work as claimed – and without causing harm.”
The request came on the same day that officials in China signaled they will require advanced AI tools to submit to a security review before launch – and as Chinese tech giants Alibaba, Baidu and SenseTime each roll out their own AI chatbots to compete with OpenAI’s ChatGPT and Google’s Bard.
“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms,” said Alan Davidson, head of the Commerce Department’s National Telecommunications and Information Administration (NTIA).
“For these systems to reach their full potential, companies and consumers need to be able to trust them,” Davidson added.
ChatGPT’s runaway success has sparked intense scrutiny of AI technology in recent months. The chatbot has wowed users with its lifelike responses to prompts – stoking optimism about the technology’s benefits, as well as concerns about its potential to cause harm.
The NTIA will accept public responses on potential governance measures for the next 60 days, according to the agency’s website. Officials are seeking commentary on potential “trust and safety testing” for AI firms, among other questions.
The agency compared potential AI safety assessments to audits of public companies’ financial statements to ensure trust in their numbers.
When asked last week about the potential threat posed by the technology, President Biden said it “remains to be seen” whether AI is dangerous to the public.
“Tech companies have a responsibility, in my view, to make sure their products are safe before making them public,” Biden said.
The risks posed by AI have also caught the attention of regulators in China. The Cyberspace Administration of China, a key oversight agency, unveiled draft guidelines for so-called “generative AI” systems.
Aside from conducting security reviews of new AI services launched by Chinese firms, the agency said companies that launch such tools are responsible for the accuracy of their content. The firms must also be transparent about the data sets used to train advanced AI tools, Bloomberg reported.
Companies that don’t comply with the guidelines could face fines or even criminal scrutiny, according to Reuters.
Last month, billionaire Elon Musk joined with more than 1,000 experts in an open letter urging a six-month pause in advanced AI development given “profound risks” ranging from the unchecked spread of disinformation to job losses to “loss of control of our civilization.”
Other experts have argued that a pause would not address the underlying concerns associated with AI development. Former Google CEO Eric Schmidt said a six-month pause would “simply benefit China” in the race to gain an advantage in the burgeoning sector.
OpenAI, the Microsoft-backed firm responsible for developing ChatGPT, has signaled that it supports government regulation for AI.
“We believe that powerful AI systems should be subject to rigorous safety evaluations,” OpenAI said in an April 5 blog post. “Regulation is needed to ensure that such practices are adopted, and we actively engage with governments on the best form such regulation could take.”
With Post wires