Italy bans ChatGPT, citing privacy concerns and kids' safety
Italian regulators have temporarily blocked the AI-powered chatbot ChatGPT, citing privacy concerns following a reported data breach as well as worries about children's safety.
The Italian Data Protection Authority said it was taking provisional action “until ChatGPT respects privacy,” including temporarily limiting the company from processing Italian users’ data.
The agency said Microsoft-backed OpenAI, the San Francisco-based company behind ChatGPT, had "no legal basis" for collecting the personal data used "to train the algorithms that power the platform."
The Italian government said it has launched an investigation into OpenAI, which has 20 days to demonstrate that it is abiding by European Union privacy rules.
Failure to do so could result in fines of up to 4% of the firm's global annual revenue or roughly $21.8 million (€20 million), whichever is higher.
The agency also flagged what it said was OpenAI's lack of any filter to verify that children under the age of 13 were not using ChatGPT, according to the Financial Times.
The regulator alleged that kids were being exposed to content that was unfit for their “level of self-consciousness.”
The Post has sought comment from OpenAI.
The rise of ChatGPT shook the tech world after the AI-powered bot demonstrated advanced conversational abilities that mimicked those of humans.
The technology has been shown to be capable of composing emails, essays, and software code — stoking fears that it could replace people who work in knowledge-based industries.
Some school districts have banned students from using ChatGPT due to concerns it could be exploited to cheat on exams.
The rapid advancements in AI have led some prominent tech observers to urge caution.
Elon Musk, the Tesla mogul who co-founded OpenAI nearly a decade ago, and Apple co-founder Steve Wozniak are among scores of tech entrepreneurs who signed an open letter calling for a pause in AI development and research.
The letter warns that AI systems with “human-competitive intelligence can pose profound risks to society and humanity” — from flooding the internet with disinformation and automating away jobs to more catastrophic future risks out of the realms of science fiction.
It says “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the letter says.
“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”