The company behind ChatGPT will propose measures to resolve data privacy concerns that sparked a temporary Italian ban on the artificial intelligence chatbot, regulators said Thursday.
The Italian data protection authority, known as Garante, last week blocked San Francisco-based OpenAI’s popular chatbot, ordering it to temporarily stop processing Italian users’ personal information while it investigates a possible breach of European Union data privacy rules.
Experts said it was the first such case of a democracy imposing a nationwide ban on a mainstream AI platform.
In a video call late Wednesday between the watchdog’s commissioners and OpenAI executives including CEO Sam Altman, the company committed to presenting measures to address the concerns, though those remedies have not yet been detailed.
The Italian watchdog said it didn’t want to hamper AI’s development but stressed to OpenAI the importance of complying with the 27-nation EU’s stringent privacy rules.
The regulators imposed the ban after some users’ messages and payment information were exposed to others. They also questioned whether there’s a legal basis for OpenAI to collect massive amounts of data used to train ChatGPT’s algorithms and raised concerns the system could sometimes generate false information about individuals.
So-called generative AI technology like ChatGPT is “trained” on huge pools of data, including digital books and online writings, and able to generate text that mimics human writing styles.
These systems have created buzz in the tech world and beyond, but they also have stirred fears among officials, regulators and even computer scientists and tech industry leaders about possible ethical and societal risks.
Other regulators in Europe and elsewhere have started paying more attention after Italy’s action.
Ireland’s Data Protection Commission said it’s “following up with the Italian regulator to understand the basis for their action and we will coordinate with all EU Data Protection Authorities in relation to this matter.”
France’s data privacy regulator, CNIL, said it’s investigating after receiving two complaints about ChatGPT. Canada’s privacy commissioner also has opened an investigation into OpenAI after receiving a complaint about the suspected “collection, use and disclosure of personal information without consent.”
In a blog post this week, the U.K. Information Commissioner’s Office warned that “organizations developing or using generative AI should be considering their data protection obligations from the outset” and design systems with data protection as a default.
“This isn’t optional — if you’re processing personal data, it’s the law,” the office said.
In an apparent response to the concerns, OpenAI published a blog post Wednesday outlining its approach to AI safety. The company said it works to remove personal information from training data where feasible, fine-tune its models to reject requests for personal information of private individuals, and acts on requests to delete personal information from its systems.