
Italy Bans ChatGPT Over Privacy Concerns; OpenAI May Face $21.7M Fine

Italy has become the first European country to block ChatGPT, the US-developed chatbot, joining a number of countries outside the West where the service is already unavailable.

The ban was put in place over privacy concerns, as announced by Italy's data protection authority. The authority said it would not only block ChatGPT but also investigate whether the service complies with the General Data Protection Regulation (GDPR), the European Union law governing the use, processing, and storage of personal data.

The decision to ban ChatGPT followed a data breach on March 20 that exposed user conversations and payment information. The watchdog stated that there was no justification for the mass collection of such personal data to train the model. It also pointed out that the service could give inappropriate answers to minors, since it does not verify users' ages or account for their maturity. OpenAI must respond to the allegations or face a fine of up to €20 million (about $21.7 million) or 4% of its annual global turnover, whichever is higher.

Ireland's Data Protection Commission, which regulates many technology companies with European headquarters in Ireland, told reporters it is examining the Italian regulator's reasoning for the ban and will coordinate with other EU data protection authorities. The UK's independent data regulator, the Information Commissioner's Office, said it supports AI development while pledging to enforce compliance with data protection law. According to Dan Morgan of the cybersecurity ratings provider SecurityScorecard, the ban highlights the significance of regulatory compliance for businesses operating in Europe: compliance is not optional, and companies must prioritize protecting personal data under the EU's strict data protection rules.

BEUC, a consumer advocacy group, is urging EU and national authorities, including data-protection watchdogs, to investigate ChatGPT and similar chatbots. This comes after a complaint was filed in the US. BEUC is concerned that without sufficient regulation, consumers could be at risk of harm from this technology. Although the EU is working on the world’s first legislation on AI, it could take years before the AI Act takes effect. Ursula Pachl, deputy director general of BEUC, warns that AI can cause harm and that society is currently not adequately protected. She is calling for greater public scrutiny of these AI systems and for public authorities to reassert control over them, as there are growing concerns about how ChatGPT and similar chatbots might deceive and manipulate people.

Since its launch, millions of people have used ChatGPT. The chatbot has spread rapidly around the globe, largely because of its ability to respond in a human-like way; its training data is drawn from internet text up to 2021. Microsoft has invested billions of dollars in OpenAI, ChatGPT's developer, and recently integrated the technology into Bing. It has also incorporated the AI into its office suite, Microsoft 365, under the name Microsoft 365 Copilot.

As with any new technology, concerns have been raised about the potential risks of artificial intelligence (AI), including job losses and the spread of misinformation and bias. Key figures in tech, such as Elon Musk, have called for a pause in the development of AI systems like ChatGPT, citing fears that the race to build them may be getting out of control.

With this ban, Italy joins countries such as Iran, North Korea, China, and Russia, where ChatGPT is already blocked.

Written by Muhammad Tanveer