Shulou(Shulou.com)11/24 Report--
Over the past week, OpenAI persuaded Italian regulators to lift a temporary ban on its chatbot ChatGPT, but the artificial intelligence company's battle with European regulators is far from over; more challenges are only beginning.
Earlier this year, OpenAI's popular but controversial chatbot ChatGPT hit a major legal hurdle in Italy, where the Italian Data Protection Authority (GPDP) accused OpenAI of violating EU data protection rules. In an attempt to resolve the dispute, the company agreed to restrict the service in Italy.
When ChatGPT came back online in Italy on April 28, OpenAI had addressed the authority's concerns without making major changes to its service, a clear victory for the company.
While the Italian Data Protection Authority "welcomed" the changes ChatGPT made, the legal challenges facing OpenAI and other chatbot developers may be only just beginning. Regulators in several countries are investigating how these AI tools collect data and generate information, citing issues such as the use of unauthorized training data and the chatbots' tendency to produce false information.
The European Union enforces the General Data Protection Regulation (GDPR), one of the world's most powerful privacy legal frameworks, whose impact is likely to extend far beyond Europe. At the same time, EU lawmakers are drafting a law specifically targeting artificial intelligence, which could usher in a new era of regulation for systems like ChatGPT.
ChatGPT draws scrutiny from many quarters
ChatGPT is one of the most prominent applications of generative AI (AIGC), a category that covers tools for generating text, images, video, and audio from user prompts. Reportedly, just two months after its launch in November 2022, ChatGPT reached 100 million monthly active users, making it one of the fastest-growing consumer applications in history.
With ChatGPT, people can translate text between languages, write college papers, and even generate code. But some critics, including regulators, point out that ChatGPT's output is unreliable, raises copyright concerns, and handles personal data poorly.
Italy was the first country to act against ChatGPT. On March 31, the Italian Data Protection Authority accused OpenAI of violating the GDPR by: allowing ChatGPT to provide inaccurate or misleading information, failing to inform users of its data collection practices, failing to meet the rules on processing personal data, and failing to adequately prevent children under 13 from using the service. The authority ordered OpenAI to immediately stop using personal information collected from Italian citizens in ChatGPT's training data.
No other country has yet taken action this drastic. But since March, at least three EU countries — Germany, France, and Spain — have launched their own investigations into ChatGPT. Meanwhile, across the Atlantic, Canada is assessing ChatGPT's privacy practices under its Personal Information Protection and Electronic Documents Act (PIPEDA). The European Data Protection Board (EDPB) has even set up a dedicated task force to coordinate the investigations. If these bodies demand changes from OpenAI, it could affect how the company serves users around the world.
Regulators have two big concerns
Regulators' biggest concerns about ChatGPT fall into two main categories: where does the training data come from, and how does OpenAI deliver information to users?
ChatGPT is powered by OpenAI's GPT-3.5 and GPT-4 large language models (LLMs), which are trained on vast amounts of human-generated text. OpenAI has been cautious about disclosing exactly which texts it uses, but says it draws on "a variety of licensed, publicly available data sources, which may include publicly available personal information."
Under the GDPR, this may cause huge problems. The law, which took effect in 2018, covers any service that collects or processes data on EU citizens, regardless of where the organization providing the service is headquartered. The GDPR requires companies to have a lawful basis — such as a user's explicit consent — for collecting personal data, and to be transparent about how that data is used and stored.
European regulators argue that the secrecy around OpenAI's training data means they cannot confirm whether the personal information it used was collected with users' consent. The Italian Data Protection Authority contended that OpenAI had no "legal basis" for collecting this information in the first place. Until now, OpenAI and its peers had faced little such scrutiny.
Another problem is the GDPR's "right to be forgotten," which lets users demand that companies correct their personal information or delete it entirely. OpenAI has updated its privacy policy to make responding to such requests easier. However, given how deeply specific data becomes entangled in these large language models once ingested, extracting it again can be very complex, and whether it is technically feasible at all remains controversial.
OpenAI also collects information directly from users. Like other internet platforms, it gathers a range of standard user data, such as names, contact information, and credit card details. More importantly, OpenAI records users' interactions with ChatGPT. As stated on its website, OpenAI employees can view this data and use it to train its models. Given the personal questions people ask ChatGPT — some treat the chatbot as a therapist or doctor — this means the company is collecting sensitive data.
This data may include information about minors. Although OpenAI's policy states that it "does not knowingly collect personal information from children under the age of 13," there is no strict age-verification gate. That conflicts with EU rules, which prohibit collecting data from children under 13 and, in some countries, require parental consent to collect data on minors under 16. On the output side, the Italian Data Protection Authority claims ChatGPT's lack of an age filter exposes minors to "absolutely inappropriate responses in terms of their level of development and self-awareness."
The broad latitude OpenAI has in using this data worries many regulators, and storing it carries security risks. Companies such as Samsung and JPMorgan Chase have banned employees from using AIGC tools for fear of uploading sensitive data. Indeed, before the Italian ban, ChatGPT suffered a serious data breach that exposed many users' chat histories and email addresses.
ChatGPT's tendency to produce false information is also a problem. The GDPR stipulates that all personal data must be accurate, a point the Italian Data Protection Authority emphasized in its announcement. This is troublesome for most AI text generators, because these tools are prone to "hallucinations": incorrect or irrelevant responses to queries. The issue has already caused real harm elsewhere; an Australian mayor threatened to sue OpenAI for defamation after ChatGPT falsely claimed he had been jailed for bribery.
Dedicated regulation is on the way
ChatGPT is a particularly easy regulatory target because of its popularity and its dominance of the AI market. But there is no reason its competitors and partners — such as Google's Bard, or Microsoft's OpenAI-powered Azure AI — should escape scrutiny. Before ChatGPT, Italy banned the chatbot platform Replika over its collection of data on minors; that platform remains banned.
Although the GDPR is a powerful body of law, it was not designed to address problems unique to artificial intelligence. Dedicated rules, however, may be imminent. In 2021, the EU proposed the first draft of its Artificial Intelligence Act (AIA), which would operate alongside the GDPR. The AI Act would regulate AI tools according to their risk, from "minimal risk" (such as spam filters) through "high risk" (AI tools used in law enforcement or education) to "unacceptable risk" (such as social credit systems).
After the explosive growth of large language models like ChatGPT, lawmakers are now scrambling to add rules covering "foundation models" and "general-purpose AI systems (GPAI)" — terms referring to large-scale AI systems, including LLMs — which may be classified as "high-risk" services.
The AI Act's provisions go beyond data protection. A recently proposed amendment would force companies to disclose any copyrighted material used to develop AIGC tools. That could expose previously confidential datasets and leave more companies vulnerable to the kind of infringement lawsuits already hitting some services.
Still, a dedicated AI law may take some time to pass. EU lawmakers reached a provisional agreement on the AI bill on April 27, but a committee still needs to vote on the draft on May 11, with the final proposal expected in mid-June. The Council of the European Union, the European Parliament, and the European Commission would then have to resolve any remaining disputes before the law takes effect. If all goes well, it could be adopted in the second half of 2024.
For now, the dispute between Italy and OpenAI offers a glimpse of how regulators and AI companies might negotiate. The Italian Data Protection Authority said it would lift its ban if OpenAI met several demands by April 30.
Those demands included telling users how ChatGPT stores and uses their data, requiring explicit consent for that use, facilitating the correction or deletion of false personal information generated by ChatGPT, and requiring Italian users to confirm they are over 18 when signing up for an account. Although OpenAI did not meet every demand to the letter, it satisfied Italian regulators enough to restore access in Italy.
OpenAI still has conditions to meet, including building a stricter age gate by September 30 that screens out children under 13 and requires parental consent for older minors. If it fails, OpenAI could be blocked again. Nevertheless, OpenAI seems to have set an example of what Europe considers acceptable behavior from an AI company, at least until the new law arrives.