2025-01-19 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 11/24 report:
Generative artificial intelligence ("AIGC") technology is developing rapidly and has entered our lives and work through a wide range of software and terminal devices. However, alongside its convenience to society, AIGC also brings legal risks. Seven departments, led by the Cyberspace Administration of China, jointly issued the Interim Measures for the Administration of Generative Artificial Intelligence Services (the "Measures"), which establish dedicated supervision of generative AI and took effect on August 15, 2023. In terms of data security, what risks should generative AI providers and users watch for, what protective measures should they take, and what legal responsibilities do they bear?
NetEase Yidun security experts analyze and interpret these questions in light of the Measures.
01. Scope of application
In accordance with Article 2 of the Measures:
These Measures apply to the use of generative artificial intelligence technology to provide services that generate text, images, audio, video, and other content to the public within the territory of the People's Republic of China.
This means that overseas AIGC service providers (whether at the model layer or the application layer) are subject to the Measures whether they provide services to China directly or "indirectly" through API interfaces or other forms of encapsulation.
The Measures also introduce a "safe harbor" exception: "These Measures do not apply to industry organizations, enterprises, educational and scientific research institutions, public cultural institutions, relevant professional institutions, and others that research, develop, and apply generative artificial intelligence technology without providing generative artificial intelligence services to the public within the territory." Therefore, an organization that only develops or uses the technology internally and does not offer services externally complies with the Measures, provided it obtains authorization from the technology provider and abides by laws and regulations on network security, data security, and personal information protection. This eases the compliance concerns of the many companies adopting generative AI services for productivity and other internal applications, and reflects the Measures' prudent, inclusive regulatory thinking that encourages innovation.
02. Classified and graded supervision rules
In accordance with Article 3 of the Measures:
The state applies inclusive, prudent, classified, and graded supervision to generative artificial intelligence services.
Article 16 further provides that:
The relevant competent state departments shall, in light of the technical characteristics of generative artificial intelligence and its service applications in relevant industries and fields, improve scientific supervision methods suited to innovation and development, and formulate corresponding classified and graded supervision rules or guidelines.
Although the Measures do not further elaborate the specific rules of graded supervision, the relevant content is expected to be set out in the forthcoming Artificial Intelligence Law. Because generative AI is a general-purpose technology, the regulatory approach of "inclusive prudence and classified, graded supervision" helps the Measures, as the basic law in the field of generative AI, retain a degree of flexibility: regulators, industry authorities, and standardization bodies can build on this basis to formulate more detailed classification rules for generative AI, and to set stricter requirements for specific industries, specific applications, or certain high-risk generative AI services.
In addition, for major application scenarios of generative AI services, the Measures stipulate that the use of generative AI services in news and publishing, film and television production, literary and artistic creation, and similar activities must comply with the regulations of the relevant fields and connect with existing regulatory systems.
03. Provisions on algorithm and content security
In accordance with Article 4 of the Measures:
The provision and use of generative artificial intelligence services shall abide by laws and administrative regulations and respect social morality and ethics.
To comply with laws and regulations and reflect core socialist values, both real-time interactive content and AI-generated content require strengthened auditing of sensitive material, such as politics, pornography, and violence, to ensure information security and compliance. However, specially crafted prompts may bypass an AI's own safety mechanisms, increasing the difficulty and complexity of auditing.
To address this, NetEase Yidun's machine-audit capability for AIGC and UGC scenarios lets operators select an audit strategy appropriate to the compliance requirements of each scenario.
For example, a conversation scenario involves two roles, the real user and the intelligent bot, so machine review must identify harmful information quickly enough to keep human-machine chat responsive. AI machine-review technology can configure audit policies of different strictness for UGC content and AIGC-generated content according to the business scenario, balancing user experience with content-security compliance.
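The idea of scenario-specific audit strictness can be sketched as follows. This is a minimal illustration only: the scenario names, thresholds, and keyword-based scoring are assumptions made for the example, not NetEase Yidun's actual API or policy set.

```python
# Hypothetical sketch of tiered content-audit policies for UGC vs. AIGC
# scenarios. Names, thresholds, and the toy risk lexicon are illustrative
# assumptions, not a real moderation product's interface.

from dataclasses import dataclass

@dataclass
class AuditPolicy:
    name: str
    block_threshold: float  # risk scores at or above this are blocked

# Stricter threshold for machine-generated (AIGC) replies, looser for
# real-time human chat (UGC) to preserve responsiveness.
POLICIES = {
    "ugc_chat": AuditPolicy("ugc_chat", block_threshold=0.8),
    "aigc_reply": AuditPolicy("aigc_reply", block_threshold=0.5),
}

SENSITIVE_TERMS = {"violence": 0.6, "gambling": 0.7}  # toy risk lexicon

def risk_score(text: str) -> float:
    """Return the highest risk weight of any sensitive term found."""
    return max((w for t, w in SENSITIVE_TERMS.items() if t in text.lower()),
               default=0.0)

def review(text: str, scenario: str) -> str:
    """Apply the scenario's policy and return 'block' or 'pass'."""
    policy = POLICIES[scenario]
    return "block" if risk_score(text) >= policy.block_threshold else "pass"
```

In this sketch the same text can pass in a loose UGC chat policy yet be blocked under the stricter AIGC-reply policy, which is the "different tightness per scenario" idea described above.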
In the process of algorithm design, training data selection, model generation and optimization, and service provision, effective measures shall be taken to prevent discrimination based on ethnicity, belief, country, region, sex, age, occupation, health, and other grounds.
The Measures specifically point out that various forms of discrimination exist and that effective preventive measures must be taken. According to NetEase Yidun security experts, large models may produce discriminatory content during training, testing, and production because deviations or gaps in the training data lead to unfair or discriminatory results for different groups. To avoid this risk, the balance and representativeness of the training data must be considered during collection and processing, ensuring the data adequately covers the distribution and characteristics of different groups and does not discriminate against or neglect any of them.
In addition, a series of model evaluation and monitoring measures is needed to detect and correct discrimination in the model in a timely manner. These include, but are not limited to, fairness metrics, sensitivity analysis, model interpretability, and data privacy protection. Such measures help ensure that the results generated by a large model are fair and reasonable, and prevent discrimination risks from harming specific groups.
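The data-balance and fairness checks described above can be illustrated with a small sketch. The group labels, sample data, and the demographic-parity metric chosen here are assumptions for the example; real audits would use richer data and multiple fairness metrics.

```python
# Illustrative sketch of two checks discussed above: measuring group
# representation in a training set, and a simple demographic-parity gap
# on outcomes. All data and labels are made up for the example.

from collections import Counter

def representation(samples: list[dict]) -> dict[str, float]:
    """Share of each demographic group in the dataset."""
    counts = Counter(s["group"] for s in samples)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def demographic_parity_gap(samples: list[dict]) -> float:
    """Max difference in positive-outcome rate between any two groups."""
    by_group: dict[str, list[int]] = {}
    for s in samples:
        by_group.setdefault(s["group"], []).append(s["outcome"])
    rates = [sum(v) / len(v) for v in by_group.values()]
    return max(rates) - min(rates)

data = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0}, {"group": "B", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
]
# Group A's positive rate is 2/3 and group B's is 1/3, so the gap is 1/3;
# a monitoring pipeline would flag a gap above its chosen tolerance.
```

In practice such metrics would feed continuous monitoring, with retraining or data rebalancing triggered when a gap exceeds the agreed threshold.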
04. Obligations of service providers
The draft for comment required AIGC service providers to undertake an extremely high obligation to "ensure the authenticity, accuracy, objectivity, and diversity of data." In view of the actual state of AIGC technology, the Measures soften this to "take effective measures to improve the quality of training data and enhance the authenticity, accuracy, objectivity, and diversity of training data." The Measures also no longer require service providers to implement real-name registration for users. In addition, the draft obliged providers to prevent inappropriate content from recurring within three months, including through model optimization training; given the inherent persistence and uncertainty of inappropriate AIGC output, the Measures remove this obligation and only require providers to optimize the model and report to the competent authorities in a timely manner. Finally, in the penalty provisions, the clause on "terminating the provision of services and imposing a fine of not less than 10,000 yuan and not more than 100,000 yuan" was deleted.
However, the Measures also add some new obligations: providers must sign a service agreement with users clarifying both parties' rights and obligations, and supervision of users' illegal activities now includes "warning," "restricting functions," "record keeping," and "reporting" obligations. These new obligations are reasonable, though, and will not significantly increase the burden on service providers.
05. Security assessment and algorithm filing
Before services could be offered externally, the draft for comment indiscriminately required all AIGC service providers to apply to the national cyberspace authority for a security assessment under the Provisions on Security Assessment of Internet Information Services with Public Opinion Attributes or Social Mobilization Capabilities, and to complete algorithm filing under the Provisions on the Administration of Algorithm Recommendation in Internet Information Services. The Measures narrow this significantly, making clear that only providers of "generative artificial intelligence services with public opinion attributes or social mobilization capabilities" need to carry out security assessment and algorithm filing.
Although the scope of application has been narrowed, the Measures do not further clarify the criteria for identifying a "generative artificial intelligence service with public opinion attributes or social mobilization capabilities." Read together with the Provisions on Security Assessment of Internet Information Services with Public Opinion Attributes or Social Mobilization Capabilities, AIGC services related to news, social networking, livestreaming, education, writing, and chat are the most likely to be so identified.
06. Overseas service providers
Foreign manufacturers in the AIGC industry, OpenAI among them, are undoubtedly technology leaders, and foreign investors are especially active in this field. For the first time, the Measures state in principle: "Where the provision of generative artificial intelligence services within China from outside the People's Republic of China does not comply with laws, administrative regulations, or these Measures, the national cyberspace authority shall notify the relevant institutions to take technical measures and other necessary measures to deal with it." Combined with the scope-of-application provisions discussed in Section 01 above, domestic providers that "nest" or "encapsulate" overseas AIGC technology face the risk that their underlying overseas technical support could be cut off at any time.
In addition, the Measures point out for the first time that "foreign-invested generative artificial intelligence services shall comply with relevant laws and administrative regulations on foreign investment." Since current rules on foreign investment in AIGC remain unclear, companies should continue to monitor legislative developments; different foreign-investment restrictions will likely apply under the classified and graded supervision rules to be issued in the future.
07. Policies and measures conducive to innovation
The Measures put forward a series of incentive policies for the research, development, and application of generative artificial intelligence, including:
- Encourage the innovative application of generative AI technology in all industries and fields, generate positive and healthy high-quality content, explore and optimize application scenarios, and build an application ecosystem.
- Support cooperation among industry organizations, enterprises, educational and scientific research institutions, public cultural institutions, and relevant professional institutions in generative AI technology innovation, data resource construction, transformation and application, and risk prevention.
- Encourage independent innovation in basic technologies such as generative AI algorithms, frameworks, chips, and supporting software platforms; carry out international exchange and cooperation on a basis of equality and mutual benefit; and participate in formulating international rules for generative AI.
- Promote the construction of generative AI infrastructure and public training-data resource platforms; promote the classified, orderly opening of public data and expand high-quality public training-data resources; and promote collaborative sharing of computing resources to improve their utilization efficiency (Articles 5 and 6).
NetEase Yidun security experts note that with the implementation of the Interim Measures for the Administration of Generative Artificial Intelligence Services, the regulatory system has gradually matured and the generative AI industry is poised for an explosion of applications; the Measures further encourage innovative applications of generative AI across industries and fields. However, to meet the requirements for training-data compliance, for the safety, accuracy, and reliability of generated content, and for the transparency of generative AI services, enterprises need to combine technical and legal capabilities to devise feasible solutions that satisfy regulators' security requirements and jointly advance the ecosystem of generative AI applications.
© 2024 shulou.com SLNews company. All rights reserved.