What is the moral boundary of artificial intelligence development? | CNCC 2019


From October 17 to 19, CNCC 2019, hosted by CCF together with the Management Committee of Suzhou Industrial Park and Soochow University, was held in Suzhou. Under the theme of "Intelligence + Leading Social Development", this year's conference took place at the Suzhou Jinji Lake International Conference Center, with Leiphone's AI Technology Review covering the event as a strategic media partner. In addition to 15 invited keynote reports, the 79 technical forums of this year's CNCC were another important part of the meeting and a major focus for attendees. Among them, the forum "Where is the moral boundary of artificial intelligence development?" was arguably the most heated of all: dozens of experts from a range of fields gathered to debate the ethical issues of AI for an entire afternoon.

"No title, intense speculation" is a feature of YOCSEF. YOCSEF, whose full name is "Youth computer Science and Technology Forum of China computer Society", is a forum that focuses on discussing computer-related industry-university-research and other related issues in a speculative way. The rule of YOCSEF is that anyone who participates in this forum can speak, and title is not mentioned in the speech, and everyone calls them by their first names and speaks on an equal footing, so every YOCSEF event will be "filled with war" and collide with sparks of wisdom.

The theme of this forum was the "moral boundaries of artificial intelligence development", and it brought together professionals from computer science, artificial intelligence, philosophy, ethics, law and other fields, each of whom discussed the ethics of artificial intelligence in depth from their own professional perspective.

I. The boundary of AI ethics

In recent years, as artificial intelligence technology has been widely applied across industries, AI's impact on human society has touched almost every aspect of life: the data-security problems raised by recently popular face-swapping applications, price discrimination against existing customers driven by big data, questions of liability for autonomous driving, the privacy issues raised by face recognition, and so on.

Among these many problems, what can be done? What must not be done? And how far may we go? This is no small challenge for us, because it is a brand-new problem that human beings have never faced in their millions of years of existence. Answering these questions amounts to drawing a line for AI ethics that must not be crossed.

Professor Deng Xiaotie, Peking University

Professor Deng Xiaotie of Peking University pointed out in his lead speech that the rise of any new technology triggers a series of disputes, and AI is no exception. When the application of AI technology creates ethical risks in human society, how to define the boundary between AI and humans, and how to form a contract with AI, become very important questions. He proposed that demarcating the ethical boundary of AI is no longer limited, as in the past, to ethics among humans; it now has additional dimensions, involving the ethical boundaries between humans and AI, and between AI and AI. This is an entirely new area of thinking for us. He also noted that when we form a contract with AI, we need to follow three basic principles: it must be executable, verifiable and enforceable. As the lead speaker, Professor Deng also raised four questions about the process of interacting with AI:

Code is law. If code is king, what kind of program should we build to supervise the programmers?

Who can enter into a contract on behalf of AI? Is it the maker of AI or AI itself?

The impossible triangle of AI ethics design. When AI takes ethics into account, how should the human mechanisms be designed, how should AI implement them, and how can the result be accepted by the market?

Should AI be allowed to make AI? If so, will human beings still have the ability to manage and monitor AI?

A professor from Central South University then pointed out a gap in Professor Deng's framing: our academic community has in fact never made clear what the word "intelligence" means, so it is impossible to pin down what artificial intelligence is. Only when we truly understand the concept of "intelligence", and the limits of both intelligence and human beings, will it be possible to discuss the ethical issues, or the rights and obligations, between AI and humans. During the debate, several participants from philosophy and law pointed out the following characteristics of the ethical boundary of AI:

The temporal character of ethics. Wu Tianyue, an associate professor at Peking University, pointed out that every system of ethics bears the mark of its times; even though many principles already exist, we still need a sense of history and foresight to prevent our own moral prejudices from hindering humanity's future development.

The boundary of morality should be dynamic. Professor Luo Xun of Tianjin University of Technology argued that no governance principle can remain fixed once it is formulated, because ethics always changes with time and region; AI ethics should therefore be described dynamically rather than statically.

The human dimension of AI ethics cannot be ignored. Existing AI is still not a moral subject, so the moral boundaries under discussion are mainly about defining the boundaries of human behavior.

II. Ethical principles of artificial intelligence

As the second lead speaker, Zeng Yi, a researcher at the Institute of Automation of the Chinese Academy of Sciences, introduced the current progress of artificial intelligence principles at home and abroad, and analyzed the AI ethical principles issued by various countries in detail.

Zeng Yi, researcher, Institute of Automation, Chinese Academy of Sciences

According to researcher Zeng Yi, at least 53 sets of artificial intelligence ethical principles have been published worldwide, but none of them covers all the problems brought about by current AI technology. Take the relatively comprehensive "Beijing AI Principles" as an example: it does not cover "lethal weapons" or "fairness". The reason is that the publishers have different positions and perspectives and face different social problems.

Of course, although there are so many AI ethical principles, and some enterprises (such as Google and Microsoft) have even formulated ethical principles to restrain themselves, most of these principles have not been well implemented; put differently, so far no enterprise has managed to avoid violating them. Take respect for privacy as an example: when users request that their data be deleted, the raw data itself can be removed, but owing to the limitations of current AI technology, the trained model still carries what it learned from the original data, and that influence is difficult to erase from a machine learning model.
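To make the data-deletion point concrete, here is a minimal, hypothetical sketch (not from the forum; the dataset and model are invented for illustration): removing a record from storage leaves an already-trained model's parameters, and hence the influence learned from that record, untouched unless the model is retrained or a dedicated unlearning method is applied.

```python
# Illustrative sketch (hypothetical data): deleting a user's raw record does
# not remove its influence from a model that was already trained on it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                 # 100 hypothetical user records
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # labels derived from the features

model = LogisticRegression().fit(X, y)        # trained on all 100 records

# A user asks for record 42 to be deleted; the raw data can indeed be removed.
X_remaining = np.delete(X, 42, axis=0)
y_remaining = np.delete(y, 42)

# But the fitted coefficients are unchanged: they still encode what was
# learned from record 42, which is the "hard to erase" problem noted above.
print("coefficients before any retraining:", model.coef_)

# Honouring the deletion inside the model requires retraining from scratch
# (or a dedicated machine-unlearning technique), which is costly at scale.
retrained = LogisticRegression().fit(X_remaining, y_remaining)
print("coefficients after full retraining: ", retrained.coef_)
```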

Zeng Yi also pointed out a problem with the current AI ethical principles: the main goal of artificial intelligence is still to promote economic development, while the social and ecological impact of AI has not received enough attention. He therefore argued that AI ethics is not only a matter of how many domain experts join the discussion, but also of getting enough ordinary people to pay attention and participate.

Professor Huang Tiejun, Peking University

With regard to the ethical principles of AI (and possibly even future laws), one scholar at the meeting asked how to ensure that technical regulation does not limit the development of artificial intelligence. Professor Huang Tiejun of Peking University answered this question from a broader perspective. As human beings, when we think about AI ethics we inevitably take a human standpoint and uphold a human-centered view. Professor Huang pointed out that the artificial intelligence we refer to at present is in fact a form embedded in human society, and its development follows the basic moral consensus of humanity as a whole. But machine intelligence surpassing human intelligence is a likely trend of future evolution; scientific and technological innovation should not be constrained by human limitations, and we should treat the development of artificial intelligence with a more open mind.

III. Ethical Supervision of AI

Given a set of AI ethical principles, a key question is how to ensure that human beings (individuals or organizations) and AI actually abide by them.

Wu Tianyue, Associate Professor, Peking University

Associate Professor Wu Tianyue believes that professional ethics and a sense of professional responsibility are very important for AI developers. Technology, as a means, serves a specific purpose, so to prevent human-made AI from doing harm it is necessary to strengthen the ethical awareness of AI developers and curb the risks of AI technology at the source, not only after the product is released. This involves several aspects: first, the algorithm should be interpretable and transparent; second, the data should be balanced and non-discriminatory; third, the algorithm, as a tool, should undergo dedicated testing. Professor Wu stressed that as AI developers we must realize that AI technology is implemented to serve a set of core human moral values, not merely capital and power. He cited the example of Google employees signing a petition in 2018 against Google's cooperation with the US military.
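As one concrete illustration of the "balanced, non-discriminatory data" and "dedicated testing" requirements above, the hypothetical sketch below (the groups and decisions are invented, and this is not a method discussed at the forum) computes a simple demographic-parity gap, one of the most basic fairness checks a developer might run before release.

```python
# Illustrative sketch (hypothetical data): a basic fairness check comparing
# positive-decision rates across two groups (demographic parity gap).
import numpy as np

# Hypothetical records: protected-group membership (0/1) and a model's
# binary decision (1 = approved) for each person.
group    = np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 0])
decision = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 1])

rate_g0 = decision[group == 0].mean()   # approval rate in group 0
rate_g1 = decision[group == 1].mean()   # approval rate in group 1
gap = abs(rate_g0 - rate_g1)

# A large gap is a warning sign that the data or the model treats the two
# groups unequally and that further auditing is needed before deployment.
print(f"approval rates: group0={rate_g0:.2f}, group1={rate_g1:.2f}, gap={gap:.2f}")
```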

On the other hand, Professor Wu believes that in addition to self-discipline, external supervision is also needed, which requires setting up so-called "ethics committees". Major enterprises have announced the establishment of internal ethics committees or similar bodies to perform this oversight. However, scholars differ on whether such committees can really play their role: as mentioned earlier, almost no company truly acts according to its own ethical principles, and all have violated them to some degree. Zeng Yi pointed out that, in practice, a company that genuinely lives up to 80% of its principles is already doing well. Xiao Jing, chief scientist of Ping An Group, stressed that if those formulating such constraints can also offer enterprises real help, guiding them in developing AI technology reasonably while meeting the basic requirement of "survival", then the two sides can cooperate for a win-win outcome.

Legislation is a means of drawing a clear red line, but it raises many questions: 1) will strict laws restrict the development of artificial intelligence? 2) at what stage of development should the law intervene? For example, China's current legal restrictions on data privacy are not as strict as those in the West, which has been cited as an important factor in the country's rapid rise in the era of artificial intelligence; once strict restrictions on data privacy are imposed, it is conceivable that technological development would be significantly constrained.

IV. Tech for Good

Science and technology is a capability; doing good is a choice. The development of science and technology counts as progress only when it serves the progress and development of human society. Nuclear energy, for example, can be used to make atomic bombs, but it can also be used to generate electricity; only when such power is used to serve humanity as a whole can it be called technological progress.

Wang Juhong, vice president of Tencent

Wang Juhong, vice president of Tencent, pointed out that over the past two decades the whole world, and China in particular, has gone through the PC Internet era and the mobile Internet era, which have transformed society to a great extent, while the next technological wave, represented by AI, big data and the life sciences, is now forming, overlapping with the previous ones, and gradually building a new digital era. A technology company is not only a promoter of technological development and a beneficiary of the technological era; more importantly, it has the ability and the obligation to face the social problems brought about by technology, to solve social problems with technology, and to create more beneficial social value. For example, Tencent has proposed and advocates the "Si Ke" principle for AI development, namely that AI should be knowable, controllable, available and reliable, and practices the concept of "Tech for Good" at both the technical and the practical level.


Source: https://www.leiphone.com/news/201910/8QqXBDNUw4RW0YSp.html
