2025-01-19 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
Introduction: with a Q&A session and a review of Sinovation Ventures' 2019 research progress
Leifeng AI Technology Review note: On September 4, NeurIPS 2019, regarded as one of the top conferences in the field of machine learning and neural networks, unveiled its list of accepted papers, and the paper "Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder" (DeepConfuse: a method for generating malicious training samples with an autoencoder) from the Sinovation Ventures AI Institute was accepted. The paper's three authors are Feng Ji (executive dean of the Sinovation Ventures Nanjing International AI Research Institute), Cai Qizhi (researcher at the same institute) and Zhou Zhihua (dean of the School of Artificial Intelligence at Nanjing University).
This paper focuses on the security of present-day artificial intelligence systems. Specifically, it proposes DeepConfuse, an efficient method for generating adversarial training samples: by weakly perturbing the training database, it thoroughly degrades the performance of the learning system trained on it, achieving "data poisoning". The aim of this research is not only to reveal the threat that such AI intrusion or attack techniques pose to system security, but also, through in-depth study of those techniques, to formulate sound defenses against "AI hackers", playing a positive guiding role in advancing the frontier of AI security attack and defense research.
NeurIPS, in full the Conference and Workshop on Neural Information Processing Systems, was founded in 1987 and has a history of more than 30 years, drawing close attention from both academia and industry. The conference is held every December and is hosted by the NIPS Foundation. In the China Computer Federation's ranking of international academic conferences, NeurIPS is a Class A conference in artificial intelligence, and it is one of the best-known annual AI conferences, with tickets often selling out within minutes.
NeurIPS has long been known for prizing paper quality and maintaining a relatively low acceptance rate. This year, submissions reached a new high: 6,743 papers were received and 1,428 were accepted, an acceptance rate of 21.2%.
The paper is not yet in its final form; the camera-ready version will be released through official NeurIPS channels about a month later. Below is an overview of the paper's main contents.
Sinovation Ventures' "data poisoning" paper accepted at the top conference NeurIPS
In recent years, machine learning has continued to grow in popularity and has gradually been applied to a variety of problems across different fields. However, few people realize that machine learning itself is vulnerable, and that its models are not as indestructible as imagined.
For example, during either training (the learning stage) or prediction (the inference stage), a machine learning model can be attacked by an adversary, and the means of attack are varied. The Sinovation Ventures AI Institute has set up a dedicated AI security lab for this purpose and has carried out in-depth evaluation and research on the security of artificial intelligence systems.
The main contribution of "Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder" is DeepConfuse, an advanced method for efficiently generating adversarial training data. By hijacking the training process of a neural network, it teaches a noise generator to add bounded perturbations to the training samples, so that a machine learning model trained on those samples generalizes as poorly as possible on test samples, very neatly achieving "data poisoning".
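The one concrete constraint described above, that the added perturbation must stay bounded so the poisoned samples look normal, can be illustrated with a minimal sketch. The function name, the toy data and the epsilon value below are our own illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def apply_bounded_noise(x, noise, eps=0.3):
    """Add a perturbation to training samples, clipped so every entry
    stays within an L-infinity ball of radius eps, and the result
    stays in the valid pixel range [0, 1]."""
    delta = np.clip(noise, -eps, eps)      # bound the perturbation
    return np.clip(x + delta, 0.0, 1.0)    # keep pixels valid

# toy example: a "clean" image batch and some raw generator output
x = np.full((2, 4), 0.5)
raw_noise = np.array([[1.0, -1.0, 0.1, 0.0]] * 2)
x_poisoned = apply_bounded_noise(x, raw_noise, eps=0.3)
```

However the generator is trained, projecting its output back into the epsilon ball like this is what keeps the poisoned set visually indistinguishable from the clean one.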
"Data poisoning", as the name implies, means the training data is "poisoned". The attack strategy is to interfere with the model's training process and compromise its integrity, causing deviations in the model's subsequent predictions. (Note that "data poisoning" and "adversarial example attacks" are different attacks that exist in different threat scenarios: the former "poisons" the model by modifying its training data, while the latter "deceives" the model by modifying the samples to be tested.)
For example, suppose a company developing robot vision technology wants to train robots to recognize objects, people, vehicles and so on in real scenes, but an intruder quietly tampers with the training data using the method described in the paper. When R&D staff visually inspect the training data, they usually notice nothing abnormal (because the noise that "poisons" the data is hard to spot with the naked eye at the image level), and training proceeds as smoothly as ever. Yet the resulting deep learning model will have greatly degraded generalization ability, and a robot driven by such a model will be completely "ignorant" in real scenes, caught in the awkward situation of being unable to recognize anything. Worse still, attackers can carefully tune the "poisoning" noise so that the trained vision model "deliberately misrecognizes" certain things, such as identifying obstacles as passable paths, or labeling dangerous scenes as safe.
To achieve this, the paper designs DeepConfuse, an autoencoder-style neural network that generates adversarial noise. By observing the training process of a hypothetical classifier, it updates its own weights to produce "toxic" noise that drives the "victim" classifier's generalization as low as possible; this process can be reduced to a non-convex optimization problem with nonlinear equality constraints.
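A hedged sketch of what such a formulation can look like (the notation here is ours, not quoted from the paper): the noise generator g with parameters ξ is chosen to maximize the test loss of a classifier f whose parameters θ are themselves the result of training on the poisoned data, with the perturbation kept inside an ε-ball:

```latex
\max_{\xi} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \left[ \mathcal{L}\big(f_{\theta^{*}(\xi)}(x),\, y\big) \right]
\quad \text{s.t.} \quad
\theta^{*}(\xi) = \arg\min_{\theta} \sum_{i}
  \mathcal{L}\big(f_{\theta}(x_i + g_{\xi}(x_i)),\, y_i\big),
\qquad \lVert g_{\xi}(x) \rVert_{\infty} \le \epsilon
```

The inner arg-min is the nonlinear equality constraint the article mentions: the attacker's objective depends on the outcome of the victim's entire training procedure.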
The experimental data show that on datasets such as MNIST, CIFAR-10 and a reduced version of ImageNet, models trained on "unpoisoned" versus "poisoned" training sets differ greatly in classification accuracy, and the effect is considerable.
At the same time, the experiments show that the adversarial noise generated by this method is universal: it is effective even on non-neural-network models such as random forests and support vector machines. (Blue shows the test generalization performance of models trained on "unpoisoned" training data; orange shows that of models trained on "poisoned" training data.)
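The qualitative effect, that a model fit on poisoned data can look fine during training yet fail on clean test data, can be demonstrated with a deliberately exaggerated toy. The nearest-centroid classifier and the large shift below are our own stand-ins (real poisoning noise is small and learned, not a crude 3-unit shift):

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Per-class mean vectors, a minimal stand-in for a classifier."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# deterministic toy data: class 0 near the origin, class 1 near (4, 4)
X_train = np.array([[0., 0.], [0., 1.], [1., 0.], [4., 4.], [4., 5.], [5., 4.]])
y_train = np.array([0, 0, 0, 1, 1, 1])
X_test = np.array([[0.5, 0.5], [4.5, 4.5]])
y_test = np.array([0, 1])

# "poison": nudge every training point toward the other class's region
shift = np.where(y_train[:, None] == 0, 3.0, -3.0)
X_poisoned = X_train + shift

clean_acc = (nearest_centroid_predict(
    nearest_centroid_fit(X_train, y_train), X_test) == y_test).mean()
poisoned_acc = (nearest_centroid_predict(
    nearest_centroid_fit(X_poisoned, y_train), X_test) == y_test).mean()
```

Because the poisoned training set still carries its original labels, training runs without errors; only evaluation on clean test points reveals the damage.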
Performance on the CIFAR and ImageNet datasets is similar, demonstrating that the adversarial training samples generated by this method transfer well across different network architectures.
In addition, the proposed method can be effectively extended to the label-specific case, in which the attacker wants the model to misclassify according to pre-specified rules, such as misclassifying "cat" as "dog", so that the model errs in exactly the direction the attacker has planned.
For example, the figure below shows confusion matrices on the MNIST dataset under different settings: a clean training set, a poisoned training set without specific labels, and a poisoned training set with specific labels.
The experimental results strongly demonstrate the effectiveness of label-specific poisoning of training sets, and the settings could be modified in the future to achieve more targeted tasks.
Research on data "poisoning" is not only about revealing the threat such AI intrusion or attack techniques pose to system security; more importantly, only by studying these techniques in depth can we formulate sound defenses against "AI hackers". As AI algorithms and systems spread through fields tied to the national economy and people's livelihood, researchers must thoroughly master the cutting edge of AI security attack and defense, and develop the most effective protections for self-driving, AI-assisted healthcare, AI-assisted investment and other fields involving life safety and financial security.
Federated learning sets new goals for AI security R&D
Beyond security, data privacy in AI applications is another key concern of the Sinovation Ventures AI security lab. In recent years, with the rapid development of AI technology and growing demands for privacy protection and data security across industries, federated learning has emerged and begun to attract increasing attention from academia and industry.
Specifically, a federated learning system is a distributed machine learning framework with multiple participants. Each participant does not need to share its own training data with the other parties, yet it can still use the information they provide to train a better joint model. In other words, all parties can share the knowledge generated from their data without sharing the data itself, achieving a win-win outcome.
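One widely used recipe fitting this description is federated averaging: each participant trains locally and only model parameters, never raw data, are sent to a coordinator, which combines them weighted by local data size. The article does not name a specific algorithm, so this is a generic sketch:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of per-client model parameters: clients share
    only their trained weights, keeping their training data private."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# three hypothetical participants with different amounts of local data
w1 = np.array([1.0, 2.0])
w2 = np.array([3.0, 4.0])
w3 = np.array([5.0, 6.0])
global_w = fedavg([w1, w2, w3], client_sizes=[10, 10, 20])
```

The averaged `global_w` is then broadcast back to all participants for the next local training round; across many rounds, the joint model benefits from everyone's data without that data ever leaving its owner.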
The Sinovation Ventures AI Institute is very optimistic about the application potential of federated learning. In March this year, Feng Ji, author of the "data poisoning" paper and executive dean of the Sinovation Ventures Nanjing International AI Research Institute, was elected vice chair of the IEEE federated learning standard-setting committee on behalf of Sinovation Ventures, helping drive the first international standard in the field of AI collaboration and big data security. Sinovation Ventures will thus be a direct participant in the "legislation" for federated learning.
Q&A on the "data poisoning" paper
On the morning of September 5, the Sinovation Ventures AI Institute held a question-and-answer session to address questions about the "data poisoning" paper from media outlets including Leifeng's AI Technology Review. Wang Yonggang, CTO of Sinovation Ventures and executive dean of the AI Institute, and Feng Ji, lead author of the paper and executive dean of the Sinovation Ventures Nanjing International AI Research Institute, answered questions online.
Q: What is the purpose of the "data poisoning" study?
Wang Yonggang: It is similar to network security engineers studying hacker intrusion and attack techniques. Only with a comprehensive, thorough understanding of attack techniques can we formulate effective preventive measures and develop corresponding security standards and tools.
Feng Ji: The aim is a technical assessment of AI system security: assuming the database has been maliciously tampered with, how badly does the corresponding system degrade? Another purpose of this work is to call attention to the issue.
Q: This study assumes an attacker can generate adversarial training samples against a hypothetical model, and that these samples are also significantly effective against other models. In other words, if this method were actually used, I would not even need to know what model my target uses in order to do harm. Is my understanding correct?
Feng Ji: Yes. The poisoner does not need to know what model the other party uses; write access to the database is enough to carry out the attack.
Q: Are there other effective means of protecting AI algorithms and systems?
Wang Yonggang: Attack and defense for AI systems is at a very early stage of research and development. Compared with the relatively mature methodologies, algorithms, tools and platforms of the traditional security field, AI security attack and defense is still exploratory. For the current mainstream attack methods, such as adversarial example attacks and data poisoning attacks, there are some preventive ideas, but both the attack techniques and the defensive techniques are still evolving.
Feng Ji: Protection technology is currently in its infancy. As in network security, there is no "vaccine" that cures all diseases. For AI companies, we suggest establishing a dedicated security team to protect their systems on all fronts.
Q: Have you actually used this method to find vulnerabilities in areas such as self-driving systems (as Keen Security Lab did in its successful attack on Tesla's system)?
Wang Yonggang: That would not be very difficult; in fact, many highly skilled research institutions and laboratories could produce similar results. It is fair to say that many of today's self-driving systems use AI algorithms whose design and implementation pay little attention to security. At the same time, new AI attack methods and threat forms keep emerging: black-box attacks that forge traffic signs externally, white-box attacks on specific models, intrusion-style "poisoning" that pollutes data, and there will be more and more. My feeling is that the industry's overall awareness of and attention to AI security is insufficient. If large numbers of AI systems tied to personal or property safety are put into operation in this situation, many security incidents will surface. We suggest conducting thorough research on AI security protection as soon as possible and investing sufficient resources in developing AI security tools and technologies.
Feng Ji: There have been attacks against driverless vehicles, but what has been published so far mainly involves generating adversarial examples. The data poisoning work has only been public for 24 hours, and no application has been seen yet. We must remind readers that this technique is very destructive, and we ask them not to engage in illegal or criminal activities.
Q: "Data poisoning" is a wake-up call for AI security. AI technology is already applied in many fields. Is this application far ahead of security research on AI?
Feng Ji: Yes. As with any new technology, application currently runs ahead of security. We believe that both AI security and AI privacy protection will receive even more attention in the future than traditional computer security does.
Q: Are there currently many security incidents targeting artificial intelligence systems?
Wang Yonggang: The recent case of using AI-synthesized voice to commit fraud was a relatively serious AI security incident. AI technology will inevitably be used in core business areas, even those involving property or life safety (such as healthcare, autonomous driving and finance). In the future, as AI attack techniques develop, related incidents will become more and more common.
Feng Ji: At present, security incidents are not as common as viruses are in traditional computer systems, but there is reason to believe that over time this will become an industry of its own. In addition, laws on security and data privacy will gradually emerge, such as the European Union's GDPR.
Q: What impact will AI security have on the deployment and development of the technology?
Feng Ji: I believe security and privacy guarantees for AI systems are a necessary stage in the development of artificial intelligence. It is similar to the early days of computer networks and systems: there were few viruses then, but over time a whole AI security industry will be born. We believe the threats to AI security are far more serious than today's computer viruses.
Q: How does domestic research on AI security compare with international research, and where are the gaps?
Wang Yonggang: At the level of AI security theory, there is no large gap between China and the rest of the world. Domestically, Zhou Zhihua's team at Nanjing University has highly cutting-edge results in the core theory of machine learning robustness and safety.
At the level of engineering application, both China and the world are at a very early stage. In terms of deployed systems, giants such as Google and Facebook have a certain first-mover advantage in applying AI security technologies in engineering, products and systems. For example, Google has applied federated learning and related technologies to protect data security in several client and server products. However, as China pays increasing attention to AI security, domestic application-level R&D should gradually catch up.
Feng Ji: Research on AI security is still very new, and everyone is almost on the same starting line; in the development of the most cutting-edge techniques, China and the United States are roughly even. We believe security is no small matter and deserves national attention.
Q: What do you think of the recently controversial ZAO? Where does Sinovation Ventures think the boundary of artificial intelligence security lies?
Wang Yonggang: Let's not discuss ZAO specifically. In essence, this kind of issue is the broader problem of how to protect intellectual property and user privacy while developing and using AI technology. AI development today must consider legal and ethical compliance and must not cross users' bottom lines, just as AI in Europe must comply with the GDPR. Technologies for AI security attack and defense can provide solid technical support for such compliance, but that is only the technical side. In practice, the security of artificial intelligence must be maintained through technical means, legal means, ethical means, industry norms and more.
Feng Ji: Security threats arising from users' private data will receive more and more attention in the AI era, and the threats will keep multiplying; ZAO is one example. Federated learning is in fact a solution to this class of problems. Similar to the "white hats" of the security field, we call for more AI security "white hats" to jointly evaluate and analyze the security vulnerabilities of artificial intelligence systems.
Q: Is it possible to establish industry safety standards in the field of artificial intelligence?
Wang Yonggang: Yes. The field of artificial intelligence not only can but should establish a series of industry security standards to regulate the use of AI technology. These may include: evaluation standards for the robustness and security of AI systems, data security standards for data exchange between AI systems, privacy protection standards for AI systems handling private user data, mandatory industry standards for AI systems involving personal safety, and so on. The IEEE federated learning standards committee, in which the Sinovation Ventures AI Institute participates, is working on exactly such a standard for AI data and privacy security.
Feng Ji: This is being done now. It includes federated learning for protecting user data privacy, the first international standard on AI collaboration launched by the IEEE, for which Sinovation Ventures is responsible for the security assessment portion.
Sinovation Ventures' AI Institute papers accepted at multiple top international venues
With its distinctive VC+AI model (combining venture capital with AI R&D), Sinovation Ventures is committed to bridging cutting-edge research and AI commercialization. In 2019 it carried out extensive research collaborations, and papers co-authored with international research institutions stood out at several top conferences. Besides the "data poisoning" paper accepted at NeurIPS introduced above, another eight papers were accepted across five top academic venues.
1. Two papers accepted at ICCV, a top conference in computer vision
ICCV, in full the IEEE International Conference on Computer Vision, is sponsored by the IEEE and, together with the Conference on Computer Vision and Pattern Recognition (CVPR) and the European Conference on Computer Vision (ECCV), is regarded as one of the three top conferences in computer vision.
This year, two papers by the Sinovation Ventures AI Institute in cooperation with the University of California, Berkeley and Tsinghua University were accepted.
Disentangling Propagation and Generation for Video Prediction
This paper addresses the task of video prediction: given the first few frames of a video, predict the next frame or frames.
Dynamic scenes in video fall into two cases. The first is relatively smooth motion, which can be predicted from previous frames with relatively simple methods. The second involves occlusion, where the next frame is usually hard to obtain by direct extrapolation. Previous work on video prediction either only extrapolates from previous frames or has a generative model produce every pixel.
This paper proposes a combined model that decouples video prediction into two sub-tasks: motion-dependent image propagation and motion-independent image generation, accomplished by optical flow prediction and image generation respectively. Finally, a confidence-based image composition operator is proposed to fuse the two.
Experiments show that the proposed method produces more accurate occluded regions and sharper, more realistic images in both animated and real scenes.
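A confidence-based composition of this kind can plausibly take the form of a per-pixel blend; the sketch below is our own simplified guess at such an operator, not the paper's actual implementation (which learns both the confidence map and the two branches):

```python
import numpy as np

def fuse_frames(warped, generated, confidence):
    """Blend an optical-flow-warped frame with a generated frame using
    a per-pixel confidence map in [0, 1]: high confidence keeps the
    propagated pixel, low confidence falls back to the generated one
    (e.g. in occluded regions where warping has nothing to copy)."""
    c = np.clip(confidence, 0.0, 1.0)
    return c * warped + (1.0 - c) * generated

warped = np.array([[1.0, 1.0], [1.0, 1.0]])
generated = np.array([[0.0, 0.0], [0.0, 0.0]])
conf = np.array([[1.0, 0.5], [0.0, 1.0]])  # 0 marks an occluded pixel
fused = fuse_frames(warped, generated, conf)
```

The design intuition matches the decoupling described above: copying pixels preserves sharpness where motion is smooth, while generation fills in only the regions that propagation cannot reach.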
Joint Monocular 3D Vehicle Detection and Tracking
This paper proposes a new joint framework for online 3D vehicle detection and tracking, which can not only associate vehicle detections over time but also estimate 3D vehicle information from the 2D motion information obtained by a monocular camera.
On this basis, the paper also proposes a depth-based 3D bounding-box matching method and uses 3D trajectory prediction to re-identify occluded targets. This allows 3D information to be used for more robust trajectory tracking.
In addition, a motion prediction model based on a long short-term memory (LSTM) network is designed, which predicts long-term motion more accurately.
Experiments on simulated data and on the KITTI and Argoverse datasets verify the robustness of the method. On the Argoverse dataset, for objects within 30 m, the method's performance using only visual input is significantly better than that of baseline methods based on lidar input.
2. One paper accepted at IROS, a top international conference in robotics and automation
IROS, in full the International Conference on Intelligent Robots and Systems, is one of the two most influential academic conferences in robotics and automation.
IROS has been held annually since 1988, the early days of modern robotics. Every year, experts from the world's top robotics research institutions gather there to discuss and showcase the most cutting-edge technologies in the robotics industry.
This year, one paper by the Sinovation Ventures AI Institute in cooperation with the University of California, Berkeley and other institutions was accepted.
Monocular Plan View Networks for Autonomous Driving
In general, convolutional neural networks on monocular video can effectively capture the spatial information in an image but have difficulty exploiting depth information effectively, which remains one of the hurdles the industry must overcome.
Targeting end-to-end control learning, this paper proposes a perspective transformation of the current observation, called the plan view, which converts the current viewpoint into a bird's-eye view. Specifically, in the autonomous driving setting, pedestrians and vehicles are detected in the first-person view and projected into a top-down view.
The paper argues that this hand-designed representation provides an abstraction of environmental information that lets the neural network infer the positions and orientations of objects more effectively.
Experimental results on the GTA 5 simulator show that a neural network taking both the plan view and the front view as input has a collision rate an order of magnitude lower than a baseline using the front view alone, and, compared with a previous detection-based method, the proposed method halves the collision rate.
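The projection step can be pictured as rasterizing detected objects into a top-down grid around the ego vehicle. Everything below (the grid size, cell resolution, and the `(lateral, depth)` detection format) is a hypothetical simplification for illustration, not the paper's pipeline:

```python
import numpy as np

def to_plan_view(detections, grid_size=8, cell_m=2.0):
    """Rasterize detected objects, given as (lateral_m, depth_m)
    offsets from the ego vehicle, into a top-down occupancy grid.
    Row 0 is farthest ahead; the ego car sits at the bottom center."""
    grid = np.zeros((grid_size, grid_size))
    for lateral, depth in detections:
        col = int(grid_size / 2 + lateral / cell_m)
        row = int(grid_size - 1 - depth / cell_m)
        if 0 <= row < grid_size and 0 <= col < grid_size:
            grid[row, col] = 1.0
    return grid

# two hypothetical detections: one 4 m straight ahead, one 8 m ahead
# and 2 m to the right
grid = to_plan_view([(0.0, 4.0), (2.0, 8.0)])
```

In such a grid, distance ahead becomes vertical position, so the network can read off object layout and free space directly instead of inferring depth from perspective cues.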
3. Three papers accepted at EMNLP, a top conference in natural language processing
EMNLP, in full the Conference on Empirical Methods in Natural Language Processing, is a top conference in the field of natural language processing.
This year, three papers by the Sinovation Ventures AI Institute in cooperation with the Hong Kong University of Science and Technology, the Institute of Computing Technology of the Chinese Academy of Sciences, Tsinghua University and the University of the Chinese Academy of Sciences were accepted.
Multiplex Word Embeddings for Selectional Preference Acquisition
The main work of this paper was carried out jointly with the Hong Kong University of Science and Technology.
Traditional word embedding models usually use static vectors to represent co-occurrence relationships between words, but such models cannot capture the different relationships words have in different contexts. For example, a static vector cannot effectively distinguish whether "food" should be the subject or the object of "eat".
To solve this problem, the paper proposes a multiplex word embedding model. For each word, the embedding consists of two parts: a principal vector and relational vectors. The principal vector represents the word's overall semantics, while the relational vectors express the word's characteristics under different relations. Each word's final vector is obtained by fusing the two.
To use this multi-part representation efficiently, the proposed model also includes a vector compression module that compresses the vectors to one tenth of their original size without loss of effectiveness.
The model is shown to be effective in many experiments, especially in scenarios that require syntactic information.
Text representation has long been an important foundation and frontier of natural language understanding in the deep learning era. In recent years, the widespread use of pre-trained models and their excellent performance on most tasks have shown that they can better express a text's semantics in a specific context. However, words, as the basic units of linguistic expression, remain an important basis for semantic research and understanding; especially in complex scenarios requiring syntax and various relational information, pre-trained models cannot express lexical semantics well.
This paper therefore continues the line of traditional word embedding research, adds relational information to the embedding process, shows words' different representations in different contexts, and proves its effectiveness on a series of tasks. At the same time, with the compression module, the embeddings can be reduced to one tenth of their original size, greatly reducing the resource requirements of any environment that uses them.
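The "principal vector plus relational vector" idea can be sketched in a few lines. The additive fusion rule, the `alpha` weight, and the toy vectors below are our own assumptions; the paper's fusion may differ, but the point stands that one word yields different vectors under different grammatical relations:

```python
import numpy as np

def relation_embedding(principal, relation_offsets, relation, alpha=0.5):
    """Compose a word's relation-specific vector from a shared
    principal vector plus a per-relation offset vector."""
    return principal + alpha * relation_offsets[relation]

principal_food = np.array([1.0, 0.0, 1.0])  # overall semantics of "food"
offsets = {
    "subject_of": np.array([0.0, 2.0, 0.0]),   # "food" acting as a subject
    "object_of": np.array([0.0, -2.0, 0.0]),   # "food" acting as an object
}
as_subject = relation_embedding(principal_food, offsets, "subject_of")
as_object = relation_embedding(principal_food, offsets, "object_of")
```

A downstream model scoring "eat → food" under the object relation would use `as_object`, so "food" can be a plausible object of "eat" while remaining an implausible subject, which a single static vector cannot express.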
What You See is What You Get: Visual Pronoun Coreference Resolution in Dialogues
The main work of this paper was carried out jointly with the Hong Kong University of Science and Technology and Tsinghua University.
In practical language use, linking a pronoun to the object it refers to requires various kinds of knowledge. For example, when two people are talking and both can see an object, they may refer to it directly with a pronoun (such as "it") without first describing it in words.
This phenomenon poses a great challenge to existing coreference resolution models. This paper therefore proposes a new model (VisCoref) and a matching dataset (VisPro) to study how to integrate pronoun coreference resolution with visual information.
For the dataset, the authors randomly selected 5,000 conversations from a visually grounded dialogue corpus and invited annotators on a crowdsourcing platform to label the relationships between pronouns and the noun phrases they refer to; after a series of cleaning steps, high-quality annotations were obtained. For the model, to integrate the textual information in the dialogue with the information in the image, features are first extracted from the text and the image to obtain corresponding vector representations. These vectors are then fused with the extracted image information via an attention mechanism, and the results are passed through a fully connected neural network to produce visual and textual scores for predicting coreference relations.
The study shows that adding visual information effectively helps the task of pronoun coreference resolution in dialogue.
Multimodality has long been a research hotspot across the subfields of artificial intelligence. Especially in human communication scenarios (dialogue), many of the signals used and produced are not just text; visual information plays an important part. Coreference resolution, an important task in natural language understanding, likewise depends heavily on visual signals.
To study this problem, the paper is the first to jointly model visual signals together with the pronouns and referenced nouns in coreference resolution, adding visual information to the classic coreference task and demonstrating its effectiveness. It also constructs a visually grounded coreference resolution dataset, providing a benchmark to support future research in academia and industry.
Reading Like HER: Human Reading Inspired Extractive Summarization
The main work of this thesis is completed jointly with the Institute of Computing of the Chinese Academy of Sciences, and this study re-examines the problem of abstract summaries of long documents.
Generally speaking, human text summarization can be divided into two stages: 1) skimming the text to obtain a rough impression of its content, and 2) careful reading to select the key sentences that form the summary.
This paper proposes a new extractive summarization method that simulates these two stages: document extractive summarization is cast as a contextual multi-armed bandit problem, which is then solved with a policy gradient method.
First, a convolutional neural network encodes the gist of the document to simulate the skimming stage; then, a decision policy with an adaptive stopping mechanism simulates the careful-reading stage.
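The two-stage process above can be illustrated with a toy contextual-bandit sketch. Everything concrete here is an assumption for illustration only: mean pooling stands in for the paper's convolutional gist encoder, and a hand-made reward stands in for ROUGE. The sketch shows only the general pattern of a softmax policy with a STOP action (adaptive stopping) trained by REINFORCE, a basic policy gradient method.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy "document": one feature vector per sentence (dimensions are assumed).
num_sents, d = 6, 4
sent_feats = rng.standard_normal((num_sents, d))

def doc_context(feats):
    # Stand-in for the paper's convolutional gist encoder: mean pooling.
    return feats.mean(axis=0)

def reward(chosen):
    # Toy reward standing in for ROUGE: sentences 1 and 3 are "key",
    # with a small length penalty to encourage stopping early.
    return len({1, 3} & set(chosen)) - 0.2 * len(chosen)

# Policy weights: one row per sentence plus one row for the STOP action.
theta = np.zeros((num_sents + 1, d))

def reinforce_step(theta, feats, lr=0.1, max_steps=4):
    """Sample one selection episode, then apply a REINFORCE update in place."""
    ctx = doc_context(feats)
    chosen, trace = [], []
    for _ in range(max_steps):
        probs = softmax(theta @ ctx)          # distribution over sentences + STOP
        a = int(rng.choice(len(probs), p=probs))
        trace.append((a, probs))
        if a == num_sents:                    # adaptive stopping
            break
        chosen.append(a)                      # repeats allowed, for simplicity
    r = reward(chosen)
    for a, probs in trace:                    # theta += lr * r * grad(log pi(a))
        grad = -probs[:, None] * ctx[None, :]
        grad[a] += ctx
        theta += lr * r * grad
    return r, chosen

for _ in range(300):                          # a few hundred toy updates
    reinforce_step(theta, sent_feats)
```

The bandit view matches the structure described above: the document context is the bandit's context, each sentence (plus STOP) is an arm, and the reward for the whole selection drives the policy gradient update.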
Experiments on the CNN and DailyMail datasets show that the proposed method not only outperforms the current best extractive summarization systems on ROUGE-1, ROUGE-2, and ROUGE-L, but can also extract high-quality summaries of varying lengths.
Simulating human behavior when performing natural language processing tasks has long been a goal of the NLP and AI research communities, especially for text summarization, an advanced and complex task even for humans that demands strong natural language understanding and text organization skills.
This paper makes a useful attempt in this direction: it models the reading comprehension process in two stages analogous to human reading, and shows that this yields better summary extraction.
In addition, the paper "sPortfolio: Stratified Visual Analysis of Stock Portfolios" was accepted by IEEE TVCG, a top international journal in computer graphics and visualization; it focuses on the visual analysis of investment portfolios and multi-factor models in financial markets. The paper "Monoxide: Scale out Blockchains with Asynchronous Consensus Zones" was accepted by NSDI, a top academic conference on computer networking. This marks the first time the international mainstream academic community has recognized research on blockchain scaling schemes, and it was the only blockchain-related paper accepted by the conference this year.
Innovation Works' Distinctive "Research Drives Business" Approach
The most distinctive feature of Innovation Works' "VC + AI" model is that, through extensive research cooperation and its own research team, the AI Engineering Institute of Innovation Works can closely track the cutting-edge research directions most likely to be converted into future commercial value. This "research drives business" approach strives to identify academic research with future commercial value as early as possible and then actively cooperate with the relevant researchers, on the premise of protecting the intellectual property and commercial interests of all parties. Meanwhile, the product R & D team of the AI Engineering Institute explores possible product directions and prototypes for the technology in different business scenarios, and the business development team pushes those products into real-world trials, which in turn gives Innovation Works' venture capital team valuable opportunities to identify and invest in high-value tracks early.
"Research drives business" is not simply about finding promising research projects; it integrates multi-dimensional work such as technology tracking, talent tracking, laboratory cooperation, intellectual property cooperation, technology transfer, rapid prototype iteration, business development, and financial investment into a unified resource system, guided by market value, to connect academic research and business practice in a planned way.
At present, high technology represented by AI is entering a period of deep development in which commercial deployment takes priority, and the industry urgently needs an organic combination of cutting-edge research and real business scenarios. With its rich experience in venture capital and the technical talent accumulated in building the AI Engineering Institute, Innovation Works is especially well placed to serve as a bridge between research and commercialization.
Innovation Works established its Artificial Intelligence Engineering Institute in September 2016, planning R & D directions and building teams in a "research + engineering laboratory" mode. It currently runs R & D laboratories for cutting-edge technologies and applications such as medical AI, robotics, machine learning theory, computational finance, and computer perception, and has successively set up the Nanjing International Artificial Intelligence Research Institute and the Greater Bay Area Artificial Intelligence Research Institute. The institute is dedicated to training high-end AI research and engineering talent, developing cutting-edge AI technologies with machine learning at the core, and working with various industries to provide first-class products and solutions for industry scenarios.
Innovation Works carries out extensive research cooperation with well-known research institutions at home and abroad. For example, on March 20 this year, the Hong Kong University of Science and Technology and Innovation Works announced the establishment of a joint Computer Perception and Intelligent Control Lab. Innovation Works also actively participates in setting related international technical standards. For example, in August this year, the 28th International Joint Conference on Artificial Intelligence (IJCAI) was held in Macau, China, during which the third working group meeting of the IEEE P3652.1 (federated learning infrastructure and applications) standard took place. The IEEE federated learning standard, initiated by WeBank with the participation of dozens of international and domestic technology companies including Innovation Works, is the first international standards project for frameworks of collaborative AI technology. Innovation Works' research team is deeply involved in drafting the federated learning standard, hoping to contribute to the security and usability of AI technology in real-world settings and to the protection of data security and user privacy.
Source: https://www.leiphone.com/news/201909/cf5ViXGjSskcxGzD.html