2020-01-08 18:00:29
Selected from VentureBeat
Compiled by Machine Heart
Contributors: devil, Yiming
How do the most outstanding minds in AI sum up the technological progress of 2019 and predict the trends of 2020? This article presents the views of Soumith Chintala, Celeste Kidd, Jeff Dean, and others.
Artificial intelligence is not going to change the world someday; it is changing it already. At the start of a new year and a new decade, VentureBeat interviewed some of the best minds in artificial intelligence to review the field's progress in 2019 and look ahead to machine learning in 2020. The respondents were Soumith Chintala, the father of PyTorch; Celeste Kidd, a professor at the University of California, Berkeley; Jeff Dean, head of Google AI; Anima Anandkumar, head of machine learning research at Nvidia; and Dario Gil, director of IBM Research.
Some of them predicted progress in sub-areas such as semi-supervised learning and neural-symbolic methods, and almost all agreed that Transformer-based natural language models made great strides in 2019 and that debate over controversial technologies such as face recognition will continue. They also expect the AI field to stop judging models on accuracy alone.
Soumith Chintala, the father of PyTorch
Soumith Chintala: director, chief engineer, and creator of PyTorch
By any measure, PyTorch is now the most popular machine learning framework in the world. PyTorch descends from the Torch open-source framework, first released in 2002; PyTorch's own initial release came in 2016, and its ecosystem of extensions and libraries has grown steadily since.
At the PyTorch Developer Conference in fall 2019, Facebook released PyTorch 1.3, which added quantization and TPU support. Also released at the conference were Captum, an interpretability tool for deep learning, and PyTorch Mobile. Alongside these are the robotics framework PyRobot and the code-sharing hub PyTorch Hub, which encourages machine learning practitioners to embrace reproducibility.
Speaking at the conference, Chintala said that 2019 saw hardly any breakthrough in machine learning.
"I don't think there has been any breakthrough since the Transformer. CNNs had their moment in the spotlight when they won the ImageNet competition in 2012, and 2017 was the Transformer's moment. This is my personal opinion," he said.
He regards DeepMind's AlphaGo as a breakthrough contribution to reinforcement learning, but notes that its results are hard to reproduce on practical tasks in the real world.
Chintala also believes that the evolution of machine learning frameworks such as PyTorch and TensorFlow has changed how researchers explore new ideas and do research: "These frameworks have made researchers an order of magnitude or two faster than before, and from that standpoint this is a huge breakthrough."
In 2019, both Google's and Facebook's open-source frameworks introduced quantization to speed up model training. Chintala predicts an "explosion" in 2020 in the importance and adoption of tools such as PyTorch's JIT compiler and compilers for neural network hardware accelerators, such as Glow.
"From PyTorch and TensorFlow you can see the frameworks converging. The reason quantization and a host of other lower-level features are appearing is that the next front in the framework battle is the compiler: XLA (TensorFlow), TVM (Tianqi Chen's team), Glow (PyTorch). A lot of innovation is coming. Over the next few years you will see smarter quantization, better integration, more efficient GPU usage, and automatic compilation for new hardware."
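As a concrete illustration of the quantization idea discussed above, here is a minimal pure-Python sketch of symmetric post-training int8 quantization. It is illustrative only, not how PyTorch, TensorFlow, or Glow actually implement it, and the function names are invented:

```python
def quantize_int8(weights):
    """Map floats into [-127, 127] integers with one shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the int8 representation."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.08, 0.9, -0.33]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# int8 storage is 4x smaller than float32, at the cost of a bounded
# rounding error of at most half the quantization step
assert max_err <= scale / 2 + 1e-9
```

Real frameworks add per-channel scales, zero-points for asymmetric ranges, and calibration over representative data, but the size-versus-precision trade-off is the same.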
Like most respondents in this article, Chintala predicts that in 2020 the AI community will measure model performance with more metrics than accuracy alone, turning its attention to factors such as the electricity needed to create a model, how to explain outputs to humans, and how to make AI better reflect the kind of society people want to build.
"Looking back over the past five or six years, we focused only on accuracy and raw numbers, like 'Is Nvidia's model more accurate, or Facebook's?'" Chintala said. "I think in 2020 we will think in more complex ways: what does 3% more accuracy matter if the model is not explainable (or fails to meet other criteria)?"
Celeste Kidd, professor at the University of California, Berkeley
Celeste Kidd, a developmental psychologist at the University of California, Berkeley.
Celeste Kidd directs the Kidd Lab at the University of California, Berkeley, where her team studies how children learn. Their insights can help creators of neural networks who are trying to train models in ways resembling how children are raised.
"Human babies don't get labeled datasets, yet they learn well," Kidd said. "The key is that we need to understand the principle behind that."
She believes that a comprehensive analysis of infants' behavior does show evidence that they understand some things, but they are far from perfect learners; the claim that babies automatically learn vast amounts overstates their abilities.
"Babies are great, but they also make a lot of mistakes. I see people casually making comparisons that idealize infant behavior. I think people will pay more attention to connecting current research with future research goals."
In AI, the term "black box" has been around for years, often wielded to criticize the lack of interpretability of neural networks. But Kidd believes 2020 may mark the end of that view of neural networks.
"The black box idea is a myth. The brain is also a black box, and we have made great progress in understanding how the brain works."
In coming to reject the "black box" characterization, Kidd drew on research by Aude Oliva, executive director of the MIT-IBM Watson AI Lab.
"We talked about it at the time. I said the system was a black box, and she pushed back, saying that of course it isn't. Of course you can take it apart, see how it works, and run experiments on it, just as we do to understand cognitive processes."
Last month, Kidd delivered the opening keynote at NeurIPS 2019. Her talk focused on how the human brain forms and holds on to beliefs, on attention systems, and on Bayesian statistics.
She noted how content recommendation systems can manipulate human beliefs: systems designed to maximize user engagement have a significant influence on how people form ideas and opinions.
In 2020, she would like more people to recognize the real-life impact of technological tools and technical decisions, and to reject the view that tool creators bear no responsibility for what users do with them.
"I hear too many people defend themselves by saying, 'I'm not a guardian.' I think more people must realize that this stance is dishonest."
"As members of society, and especially as the people who develop these tools, we need to face up to the responsibilities that come with them."
Jeff Dean, head of Google AI
Jeff Dean has worked at Google for 20 years and has led Google AI for nearly two. He was an architect of many of Google's early search and distributed systems algorithms and an early member of Google Brain.
At NeurIPS 2019, Jeff Dean gave two talks: one on using machine learning to design ASIC semiconductors (ML for Systems) and one on how the AI community can help address climate change (Tackling Climate Change with ML). He considers the latter one of the most important issues of our time. In the climate talk, Dean discussed how AI can become a zero-carbon industry and how AI can be used to help change human behavior.
Speaking about his expectations for 2020, Dean said he would like to see progress in multimodal learning, which trains on data spanning multiple media types, and in multitask learning, which lets one network complete multiple tasks from a single training run.
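The multitask idea can be sketched in a few lines: one shared encoder feeds several task-specific heads, so a single forward pass serves multiple objectives. This toy NumPy example is illustrative only; the names and sizes are invented, and training is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
W_shared = rng.normal(size=(16, 8))  # shared encoder weights
W_cls = rng.normal(size=(8, 3))      # head for a 3-class task
W_reg = rng.normal(size=(8, 1))      # head for a regression task

def forward(x):
    h = np.maximum(x @ W_shared, 0.0)  # shared ReLU features
    return h @ W_cls, h @ W_reg        # one pass, two task outputs

x = rng.normal(size=(4, 16))
logits, value = forward(x)
assert logits.shape == (4, 3) and value.shape == (4, 1)
```

In practice the shared encoder is trained on a weighted sum of the task losses, so each task regularizes the representation the others use.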
Without question, one of the most significant machine learning trends of 2019 was the growth of Transformer-based natural language models (which Chintala also called one of the biggest AI breakthroughs of recent years). Google open-sourced the Transformer-based model BERT in 2018, and many of 2019's top-performing models, such as Google's XLNet, Microsoft's MT-DNN, and Facebook's RoBERTa, build on the Transformer. In addition, a Google spokesperson told VentureBeat that XLNet 2 would be released at the end of this month.
Of the Transformer's progress, Jeff Dean said, "Transformer-based machine learning models can perform more complex NLP tasks than before; from that standpoint, research in this field has been fruitful." But he added that there is still room to grow: "We still want models that grasp more context. Models like BERT handle a context of a few hundred words well, but not a context of 10,000 words. That is an interesting research direction."
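Dean's point about context length has a simple arithmetic root: standard self-attention builds a score matrix over every pair of tokens, so memory grows with the square of the sequence length. A back-of-the-envelope sketch (the head count and byte size are illustrative, BERT-base-like numbers):

```python
def attention_matrix_bytes(n_tokens, n_heads=12, bytes_per_float=4):
    """Memory for one layer's attention score matrices: heads * n * n floats."""
    return n_heads * n_tokens * n_tokens * bytes_per_float

short = attention_matrix_bytes(512)     # a BERT-style context window
long = attention_matrix_bytes(10_000)   # the longer context Dean mentions
# the cost grows with the square of the sequence length
assert long / short == (10_000 / 512) ** 2   # ~381x more memory
print(f"{short / 2**20:.0f} MiB vs {long / 2**30:.2f} GiB per layer")
```

This quadratic blow-up is why longer-context research often looks for sparse or otherwise restricted attention patterns rather than simply scaling up.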
Dean says he wants the community to focus less on marginal state-of-the-art gains and more on building more robust models.
Google AI will also push new initiatives such as Everyday Robot, an internal project launched in November 2019 that aims to create robots that can perform common tasks at home and at work.
Anima Anandkumar, head of machine learning research at Nvidia
Nvidia's AI research spans many areas, from federated learning for healthcare to autonomous driving, supercomputing, and graphics.
For Anandkumar, who leads machine learning research at Nvidia, one of 2019's priorities was simulation frameworks for reinforcement learning, which are becoming steadily more popular and mature.
In 2019, Nvidia advanced its autonomous driving platform Drive and robot simulator Isaac, and produced models and GANs that generate synthetic data from simulations.
AI models such as StyleGAN and GauGAN were in the limelight last year, and last month Nvidia released StyleGAN2. GANs are a technology that "blurs the boundary between the real and the virtual," and Anandkumar believes they can help with challenges the AI community faces, such as robotic grasping and autonomous driving.
Anandkumar predicts new progress in 2020 on iterative algorithms, self-supervision, and self-training methods, in which a model improves itself by training on unlabeled data.
"I think iterative algorithms are the future, because if you only run a feedforward network, robustness can be a problem. If you run multiple iterations, adapting them to the data type or accuracy requirements, the chances of reaching your goal rise greatly."
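The self-training loop Anandkumar describes can be sketched with a tiny nearest-centroid classifier: the model pseudo-labels the unlabeled points it is confident about, absorbs them into its training set, and repeats. This is an illustrative sketch, not Nvidia's method; the confidence rule and all names are invented:

```python
import math

def self_train(labeled, unlabeled, threshold=0.9, rounds=3):
    """labeled: list of ((x, y), class) pairs; unlabeled: list of (x, y)."""
    data, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        # re-fit: centroid of each class under the current labels
        cents = {}
        for c in (0, 1):
            pts = [p for p, lab in data if lab == c]
            cents[c] = (sum(x for x, _ in pts) / len(pts),
                        sum(y for _, y in pts) / len(pts))
        rest = []
        for p in pool:
            d = {c: math.dist(p, cents[c]) for c in (0, 1)}
            conf = 1.0 / (1.0 + math.exp(-abs(d[0] - d[1])))
            if conf >= threshold:          # confident: absorb pseudo-label
                data.append((p, min(d, key=d.get)))
            else:                          # unsure: leave for a later round
                rest.append(p)
        pool = rest
    return data

labeled = [((0.0, 0.0), 0), ((0.4, 0.2), 0), ((5.0, 5.0), 1), ((5.2, 4.8), 1)]
unlabeled = [(0.1, 0.3), (4.9, 5.1), (0.2, 0.1)]
grown = self_train(labeled, unlabeled)
assert len(grown) == 7  # all three well-separated points were absorbed
```

The fragility Anandkumar alludes to shows up here too: a wrong pseudo-label early on gets baked into the centroids, which is why confidence thresholds and iteration schedules matter.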
Anandkumar sees several challenges for the AI community in 2020, such as the need to work with domain experts to create industry-specific models. Policymakers, individuals, and the AI community also need to grapple with questions of representation and ensure that the datasets used for training reflect different groups.
"I think the problems with face recognition are easy to see, but in many areas people do not realize that their use of data raises privacy issues," Anandkumar said. Face recognition draws the most attention because it is easy to grasp how it can harm personal privacy, but the AI community will face many more ethical issues in 2020.
"We need to scrutinize how data is collected and used much more carefully. Europe is doing this, but in the US it should go even further. Organizations such as the National Transportation Safety Board (NTSB) and the Federal Transit Administration (FTA) will, for good reason, do more of this."
For Anandkumar, one of the surprises of 2019 was how rapidly text generation models developed.
"2019 was the year of language models, wasn't it? For the first time we have coherent text generation at the length of an entire paragraph, which was never possible before. That's great."
In August 2019, Nvidia released the Megatron natural language model, which has 8 billion parameters and is considered the largest Transformer model in the world. Anandkumar said she was struck by how people began judging models by whether they seemed to have a personality, and she looks forward to seeing more industry-specific text models.
"We still haven't reached the stage of interactive dialogue generation, where a system can keep track of a conversation and converse naturally. I think there will be more serious attempts in that direction in 2020."
Developing frameworks that control text generation will be harder than developing frameworks for image recognition, and text generation models will also face the challenge of defining facts for a neural model.
Dario Gil, Director of Research, IBM
The research teams led by Dario Gil actively advise the White House and businesses worldwide. He believes the major machine learning advances of 2019 included progress in generative models and in language models.
He predicts continued progress toward training models more efficiently with lower-precision architectures. Building more efficient AI models was a theme at NeurIPS 2019, where IBM Research presented deep learning techniques that use 8-bit precision models.
"In general, training deep neural networks with existing hardware and GPU architectures is still inefficient, so fundamentally rethinking them is very important. We have already improved the computational efficiency of AI, and we will do more."
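One ingredient that makes very-low-precision training workable in the research literature is stochastic rounding: round up with probability equal to the fractional part, so updates are unbiased in expectation and tiny gradient steps are not always rounded away. The sketch below illustrates the general trick, not IBM's specific 8-bit method:

```python
import math
import random

def stochastic_round(x):
    """Round x to an integer, rounding up with probability frac(x)."""
    lo = math.floor(x)
    return lo + (1 if random.random() < x - lo else 0)

random.seed(0)
samples = [stochastic_round(2.25) for _ in range(10_000)]
mean = sum(samples) / len(samples)
# deterministic nearest rounding would always give 2 and lose the 0.25;
# stochastic rounding preserves it in expectation (the mean is ~2.25)
assert abs(mean - 2.25) < 0.05
```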
Gil cites research showing that demand for machine learning training compute doubles every three and a half months, far faster than the growth Moore's Law predicts.
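The arithmetic behind that comparison is easy to check: doubling every 3.5 months compounds to roughly an 11x increase per year, while a Moore's Law doubling every 24 months is about 1.4x per year.

```python
# compute demand: doubles every 3.5 months
ai_growth_per_year = 2 ** (12 / 3.5)    # ~10.8x per year
# Moore's law: doubles roughly every 24 months
moore_per_year = 2 ** (12 / 24)         # ~1.41x per year
assert ai_growth_per_year > 10 and moore_per_year < 1.5
```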
Gil is excited that AI is accelerating scientific discovery, but he says IBM Research will focus on neural-symbolic methods.
In 2020, Gil wants AI practitioners and researchers to pursue metrics beyond accuracy and to weigh the value of deploying models in production. The field's shift toward building trusted systems, rather than chasing accuracy alone, will be key to the continued adoption of AI.
"Some people in the community may say, 'Don't worry, just deliver accuracy; people will get used to black boxes,' or argue that humans sometimes make decisions without giving explanations either. I think it is very, very important to focus the community's intellect on doing better than accuracy. AI systems cannot be black boxes in mission-critical applications."
There is a perception that AI can only be done by a handful of machine learning specialists, while everyone else with data science and software engineering skills merely uses it. Gil believes this perception should be abandoned.
"If we keep AI mysterious, as though only PhDs in the field can work on it, that does not help AI's adoption."
In 2020, Gil is particularly interested in neural-symbolic AI. IBM will pursue neural-symbolic methods to power things like probabilistic programming (letting AI learn how to program) and models that can share the reasoning behind their decisions.
"By using neural-symbolic methods, we can combine learning and reasoning, embedding the symbolic dimension inside the learning program. We have shown that this lets us learn with only a fraction of the data otherwise required. Because you have learned a program, the final output is explainable, and because the outputs are explainable, the system is more reliable."
Fairness, data integrity, and dataset selection will remain areas of focus, as will everything related to biometric technology. Face recognition has drawn a lot of attention, and that is just the beginning. As speech data grows more sensitive, other forms of biometrics will attract more and more scrutiny.
"Work on human identity and biometric traits, and on using AI to analyze such information, remains central to the research."
Beyond the MIT-IBM Watson AI Lab's flagship projects on neural-symbolic methods and common-sense reasoning, Gil said IBM Research in 2020 will also explore quantum computing for AI, as well as analog AI hardware that goes beyond lower-precision architectures.
Summary
Machine learning will continue to shape business and society. The researchers and experts interviewed here identified the following trends:
- The development of neural language models was a major story of 2019, with the Transformer as the driving force behind it; more BERT variants and Transformer-based models are expected in 2020.
- The AI industry should look for metrics of model quality beyond accuracy.
- Progress is likely in sub-areas such as semi-supervised learning, neural-symbolic methods, multitask learning, and multimodal learning.
- Ethical challenges around biometric data, such as voice recordings, will likely remain a focus of controversy.
- Compilers and quantization may grow more popular in machine learning frameworks such as PyTorch and TensorFlow as ways to optimize model performance.
Reference links:
https://thenextweb.com/podium/2020/01/02/ai-creativity-will-bloom-in-2020-all-thanks-to-true-web-machine-learning/
https://www.toutiao.com/i6779509797449368067/