Where will artificial intelligence go in 2020?


2020-01-07 19:21 Introduction: Artificial intelligence is no longer preparing to change the world someday; it is changing the world now.

From left to right: Google AI lead Jeff Dean, University of California, Berkeley professor Celeste Kidd, PyTorch creator Soumith Chintala, Nvidia machine learning research director Anima Anandkumar, and IBM Research director Dario Gil

Artificial intelligence is no longer preparing to change the world someday; it is changing the world now. At the start of the new decade, VentureBeat spoke with some of the sharpest minds in AI to review the progress made in 2019 and look ahead to how machine learning will mature in 2020. We interviewed Soumith Chintala, creator of PyTorch; Celeste Kidd, professor at the University of California, Berkeley; Jeff Dean, head of Google AI; Anima Anandkumar, director of machine learning research at Nvidia; and Dario Gil, director of IBM Research.

Everyone has predictions for the coming year, but these are people who have shaped today's future: figures with authority in the AI world and records of accomplishment behind them. While some predict progress in subfields such as semi-supervised learning and neural symbolic methods, virtually all the experts VentureBeat spoke with agree that Transformer-based natural language models made great strides in 2019, and predict that facial recognition and similar technologies will remain controversial. They also hope the field will come to value AI systems by more than just accuracy.

Soumith Chintala

Soumith Chintala is the creator, lead engineer, and maintainer of PyTorch.

PyTorch is today among the most popular machine learning frameworks in the world. A descendant of the Torch open source framework introduced in 2002, PyTorch launched in 2016 and has grown steadily in adoption and extensions.

In the fall of 2019, Facebook released PyTorch 1.3 with quantization and TPU support, along with Captum (an interpretability tool for deep learning models) and PyTorch Mobile. There are also projects such as PyRobot and PyTorch Hub, which share code and encourage ML practitioners to embrace reproducibility.

In a conversation with VentureBeat, Chintala said he thinks 2019 saw few breakthrough advances in machine learning.

"Actually, I don't think we've had a breakthrough. It's basically been like this since the Transformer. ConvNets had their prime time in 2012, while for the Transformer it was around 2017. This is my personal opinion," he said.

He went on to call DeepMind's AlphaGo a groundbreaking contribution to reinforcement learning, but said its results remain difficult to apply to practical tasks in the real world.

Chintala also believes the evolution of machine learning frameworks such as PyTorch and Google's TensorFlow, the most popular tools among today's ML practitioners, has changed how researchers explore ideas and do their work.

"In a sense, that is a breakthrough; they have made development an order of magnitude or two faster than it used to be," he said.

This year, Google's and Facebook's open source frameworks introduced quantization to improve the speed of model training and inference. Over the next few years, Chintala expects "explosive growth" in the importance and adoption of compilers for neural network hardware accelerators, such as PyTorch's JIT and Glow.

"With PyTorch and TensorFlow, you see the frameworks sort of converging. The reason quantization comes up, along with other lower-level efficiency techniques, is that the next war is over compilers for the frameworks: XLA, TVM, Glow. A lot of innovation is waiting to happen," he said. "In the next few years, you will see how to quantize more intelligently, how to fuse operations better, how to use GPUs more efficiently, and how to compile automatically for new hardware."
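The quantization Chintala refers to stores weights and activations as low-precision integers instead of 32-bit floats. The sketch below is an illustrative, from-scratch version of affine (scale and zero-point) int8 quantization in plain Python; it is not PyTorch's actual `torch.quantization` API, and the function names are hypothetical:

```python
def quantize(values, num_bits=8):
    """Affine quantization: map floats onto unsigned integer levels."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # avoid div-by-zero for constant input
    zero_point = round(qmin - lo / scale)     # the integer that represents 0.0
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the integer representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
# Each restored weight is within one quantization step of the original.
assert all(abs(w - r) <= scale for w, r in zip(weights, restored))
```

Real frameworks apply the same scale/zero-point idea per tensor or per channel, then run the arithmetic on hardware that executes int8 far faster than float32, which is where the speedups Chintala mentions come from.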

Like most of the other industry leaders VentureBeat interviewed for this article, Chintala predicts that in 2020 the AI community will place more value on AI model performance beyond accuracy, turning its attention to other important factors such as the cost of creating a model, how its output can be explained to humans, and how AI can better reflect the kind of society people want to build.

"If you think about the last five or six years, we have just focused on accuracy and raw numbers, like whether Nvidia's model is more accurate or Facebook's model is more accurate," he said. "But I actually think 2020 will be the year when we start thinking in a more complex way: it doesn't matter if your model is 3% more accurate if it doesn't have a good interpretability mechanism (or meet other criteria)."

Celeste Kidd

Celeste Kidd is a professor at the University of California, Berkeley, where she and her team at the Kidd Lab explore how children learn. Their insights can help the creators of neural networks, who are attempting to train models in ways not unlike raising a child.

"Human babies don't get tagged data sets, yet they manage well, and it's important for us to understand how that happens," she said.

One thing that surprised Kidd in 2019 was the number of neural network creators who casually disparage their own work, or that of other researchers, as incapable of doing what a baby can do.

When you average a baby's behavior, she said, you see evidence that they understand something, but they are definitely not perfect learners.

"Human babies are great, but they make a lot of errors," she said, adding that many of the comparisons she sees are made casually by people who idealize baby behavior at the population level. "I think there will be growing appreciation of the connection between what you currently know and what you want to understand next."

In AI, the term "black box" has been around for years. It is used to criticize neural networks' lack of interpretability, but Kidd believes 2020 may spell the end of the notion that neural networks cannot be explained.

"The black box argument is bogus; brains are also black boxes, and we have made a lot of progress in understanding how brains work," she said.

In demystifying this perception of neural networks, Kidd looks to people like Aude Oliva, executive director of the MIT-IBM Watson AI Lab.

"We were talking about this, and I said something about the systems being black boxes, and she said they definitely are not black boxes. Of course you can take them apart, see how they work, and run experiments on them, just as we do to understand cognition," Kidd said.

Last month, Kidd delivered the opening keynote at NeurIPS, the world's largest annual AI research conference. Her talk focused on how human brains hold on to beliefs, how attention systems work, and how people do Bayesian statistics.

The sweet spot for delivering content, she said, lies between a person's existing interests and what surprises them. People tend not to engage with content that is too surprising.

She went on to say that, in the absence of neutral technology platforms, she has turned her attention to how the makers of content recommendation systems can manipulate people's beliefs. Systems built in pursuit of maximum engagement can significantly shape the beliefs and opinions people form.

At the end of her talk, Kidd addressed a misconception among some men in machine learning that spending time alone with a female colleague leads to sexual harassment accusations and the end of a man's career. That misconception, she said, is what actually hurts women's careers in the field.

Kidd was among the women named Time magazine's Person of the Year in 2017 for speaking publicly about sexual misconduct, in her case at the University of Rochester, helping propel what became the #MeToo movement for the equal treatment of women. At the time, Kidd thought speaking up would end her career.

In 2020, she wants to see increased awareness of the real-world impact of technological tools and the decisions behind them, and for toolmakers to be held accountable for how people use what they build.

"I hear a lot of people trying to defend themselves by saying, 'Well, I'm not the arbiter of truth,'" she said. "I think there needs to be increased awareness of how dishonest that is."

"As a society, and especially as the people developing these tools, we really need to appreciate the responsibility that comes with this."

Jeff Dean

Dean has led Google AI for nearly two years and has worked at Google for two decades, during which he was an architect of many of the company's early search and distributed computing algorithms and an early member of Google Brain.

Dean spoke at NeurIPS last month about machine learning for ASIC semiconductor design and about the AI community's solutions to climate change, which he called the most important issue of our time. On climate change, Dean said AI can strive to become a zero-carbon industry and can be used to help change human behavior.

He expects progress in 2020 in the fields of multimodal learning and multitask learning.

Without a doubt, one of the biggest machine learning trends of 2019 was the continued growth and proliferation of Transformer-based natural language models, described earlier as one of the biggest breakthroughs in AI in recent years. Google open-sourced BERT, a Transformer-based model, in 2018. And according to the GLUE leaderboard, many of the best-performing models released this year, such as Google's XLNet, Microsoft's MT-DNN, and Facebook's RoBERTa, were built on Transformers.

Dean pointed to that progress, saying: "I think there has been quite a significant stream of results in actually producing machine learning models that let us do more sophisticated NLP tasks than we could in the past." But he added that there is still room to grow. "We'd still like to be able to do much more contextual kinds of models. Right now, BERT and other models work well on hundreds of words, but not on 10,000 words as context. So that's an interesting direction."
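One reason context length is hard to extend is that standard self-attention scores every token against every other token, so the score matrices grow quadratically with sequence length. A back-of-the-envelope sketch (512 is BERT's real maximum sequence length; the 12 heads and 4-byte floats are illustrative assumptions for a single layer):

```python
def attention_matrix_bytes(seq_len, num_heads=12, bytes_per_float=4):
    """Memory for one layer's attention score matrices (heads x n x n floats)."""
    return num_heads * seq_len * seq_len * bytes_per_float

bert_limit = attention_matrix_bytes(512)       # BERT-base's maximum sequence
long_context = attention_matrix_bytes(10_000)  # the context Dean asks for

# Quadratic growth: ~381x more memory for ~20x more tokens.
print(f"{bert_limit / 2**20:.0f} MiB vs {long_context / 2**20:.0f} MiB")
```

At 10,000 tokens, a single layer's attention scores alone run to gigabytes, which is why longer-context models need sparse or otherwise restructured attention rather than just bigger hardware.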

Dean said he would like to see less emphasis on slight state-of-the-art advances in favor of work that creates more robust models.

Google AI will also press forward on new initiatives, such as the Everyday Robot project, an internal effort unveiled in November 2019 to make robots that can accomplish common tasks in homes and workplaces.

Anima Anandkumar

Anima Anandkumar joined Nvidia as director of machine learning research after serving as a principal scientist at AWS. At Nvidia, AI research spans many areas, from federated learning in health care to autonomous driving, supercomputing, and graphics.

One key area of research for Nvidia and Anandkumar in 2019 was simulation frameworks for reinforcement learning, which are becoming more popular and mature.

In 2019, we saw the rise of Nvidia's Drive autonomous driving platform and Isaac robotics simulator, as well as models that produce synthetic data, trained as generative adversarial networks (GANs).

Last year also saw the rise of AI like StyleGAN (a network that makes people question whether they are looking at a computer-generated face or a real person) and GauGAN (which can generate landscapes with a paintbrush). StyleGAN2 made its debut last month.

GANs are a technology that blurs the lines of reality, and Anandkumar believes they can help with major challenges the AI community is trying to tackle, such as robotic hand grasping and autonomous driving.

Anandkumar also expects to see progress in the coming year on iterative algorithms, self-supervision, and self-training methods, in which a model improves by training itself on unlabeled data.

"I think iterative algorithms of all kinds are the future, because if you do just one feed-forward network, robustness is an issue. Whereas if you try to iterate many times and adapt the iterations based on the kind of data or the accuracy requirements you want, there is a much better chance of achieving that," she said.
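The self-training idea Anandkumar describes can be shown with a deliberately tiny, hypothetical example: a 1-D threshold classifier that repeatedly adopts its own confident predictions on unlabeled points as new training labels. This is a toy sketch of pseudo-labeling, not any particular Nvidia method:

```python
def fit_threshold(points, labels):
    """Fit a 1-D threshold classifier: midpoint between the two class means."""
    mean0 = sum(x for x, y in zip(points, labels) if y == 0) / labels.count(0)
    mean1 = sum(x for x, y in zip(points, labels) if y == 1) / labels.count(1)
    return (mean0 + mean1) / 2

def self_train(labeled, labels, unlabeled, rounds=3, margin=1.0):
    """Pseudo-labeling: iteratively adopt confident predictions as training data."""
    points, targets = list(labeled), list(labels)
    pool = list(unlabeled)
    for _ in range(rounds):
        t = fit_threshold(points, targets)
        # Only points far from the decision boundary are trusted.
        confident = [x for x in pool if abs(x - t) >= margin]
        if not confident:
            break
        for x in confident:
            points.append(x)
            targets.append(1 if x > t else 0)
        pool = [x for x in pool if abs(x - t) < margin]
    return fit_threshold(points, targets)

# Two seed labels plus unlabeled points drawn from the same two clusters.
threshold = self_train([0.0, 10.0], [0, 1], [1.0, 2.0, 8.0, 9.0])
assert 2.0 < threshold < 8.0
```

Each round, only points beyond `margin` from the current boundary get pseudo-labels; that confidence gate is what keeps self-training from amplifying its own early mistakes.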

Anandkumar believes the AI community will face many challenges in 2020, such as the need to create models tailored to specific industries alongside domain experts. Policymakers, individuals, and the AI community will also need to grapple with issues of representation and with ensuring that the data sets used to train models account for different groups of people.

Facial recognition gets the most attention, she said, because it is easy to understand how it invades privacy, but the AI community will face other ethical issues in 2020.

According to Anandkumar, one of the biggest surprises of 2019 was the pace at which text generation models developed.

"2019 was the year of language models, right? Now, for the first time, we have reached the point of more coherent text generation at paragraph length, which was not possible before," she said.

In August 2019, Nvidia introduced its Megatron natural language model. With 8 billion parameters, Megatron was billed as the world's largest Transformer-based AI model. Anandkumar said she was surprised by how people began ascribing personalities to models, and she looks forward to seeing more industry-specific text models.

"We are still not at the stage of dialogue generation that is interactive, that can keep track of context and have natural conversations. So I think there will be more serious attempts made in 2020 in that direction," she said.

Developing frameworks to control text generation will be more challenging than, say, developing image frameworks used to identify people or objects. Text generation models also face other challenges, such as defining what a fact is for a neural model.

Finally, Anandkumar said she was heartened to see Kidd's NeurIPS talk receive a standing ovation, and by signs of growing maturity and inclusiveness in the machine learning community.

"I feel like it was a watershed moment; it is hard to make even small changes at first. I hope we can keep up this momentum and make even bigger structural reforms," she said.

Dario Gil

As director of IBM Research, Gil leads a group of researchers actively advising the White House and businesses around the world. He believes the big leaps of 2019 included progress in generative models and the steadily improving quality with which language can be generated.

He predicts continued progress toward training more efficiently with reduced-precision architectures. Developing more efficient AI models was a focal point at NeurIPS, where IBM Research presented deep learning techniques that use 8-bit precision models.

"The way we train deep neural networks with existing hardware and GPU architectures is still so inefficient," he said. "So a really fundamental rethinking of this is very important. We have to improve the computational efficiency of AI so we can do more with it."

Gil cited research showing that demand for ML training compute doubles every three and a half months, much faster than the growth Moore's law predicts.
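The figure Gil cites is easy to sanity-check: a quantity that doubles every 3.5 months grows by a factor of 2^(12/3.5), roughly 10.8x per year, while Moore's law-style doubling every ~24 months implies only about 1.41x per year. A quick arithmetic sketch:

```python
def yearly_growth(doubling_period_months):
    """Annual growth factor implied by a given doubling period."""
    return 2 ** (12 / doubling_period_months)

ml_compute = yearly_growth(3.5)   # the ML-training compute trend Gil cites
moores_law = yearly_growth(24)    # transistor doubling every ~2 years

assert ml_compute > 10            # roughly 10.8x per year
assert 1.4 < moores_law < 1.5     # roughly 1.41x per year
```

The gap between those two factors compounds: after just a few years, the compute demanded by the largest training runs outstrips hardware gains by orders of magnitude, which is Gil's case for rethinking training efficiency.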

Gil is also interested in how AI can help accelerate scientific discovery, but IBM Research will focus on neural symbolic approaches to machine learning.

In 2020, Gil hopes AI practitioners and researchers will develop metrics that go beyond accuracy to consider the value of deploying a model in production. Shifting the field toward building trusted systems, rather than prioritizing accuracy above all else, will be a central pillar of AI's continued adoption.

"Some members of the community may say, 'Don't worry, just deliver accuracy; it's OK, people will get used to it, the same way humans sometimes don't explain some of the decisions we make,'" he said. "I think it is really important that we concentrate the community's efforts on doing much better here. AI systems cannot be a black box for mission-critical applications."

Gil believes that if AI is to be used by the many people with data science and software engineering skills, the field must shed the notion that only a select few machine learning wizards can do it.

"If we leave AI as some mythical realm accessible only to selected PhDs who work on this, it doesn't really contribute to its adoption," he said.

Gil is particularly interested in neural symbolic AI. In the coming year, IBM will pursue neural symbolic approaches to power things like probabilistic programming, where an AI learns how to operate a program, and models that can share the reasoning behind their decisions.

"By taking a hybrid of these contemporary methods, combining learning and reasoning through neural symbolic approaches, where the symbolic dimension is embedded in a learning program, we have shown you can learn with a fraction of the data you would otherwise need," he said. "Because you learn a program, you end up getting something interpretable, and because you have something interpretable, you get something much more trusted."

He said issues such as fairness, data integrity, and the selection of data sets will continue to draw broad attention, as will "anything to do with biometrics." Facial recognition gets a lot of attention, but this is just the beginning. Speech data will become increasingly sensitive, as will other forms of biometrics.

Beyond neural symbolic methods and common-sense reasoning, Gil said that in 2020 IBM Research will also explore quantum computing for AI, as well as analog AI hardware that goes beyond reduced-precision architectures.

Final thoughts

Machine learning continues to shape business and society, and from its interviews with these researchers and experts, VentureBeat sees several trends ahead:

The major advance of 2019 came in natural language models, where Transformers powered a huge leap forward. Expect more variants of BERT and Transformer-based models in 2020.

The AI industry should look for ways to evaluate model outputs beyond accuracy alone.

Sub-areas such as semi-supervised learning, neural symbolic methods of machine learning, and multitasking and multimodal learning are likely to make progress in the coming year.

The ethical challenges associated with biometric data, such as voice recordings, may continue to be controversial.

For machine learning frameworks such as PyTorch and TensorFlow, compilers and techniques such as quantization may grow in popularity as ways to optimize model performance.

Via: https://venturebeat.com/2020/01/02/top-minds-in-machine-learning-predict-where-ai-is-going-in-2020/

https://www.leiphone.com/news/202001/8euePyy4QiN9kHfo.html
