Year-end roundup: 6 breakthroughs in computer science in 2022! The cracking of a post-quantum encryption scheme and the fastest matrix multiplication make the list.

2025-01-18 Update From: SLTechnology News&Howtos

Shulou(Shulou.com)11/24 Report--

What major events took place in computing in 2022? Here is Quanta Magazine's year-end review.

In 2022, many landmark events took place in the field of computing.

This year, computer scientists learned how to transmit secrets perfectly, Transformers advanced rapidly, and decades-old algorithms were dramatically improved with the help of AI.

The great computing events of 2022

Computer scientists can now tackle an ever wider range of problems, so their work is becoming increasingly interdisciplinary.

This year, many achievements in the field of computer science have helped other scientists and mathematicians.

Take cryptography, for example, which underpins the security of the entire Internet.

Cryptography often rests on hard mathematical problems. One very promising new cryptographic scheme was considered strong enough to resist attacks from quantum computers, yet it was overturned by mathematics involving products of two elliptic curves and their relationship to abelian surfaces.

A different set of mathematical relationships, in the form of one-way functions, will tell cryptographers whether provably secure codes can exist at all.

Computer science, especially quantum computing, also has a great overlap with physics.

A major event in theoretical computer science this year was the proof of the NLTS conjecture.

This conjecture tells us that the ghostly quantum entanglement between particles is not as subtle as physicists once thought.

This not only affects our understanding of the physical world, but also affects the countless cryptographic possibilities brought about by entanglement.

In addition, artificial intelligence has always been intertwined with biology; in fact, the field draws inspiration from the human brain, which may be the ultimate computer.

Computer scientists and neuroscientists have long wanted to understand how the brain works and to create brain-like artificial intelligence, though both goals long seemed like pipe dreams.

But incredibly, Transformer neural networks seem to process information in brain-like ways. Every time we learn more about how Transformers work, we learn more about the brain, and vice versa.

Maybe that is why Transformers are so good at language processing and image classification.

AI can even help us create better AI: new "hypernetworks" let researchers train neural networks faster and at lower cost, and they can help scientists in other fields too.

Top 1: Answers about quantum entanglement

Quantum entanglement is a property that links distant particles, and it is known that a fully entangled system cannot be completely described.

Physicists believed that systems close to full entanglement would be easier to describe. Computer scientists, however, conjectured that such systems cannot be efficiently computed either; this is the quantum PCP (Probabilistically Checkable Proof) conjecture.

To help prove the quantum PCP conjecture, scientists proposed a simpler hypothesis, the "No Low-Energy Trivial States" (NLTS) conjecture.

In June this year, three computer scientists from Harvard University, University College London and the University of California, Berkeley proved the NLTS conjecture for the first time in a paper.

Paper address: https://arxiv.org/abs/2206.13228

This means that there are quantum systems that keep their entanglement at higher temperatures; it also shows that even far from extreme cases such as very low temperature, entangled particle systems remain difficult to analyze, and their ground-state energy is difficult to compute.
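For very small systems, the ground-state energy the text mentions can be computed directly by exact diagonalization; the hardness the NLTS result speaks to only bites as the number of particles grows. A minimal sketch (the two-qubit Hamiltonian here is an arbitrary toy example, not one from the paper):

```python
import numpy as np

# Pauli matrices
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

# A toy 2-qubit Hamiltonian H = X⊗X + Z⊗Z (an Ising-like coupling).
H = np.kron(X, X) + np.kron(Z, Z)

# Exact diagonalization: the smallest eigenvalue is the ground-state energy.
# The 4x4 matrix is trivial; for n qubits the matrix is 2^n x 2^n, which is
# exactly why computing ground-state energies of entangled systems is hard.
energies = np.linalg.eigvalsh(H)  # eigenvalues in ascending order
ground_energy = energies[0]
print(ground_energy)  # -2.0 (the singlet Bell state)
```

Doubling the qubit count squares the matrix size, so this brute-force route collapses almost immediately; the NLTS proof says the difficulty persists even when one only asks for low-energy (not exact ground) states.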

Physicists were surprised, because it means entanglement is not necessarily as fragile as they thought, and computer scientists were happy to be one step closer to proving the quantum PCP (Probabilistically Checkable Proof) conjecture.

In October, researchers successfully entangled three particles over considerable distances, enhancing the possibility of quantum encryption.

Top 2: Changing the way AI understands

Over the past five years, Transformer has revolutionized the way AI processes information.

In 2017, Transformer first appeared in a paper.

Transformers were developed to understand and generate language. They process every element of the input simultaneously, giving them a "global view" of the sequence.

Compared with other language networks that take a piecemeal approach, this global view greatly improves Transformers' speed and accuracy.
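The "global view" comes from self-attention: each position computes similarity scores against every other position in a single step. A stripped-down sketch (no learned query/key/value projections, which a real Transformer would have):

```python
import numpy as np

def self_attention(X):
    """Single-head self-attention without learned projections:
    every position attends to every other position in one step."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)  # all-to-all pairwise similarity
    # softmax over positions, numerically stabilized
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ X             # each output row mixes the whole sequence

# 4 "tokens" with 3-dimensional embeddings
X = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.],
              [1., 1., 0.]])
out = self_attention(X)
print(out.shape)  # (4, 3): one updated vector per input position
```

A recurrent network, by contrast, would have to pass information through intermediate positions step by step, which is the "piecemeal approach" the text refers to.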

This also makes Transformers incredibly versatile, and AI researchers in other areas have applied them to their own fields.

They found that the same principle can be used to upgrade image classification and to process multiple kinds of data at once.

Paper address: https://arxiv.org/abs/2010.11929

Transformers quickly became a leader in applications such as word recognition that focus on analyzing and predicting text. They sparked a wave of tools, such as OpenAI's GPT-3, which is trained on hundreds of billions of words and generates coherent new text to an unsettling degree.

However, these benefits come at a cost: Transformers require more training than non-Transformer models.

These faces were created by a Transformer-based network trained on a data set of more than 200,000 celebrity faces.

In March, researchers studying how Transformers work found that their power stems in part from an ability to attach greater meaning to words, rather than simply memorizing patterns.

In fact, Transformers are so adaptable that neuroscientists have begun using Transformer-based networks to model human brain function.

This suggests that artificial intelligence and human intelligence may share underlying principles.

Top 3: Cracking a post-quantum encryption algorithm

With the emergence of quantum computing, many computationally expensive problems may become tractable, threatening the security of classical encryption algorithms. Academia has therefore proposed post-quantum cryptography to resist attacks by quantum computers.

One highly anticipated candidate, SIKE (Supersingular Isogeny Key Encapsulation), is an encryption scheme built on isogenies between supersingular elliptic curves.

However, in July this year, two researchers from the University of Leuven in Belgium found that the algorithm could be successfully cracked with a 10-year-old desktop computer in as little as an hour.

Notably, the researchers attacked the problem from a purely mathematical angle, striking at the core of the algorithm's design rather than at any potential code vulnerability.

The researchers said that only by proving the existence of a "one-way function" can one create a provably secure code, that is, a code that can never be broken.

Although it is still unknown whether one-way functions exist, researchers showed that the question is equivalent to another problem involving Kolmogorov complexity: one-way functions, and hence truly secure cryptography, are possible only if a certain version of Kolmogorov complexity is hard to compute.
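Kolmogorov complexity, the length of the shortest program producing a string, is uncomputable in general, but a compressor gives a crude, computable upper bound, which makes the intuition concrete. A sketch using zlib as the stand-in compressor (the choice of zlib and these inputs are illustrative, not from the research):

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed string: a rough, computable upper
    bound on the string's descriptive (Kolmogorov-style) complexity."""
    return len(zlib.compress(data, level=9))

structured = b"ab" * 500       # 1000 bytes with a very short description
random_ish = os.urandom(1000)  # incompressible with overwhelming probability

print(compressed_size(structured))  # a few dozen bytes
print(compressed_size(random_ish))  # close to (or above) 1000 bytes
```

The open question is about hardness on average: if no efficient algorithm can even approximate this kind of complexity for most strings, one-way functions exist.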

Top 4: Training AI with AI

In recent years, the pattern recognition skills of artificial neural networks have injected vitality into the field of artificial intelligence.

But before a network can start working, researchers must first train it.

The training process can take months and require a lot of data, during which billions of potential parameters need to be fine-tuned.

Now, researchers have a new idea: let machines do the work for them.

This new kind of "hypernetwork", called GHN-2, takes other networks as input and spits out parameters for them.

Paper link: https://arxiv.org/abs/2110.13100

GHN-2 is very fast: it can analyze any given network and quickly provide a set of parameter values that are about as effective as those of a traditionally trained network.

Although the parameters GHN-2 provides may not be optimal, they offer a better starting point, reducing the time and data required for full training.

[Figure: backpropagation training starting from parameters predicted on a given image dataset and the DEEPNETS-1M architecture dataset]
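The workflow the text describes, predict parameters once, then fine-tune briefly, can be sketched as follows. Here `hypernet_predict` is a dummy stand-in for a trained hypernetwork like GHN-2 (the real model is a graph neural network conditioned on the target architecture; this sketch only illustrates where its output plugs in):

```python
import numpy as np

rng = np.random.default_rng(0)

def hypernet_predict(layer_shapes):
    """Stand-in for a trained hypernetwork: maps an architecture
    description to a full set of parameters in one forward pass.
    (A deterministic dummy here, purely for illustration.)"""
    return [np.full(shape, 0.01) for shape in layer_shapes]

def random_init(layer_shapes):
    """Traditional starting point: random parameters."""
    return [rng.normal(0.0, 1.0, size=shape) for shape in layer_shapes]

# A toy two-layer architecture described only by its weight shapes.
shapes = [(784, 128), (128, 10)]

theta_random = random_init(shapes)      # conventional cold start
theta_warm = hypernet_predict(shapes)   # hypernetwork-predicted warm start

# Either set of parameters would then be fine-tuned by backpropagation;
# the claim in the paper is that the warm start needs far fewer steps.
print([w.shape for w in theta_warm])  # [(784, 128), (128, 10)]
```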

This summer, Quanta magazine also examined another new way to help machine learning: embodied artificial intelligence.

It allows algorithms to learn from a responsive 3D environment, rather than through still images or abstract data.

Whether they are agents exploring simulated worlds or robots in the real world, these systems learn in fundamentally different ways, and in many cases they outperform systems trained with traditional methods.

Top 5: Algorithmic improvements

Improving the efficiency of fundamental computing algorithms is a perennial research topic, because such improvements affect the overall speed of vast amounts of computation, producing a domino effect across the field.

In October this year, in a paper published in Nature, the DeepMind team introduced AlphaTensor, the first AI system for discovering novel, efficient and provably correct algorithms for fundamental computing tasks such as matrix multiplication.

It provided a new answer to a mathematical question that had been open for more than 50 years: what is the fastest way to multiply two matrices?

Matrix multiplication, as one of the basic operations of matrix transformation, is the core component of many computing tasks. It covers computer graphics, digital communication, neural network training, scientific computing and so on, and the algorithms discovered by AlphaTensor can greatly improve the computational efficiency in these fields.
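The kind of saving AlphaTensor searches for is best illustrated by the classic result it builds on: Strassen's 1969 algorithm multiplies two 2x2 matrices with 7 scalar multiplications instead of the naive 8, and applied recursively to blocks this beats cubic time. (AlphaTensor found analogous decompositions for larger cases, such as 4x4 matrices over GF(2); the sketch below shows only Strassen's scheme.)

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using 7 multiplications (Strassen, 1969)
    instead of the naive 8 -- the kind of trade-off AlphaTensor searches for."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)   # the 7 products
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # recombine with additions only
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
print(strassen_2x2(A, B))  # matches A @ B: [[19. 22.] [43. 50.]]
```

Because the saving compounds when the scheme is applied recursively to matrix blocks, one fewer multiplication at the base level translates into an asymptotically faster algorithm.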

Paper address: https://www.nature.com/articles/s41586-022-05172-4

In March this year, a team of six computer scientists proposed a "ridiculously fast" algorithm, making a breakthrough on one of computing's oldest problems: maximum flow.

The new algorithm solves the problem in "almost linear" time, meaning its running time is roughly proportional to the time needed just to write down the network's details.

Paper address: https://arxiv.org/abs/2203.00671v2

The maximum flow problem is a combinatorial optimization problem: how to make full use of a network's capacity so as to maximize the flow transported through it.

In daily life it has many applications, such as routing Internet data, scheduling airlines, and even matching job seekers with vacant positions.
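For context, the textbook approach the new result improves on can be sketched with Edmonds-Karp: repeatedly find a shortest augmenting path and push flow along it. This runs in O(V·E²), nowhere near the new almost-linear bound, but it shows what the problem asks (the 4-node network below is a made-up example):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow: BFS for shortest augmenting paths.
    `capacity[u][v]` is the capacity of edge u -> v (0 if absent)."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for any path from source to sink with remaining capacity
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            break  # no augmenting path left: current flow is maximal
        # find the bottleneck capacity along the path, then push that much
        bottleneck = float("inf")
        v = sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck  # residual edge for later rerouting
            v = u
        total += bottleneck
    return total

# Toy network: node 0 = source, node 3 = sink
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 4
```

The 2022 algorithm reaches almost-linear time by entirely different means, combining continuous optimization with dynamic graph data structures, rather than by augmenting paths one at a time.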

"I was convinced that there could not be such an efficient algorithm for this problem," said Daniel Spielman from Yale University, one of the authors of the paper. "

Top 6: A new way to share information

Mark Braverman, a theoretical computer scientist at Princeton University, has spent more than a quarter of his life working on a new theory of interactive communication.

His work enables researchers to quantify terms such as "information" and "knowledge", which not only gives people a better understanding of interaction in theory, but also creates new technologies to make communication more efficient and accurate.

Braverman loves to think about quantitative puzzles on the couch in his office. For this achievement, among others, the International Mathematical Union awarded Braverman the IMU Abacus Medal in July, one of the highest honors in theoretical computer science.

IMU's citation points out that Braverman's contributions to information complexity have given people a deeper understanding of how to measure the information cost when two parties communicate with each other.

His work paves the way for new coding strategies that are not easily affected by transmission errors and new ways to compress data during transmission and operation.

The problem of information complexity stems from Claude Shannon's pioneering work: in 1948, he developed a mathematical framework for one person sending a message to another through a channel.

Braverman's greatest contribution is a broad framework of general rules describing the limits of interactive communication, rules that suggest new strategies for compressing and protecting data sent over networks.

Paper address: https://arxiv.org/abs/1106.3595

The question of "interactive compression" can be put as follows: if two people exchange a million text messages but learn only 1,000 bits of information, can the exchange be compressed to a conversation of roughly 1,000 bits?

Research by Braverman and Rao shows that the answer is no.

Braverman not only cracked these problems but also introduced a new perspective that lets researchers first clarify them and then translate them into the formal language of mathematics.

His theory lays the foundation for exploring these problems and determining new communication protocols that may appear in future technologies.

Reference:

https://www.quantamagazine.org/the-biggest-discoveries-in-computer-science-in-2022-20221221/

https://mp.weixin.qq.com/s/ALpgkM6jg_-xA8UYg0O5GA

This article comes from the WeChat official account Xin Zhiyuan (ID: AI_era), edited by Aeneas and sleepy.
