
From cloud to device: Google's AI chip 2.0



He who controls the chip controls the world. We can extend the saying: whoever controls AI chips will own the future.

For smart-device makers, the ability to develop their own SoC chips is regarded as a mark of top-tier strength. As we all know, Samsung, Huawei and Apple, three of the world's leading smartphone makers, all design their own SoCs.

(Top 10 smartphone chip benchmark scores, 2020)

Now, after years of experience building auxiliary AI chips, Google is finally entering the core hardware of smart devices: the SoC processor.

According to a report from Axios, Google has made notable progress in developing its own processors, and its independently developed SoC has recently been taped out successfully.

The chip is reportedly co-developed by Google and Samsung and manufactured on a 5nm process, with an eight-core CPU cluster in a "2-2-2-4" arrangement and a GPU based on ARM's new public architecture, while integrating Google's Visual Core AI vision processing into the ISP and NPU. This lets Google's device chip better support AI features, such as a markedly smoother Google Assistant experience.

According to the launch plan, Google's SoC is expected to debut in the next generation of Pixel phones and in Chromebook laptops.

Google's move is seen as an approach toward Apple's self-developed processor model, shifting from "native system + mainstream flagship chip" to "native system + self-developed chip." Google's intention is not only to escape its dependence on Qualcomm; more importantly, it wants tighter integration of hardware and software through its own silicon, so that Android can deliver a greater performance advantage on Google's own hardware.

In fact, self-developed chips will not bring Google much extra value in hardware profit. Their real value lies in combining Google's AI strengths with software and hardware to enable better applications on smart devices.

Google was an early and strong entrant in AI chips. But just how strong is its AI chip technology, and how do AI research and chip development reinforce each other? Many people still do not know, and that is what we explore in depth below.

From the cloud to the device: how Google's AI chips advanced

Before Google's TPU (Tensor Processing Unit) was introduced, most machine learning and image processing algorithms ran on two kinds of general-purpose chips: GPUs and FPGAs. Google, creator of the open-source deep learning framework TensorFlow, built the TPU as a special-purpose chip designed for TensorFlow workloads.

That is how the TPU was born, but it was the man-versus-machine Go match between AlphaGo and Lee Se-dol that made it famous. It is said that Google was actually playing a bigger game with the TPU at the time. Before challenging Lee Se-dol, AlphaGo had run on 1,202 CPUs and 176 GPUs in its match against the Go player Fan Hui. Having watched those games, Lee Se-dol was quite confident. A few months before his match, however, AlphaGo's hardware platform was switched to the TPU, its strength grew rapidly, and Lee Se-dol struggled as the games unfolded.

(Google TPU chip)

The TPU is an application-specific integrated circuit (ASIC). As an AI chip used specifically in Google Cloud, its mission is to accelerate the deployment of Google's artificial intelligence. The second-generation TPU announced in 2017 delivers up to 180 trillion floating-point operations per second and can be used for both inference and training. The 2018 TPU 3.0 improved computing performance eightfold over TPU 2.0, reaching 1,000 trillion floating-point operations per second.
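To make the cloud side concrete, here is a minimal Python sketch of how a TensorFlow training job targets a Cloud TPU through tf.distribute.TPUStrategy; the TPU name and the toy model are assumptions for illustration, not details from the report above.

import tensorflow as tf

# Minimal sketch: connect to a Cloud TPU and replicate a Keras model across its cores.
# The TPU name "my-tpu" is a placeholder; in Colab or on Cloud it is often auto-detected.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Any model built inside the strategy scope runs on the TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# model.fit(train_dataset, epochs=5)  # train_dataset would be a tf.data pipeline

The same model code runs unchanged on CPUs or GPUs under a different distribution strategy, which is part of the hardware-plus-software argument the article makes.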

Since then, Google's AI strategy has gradually extended to the edge. At its annual cloud conference in 2018, Google officially unveiled its edge technology and launched the Edge TPU.

The Edge TPU is an ASIC designed by Google to run TensorFlow Lite machine learning models at the edge. It can serve a growing range of industrial scenarios such as predictive maintenance, anomaly detection, machine vision, robotics and speech recognition, as well as on-premises deployments in health care, retail, smart spaces, transportation and other fields.
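As a concrete picture of what "running TensorFlow Lite models at the edge" looks like, here is a hedged Python sketch using the tflite_runtime interpreter with the Edge TPU delegate; the model file name is a placeholder, and the delegate library name assumes a Linux-based Coral device.

import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Sketch: run a TensorFlow Lite model compiled for the Edge TPU.
# "model_edgetpu.tflite" is a placeholder for a model processed by the Edge TPU compiler.
interpreter = Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input that matches the model's expected shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("Top class:", int(np.argmax(scores)))

Only the operations supported by the Edge TPU run on the accelerator; the rest fall back to the host CPU, which matches the co-processor role described next.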

Because of its small size and low power consumption, the Edge TPU handles only AI inference acceleration, acting purely as an accelerator and co-processor. It brings high-accuracy AI to the edge and complements the CPU, GPU, FPGA and other ASIC solutions used to run AI there.

Last year Google also launched a range of development hardware based on the Edge TPU, along with Coral, a local AI platform, to provide a high-quality, easy-to-deploy AI solution for the edge.

Although the TPU and Edge TPU mainly assist servers by accelerating deep learning inference, Google's ambitions in AI chips are plain to see. Moving from the cloud to the edge and on to mobile smart devices is precisely the internal logic of Google's AI chip roadmap.

(Pixel Visual Core)

Since 2017, Google has shipped custom camera chips, the Pixel Visual Core and the Pixel Neural Core, in its smartphones, using them in the Pixel 2, Pixel 3 and Pixel 4.

The Pixel Visual Core, an image processing unit (IPU) and the first mobile chip developed by Google, is designed to accelerate the camera's HDR+ computation. It uses machine learning and computational photography to intelligently repair flawed parts of photos and to make image processing smoother and faster. This is why many people say that photos on Google phones are not so much taken as computed.

Last year, Google replaced the Pixel Visual Core with a dedicated Pixel Neural Core processor in the Pixel 4. Its neural network algorithms let the phone's camera recognize objects, then either hand the data to image processing algorithms for optimization or pass it to Google Assistant for recognition. The Pixel Neural Core also enables Google Assistant to hold more complex conversations and to perform offline speech-to-text and translation.

Without Google's AI algorithms and development software, such as TensorFlow, Halide and their compilers, many of the design choices in Google's AI chips would count for little. It is the combination of hardware and software that makes Google's chip design so thorough and solid.

Hardware plus software: the solid foundation behind the rapid iteration of Google's AI chips

There is no doubt that Google is at the forefront among Internet companies building their own chips.

As early as 2006, Google reportedly considered deploying GPUs, FPGAs or application-specific integrated circuits in its data centers. Since few applications needed to run on specialized hardware at the time, the spare computing capacity of Google's large data centers was enough to meet demand.

By 2013, Google had begun rolling out voice search built on DNN-based speech recognition, and user demand multiplied the computing needs of its data centers, making CPU-based computing especially expensive. Google therefore decided to use off-the-shelf GPUs for model training and to quickly develop a dedicated integrated circuit for inference.

That custom chip, we later learned, was the TPU, and the development cycle was only 15 months. Google is not the only software company building its own silicon, but compared with Amazon and Facebook it has kept shipping chip products one after another. Such fast, high-frequency "hardware" output naturally has its "hard" reasons.

First is strategic importance. Google CEO Sundar Pichai has stressed that Google has never done hardware for hardware's sake; the logic behind it is always AI plus software plus hardware, and real problems are solved by that trinity.

Second is the importance of talent. Take Google's current consumer-side SoC as an example: the project has long been an open "secret." Since the end of 2017, Google has been poaching engineers with high salaries from Apple, Qualcomm, Nvidia and other companies, including John Bruno, a well-known engineer on Apple's A-series processors. It was not until last February that Google officially announced a "gChips" chip design team in Bangalore, India, dedicated to its smartphone and data center chip business, with plans to open a new semiconductor facility there in the future. Consumer-grade chips appear to be just the first foot in the door.

Of course, the most important factor is Google's innovation advantage in AI chips. AI chip R&D is a long and expensive undertaking, and the cycle from design to finished product may not keep up with the evolution of AI algorithms. How to balance hardware design, algorithms and software has therefore become the key advantage of Google's chip design.

Google's proposed solution is all the more remarkable: use AI algorithms to design AI chips.

Specifically, AI chip design faces the following problems. First, chip placement: hundreds to thousands of components must be arranged across layers within a constrained area. Engineers configure these placements by hand and then run automated simulation and performance verification, which usually takes a great deal of time. Second, chip architectures cannot keep pace with the evolution of machine learning algorithms and neural network architectures, so those newer algorithms perform poorly on existing AI accelerators. In addition, although the floorplanning process is getting faster, tools remain limited in optimizing several objectives at once, including power consumption, computing performance and area.

To meet these challenges, Google senior research scientist Azalia Mirhoseini and researcher Anna Goldie proposed an approach that recasts chip floorplanning as a reinforcement learning problem.

Unlike typical deep learning, a reinforcement learning system does not train on a large labeled dataset. Instead, the neural network learns by doing, adjusting its parameters according to a reward signal when it succeeds. Here, the reward is a proxy metric that combines lower power, higher performance and smaller area. As a result, the more designs the system produces, the better it gets.
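As a rough, purely illustrative sketch of that idea (not Google's actual system), the reward can be written as a weighted combination of proxy costs for power, performance and area, and a trial-and-error loop can improve a placement against it; every function, weight and parameter below is a stand-in invented for this example.

import numpy as np

# Hypothetical reward: lower power, shorter total wirelength (a performance proxy)
# and smaller area all raise the reward. Weights are illustrative only.
def placement_reward(power, wirelength, area, w_power=1.0, w_perf=1.0, w_area=1.0):
    return -(w_power * power + w_perf * wirelength + w_area * area)

# Stand-in proxies computed from a toy placement vector; a real system would run
# actual placement, wirelength estimation and power analysis instead.
def evaluate(params):
    power = float(np.sum(params ** 2))
    wirelength = float(np.sum(np.abs(np.diff(params))))
    area = float(np.ptp(params) ** 2)
    return placement_reward(power, wirelength, area)

# Toy learn-by-doing loop: keep whichever perturbation of the current layout
# scores a higher reward. (A far cruder search than the published RL method.)
rng = np.random.default_rng(0)
layout = rng.normal(size=8)
for _ in range(200):
    candidate = layout + 0.1 * rng.normal(size=layout.shape)
    if evaluate(candidate) > evaluate(layout):
        layout = candidate

print("final reward:", round(evaluate(layout), 3))

The point of the sketch is only the shape of the feedback loop: a scalar reward built from power, performance and area proxies drives iterative improvement of the layout.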

After training on chip design for long enough, the system can complete a layout for a Google Tensor Processing Unit in under 24 hours, with power consumption, performance and area that all beat what human experts produce over several weeks. The researchers say the system has even taught its human counterparts some new tricks.

Ultimately, the Google team hopes such an AI system will let it "design more chips at the same time, faster, with lower power consumption, lower cost and smaller form factors."

Looking ahead: Google's AI ambitions behind SoC integration

This self-developed device SoC is, in essence, an extension of Google's AI chips.

Attentive readers will have noticed that this SoC is not entirely the work of Google's own R&D team; Google chose to cooperate with Samsung. According to media reports, Google's phone processor will use a 5nm process, Cortex-A78 cores and a new GPU with up to 20 cores, characteristics that happen to match Samsung's Exynos 1000. The main "Google element" in this Samsung-stacked chip is therefore Google's self-designed AI silicon in the ISP and NPU.

(Leaked photo of the Google Pixel 5)

This choice naturally reflects careful consideration on Google's part and brings some obvious advantages, but it also carries some downsides.

The most obvious benefit is that it speeds up development of Google's mobile SoC, reduces dependence on Qualcomm processors, and gets the chip into the next generation of Pixel phones quickly.

Another benefit is that leading its own chip design lets Google build a closed system of its own, as Apple has. Google's strongest asset is its enormous data and its AI algorithms. As data- and AI-driven experiences at the application level grow richer, such as real-time voice transcription in airplane mode, the phone's hardware performance and the system's support for it may become the smartphone's performance ceiling. Probably no one knows better than Google how to extract the most processor performance from Android.

After all, previous Pixel phones have performed only lukewarmly in the market. Despite clear advantages in camera algorithms and applications such as the AI assistant, they have always fallen short in industrial design, screen, camera hardware, battery and other specifications, making it hard to compete with the flagships of the world's mainstream phone makers. The new Pixel carrying the latest-generation SoC will presumably be priced at the "high end" as well, but that hardware imbalance may still hurt its overall market performance.

In addition, because this is a new, "non-mainstream" chip, the Pixel will no longer be the preferred prototype device for game and software developers.

In any case, this SoC, with deep learning performance built in, prepares Google to compete for the future AI market, helps it maximize the performance of AI applications such as speech recognition and image processing on mobile devices, and stakes out a leading position in truly intelligent devices a step ahead of rivals.

Google's move to build its own "core" will certainly also affect upstream chip manufacturers and smart-device makers. If "Whitechapel" proves the success of Google's chip-building strategy, how far is Google really from Apple?

With self-developed chips and Android stacking up the latest AI computing power, and with the hardware shortcomings addressed, Google could well create a closed loop of tightly matched software and hardware within the Android ecosystem.

Finally, one intriguing detail: the chip is codenamed "Whitechapel." If you follow British and American TV series, you may have seen the British crime drama "Whitechapel." Without reading too much into it, some key developer presumably likes the thriller and named the chip after it. Read more into it, and Google may be using a century-old "mystery" to herald the coming contest over AI on smart devices.

Of course, this answer may not be known until Google's new Pixel phone is available.
