2025-01-19 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 11/24 Report --
If any chip product commands the most attention from digital enthusiasts, it must be the GPU.
The GPU market is a busy one. Joking about GPUs and graphics cards has become a favorite pastime for digital enthusiasts: "ultra-low power consumption, extreme color, tessellation"; "one card lights a building, two cards destroy the earth, three cards a galaxy, four cards Genesis". The GPU has at times challenged or even surpassed the CPU of the same period, it has driven countless gamers crazy, and it has extended its reach into ever deeper and wider fields. [1]
Because foreign manufacturers have monopolized the market for so long, domestic expectations for independently developed GPUs keep growing stronger.
In this article, you will learn about the relationship between the GPU and the graphics card, the GPU market at home and abroad, the localization landscape, and the thinking behind it all.
1. Easily confused concepts
The GPU (Graphics Processing Unit), also known as the display core, visual processor, or display chip, is a microprocessor designed for parallel processing. It is very good at handling large numbers of simple tasks, including graphics and video rendering. GPUs are used in desktops, laptops, workstations, game consoles, embedded devices, data centers, and other scenarios that require graphics rendering or high-performance computing.
In everyday speech we tend to call the GPU a graphics card, but strictly speaking the two terms differ slightly: the GPU is the chip that does the processing, while the graphics card is the board that brings together the GPU chip, video memory, interfaces, and so on.
By how it connects to the system, GPUs divide into integrated GPUs (Integrated GPU, iGPU) and discrete GPUs (Discrete GPU, dGPU); the former is what we call an integrated or core graphics card, the latter an independent graphics card. The two types each have their own characteristics and usage scenarios.
(Table: the two categories of GPU, compiled by Guokr Hardcore Tech) In an integrated GPU, the GPU sits embedded next to the CPU and has no separate memory pool for graphics or video; it shares system memory with the CPU. Because an integrated GPU is built into the processor, it usually consumes less power and generates less heat, which helps extend battery life.
A discrete GPU comes as an entirely stand-alone card, usually plugged into a PCIe slot much as the CPU sits in the motherboard. Beyond the GPU chip itself, a discrete card includes the many components needed to let the GPU run and connect to the rest of the system. A discrete GPU has its own dedicated memory and its own power delivery, so its performance is higher than an integrated GPU's. However, because it is separate from the processor chip, it consumes more power and generates substantial heat. [2] [3] [4]
2. From dedicated to general-purpose
The modern GPU has two major roles: it acts as a powerful graphics engine, and it serves as a highly parallel programmable processor handling all kinds of neural-network and machine-learning tasks.
Graphics computing is the GPU's signature skill. When we drag the mouse, the GPU computes the content that needs to be displayed and presents it on screen; when we open a player to watch a movie, the GPU decodes compressed video into raw frames; when we play a game, the GPU computes and generates each frame. Behind every mouse click lies a complex pipeline, including vertex fetching, vertex shading, primitive assembly, rasterization, pixel shading, and so on. [5]
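To make the pipeline concrete, here is a toy software sketch of two of the stages named above, rasterization and a coverage test at pixel centers, for a single 2D triangle. This is illustrative only and not how GPU hardware is implemented; the triangle coordinates and grid size are arbitrary assumptions.

```python
# Toy sketch of the rasterization stage for one 2D triangle.
# A real GPU runs this across many pixels in parallel in hardware.

def edge(ax, ay, bx, by, px, py):
    # Signed area test: > 0 when point (px, py) lies to the left of edge a->b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    """Return the pixel coordinates whose centers are covered by triangle tri."""
    (ax, ay), (bx, by), (cx, cy) = tri
    covered = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(ax, ay, bx, by, px, py)
            w1 = edge(bx, by, cx, cy, px, py)
            w2 = edge(cx, cy, ax, ay, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # inside all three edges
                covered.append((x, y))
    return covered

pixels = rasterize([(0, 0), (8, 0), (0, 8)], 8, 8)
print(len(pixels))  # -> 36 pixel centers covered
```

In hardware, the per-pixel loop is replaced by thousands of parallel lanes, which is exactly why this stage maps so well onto the GPU's architecture.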
Graphics GPUs are widely used in gaming, image processing, cryptocurrency, and other scenarios, with a focus on parameters such as frame rate, rendering fidelity, and realistic texture mapping. [6]
(Table: the pipeline stages defined by graphics APIs and accelerated in hardware, compiled by Guokr Hardcore Tech from "Fundamentals of Computer Architecture" [5]) General-purpose computing is where the GPU's parallelism shines brightest. Scientists and engineers found that as long as data can be cast in a graphics-like form, adding some general computing capability to the GPU lets it take on all kinds of high-performance parallel computing tasks. This is what the industry calls the general-purpose GPU (GPGPU, General-Purpose Graphics Processing Unit). In essence a GPGPU is still a GPU, but one customized for high-performance computing and AI development, enabling breakthroughs such as larger training sets, shorter training times, lower-power classification/prediction/inference, and a smaller infrastructure footprint. [7]
General-purpose GPUs are mainly used in large-scale AI computing, data centers, and supercomputing, supporting larger data volumes and higher concurrent throughput. [6]
Behind the two major functions is a long history of development.
In 1962, Ivan Sutherland's paper "Sketchpad: A Man-Machine Graphical Communication System" and his recorded Sketchpad demo video laid the foundation for modern computer graphics [8]. For the following twenty years, limited by precision and processing power, the graphics card of the day merely translated graphics computed by the CPU into display signals, so it could only be called a graphics adapter (VGA card) [9]. It was not until IBM launched its two 2D graphics cards, MDA and CGA, in 1981 that the industry began to take shape. Though the two products were mere ugly ducklings, they marked the start of the GPU's path toward rivaling the CPU.
In the 1990s, 3D graphics rose rapidly. Following the emergence of Voodoo, the first true 3D accelerator card, S3 launched the S3 ViRGE, the first graphics card with both 2D and 3D processing capability. The industry then blossomed, producing excellent products such as NVIDIA's NV1, Matrox's Millennium and Mystique, and PowerVR's PCX1, briefly a scene of a hundred schools of thought contending. After the boom came the cruel big-fish-eats-small-fish wave of mergers and industry consolidation, leaving the two-horse Nvidia-AMD pattern. Since then, the GPU has iterated in leaps and bounds.
(Table: the history of discrete graphics card development, compiled by Guokr Hardcore Tech from the IEEE Computer Society [12] and the Nvidia official website [11]) The GPU's versatility was revealed gradually through iteration. From the 1990s to the start of the 21st century, in order to handle ever more complex and numerous graphics computations, the GPU moved away from the fixed graphics pipeline: the programmability of the vertex processors, geometry processors, and pixel and sub-pixel processors in the pipeline kept increasing, exhibiting general computing power. Then, to solve on-chip load balancing, unified shader processors replaced the various programmable components. Meanwhile, the adoption of stream processors (a computing architecture that fully accounts for concurrency and communication under the stream computing model) laid the foundation for general-purpose GPU computing. [13]
The rapid growth of GPU programmability and compute attracted a large number of research groups, who raced to map complex computational problems onto the GPU and positioned it as an alternative to the traditional microprocessor in future high-performance computer systems [14]. Nvidia's Tesla architecture officially marked the GPU's turn toward general-purpose computing, laying the groundwork for its wide application in deep learning. [15]
(Figure: the GPU's road from graphics display to general computing [16]) Back in the present, the GPU's specialization for graphics and its general-purpose use for AI have sparked a debate in the research community over whether to split the GPU's AI and 3D functions into two separate DSAs. A GPU designed narrowly for graphics computing is highly efficient but supports only a few specific algorithms and models; a general-purpose design has good compatibility but lower efficiency and higher power consumption. [17]
At present, the industry consensus is that the "dual personality" the GPU shows in graphics and general-purpose computing will gradually merge: there will be no functional boundary in the future, and the GPU will gain native differentiability and tensor acceleration. [18]
So what comes next? Judging from industry conferences in recent years, the GPU will develop in three directions: high-performance computing (GPGPU), AI computing (AI GPU), and more realistic graphics display (ray-tracing GPU). AI is the key: the GPU's hardware/software interface will make it "the CPU of the AI world", and AI-based rendering will make tensor acceleration the GPU mainstream. [18]
(Figure: the two major functions and applications of the GPU [16]) 3. The entanglement of GPU and CPU
The GPU is powerful, but it cannot do without the CPU. On the one hand, the GPU cannot work alone and relies on the CPU to control and invoke it; on the other hand, the two architectures differ greatly because they were built for different purposes.
A CPU contains 4, 8, 16, or even 32 or more powerful cores, and almost all functions, such as the arithmetic logic unit (ALU), floating-point unit (FPU), address generation unit (AGU), and memory management unit (MMU), are packed into each core. Roughly speaking, in a CPU about 25% of the die goes to ALUs, 25% to control logic, and 50% to cache; in a GPU, ALUs typically take about 95% and cache 5%. [19]
Originally the GPU was designed as dedicated hardware to help the CPU accelerate graphics. Graphics rendering is strongly parallel, demanding very intensive computation and huge data bandwidth, so the GPU was designed with thousands of smaller cores. Each GPU core can perform simple calculations in parallel; the cores themselves are not very smart, but unlike a CPU with its handful of cores, the GPU can throw all of its cores at once into deep-learning computations such as convolution, ReLU, and pooling. In addition, the GPU adopts a flexible memory-hierarchy design and a two-level programming and compilation model. [20] [21]
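The ReLU and pooling workloads mentioned above are of the shape "apply the same small operation to every element independently", which is exactly what GPUs excel at. As a rough CPU-side stand-in for that data-parallel style, the sketch below uses NumPy's vectorized array operations (on an actual GPU, libraries such as CuPy expose a very similar array interface); the input values are arbitrary example data.

```python
import numpy as np

# A small 4x4 feature map with arbitrary example values.
x = np.array([[-1.0,  2.0, -3.0,  4.0],
              [ 5.0, -6.0,  7.0, -8.0],
              [-1.0,  1.0, -1.0,  1.0],
              [ 2.0, -2.0,  2.0, -2.0]])

# ReLU: one elementwise max over the whole array, no per-element loop.
relu = np.maximum(x, 0.0)

# 2x2 max pooling: view the array as 2x2 tiles, reduce each tile independently.
pooled = relu.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # -> [[5. 7.] [2. 2.]]
```

Each output element depends only on its own small tile of inputs, so all of them can be computed simultaneously, which is why such operators map so naturally onto thousands of simple GPU cores.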
(Table: the differences between GPU and CPU [22]) These different structural designs give the GPU its own expertise. A GPU's clock frequency is only about a third of a CPU's, but in each clock cycle it can perform far more calculations in parallel. On highly parallel tasks the GPU is much faster than the CPU, while on tasks with little parallelism it appears much slower. In addition, a GPU usually has 5 to 10 times the memory bandwidth of a CPU but higher latency when accessing data, so the GPU does better on predictable computations and worse on unpredictable ones. [23]
It follows that the CPU and GPU are complementary, not in conflict: the former focuses on serial computing, the latter on parallel computing. By analogy, the CPU is like a PhD, deeply learned and able to crack hard problems that no one else can; the GPU is tens of thousands of schoolchildren who know only simple arithmetic, yet no matter how brilliant the PhD is, he cannot finish tens of thousands of simple arithmetic operations in an instant. [24]
(Table: the difference between CPU and GPU [22]) The long history of computing has bred a rich variety of digital chips, each with its own lineage. Behind every computer lies a computing problem, and the data types involved are no more than scalars, vectors, matrices, and spatial data, so the GPU inevitably intersects and overlaps with other digital chips. For now, the CPU is still the CPU, but the GPU is no longer just a GPU.
For a long time there have been disputes among GPU, FPGA, and ASIC, which respectively form "CPU+GPU", "CPU+FPGA", and "CPU+ASIC" heterogeneous computing systems. FPGA and ASIC vendors often compare their products' compute against GPUs; the NVIDIA A100, for example, frequently serves as a "unit of combat power", with each of the CPU's suitors touting its own advantages.
Rationally speaking, GPU, FPGA, and ASIC are all capable computing partners for the CPU, with entirely different characteristics for manufacturers and downstream users. Although one may show stronger compute or better power efficiency in a given scenario, deployment inevitably involves TCO (total cost of ownership), build difficulty, system compatibility, and more, so it is hard to declare a winner.
(Table: a comparison of different computing devices, compiled by Guokr Hardcore Tech) Even so, the GPU is relatively mature, its peak compute is excellent, and its position in graphics display is unshakable, so it naturally rode the semiconductor boom and became the market's darling.
Data show that in the AI training stage, the GPU holds about 64% of the market, with FPGA at 22% and ASIC at 14%; in the inference stage, the GPU holds about 42%, with FPGA at 34% and ASIC at 24%. [25]
(Table: performance requirements and key metrics of AI chips in different application scenarios [25]) The pattern of foreign monopoly
The GPU is not only a big business today; its future potential is unlimited.
According to Verified Market Research, the GPU market will grow from $33 billion in 2021 to $477.3 billion in 2030, a compound annual growth rate of 33.3%. [26]
GPUs are built to various specifications according to each platform's power budget. For example, a phone GPU typically consumes 5 W, a laptop GPU about 150 W, and a desktop GPU up to 400 W, while data-center GPUs push for maximum performance. By power consumption, the market divides mainly into desktop-class and mobile-class applications.
Each market is a three-way contest: the desktop GPU market is held by Nvidia, AMD, and Intel, and the mobile GPU market by Arm, Imagination, and Qualcomm. On the software side, these foreign companies also support a series of heterogeneous computing standards such as CUDA and OpenCL. [27]
Among desktop products, graphics cards for PCs and gaming account for the majority of shipments, while the data center accounts for more than 50% of revenue.
According to Jon Peddie Research (JPR), PC GPU shipments (integrated plus discrete) in Q2 2022 totaled 84 million units. Intel's share was as high as 68%, mainly owing to the integrated graphics bundled with its desktop and notebook CPUs. AMD ranked second at 17%; it offers both integrated and discrete graphics, but integrated clearly dominates, and its discrete share is only about 3% of the overall PC market. Nvidia focuses on discrete GPUs, so although it appears to hold only 15%, it essentially dominates the discrete market. [28]
(Figure: PC market GPU shipments, Q2 2022 [28]) Nvidia is the world's absolute leader in discrete GPUs. Early on it focused on PC graphics, then expanded into intelligent terminals, autonomous driving, AI algorithms, and other fields as the GPU wave rose. Per its Q2 2022 results, Nvidia's main businesses comprise gaming GPUs, data-center GPUs, professional visualization GPUs, intelligent-driving GPUs, and OEM and other, at 30.5%, 56.8%, 7.4%, 3.3%, and 2% respectively. [29]
To stay ahead of the competition, Nvidia has changed its architecture substantially with every graphics card generation. Statistics across Nvidia's generations show that the two core drivers of performance, the streaming multiprocessor (SM) and the cache, have been reworked repeatedly: within the chip's limited area and power budget, Nvidia keeps adjusting the ratio of the various components and seeks the optimum through process iteration. [30]
(Figure: Nvidia architecture changes [30]) Nvidia originated the very concept of the GPU, and nearly every product it releases is discussed at length by gamers and designers. The 40 series in particular, built on the new Ada Lovelace architecture with TSMC's custom 4N process, delivers up to 83 TFLOPS of shader compute and up to 191 TFLOPS of effective ray-tracing compute, 2.8 times the previous generation. Its fourth-generation Tensor Cores reach up to 1.32 PFLOPS of FP8 tensor performance, 5 times the previous generation. [31]
(Table: Nvidia 30-series and 40-series graphics cards, compiled by Guokr Hardcore Tech) At the same time, Nvidia is also the champion of the GPU in the data center. It not only launched the industry's first general-purpose GPU products but also released the CUDA parallel programming model in 2006; the hardware/software base of GPGPU plus CUDA underpins Nvidia's lead in AI computing. [6]
The past few months, however, have been difficult for Nvidia. Hit by the continued slide in semiconductor demand, its results slumped and its share price fell sharply. The newly released 40-series cards also drew controversy, leading Jensen Huang to cancel the RTX 4080 12GB version. [32]
AMD's GPUs compete mainly on price-performance. Its discrete GPUs are generally priced about 30% below comparable Nvidia products, and on the integrated side its APUs with built-in graphics are cheaper than Intel CPUs with built-in graphics. [33]
On integrated graphics, Tom's Hardware test data show that the AMD Ryzen series performs well in many games. [34]
(Figure: partial performance comparison of integrated graphics [34]) On the discrete side, AMD has always been Nvidia's pursuer: judged purely by floating-point compute there is a certain gap with Nvidia, but in real-world performance the two are on a par. No one can say definitively whether the N card (Nvidia) or the A card (AMD) is stronger. [35]
(Figure: partial performance comparison of discrete graphics cards [35]) In most people's minds Intel seems to have nothing to do with GPUs, yet it is in fact the true leader in GPU shipments: thanks to its nearly 70% share of CPUs in the global PC market (covering notebooks, desktops, and servers), its integrated graphics have ridden along into thousands of industries.
(Figure: global PC GPU shipment share by supplier, Q2 2009 to Q1 2022 [36]) Yet even a company as strong as Intel has failed repeatedly at discrete GPUs.
Intel is by no means a novice or amateur at GPUs. The company has some of the best GPU engineers in the industry, top-class fabs, bank balances others can only dream of, and a brand known worldwide. It has even held the title of the world's largest GPU seller, shipping more than its competitors combined. For any other company that would be enough, yet Intel has been frustrated again and again in discrete GPUs over the past 20 years. [12]
In 1998 Intel released the i740, a product with decent 3D performance, but it rated only as passable against offerings from ATI, Nvidia, S3 Graphics, and others, so Intel had to shelve its discrete ambitions for the time being.
Intel had not given up on that dream. In 2009 it planned to build the Larrabee GPU. At the time, a GPU was essentially a combination of many simple small compute cores, and Intel happened to have the first-generation Pentium core, the P54C. Integrating a 20-year-old core into a graphics card sounded easy, but the Larrabee project clearly caused Intel plenty of trouble, and after countless delays and reports of insufficient funding, the plan failed. Still, building on the Larrabee research, Intel developed the Many Integrated Core (MIC) Xeon Phi coprocessor, which was selected for the Tianhe-2 supercomputer, so the effort was not entirely wasted. [37]
In 2020 Intel started over, betting its discrete-graphics hopes on the new Xe architecture. In 2022 the Intel Arc series arrived, covering mobile, desktop, workstation, and data-center products. Whether Intel succeeds this time depends on how the market responds.
The mobile story is less colorful than the desktop one. On phones, tablets, and wearables, the GPU is tightly bound to the SoC architecture; GPU IP from Arm, Imagination, Qualcomm (Adreno), and others each has its own loyal adopters, and the pattern is unlikely to change dramatically. [38]
Product-wise, most of the GPU IP in MediaTek's and Samsung's phone SoCs comes from Arm; Apple's and Qualcomm's GPU IP is developed in-house (Apple's GPU is largely inherited from Imagination); and Unisoc's phone SoCs use Imagination's GPU IP. [39]
(Figure: smartphone and tablet GPU benchmark rankings [40]) 4. What is the opportunity for domestic GPUs?
"Nvidia's data-center GPUs are astonishingly expensive, and there is no domestic replacement." The Economic Observer has quoted practitioners saying the Nvidia A100 GPU costs about US$3,000 with no alternative available, and in June this year Nvidia announced a 20% price increase for the A100 80G.
The industry has long chafed under this monopoly. Over the past two years a wave of GPU financing has swept China, with projects funded one after another.
Since 2020, total financing in the GPU industry has exceeded 20 billion yuan; from 2020 to 2021 alone there were nearly 20 financing events in the general-purpose GPU field, and these companies are chiefly chasing the desktop discrete graphics card market. According to Verified Market Research, Chinese mainland's discrete GPU market was US$4.739 billion in 2020 and is expected to exceed US$34.557 billion by 2027. [41]
Why do domestic startups favor discrete graphics cards exclusively? On one hand, integrated GPUs are tightly bound to CPUs and are basically designed and produced by CPU vendors such as Intel and AMD; domestically, for example, CPU maker Loongson integrates a self-developed GPU in its 7A2000 bridge chip [42]. On the other hand, the discrete card is the high-performance race track: its technology leads integrated graphics and its applications are broader, whereas integrated graphics mostly serve as basic display output or for light everyday loads.
The funded startups, such as Xintong Semiconductor, Innosilicon, Moore Threads, Iluvatar CoreX, and Biren Technology, have launched products one after another, some even shipping in complete systems. Listed companies including Loongson Zhongke, Hygon, Cambricon, and VeriSilicon also continue to cultivate GPU businesses (spanning integrated and discrete graphics).
On the whole, though, domestic GPU products remain at an early stage: application scenarios are scarce, product performance trails Nvidia's and AMD's by a clear margin, and the software and ecosystem cannot yet compete. Even without an obvious advantage, international force-majeure pressures mean China has to confront the question of domestic substitution.
(Table: financing and listing status of domestic GPU companies, compiled by Guokr Hardcore Tech from STAR Market Daily [43] and Capital Shares [44]) Why does the GPU attract so much money? Because the GPU is genuinely hard to design and manufacture; together with the CPU it is called one of the two hardest chips. Industry insiders agree that building a GPU is harder than building a CPU: it demands extremely high computational performance, security, and stability, and can only be completed through a complex, end-to-end system design. [45]
What are the difficulties, and the opportunities, for domestic GPUs? The Guokr Hardcore Tech team believes:
First, decide what to build. GPUs face different requirements in different application scenarios, and choosing a good entry point matters greatly. Products currently fall into three types: AI computing, FP64 double-precision floating point, and graphics rendering, of which graphics rendering is the hardest. [46]
Compute cost must also be considered. At today's few-nanometer process nodes, semiconductor production inevitably has yield problems and no two dies are exactly alike. Given that smaller nodes make foundry work harder, chasing only the best and most stable silicon is unrealistic, and the final cost ultimately lands on the consumer. To gain a foothold in the market, a vendor has to mind compute cost and offer a range of options for customers with different needs.
Nvidia is famous for its precise binning: at the start of production its GPUs are scanned for defective stream-processor regions and those circuits are disabled, and dies are then graded by their defect count. The highest-quality, most stable cores become higher-priced data-center processors, while good-but-lesser dies ship as the 4090 and the 4080 respectively [30]. The advantage is full coverage, from data centers through workstations to personal computers, with different cost options for different needs.
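The binning idea described above can be sketched as a simple grading rule: count a die's defective stream-processor regions, disable them, and assign the die to a product tier. The tier names, region count, and thresholds below are purely hypothetical illustrations, not Nvidia's actual criteria.

```python
# Toy sketch of die binning. All tiers and thresholds are made up
# for illustration; real binning criteria are proprietary.

def bin_die(defect_regions, total_regions=128):
    """Grade a die by how many unique stream-processor regions are defective."""
    good = total_regions - len(set(defect_regions))  # disable each bad region once
    if good == total_regions:
        return "data-center"          # fully intact, most stable silicon
    elif good >= total_regions - 8:
        return "flagship-consumer"    # a 4090-class part in this sketch
    else:
        return "mainstream-consumer"  # a 4080-class part in this sketch

print(bin_die([]))               # -> data-center
print(bin_die([3, 17, 3]))       # -> flagship-consumer (2 unique defects)
print(bin_die(list(range(20))))  # -> mainstream-consumer
```

The point of the scheme is economic: one mask set and one production line yield several sellable products, so imperfect dies become revenue instead of scrap.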
The official sites of Intel, AMD, and Nvidia show that all three not only tier their products clearly by price but also cover many scenarios. By contrast, domestic GPU vendors mainly split products into data-center and consumer GPUs, and in the early stage cannot cover every scenario.
Harder than the CPU: why is it difficult for domestic GPUs to break through?
First, GPU patent barriers are extremely high, and the center of gravity of the global patent landscape is in the United States. International giants can spread R&D costs through economies of scale, continually laying patent minefields that constrain competitors' development.
Second, because the GPU has no controller of its own and relies on the CPU to invoke it, it cannot work alone, so domestic GPUs must develop in lockstep with domestic CPUs.
In terms of engineering difficulty, the GPU is a harder chip to develop than the CPU. China lacks both leading figures and engineers; an experienced engineer needs at least 10 years of training at a major firm. Looking at domestic companies today, the founding teams almost all have Nvidia or AMD work experience. [25]
Beyond that, the software ecosystem is another GPU threshold. Software determines the ceiling of a GPU ecosystem's capacity and is a necessary condition for fully unleashing the hardware [47]. Intel holds a similar view, saying that a GPU-based software ecosystem will provide solutions for chips developed for different loads, covering high-performance computing, AI, gaming, and many other areas, and that the software ecosystem needs to evolve in a highly collaborative way. [47]
Chip programmability is not a decisive factor [48]. Some treat programmability as a key indicator of a chip's popularity and claim that chips that are hard to program will fail in the market, following the simple chain of reasoning "hard to program = hard to use = fewer users = small market = failure".
In reality, processor chips such as DSPs, NPUs, and GPUs programmed through CUDA all carry a programming threshold, yet that has not stopped them from shipping tens of millions of units a year into markets worth tens of billions of dollars.
Programming is a problem for professionals to solve. For the GPU, programming difficulty does not directly determine market demand; performance, power consumption, and price-performance are what win the market.
Hit by falling consumer-electronics demand, the semiconductor industry has recently entered its 17th down cycle. Market demand for GPUs is weakening, and both Nvidia's and AMD's discrete GPU businesses are heavily affected.
Moreover, the earlier whirlwind of soaring GPU prices and shortages came, on one hand, from the rise of remote work and, on the other, from cryptocurrency mining. Looking at current trends, the remote-work dividend is long over, and with the cryptocurrency bubble finished, AMD admitted frankly in its earnings report that its discrete GPU business was badly hit by mining's collapse.
By this logic, most domestic GPUs will reach mass production during the down cycle, with no opportunity for large-scale deployment, and will therefore face a severe market test.
So what is the way out?
One answer is to move against the grain and invest counter-cyclically. Guokr Hardcore Tech noted in an earlier article, "Semiconductors Run into the Era of Glut", that counter-cyclical investment is an established semiconductor strategy: Samsung, for instance, invested against the cycle three times while the global market weakened, expanding capacity, beating American, Japanese, and European players, and capturing more than 40% of the DRAM market.
Another answer is to hold the existing ground until the market turns. Computing power has become a vital productive force, doubling every 12 months, and every 1 yuan invested in compute is reckoned to drive 3 to 4 yuan of GDP growth, which is why the "Eastern Data, Western Computing" initiative matters so much. China needs to seize today's opportunities while looking ahead to the next semiconductor up cycle. [49]
Domestic GPU needs more time to precipitate. At the same time, there are some interesting phenomena in domestic GPU.
The Science and Technology Force has pointed out that in order to promote the Chinese Super League and Yingwei, there is a field bogey horse-racing competition in the domestic GPU, such as a GPU that is touted as exceeding the international flagship computing power, but does not support double-precision floating-point computing and can only be used in the direction of artificial intelligence. [50]
Asked the core Voice pointed out that the so-called domestic GPU does not live up to the name, one is the built-in AI accelerator to run the scores of individual performance indicators, and to promote more than Nvidia, but in fact, the AI application covers thousands of lines of industry, not just to run one or two performance indicators, the key to a good chip is versatility [51]; the other is the use of a third-party GPU IP license, and claims to be self-developed and independently controllable. [52]
In truth, the semiconductor industry has never rewarded impatient, short-term plays; it demands long-term technical accumulation and survives wave after wave of big-fish-eats-small-fish reshuffles. For something as difficult as the GPU, China needs to stay calm: surpassing Nvidia is not the work of a day or two.
References:
[1] JD Cloud Developers: Sharing | GPU computing in the modern enterprise. 2019.3.14. https://mp.weixin.qq.com/s/0Uh0uGLSvUKiAv8lj2i7pg
[2] Intel: What Is a GPU?. https://www.intel.cn/content/www/cn/zh/products/docs/processors/what-is-a-gpu.html
[3] Intel: What Is the Difference Between Integrated Graphics and Discrete Graphics?. 2021.7.7. https://www.intel.cn/content/www/cn/zh/support/articles/000057824/graphics.html
[4] Gigabyte. https://www.gigabyte.com/Glossary/gpu
[5] Hu Weiwu, Wang Wenxiang, Su Menghao, et al. The Foundations of Computer Architecture [M]. Machinery Industry Press, 2022.1.3. https://www.loongson.cn/pdf/computer.pdf
[6] China Electronics News: High-end GPU chips: Nvidia's one-man show?. 2022.9.19. https://mp.weixin.qq.com/s/JvexnFXvtXlppkWfTvZGbA
[7] Guo Liang, Wu Meixi, Wang Feng, et al. Data center computing power evaluation: status quo and opportunities [J]. ICT and Policy, 2021, 47(2): 79.
[8] Sutherland I E. Sketchpad: A Man-Machine Graphical Communication System [Ph.D. dissertation]. 1962.
[9] Chinese Journal of Computer Science: Review: 25 years of graphics card history. 2010.6.9. https://it.sohu.com/20100609/n272680735.shtml
[10] Journal of Computer Science: Review: 25 years of graphics card history. 2010.6.9.
[11] Nvidia: NVIDIA history. https://www.nvidia.cn/about-nvidia/corporate-timeline/
[12] IEEE Computer Society: Famous Graphics Chips: Intel's GPU History. https://www.computer.org/publications/tech-news/chasing-pixels/intels-gpu-history
[13] Wang Haifeng, Chen Qingkui. A survey of key technologies for GPU general-purpose computing [J]. Chinese Journal of Computers, 2013, 36(4): 757-772. http://cjc.ict.ac.cn/quanwenjiansuo/2013-4/whf.pdf
[14] Owens J D, Houston M, Luebke D, et al. GPU computing [J]. Proceedings of the IEEE, 2008, 96(5): 879-899.
[15] Gao Guihai, Lu Wenyan, Li Xiaowei, et al. A comparative analysis of special-purpose processors [J]. Scientia Sinica Informationis, 2022. http://scis.scichina.com/cn/2022/SSI-2021-0274.pdf
[16] Xiong Tinggang. The development history, future trends and development practice of GPUs [J]. Micro/Nano Electronics and Intelligent Manufacturing, 2020, 2(2): 36-40.
[17] Semiconductor Industry Observation: The turning point of the GPU market. 2022.8.15. https://mp.weixin.qq.com/s/72eiCjK5qz-DHHYDf53S9w
[18] CP Lu, PhD: Will The GPU Star in A New Golden Age of Computer Architecture?. 2021.7.22. https://medium.com/m/global-identity?redirectUrl=https%3A%2F%2Ftowardsdatascience.com%2Fwill-the-gpu-star-in-a-new-golden-age-of-computer-architecture-3fa3e044e313
[19] Wan Xueyi, Xu Bulu. Research on the patent landscape of GPUs [J]. Integrated Circuit Applications, 2017, 34(07): 6-9.
[20] MATLAB: https://mp.weixin.qq.com/s/J3tEZH1hHoJpoBlNshjn9w
[21] Ma Anguo, Cheng Yu, Tang Yuxing, et al. Research on storage hierarchy and load balancing strategies in GPU heterogeneous systems [J]. Journal of National University of Defense Technology, 2009, 5. http://journal.nudt.edu.cn/publish_article/2009/5/200905008.pdf
[22] NVIDIA: What's the Difference Between a CPU and a GPU?. 2009.12.16. https://blogs.nvidia.com/blog/2009/12/16/whats-the-difference-between-a-cpu-and-a-gpu/
[23] Thompson N C, Spanuth S. The decline of computers as a general purpose technology [J]. Communications of the ACM, 2021, 64(3): 64-72. https://doi.org/10.1145/3430936
[24] Imagination Tech: An easy-to-understand look at the CPU and GPU. 2017.10.31. https://mp.weixin.qq.com/s/l9KCh_WstDDiIpKo0pzdaA
[25] Zhidx: GPU in-depth report: three giants and fourteen domestic players in one article [download attached] | Zhidx internal reference. 2021.3.14. https://mp.weixin.qq.com/s/tvwt8R02dc4TFUQHeyyAvA
[26] Verified Market Research: Graphic Processing Unit (GPU) Market Size And Forecast. 2022.4. https://www.verifiedmarketresearch.com/product/graphic-processing-unit-gpu-market/
[27] Gao Shenghan, Xiong Tinggang. The implementation of OpenCL on a domestic GPU [J]. Ship Electronic Engineering, 2021, 41(9): 113-116, 125.
[28] Jon Peddie Research: Q2'22 saw a significant decline in GPU and PC shipments quarter to quarter. 2022.8.30. https://www.jonpeddie.com/press-releases/q222-saw-a-significant-decline-in-gpu-and-pc-shipments-quarter-to-quarter-a
[29] Nvidia Q2 2022 financial report. https://www.sec.gov/ix?doc=/Archives/edgar/data/0001045810/000104581022000147/nvda-20220731.htm
[30] Twisted Meadows: Nvidia GPU architecture. 2022.4.9. https://www.twisted-meadows.com/nvidia-gpu-architecture/
[31] Nvidia GeForce: NVIDIA makes a great leap in performance: the GeForce RTX 40 series ushers in a new era of neural rendering. 2022.9.21. https://mp.weixin.qq.com/s/Sc5uL3i2PolxXKhVhpdtxg
[32] VideoCardz: NVIDIA scraps RTX 4080 12GB. https://videocardz.com/newz/nvidia-cancels-geforce-rtx-4080-12gb
[33] Pioneering Securities: The pace of GPU localization accelerates as emerging teams continue to appear. 2022.8.1. https://pdf.dfcfw.com/pdf/H3_AP202208021576791297_1.pdf?1659427369000.pdf
[34] Tom's Hardware: CPU Benchmarks and Hierarchy 2022: Processor Ranking Charts. 2022.10.16. https://www.tomshardware.com/reviews/cpu-hierarchy,4312.html#section-integrated-gpu-gaming-cpu-benchmarks-rankings-2022
[35] Tom's Hardware: GPU Benchmarks and Hierarchy 2022: Graphics Cards Ranked. 2022.10.16. https://www.tomshardware.com/reviews/gpu-hierarchy,4388.html
[36] Statista: PC graphics processing unit (GPU) shipment share worldwide from 2nd quarter 2009 to 1st quarter 2022, by vendor. 2022.5. https://www.statista.com/statistics/754557/worldwide-gpu-shipments-market-share-by-vendor/
[37] New Tech Knowledge: 72 cores, 288 threads: how did Intel build this monster?. https://mp.weixin.qq.com/s/otQQpf6deW2T74tr-TdCEg
[38] Microgrid: Core breakthrough! The opportunity window and ecosystem play of domestic GPUs. 2021.5.17. https://mp.weixin.qq.com/s/lxCzkA45PE4QFZZ4NKbMYw
[39] International Electronic Business: Two years of mobile-phone GPU development: is the iPhone still on top?. 2021.12.18. https://mp.weixin.qq.com/s/DtlJTNynQ9-aZJ3oVrKLEg
[40] NotebookCheck: Smartphone and Tablet Graphics Cards - Benchmark List and Comparison. https://www.notebookcheck.net/Smartphone-Graphics-Cards-Benchmark-List.149363.0.html
[41] China Science News: Picking up the "pearl in the crown": domestic high-performance GPUs on the road. 2022.9.5. https://news.sciencenet.cn/sbhtmlnews/2022/9/371092.shtm
[42] Loongson Zhongke: The new generation of Loongson 3 processors paired with the 7A2000 bridge chip officially released, integrating a self-developed GPU. 2022.7.19. https://mp.weixin.qq.com/s/A05j9en7Ye5O7_L6Bcps9A
[43] Science and Technology Innovation Board Daily: GPUs pushed into the spotlight: high R&D barriers and an overview of the local industrial chain's "lone braves". 2022.9.1. https://mp.weixin.qq.com/s/g6_1JYZBXnY9voonFSWklw
[44] Pioneering Securities: The pace of GPU localization accelerates as emerging teams continue to appear. 2022.8.1. https://pdf.dfcfw.com/pdf/H3_AP202208021576791297_1.pdf?1659427369000.pdf
[45] ZhenFund: Muxi's Peng Li: solving the ultimate problem on the extraordinary road of the "core" | an authentic science and technology story. https://mp.weixin.qq.com/s/WrI04AqWbUvAEfYS7KGLjQ
[46] Elecfans: Is it hard for a GPU to surpass the CUDA ecosystem? Domestic GPU maker: just do it!. 2022.1.29. https://mp.weixin.qq.com/s/HBxGCl1UpUpCVEY9jTiX7g
[47] China Electronics News: The reality and dawn of high-end GPUs. 2022.9.16. http://m.cena.com.cn/semi/20220916/117621.html
[48] Gao Guihai, Lu Wenyan, Li Xiaowei, et al. A comparative analysis of special-purpose processors [J]. Scientia Sinica Informationis, 2022. http://scis.scichina.com/cn/2022/SSI-2021-0274.pdf
[49] Li Zhengmao, Wang Guirong. On the three laws of the computing age [J]. Telecommunications Science, 38(6): 13-17. http://www.infocomm-journal.com/dxkx/article/2022/1000-0801/1000-0801-38-6-00013.shtml
[50] Science and Industry Power: The supply of high-end GPUs is cut off, but China's top supercomputers are not afraid. 2022.9.2. https://mp.weixin.qq.com/s/wDGZp4NQSVP6RFZk6H-0zA
[51] Ask Core Voice: Iluvatar CoreX launches the DeepSpark general development platform; domestic GPUs cannot win by fixating on a few performance indicators. 2022.8.31. https://mp.weixin.qq.com/s/CYinRjsYqicOpHR9AFNgFg
[52] Ask Core Voice: Exclusive interview | domestic GPUs must not hang up a sheep's head and sell dog meat | Iluvatar CoreX CTO Lu Jianping. 2022.7.27. https://mp.weixin.qq.com/s/HvuTwy9O8hvULdRGo37OYw
This article comes from the WeChat public account Guokr Hard Tech (ID: guokr233). Author: Fu Bin. Editor: Li Tuo.