Shulou(Shulou.com)11/24 Report--
Thanks to CTOnews.com netizen OC_Formula for the tip! CTOnews.com reported on December 18 that, as regions around the world gradually reopen, offline events are resuming as well. This year the 68th annual IEEE International Electron Devices Meeting (IEDM) was held fully in person, with nearly 1,500 engineers from around the world gathering in San Francisco to discuss the latest developments in the semiconductor industry.
WikiChip found in TSMC's paper that although logic circuits are still scaling more or less along their historical trajectory, SRAM scaling appears to have collapsed entirely.
When TSMC officially launched its N3 technology earlier this year, it said the new node would deliver a 1.6x to 1.7x improvement in logic density over N5. What it did not say is that the new technology's HD SRAM density is almost unchanged from N5, with an overall improvement that is marginal at best. This likely means that CPUs and GPUs built on the new 3nm-class process will cost more to make, and end products will be more expensive as well.
At IEDM, TSMC presented data on its baseline 3nm node (N3B) and on the enhanced 3nm node (N3E). Put simply, N3E is a slightly cheaper variant of N3B, and one that prioritizes power efficiency over peak performance in the final chip.
Interestingly, on the new N3E node the high-density SRAM bit cell has not shrunk at all: it is still 0.021 μm², exactly the same as on the N5 node. N3B, by contrast, does offer some SRAM scaling, with a bit cell of 0.0199 μm², about 5% smaller than N5's.
By a rough estimate, the effective memory density of N3E (assuming iso assist-circuitry overhead) is approximately 31.8 Mib/mm².
For comparison, Intel 4 (formerly Intel's 7nm) reduced the SRAM bit cell from 0.0312 μm² to 0.024 μm². The current Intel 7 (formerly 10nm Enhanced SuperFin) achieves about 27.8 Mib/mm², which still lags behind TSMC's HD SRAM density.
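The relationship between these bit-cell sizes and the quoted densities can be checked with a short calculation. A minimal sketch, assuming the ~70% array efficiency is simply inferred from the article's own numbers rather than a vendor figure:

```python
# Raw SRAM bit density implied by a bit-cell area, plus the array
# efficiency implied by the quoted effective density. The efficiency
# value is inferred from the article's numbers, not a vendor figure.

UM2_PER_MM2 = 1_000_000   # 1 mm^2 = 10^6 um^2
MIB = 2 ** 20             # bits per Mib

def raw_density_mib_per_mm2(cell_um2: float) -> float:
    """Bit density (Mib/mm^2) if the die were nothing but bit cells."""
    return (UM2_PER_MM2 / cell_um2) / MIB

n3e_raw = raw_density_mib_per_mm2(0.021)   # TSMC N5 / N3E HD bit cell
n3b_raw = raw_density_mib_per_mm2(0.0199)  # TSMC N3B HD bit cell
i4_raw = raw_density_mib_per_mm2(0.024)    # Intel 4 HD bit cell

# Efficiency implied by the quoted ~31.8 Mib/mm^2 effective density;
# the remainder goes to assist circuits, decoders, sense amps, etc.
implied_efficiency = 31.8 / n3e_raw

print(f"N3E raw density: {n3e_raw:.1f} Mib/mm^2")
print(f"N3B raw density: {n3b_raw:.1f} Mib/mm^2")
print(f"Intel 4 raw density: {i4_raw:.1f} Mib/mm^2")
print(f"Implied array efficiency: {implied_efficiency:.0%}")
```

Packing bit cells at 100% density would give N3E about 45.4 Mib/mm², so the quoted 31.8 Mib/mm² implies roughly 70% of the macro area is actual bit cells.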
In addition, WikiChip revisited an Imec presentation whose slides showed an SRAM density of about 60 Mib/mm² on a "beyond 2nm" node using forksheet transistors. That process is still years away, however, so until then chip designers will have to work with the SRAM densities that Intel and TSMC can offer today.
So members of CTOnews.com may ask: what does this SRAM business have to do with me?
In fact, modern CPUs, GPUs and SoCs all devote large amounts of SRAM to their various caches, because fetching data directly from main memory is very inefficient, especially for artificial intelligence (AI) and machine learning (ML) workloads. As a result, general-purpose processors, graphics chips and smartphone application processors now carry substantial caches: even the desktop-class AMD Ryzen 9 7950X ships with 81 MB of cache, while Nvidia's AD102 uses at least 123 MB of SRAM for its caches.
Suppose there is a hypothetical 10-billion-transistor chip on TSMC N16, with 60% of the transistors in logic and the rest in SRAM. Its die would be roughly 255 mm², of which about 45 mm² (17.6%) is SRAM. Ported unchanged to N5, the same chip shrinks to about 56 mm², with the SRAM at roughly 12.58 mm², or 22.5% of the die. Move it further to N3E and the die shrinks to about 44 mm², but because the SRAM bit cell does not scale between N5 and N3E, that same 12.58 mm² of SRAM now accounts for nearly 30% of the chip.
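The arithmetic in that example can be reproduced with a back-of-the-envelope model. The logic density values below are assumptions chosen to approximately reproduce the article's die sizes, not official TSMC figures:

```python
# Back-of-the-envelope die-area model for the hypothetical chip above:
# 6 billion logic transistors (60% of 10B) plus an SRAM block whose
# area depends on the node's bit-cell size.

# Approximate logic density in transistors/mm^2 -- assumed values
# tuned to match the article's die sizes, not official TSMC figures.
LOGIC_DENSITY = {"N16": 28.9e6, "N5": 138.2e6, "N3E": 191.0e6}

# SRAM area for the same cache capacity on each node (mm^2).
# Note there is no shrink from N5 to N3E: the bit cell is identical.
SRAM_AREA = {"N16": 45.0, "N5": 12.58, "N3E": 12.58}

LOGIC_TRANSISTORS = 6_000_000_000

def die_area_and_sram_share(node: str) -> tuple[float, float]:
    """Return (total die area in mm^2, SRAM fraction of the die)."""
    logic_area = LOGIC_TRANSISTORS / LOGIC_DENSITY[node]
    total = logic_area + SRAM_AREA[node]
    return total, SRAM_AREA[node] / total

for node in ("N16", "N5", "N3E"):
    total, share = die_area_and_sram_share(node)
    print(f"{node}: die ~{total:.0f} mm^2, SRAM {share:.1%} of the die")
```

The model shows the mechanism plainly: logic area keeps shrinking node over node, the SRAM area stays fixed from N5 onward, so SRAM's share of the die climbs from roughly 18% to nearly 30%.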
Of course, this change may be hard to notice in everyday products, but for some AI hardware, whose architectures require SRAM to cover a large fraction of the die, the engineers involved will feel it far more keenly.
Looking ahead, demand for SRAM caches across the industry will only grow, which means the share of die area occupied by SRAM will be hard to reduce for some time, and moving beyond N5 will not bring obvious cost benefits for it. In essence, the die sizes of high-performance processors will increase, which will push their costs up further. For example, people often complain about the Nvidia RTX 40 series, whose GPU costs rose significantly with the jump from Samsung's 8nm to TSMC's latest 4N process; from this perspective, even the next-generation RTX 50 series may struggle to return to earlier "low" price levels.
From a cost perspective, the most effective way to mitigate the SRAM problem is a chiplet design that breaks the larger caches out into separate dies built on cheaper nodes, which is the approach AMD takes with its 3D V-Cache processors. Another option is to use an alternative memory technology, such as eDRAM or FeRAM, for caches, though each comes with its own trade-offs.
In any case, the slowdown of SRAM scaling on FinFET-based nodes at 3nm and beyond looks like a major challenge for chip designers over the next few years, and the likely impact for you is higher prices on end products, such as the Apple iPhone 15 Pro series with its A17 chip.