CTOnews.com reported on September 20 that Intel today announced its latest Meteor Lake processors and detailed the NPU integrated into Meteor Lake.
Intel says AI is being integrated into every aspect of people's lives. While AI in the cloud offers scalable compute, it also has limitations: it depends on a network connection and comes with high latency, high deployment cost, and privacy concerns. Meteor Lake brings AI to client PCs to deliver low-latency AI computing that better protects data privacy at a lower cost.
Intel said that starting with Meteor Lake it will bring AI to PCs at scale, leading hundreds of millions of PCs into the AI era, while the vast x86 ecosystem will provide a wide range of software models and tools.
IT Home attaches the details of Intel's NPU architecture:
Host interface and device management - the device management area supports Microsoft's new driver model, the Microsoft Compute Driver Model (MCDM). This allows Meteor Lake's NPU to support MCDM in a robust, secure way, while the memory management unit (MMU) provides isolation between multiple contexts and supports power and workload scheduling for fast transitions into low-power states.
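As a rough illustration of how this device-management layer surfaces to software, the sketch below queries the runtime for available compute devices and checks whether the NPU is exposed. It assumes the OpenVINO runtime is installed and that a recent release lists the NPU under the device name "NPU"; neither detail comes from Intel's announcement.

import openvino as ov

core = ov.Core()
devices = core.available_devices  # for example ['CPU', 'GPU', 'NPU'] when the NPU driver is installed
print("Available devices:", devices)

if "NPU" in devices:
    # Human-readable device name reported by the driver stack
    print(core.get_property("NPU", "FULL_DEVICE_NAME"))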
Multi-engine architecture - the NPU uses a multi-engine architecture with two neural compute engines that can either work together on a single workload or handle different workloads. Within each neural compute engine there are two main compute components. The first is the inference pipeline, the core driver of energy-efficient computing: by minimizing data movement and using fixed-function hardware for common, compute-heavy operations, it achieves high efficiency and low power in neural network execution. The vast majority of computation happens on the inference pipeline, a fixed-function pipeline that supports standard neural network operations and consists of a multiply-accumulate (MAC) array, an activation block, and a data conversion block. The second is the SHAVE DSP, a highly optimized VLIW (very long instruction word) digital signal processor designed for AI. The Streaming Hybrid Architecture Vector Engine (SHAVE) can be pipelined with the inference pipeline and the direct memory access (DMA) engine to enable truly heterogeneous, parallel computing on the NPU, maximizing performance.
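To make this concrete, here is a minimal sketch of offloading a network to the NPU with OpenVINO. The model path "model.onnx" is a placeholder, and the "NPU" device string is an assumption based on recent OpenVINO releases that ship an NPU plugin; how layers are scheduled across the compute engines is handled by the plugin, not by application code.

import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.onnx")        # placeholder ONNX model
compiled = core.compile_model(model, "NPU")  # the NPU plugin maps the work onto the NPU's compute engines

request = compiled.create_infer_request()
input_shape = tuple(compiled.input(0).shape)
dummy = np.random.rand(*input_shape).astype(np.float32)
result = request.infer({0: dummy})           # synchronous inference on the NPU
print(result[compiled.output(0)].shape)

On systems without an NPU driver, the same code can target "CPU" or "GPU" instead, since OpenVINO uses a common device plugin interface.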
DMA engine - this engine orchestrates data movement for maximum energy efficiency and performance.