This article explains what false sharing (sometimes translated as "pseudo-sharing") is in C++: how CPU cache lines and cache coherency interact to hurt performance, and how to avoid it.
cache line
For the discussion that follows we first need the concept of a cache line. Anyone who has studied the memory hierarchy in an operating systems course will remember the pyramid model: going down the pyramid, storage media become cheaper and larger; going up, they become faster. At the top are the CPU registers, followed by the CPU caches (L1, L2, L3), then main memory, with disk at the bottom. The hierarchy exists mainly to bridge the gap between the high speed of the CPU and the relatively low speed of memory and disk: the CPU reads recently used data into its cache ahead of time, so the next access to the same data can be served from the much faster cache instead of going to memory or disk and slowing everything down.
The smallest unit of the CPU cache is the cache line. Its size varies by architecture; 64 bytes and 32 bytes are the most common. The CPU cache reads and writes data in units of whole cache lines: each access brings in the entire cache line containing the requested data, so adjacent data is cached as well even if it is never used (this relates to the principle of locality).
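As a concrete reference point, here is a minimal sketch that prints the cache line size the platform reports. It assumes a C++17 compiler and, for the sysconf call, a Linux/glibc system; std::hardware_destructive_interference_size is a standard hint, but compiler support for it varies.

#include <iostream>
#include <new>       // std::hardware_destructive_interference_size (C++17)
#include <unistd.h>  // sysconf (POSIX)

int main() {
#ifdef __cpp_lib_hardware_interference_size
    // Portable C++17 hint for the cache line size (support varies by compiler).
    std::cout << "interference size: "
              << std::hardware_destructive_interference_size << " bytes\n";
#endif
#ifdef _SC_LEVEL1_DCACHE_LINESIZE
    // Linux/glibc: ask the OS for the L1 data cache line size directly.
    std::cout << "L1 d-cache line: "
              << sysconf(_SC_LEVEL1_DCACHE_LINESIZE) << " bytes\n";
#endif
}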
cache coherency
With a single-core CPU this scheme works fine, and the data in the CPU cache stays "clean", because no other CPU can change the data in memory. With multiple cores the situation becomes more complicated: each CPU has its own private cache (they may share an L3 cache). When CPU1 operates on data held in its own cache, that data may already have been changed by CPU2, in which case CPU1's copy is no longer "clean" and should be treated as invalid. Cache coherency is the mechanism that keeps the caches of multiple CPUs consistent.
Cache coherency is commonly handled in hardware by the MESI protocol (or a close variant of it). MESI names the four states a cache line can be in:
M (Modified): the local processor has modified the cache line (a "dirty" line); its contents differ from memory, and this cache holds the only copy (exclusive ownership);
E (Exclusive): the cache line's contents match memory, and no other processor holds this line;
S (Shared): the cache line's contents match memory, and other processors may also hold copies of this line;
I (Invalid): the cache line is invalid and cannot be used.
Each cache line transitions among these four states, and the state determines whether a CPU's cached copy is still valid. For example, when CPU1 writes to a cache line, the corresponding line in every other CPU's cache moves to the Invalid state; when one of those CPUs next needs that line, it must read it again from memory. This is how cache coherency is maintained across multiple CPUs.
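The state machine lives in hardware rather than in application code, but a toy model can make the transitions concrete. The sketch below is a deliberate simplification invented for illustration (real MESI transitions also involve bus snooping and read-for-ownership messages); it only captures the two transitions discussed above.

#include <iostream>

enum class MesiState { Modified, Exclusive, Shared, Invalid };

// If another core writes the line, any copy we hold (M, E or S) is invalidated.
MesiState on_remote_write(MesiState /*current*/) {
    return MesiState::Invalid;
}

// If we write the line, we must gain exclusive ownership: our copy ends up
// Modified, and every other core's copy is invalidated (modelled above).
MesiState on_local_write(MesiState /*current*/) {
    return MesiState::Modified;
}

int main() {
    MesiState line = MesiState::Shared;  // we and another core both hold the line
    line = on_remote_write(line);        // the other core writes it first
    std::cout << (line == MesiState::Invalid
                      ? "our copy is stale, re-read from memory\n"
                      : "our copy is still usable\n");
}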
false sharing
What, then, is false sharing? As described above, the CPU cache operates in units of cache lines: besides the data a core actually reads or writes, the rest of the data in the same cache line is cached along with it. Suppose the cache line size is 32 bytes and memory holds eight int values a, b, c, d, e, f, g, h. When the CPU reads d, it loads all eight ints into its cache as one cache line. Now suppose the machine has two CPUs, CPU1 and CPU2: CPU1 frequently reads and writes only a, and CPU2 frequently reads and writes only b. Logically the two CPUs operate on unrelated data, so there should be no contention and no performance problem. However, because the cache is accessed, and also invalidated, in units of whole cache lines, every time CPU1 modifies a it invalidates the entire line in CPU2's cache, and every change CPU2 makes to b likewise invalidates the line in CPU1's cache. The cache line "ping-pongs" between the two CPUs, being invalidated over and over, and program performance degrades. This is false sharing.
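Here is a minimal sketch of the effect, assuming a 64-byte cache line; the struct, counter names, and iteration count are invented for illustration, and the program should be compiled with -pthread on Linux.

#include <chrono>
#include <iostream>
#include <thread>

struct Counters {
    long a;  // touched only by thread t1
    long b;  // touched only by thread t2, but lives in the same cache line as a
};

int main() {
    Counters c{0, 0};
    auto start = std::chrono::steady_clock::now();

    std::thread t1([&] { for (long i = 0; i < 100000000; ++i) c.a += 1; });
    std::thread t2([&] { for (long i = 0; i < 100000000; ++i) c.b += 1; });
    t1.join();
    t2.join();

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                  std::chrono::steady_clock::now() - start).count();
    std::cout << "a=" << c.a << " b=" << c.b << ", " << ms << " ms\n";
}

Swapping in the padded or aligned layouts shown in the next section typically makes the same two loops run noticeably faster, which is the simplest way to confirm false sharing in practice.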
How to avoid false sharing
There are two main ways to avoid false sharing; a sketch of both follows the list:
1. Cache line padding: place variables that could falsely share in different cache lines by padding bytes after each variable.
2. Alignment support from the language or compiler: align each variable to the cache line size (for example with alignas in C++) so that it occupies its own line.
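A minimal sketch of both techniques, assuming a 64-byte cache line; kCacheLine and the struct names are invented for illustration, and where the compiler supports it, std::hardware_destructive_interference_size can replace the hard-coded 64.

#include <cstddef>  // offsetof, std::size_t

constexpr std::size_t kCacheLine = 64;  // assumed cache line size

// 1. Manual padding: the pad array pushes b out of a's cache line.
struct PaddedCounters {
    long a;
    char pad[kCacheLine - sizeof(long)];
    long b;
};

// 2. Alignment: alignas puts each member on its own cache line boundary,
//    so the compiler inserts the padding for us.
struct AlignedCounters {
    alignas(kCacheLine) long a;
    alignas(kCacheLine) long b;
};

static_assert(offsetof(PaddedCounters, b) >= kCacheLine,
              "a and b no longer share a cache line");
static_assert(offsetof(AlignedCounters, b) >= kCacheLine,
              "a and b no longer share a cache line");

With either layout, a and b live in separate cache lines, so the two threads from the earlier example no longer invalidate each other's cached copies.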
That covers what false sharing is in C++ and how to avoid it; how much it matters for a particular program is best confirmed by measuring in practice.