2025-03-31 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
This article mainly explains what false sharing in Java means. Many people run into doubts about it in day-to-day work, so the editor has consulted a variety of materials and put together a simple, practical walkthrough. Hopefully it answers the question "what does false sharing in Java mean?" Follow along below.
1. What is false sharing?
The CPU cache system stores data in units of cache lines (cache line). On mainstream CPUs today, a cache line is 64 bytes. With multiple threads, if threads modify variables that happen to share the same cache line, they inadvertently degrade each other's performance; this is called false sharing (False Sharing).
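A minimal sketch of the situation just described: two threads write to two *different* fields, never reading each other's data, yet because the fields are adjacent they very likely sit in the same 64-byte cache line and contend in hardware. The class and field names here are illustrative, not from the original article.

```java
public class AdjacentCounters {
    volatile long a; // written only by thread 1
    volatile long b; // written only by thread 2

    // Runs two writer threads over adjacent fields; returns elapsed nanoseconds.
    static long raceAdjacent(long iterations) {
        AdjacentCounters c = new AdjacentCounters();
        Thread t1 = new Thread(() -> { for (long i = 0; i < iterations; i++) c.a = i; });
        Thread t2 = new Thread(() -> { for (long i = 0; i < iterations; i++) c.b = i; });
        long start = System.nanoTime();
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        System.out.println("elapsed ms = " + raceAdjacent(50_000_000L) / 1_000_000);
    }
}
```

On a typical multicore machine this runs noticeably slower than the same loop with the two fields padded onto separate cache lines, although the exact ratio depends on the hardware and the JIT.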
2. Cache Line
Since shared variables are stored in the CPU cache in cache-line units, one cache line can hold multiple variables (as many as fit into the line); and since the CPU also modifies the cache with the cache line as its smallest unit, the false-sharing problem described above arises.
A cache line can simply be understood as the smallest caching unit in the CPU cache. Today's CPUs no longer access memory byte by byte; instead they fetch 64-byte chunks (chunk), each called a cache line (cache line). When you read a particular memory address, the entire cache line is brought from main memory into the cache, and subsequently accessing the other values in that same cache line is cheap.
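The "neighbours come along for free" effect can be sketched as follows. Assuming 64-byte lines (8 longs per line), touching every element of a large array costs roughly one memory fetch per 8 touches, while touching every 8th element costs a fetch per touch. The array size and method names are illustrative assumptions.

```java
public class StrideDemo {
    // Performs `touches` reads of `data`, stepping by `stride` elements and
    // wrapping around; returns the sum so the JIT cannot eliminate the loop.
    static long touch(long[] data, int stride, int touches) {
        long sum = 0;
        int idx = 0;
        for (int i = 0; i < touches; i++) {
            sum += data[idx];
            idx += stride;
            if (idx >= data.length) idx -= data.length;
        }
        return sum;
    }

    public static void main(String[] args) {
        long[] data = new long[1 << 23]; // 64 MiB, larger than most L3 caches
        int touches = 1 << 23;
        // Warm up so the JIT compiles touch() before timing.
        for (int w = 0; w < 2; w++) { touch(data, 1, touches); touch(data, 8, touches); }
        long t0 = System.nanoTime();
        touch(data, 1, touches);          // stride 1: 8 reads per cache line
        long seq = System.nanoTime() - t0;
        t0 = System.nanoTime();
        touch(data, 8, touches);          // stride 8: a new cache line every read
        long strided = System.nanoTime() - t0;
        System.out.println("stride 1: " + seq / 1_000_000 + " ms, stride 8: " + strided / 1_000_000 + " ms");
    }
}
```

Both loops perform the same number of reads; the strided loop is typically several times slower purely because each read lands on a fresh cache line.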
3. The CPU's three-level cache
Because the CPU is much faster than memory, CPU designers added caches to the CPU so that it is not held back by memory speed, just as we cache shared data in our code so as not to be held back by database access speed. The CPU cache is divided into three levels: L1, L2, and L3. The closer to the CPU, the faster and smaller the cache. The L1 cache is small but fast, and sits right next to the CPU core that uses it. L2 is larger and slower, and is still private to a single core. L3, common in modern multicore machines, is larger and slower again, and is shared by all the cores in a single socket. Finally, main memory is shared by all cores in all sockets.
When the CPU performs an operation, it first looks for the data it needs in L1, then L2, then L3; if the data is in none of these caches, it is fetched from main memory. The farther out it has to go, the longer the operation takes. So for data you work on very frequently, you want to make sure it stays in the L1 cache.
4. Cache associativity
The commonly used cache design today is the N-way set-associative cache (N-Way Set Associative Cache). The idea is to group every N cache lines into a set (Set), dividing the cache into sets of equal size; each memory block can then be mapped to any cache line within its corresponding set. For example, in a 16-way cache, 16 cache lines form a set, and each memory block can map to any of the 16 cache lines in its set. In general, memory blocks whose addresses share the same low-order bits map to the same set.
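The mapping from address to set can be made concrete with a little arithmetic. The sketch below assumes a hypothetical 2 MiB, 16-way cache with 64-byte lines (the sizes are illustrative, not from the article): the set index is the line number modulo the number of sets, so addresses a fixed distance apart (cache size divided by the number of ways, here 128 KiB) collide in the same set.

```java
public class SetIndex {
    static final int LINE_SIZE = 64;                 // bytes per cache line
    static final int WAYS = 16;                      // 16-way set-associative
    static final int CACHE_BYTES = 2 * 1024 * 1024;  // hypothetical 2 MiB cache
    static final int SETS = CACHE_BYTES / (LINE_SIZE * WAYS); // = 2048 sets

    // Which set a physical address maps to: line number modulo set count.
    static int setOf(long address) {
        return (int) ((address / LINE_SIZE) % SETS);
    }

    public static void main(String[] args) {
        // Addresses 128 KiB apart (CACHE_BYTES / WAYS) land in the same set:
        System.out.println(setOf(0) + " " + setOf(128 * 1024) + " " + setOf(256 * 1024));
        // prints "0 0 0"
    }
}
```

This is why power-of-two strides through large arrays can evict each other aggressively: they keep hitting the same few sets even though most of the cache is idle.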
The following figure shows a 2-way cache. As it illustrates, Index 0, 2, and 4 in main memory all map to different cache lines of Way 0, while Index 1, 3, and 5 all map to different cache lines of Way 1.
5. MESI protocol
Each core of a multicore CPU has its own private caches (typically L1 and L2), plus a cache shared by all cores in the same socket (typically L3). It is inevitable that the same data gets loaded into the caches of different cores, and the MESI protocol is what keeps those copies consistent.
In the MESI protocol, each cache line has four states, representable with two bits:
M (Modified): the line is valid and has been modified; it is inconsistent with main memory, and the data exists only in this cache;
E (Exclusive): the line is valid and consistent with main memory, and the data exists only in this cache;
S (Shared): the line is valid and consistent with main memory, and copies of the data exist in multiple caches;
I (Invalid): the line is invalid.
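The transition that matters for false sharing can be modeled as a toy: when one core writes a line that several cores hold in state S, the writer's copy becomes M and every other copy is invalidated to I. This is only a sketch of that one transition, not a full MESI state machine.

```java
import java.util.Arrays;

public class MesiToy {
    enum State { M, E, S, I }

    // Models a write by core `writer` to a line cached by several cores:
    // the writer's copy becomes Modified, every other copy becomes Invalid.
    static State[] write(State[] copies, int writer) {
        State[] next = new State[copies.length];
        Arrays.fill(next, State.I);  // all other cores' copies are invalidated
        next[writer] = State.M;      // writer holds the only valid, dirty copy
        return next;
    }

    public static void main(String[] args) {
        State[] copies = { State.S, State.S, State.S }; // line shared by cores a, b, c
        System.out.println(Arrays.toString(write(copies, 0))); // prints "[M, I, I]"
    }
}
```

Under false sharing this invalidation fires on every write, even though each thread only cares about its own variable within the line.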
Now suppose a variable i = 3 (more precisely, the cache block containing i, one cache line in size) has been loaded into the caches of cores a, b, and c, so each core's copy of the line is in state S. If core a then changes the value of i, the cache line in core a moves to state M, while the corresponding cache lines in cores b and c become I. As shown below:
6. Solution principle
To avoid the repeated invalidation and reloading of cache lines through L1, L2, L3, and main memory caused by false sharing, we can use data padding: let each hot variable occupy an entire cache line by itself. This is essentially a trade of space for time.
7. Java's traditional solution for false sharing
public final class FalseSharing implements Runnable {
    public final static int NUM_THREADS = 4; // change to match your core count
    public final static long ITERATIONS = 500L * 1000L * 1000L;
    private final int arrayIndex;
    private static VolatileLong[] longs = new VolatileLong[NUM_THREADS];

    static {
        for (int i = 0; i < longs.length; i++) {
            longs[i] = new VolatileLong();
        }
    }

    public FalseSharing(final int arrayIndex) {
        this.arrayIndex = arrayIndex;
    }

    public static void main(final String[] args) throws Exception {
        final long start = System.nanoTime();
        runTest();
        System.out.println("duration = " + (System.nanoTime() - start));
    }

    private static void runTest() throws InterruptedException {
        Thread[] threads = new Thread[NUM_THREADS];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(new FalseSharing(i));
        }
        for (Thread t : threads) {
            t.start();
        }
        for (Thread t : threads) {
            t.join();
        }
    }

    public void run() {
        long i = ITERATIONS + 1;
        while (0 != --i) {
            longs[arrayIndex].value = i;
        }
    }

    // Summing the padding fields prevents the JIT from optimising them away.
    public static long sumPaddingToPreventOptimisation(final int index) {
        VolatileLong v = longs[index];
        return v.p1 + v.p2 + v.p3 + v.p4 + v.p5 + v.p6;
    }

    // On JDK 7 and later, use this version (one JDK 7 release optimised away
    // "unused" fields, so the padding is kept live via the method above):
    public final static class VolatileLong {
        public volatile long value = 0L;
        public long p1, p2, p3, p4, p5, p6; // cache line padding
    }

    // Before JDK 7, pad on both sides of the hot field instead:
    // public final static class VolatileLong {
    //     public long p1, p2, p3, p4, p5, p6, p7; // cache line padding
    //     public volatile long value = 0L;
    //     public long p8, p9, p10, p11, p12, p13, p14; // cache line padding
    // }
}
8. Solutions in Java 8
Java 8 provides an official solution in the form of a new annotation: @sun.misc.Contended. A class carrying this annotation is automatically padded out to its own cache line. Note that the annotation is ignored by default; the JVM must be started with -XX:-RestrictContended for it to take effect.
@sun.misc.Contended
public final static class VolatileLong {
    public volatile long value = 0L;
    // public long p1, p2, p3, p4, p5, p6; // manual padding no longer needed
}
This concludes the study of "what is the meaning of false sharing in Java". I hope it has resolved your doubts; pairing theory with practice is the best way to learn, so go and try it out!