
What is the implementation principle of CAS?


This article explains the implementation principle of CAS. The ideas involved are simple and practical, so let's work through them together.

Pessimistic lock and optimistic lock

Pessimistic lock

A pessimistic lock always assumes the worst case: every time thread A reads or updates the data, it assumes other threads are modifying it at the same time. To avoid losing its own update, thread A first acquires the lock on the data and only updates it once the lock is held. In the meantime, no other thread can update the data until thread A releases the lock.

It is called a pessimistic lock because it is a concurrency-control approach that is pessimistic about data modification: we assume the probability of the data being modified concurrently is relatively high, so we lock it before modifying it.

Pessimistic concurrency control is essentially a conservative "lock first, then access" strategy.

synchronized is an implementation of pessimistic locking.
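As a minimal sketch of the pessimistic route (the PessimisticCounter class and its methods are illustrative, not from the original article), every thread must acquire the object's monitor before touching the shared value:

public class PessimisticCounter {

    private int count = 0;

    // every caller must acquire this object's monitor before reading or
    // writing count, so only one thread can update it at a time
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}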

Optimistic lock

Optimistic locking assumes that conflicts on the data are rare, so it does not lock when reading the data; only when writing the update does it check whether another thread has modified the data in the meantime.

The CAS mechanism is an implementation of optimistic locking.

Implementation of optimistic locking: CAS

A CAS operation involves three values: the expected value, the current value in memory, and the new value. If the expected value equals the memory value, the memory value is replaced with the new value; if they are not equal, the operation fails and the caller can re-read and retry until it succeeds.

CAS is a non-blocking algorithm: a thread whose update fails does not need to be suspended, which saves a great deal of thread context-switching overhead.

Java supports CAS operations through the Unsafe class. Readers who are unfamiliar with the Unsafe class can first refer to my other article, "The JUC cornerstone: the Unsafe class".
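As a rough sketch of what that support looks like (assuming JDK 8, where sun.misc.Unsafe can still be obtained reflectively; the UnsafeCasDemo class and its value field are illustrative):

import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnsafeCasDemo {

    private static final Unsafe UNSAFE;
    private static final long VALUE_OFFSET;

    private volatile int value = 0;

    static {
        try {
            // application code cannot call Unsafe.getUnsafe() directly,
            // so read the theUnsafe field reflectively (works on JDK 8)
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            UNSAFE = (Unsafe) f.get(null);
            VALUE_OFFSET = UNSAFE.objectFieldOffset(
                    UnsafeCasDemo.class.getDeclaredField("value"));
        } catch (Exception e) {
            throw new Error(e);
        }
    }

    public boolean casValue(int expect, int update) {
        // atomically set value to update only if it currently equals expect
        return UNSAFE.compareAndSwapInt(this, VALUE_OFFSET, expect, update);
    }
}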

Let's briefly simulate the CAS process in Java:

/**
 * @param expect the expected value
 * @param update the new value
 * @return the updated value
 */
public int cas(int expect, int update) {
    // if the update fails, keep retrying
    while (true) {
        // the get method fetches the latest value from memory
        int memory = get();
        if (memory == expect) {
            // the set method writes the new value into memory
            set(update);
            return update;
        }
    }
}

Of course, this is only a simulation. A real CAS operation relies on underlying CPU instructions (such as cmpxchg on x86), which guarantee the atomicity of the entire compare-and-swap; those instructions deserve a separate discussion.

Implementation of pessimistic locking: synchronized

synchronized is the typical implementation of pessimistic locking; for its usage, you can refer to my article on synchronized. Early versions of synchronized were very heavyweight, but fortunately JDK 1.6 introduced many optimizations and lock performance improved a great deal; for details, see my article on synchronized optimization.

A defect of CAS: the ABA problem

Suppose the value of x in memory is initially A. Thread 1 reads A and then goes off to do something else. Meanwhile, thread 2 changes the value to B, and thread 3 changes it back to A. When thread 1 finally performs its CAS, it finds the memory value is still A, so the update succeeds. But this A is no longer the original A that thread 1 read; it is a later version of A.

To fix this defect, CAS can be combined with a version number or timestamp: every time the value is updated, the version number is incremented (or the timestamp refreshed). Then, even if the memory value equals the expected value, the version number may no longer be the one the thread expects, and the CAS fails.

With version numbers, A → B → A becomes A (version=1) → B (version=2) → A (version=3). Using CAS with a version number in this way avoids the ABA problem.

Applicable scenarios for CAS and synchronized

When thread contention is light, CAS spins, and synchronized is upgraded only to a lightweight lock, which also spins, so the two perform similarly.

When thread contention is heavy, most CAS spins fail and waste a large number of CPU time slices, while synchronized escalates to a heavyweight lock. In this case synchronized is much more efficient than CAS: it recognizes that spinning on a lightweight lock is wasteful under heavy contention and actively inflates to a heavyweight lock, because the cost of busy-looping here is far greater than the cost of a thread switch.

CAS operations in Java

AtomicInteger implements CAS to update an int atomically; under the hood it also calls the Unsafe class. If you want to update several variables atomically at once, you can wrap them in a single object and use AtomicReference, but that still suffers from the ABA problem described above. In that case you can use the CAS implementation with a version-number mechanism, AtomicStampedReference, which uses a stamp field to represent the version number.
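As a minimal usage sketch (the values, variable names, and the StampedCasDemo class are illustrative), compareAndSet succeeds only when both the expected reference and the expected stamp match:

import java.util.concurrent.atomic.AtomicStampedReference;

public class StampedCasDemo {

    public static void main(String[] args) {
        // initial value "A" with initial stamp (version) 1
        AtomicStampedReference<String> ref =
                new AtomicStampedReference<>("A", 1);

        int stamp = ref.getStamp();          // read the current version
        String value = ref.getReference();   // read the current value

        // succeeds only if both the reference and the stamp still match;
        // the stamp is bumped so a later A -> B -> A cannot go unnoticed
        boolean swapped = ref.compareAndSet(value, "B", stamp, stamp + 1);
        System.out.println("swapped: " + swapped
                + ", value: " + ref.getReference()
                + ", stamp: " + ref.getStamp());
    }
}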

CAS operations in the database

Optimistic locking in a database does not require table locks or row locks. Taking an inventory update as an example, optimistic locking can be implemented like this:

UPDATE goods SET quantity = 99 WHERE id = 1 AND quantity = 100;

This scenario is relatively simple, so we ignore the ABA problem for now.

In fact, the SQL above still has a problem: under high concurrency only one thread's update can succeed, so a large number of requests will fail. The granularity of the optimistic lock therefore needs to be reduced.

A good trick reduces the strength of the optimistic lock, maximizes throughput, and improves concurrency:

UPDATE goods SET quantity = quantity - 1 WHERE id = 1 AND quantity - 1 > 0;

Replacing the condition quantity = 100 with quantity - 1 > 0 greatly weakens the optimistic lock: concurrent decrements no longer conflict with one another as long as stock remains, which greatly improves efficiency.
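Translated into application code, a rough JDBC sketch of this pattern might look as follows (the goods table comes from the example above; the InventoryDao class and connection handling are assumptions). The statement both applies the decrement and checks the condition, and the caller inspects the affected-row count to see whether the optimistic update succeeded:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class InventoryDao {

    // attempts the optimistic decrement; returns true if this call won
    public boolean decrementStock(Connection conn, long goodsId) throws SQLException {
        String sql = "UPDATE goods SET quantity = quantity - 1 "
                   + "WHERE id = ? AND quantity - 1 > 0";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, goodsId);
            // 0 affected rows means the stock condition failed (or the row is gone)
            return ps.executeUpdate() > 0;
        }
    }
}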

CAS operations in the JVM

When Java executes new Object(), the created object is allocated on the JVM heap. How exactly is the object stored in the heap?

First, by the time new Object() executes, the amount of space the object needs is already determined, because each data type in Java occupies a fixed amount of space (for how an object's size is determined, see my article on object memory layout). The next job is to find a block of space in the heap to store the object.

In a single-threaded scenario, there are generally two allocation strategies:

Pointer collision (bump-the-pointer): this applies when heap memory is perfectly regular (whether it is regular depends on the garbage-collection strategy). Used memory sits on one side, free memory on the other, with a boundary pointer between them. Allocating space simply means moving the boundary pointer toward the free side by a distance equal to the object's size.

Free list: this applies when memory is not regular. The JVM maintains a list recording which memory regions are free and how large they are; to allocate space for an object, it searches the free list for a suitable region and assigns it.

However, object creation is very frequent, and to stay efficient the JVM must be able to allocate memory for objects concurrently. Memory allocation is not an atomic operation: searching the free list, allocating the memory, and updating the list is not safe under concurrency. There are two strategies for making concurrent allocation safe:

CAS: the virtual machine uses CAS with retry-on-failure to guarantee the atomicity of the allocation update; the principle is the same as described above.

TLAB: since CAS still has some performance cost, the JVM offers a further optimization: each thread pre-allocates a small piece of memory in the Java heap, called a Thread Local Allocation Buffer (TLAB). When a thread needs to allocate memory, it allocates directly from its own TLAB, avoiding contention with other threads. A CAS allocation is needed only when the buffer is used up and a new TLAB must be allocated. Whether the virtual machine uses TLAB can be configured with the -XX:+/-UseTLAB parameter (TLAB is enabled by default since JDK 5).

At this point, I believe you have a deeper understanding of the implementation principle of CAS. You might as well try it out in practice.
