
ReentrantLock Source Code Analysis


Many readers are unsure how ReentrantLock actually works internally, so this article walks through its source code step by step. I hope that after reading it you will have a clear picture of how the lock is acquired and released.

What is a reentrant lock (ReentrantLock)?

So what exactly does "reentrant" mean?

A reentrant lock (also called a recursive lock) means that once a thread has acquired the lock in an outer method, any inner method called by that same thread can acquire the same lock again without blocking: the thread takes the lock when it enters the outer method and automatically holds it when it enters the inner one. In short, a thread can enter any block of code synchronized by a lock it already owns. If that definition still sounds abstract, the following everyday analogy may help.

Here is an everyday analogy:

In daily life you usually only need a key for your own front door (and perhaps one for your own room if you share a flat). Once you have unlocked the front door and stepped inside, you do not need another key to walk into the kitchen or the bathroom, because those rooms are already inside the area guarded by the front-door lock. From the point of view of a concurrent lock, once a thread has acquired the lock (opened the front door with the key), its inner, nested calls can still execute the code protected by that same lock (move freely between the kitchen and the bathroom). Seen this way, reentrancy is easy to understand.
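To see reentrancy in code rather than in an analogy, here is a small self-contained example (the class and method names are made up for illustration): the same thread locks the same ReentrantLock twice without blocking, and getHoldCount() reports the nesting depth.

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    static void outer() {
        lock.lock();                 // first acquisition by this thread: hold count becomes 1
        try {
            System.out.println("outer, hold count = " + lock.getHoldCount()); // 1
            inner();                 // the same thread re-enters without blocking
        } finally {
            lock.unlock();
        }
    }

    static void inner() {
        lock.lock();                 // reentrant acquisition: hold count becomes 2
        try {
            System.out.println("inner, hold count = " + lock.getHoldCount()); // 2
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        outer();
    }
}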

The underlying implementation is built on AQS (AbstractQueuedSynchronizer): CAS operations on a volatile state field, plus a CLH-style wait queue.

CAS (Compare And Swap): the idea is simple. It takes three operands: the current memory value V, the expected value A, and the new value B. Only if the expected value A equals the memory value V is V changed to B; otherwise nothing happens. The whole operation is atomic and is used widely in the low-level implementation of the JDK. In Java, CAS is ultimately carried out by CPU instructions invoked through JNI. For a deeper look at the underlying principle of CAS, see: https://www.jianshu.com/p/fb6e91b013cc.
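As a minimal illustration of these CAS semantics (this example simply uses java.util.concurrent.atomic.AtomicInteger and is not part of the ReentrantLock source):

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(0);   // current memory value V = 0
        boolean first = v.compareAndSet(0, 1);    // expected A = 0, new value B = 1 -> V becomes 1
        boolean second = v.compareAndSet(0, 2);   // expected A = 0, but V is now 1 -> nothing happens
        System.out.println(first + ", " + second + ", " + v.get()); // prints: true, false, 1
    }
}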

CLH queue: also called the synchronization queue. It is a non-circular doubly linked list with a dummy head node, and it is the core data structure behind AQS.

The most common usage pattern:

int a = 12;
// Note: in practice the lock is usually a class-level field, such as the per-segment lock in Segment
// or the global lock in CopyOnWriteArrayList:
final ReentrantLock lock = new ReentrantLock();
lock.lock();                // acquire the lock
try {
    // business logic (e.g. operate on a)
} catch (Exception e) {
} finally {
    lock.unlock();          // release the lock
}
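For a slightly more complete, runnable sketch of the same pattern, here is a counter class guarded by a class-level ReentrantLock (the class and field names are invented for this example):

import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final ReentrantLock lock = new ReentrantLock(); // class-level lock, as the note above suggests
    private int count = 0;

    public void increment() {
        lock.lock();          // acquire the lock
        try {
            count++;          // business logic guarded by the lock
        } finally {
            lock.unlock();    // always release the lock in finally
        }
    }

    public int get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}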

1. For ReentrantLock, you need to master the following points

Creation of ReentrantLock (fair lock / unfair lock)

Locking: lock()

Unlocking: unlock()

First, let's talk about the class structure:

ReentrantLock --> Lock

NonfairSync / FairSync --> Sync --> AbstractQueuedSynchronizer --> AbstractOwnableSynchronizer

Sync, NonfairSync and FairSync are the three inner classes of ReentrantLock (NonfairSync and FairSync both extend Sync).

Node is an inner class of AbstractQueuedSynchronizer.

Note: each "-->" above points from a subclass to its parent class (or from a class to the interface it implements).

2. The creation of ReentrantLock

A fair lock is supported (the thread that arrives first acquires the lock first).

An unfair lock is supported (a thread that arrives later may acquire the lock first).

Creating fair and unfair locks:

Unfair lock: new ReentrantLock() or new ReentrantLock(false)

final ReentrantLock lock = new ReentrantLock();

Fair lock: new ReentrantLock(true)

final ReentrantLock lock = new ReentrantLock(true);

Unfair locks are used by default.
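A quick way to confirm which mode a lock was created in is ReentrantLock.isFair(); a tiny illustrative check:

import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();     // same as new ReentrantLock(false)
        ReentrantLock fair = new ReentrantLock(true);   // fair lock
        System.out.println(unfair.isFair());            // false
        System.out.println(fair.isFair());              // true
    }
}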

The source code is as follows:

ReentrantLock:

/** Synchronizer: a reference to the inner class Sync */
private final Sync sync;

/** Creates an unfair lock */
public ReentrantLock() {
    sync = new NonfairSync();
}

/**
 * Creates a lock
 * @param fair true --> fair lock, false --> unfair lock
 */
public ReentrantLock(boolean fair) {
    sync = (fair) ? new FairSync() : new NonfairSync();
}

The source code above references three inner classes: Sync, NonfairSync and FairSync. Only the class declarations are listed here; the concrete methods of these classes are introduced where they are first used below.

The Sync/NonfairSync/FairSync class definition:

/**
 * The base class of the lock's synchronization control. It has two subclasses, one for the unfair
 * mechanism and one for the fair mechanism, and it extends AbstractQueuedSynchronizer.
 */
static abstract class Sync extends AbstractQueuedSynchronizer

/** Synchronizer for the unfair lock */
final static class NonfairSync extends Sync

/** Synchronizer for the fair lock */
final static class FairSync extends Sync

3. lock() on an unfair lock

Specific usage:

lock.lock();

Let's first go through a simplified version of the overall procedure, then look at the detailed source code; the detailed version of the steps is given in the comment of the lock() method in the source below.

Simplified steps (the core of the unfair lock):

Try to set state (the lock count) from 0 to 1 via CAS.

A. If the CAS succeeds, set the current thread as the exclusive-owner thread.

B. If the CAS fails, read the lock count again.

B1. If the lock count is 0, try again to CAS state (the lock count) from 0 to 1. If that succeeds, set the current thread as the exclusive-owner thread.

B2. If the lock count is not 0, or the retry fails, check whether the current thread is already the exclusive-owner thread. If it is, increment the lock count by 1; if not, wrap the thread in a Node, append it to the wait queue, and wait to be woken up by the node in front of it.
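The steps above can be made concrete with a toy lock that is deliberately not the JDK implementation: it uses an AtomicInteger as the state (lock count), a volatile owner field for the exclusive-owner thread, and a plain queue plus LockSupport park/unpark instead of the AQS CLH queue. It compresses steps B/B1 into the queueing path and is only a sketch of the idea under those simplifying assumptions:

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.LockSupport;

public class ToyReentrantLock {
    private final AtomicInteger state = new AtomicInteger(0);            // lock count
    private volatile Thread owner;                                       // exclusive-owner thread
    private final Queue<Thread> waiters = new ConcurrentLinkedQueue<>(); // simplified wait queue

    public void lock() {
        Thread current = Thread.currentThread();
        if (state.compareAndSet(0, 1)) {        // step A: try to CAS state from 0 to 1
            owner = current;
            return;
        }
        if (owner == current) {                 // step B2 (reentry): already the owner, just bump the count
            state.incrementAndGet();
            return;
        }
        waiters.add(current);                   // otherwise queue up ...
        while (!(waiters.peek() == current && state.compareAndSet(0, 1))) {
            LockSupport.park(this);             // ... and park until the releasing thread wakes us up
        }
        waiters.remove();                       // we are the head of the queue and now hold the lock
        owner = current;
    }

    public void unlock() {
        if (owner != Thread.currentThread())
            throw new IllegalMonitorStateException();
        int c = state.get() - 1;
        if (c == 0) {
            owner = null;                       // clear the owner before publishing state == 0
            state.set(0);
            LockSupport.unpark(waiters.peek()); // wake the head of the queue (no-op if the queue is empty)
        } else {
            state.set(c);                       // still held reentrantly
        }
    }
}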

Source code (before diving in, keep the "simplified steps" above in mind as a rough picture of how the lock is acquired):

3.1. ReentrantLock:lock()

/**
 * Acquire the lock. Three situations:
 * 1. If the lock is not held by any thread (including the current thread), acquire it immediately,
 *    the lock count becomes 1, and the corresponding business logic is executed.
 * 2. If the current thread already holds the lock, the lock count is incremented by 1 and the
 *    corresponding business logic is executed.
 * 3. If the lock is held by another thread, the current thread sleeps until it acquires the lock and
 *    is woken up (lock count == 1), and then the corresponding business logic is executed.
 */
public void lock() {
    sync.lock();    // calls the lock() method of NonfairSync (unfair lock) or FairSync (fair lock)
}

3.2. NonfairSync:lock()

/*
 * 1) First try to CAS state (the lock count) from 0 to 1. If that succeeds, set the current thread as
 *    the exclusive-owner thread. --> request succeeded --> first queue-jump.
 * 2) If it fails (the lock count may already be 1, i.e. another thread grabbed the lock during the
 *    attempt), the current thread executes acquire(1).
 *    2.1) acquire(1) first calls tryAcquire(1), which reads the lock count.
 *         2.1.1) If it is 0 (the exclusive lock has been released and no thread is using it), CAS
 *                state from 0 to 1 again. If that succeeds, the current thread owns the lock.
 *                --> request succeeded --> second queue-jump. If the CAS fails, return false.
 *         2.1.2) If it is not 0, check whether the current thread is the current exclusive-owner
 *                thread. If so, increment the lock count state by 1 (this is where the name
 *                "reentrant lock" comes from). --> request succeeded.
 *         2.1.3) If none of the attempts in tryAcquire(1) succeed, it returns false and
 *                acquireQueued(addWaiter(Node.EXCLUSIVE), arg) runs.
 *                (The rest of the process in one sentence: after the request fails, the current thread
 *                is linked to the tail of the queue and parked, and then waits to be woken up.)
 *    2.2) addWaiter(Node.EXCLUSIVE) first wraps the current thread into a Node and then enqueues it
 *         (it tries a fast enqueue first; if the fast enqueue fails, it falls back to the normal
 *         enqueue, which loops until the node is in the queue).
 *         2.2.1) Fast enqueue: if the sync wait queue already has a tail node, CAS node in as the new
 *                tail and link the old tail in front of node.
 *         2.2.2) Normal enqueue: if there is no tail node, or the CAS above failed, run the normal
 *                enqueue (an infinite loop that runs until the node is enqueued). --> first blocking point.
 *                2.2.2.1) If the tail is null (the sync wait queue must be initialized), create a dummy
 *                         node and CAS it in as the head. If that succeeds, point the tail at the dummy
 *                         node as well (head and tail both point to the dummy node).
 *                2.2.2.2) If the tail is not null, do exactly what the fast enqueue does: CAS node in
 *                         as the new tail and link the old tail in front of node.
 *                Finally, if the node was enqueued, return; otherwise keep looping until it is enqueued.
 *    2.3) Once node is in the queue, acquireQueued(final Node node, int arg) runs (another infinite
 *         loop; note that an infinite loop here amounts to blocking, and several threads can loop at
 *         the same time: each thread runs its own loop, so the nodes queued behind keep moving forward).
 *         2.3.1) Get node's predecessor p. If p is the head node, call tryAcquire(1) again to retry the
 *                request. --> third queue-jump. (If the request succeeds on the first iteration, there
 *                is no need to interrupt the thread; if the thread was parked in a later iteration and
 *                then the request succeeds, selfInterrupt() re-interrupts it. Note that
 *                p == head && tryAcquire(1) succeeding is the only way out of the loop; until then the
 *                thread stays blocked here while other threads gradually remove the nodes in front of
 *                p, until p becomes the head and node's request succeeds, i.e. the node is woken up and
 *                only then exits the loop.)
 *                2.3.1.2) If p is not the head, or the tryAcquire(1) request fails, run
 *                         shouldParkAfterFailedAcquire(Node pred, Node node) to check whether the
 *                         current node can safely be parked.
 *                         2.3.1.2.1) If the wait status of node's predecessor pred is SIGNAL (i.e. it
 *                                    will wake up the thread of its next node), node's thread can
 *                                    safely be parked; go to 2.3.1.3).
 *                         2.3.1.2.2) If the wait status of pred is CANCELLED, pred's thread has been
 *                                    cancelled; remove the consecutive cancelled predecessors before
 *                                    node from the queue and return false (cannot park yet).
 *                         2.3.1.2.3) If the wait status of pred is anything else, CAS it to SIGNAL and
 *                                    return false (the CAS may fail, so false is returned either way;
 *                                    after the next call to this method pred's wait status will be
 *                                    SIGNAL). Then continue the loop above.
 *                2.3.1.3) If it is safe to park, parkAndCheckInterrupt() parks the current thread, and
 *                         then the loop in 2.3) continues.
 *                Finally, once all the nodes before node's predecessor p have finished, p becomes the
 *                head node, the tryAcquire(1) request succeeds and the loop exits.
 *                (Notice that during the whole time before p becomes the head, the thread cannot be
 *                interrupted out of this process.)
 *         2.3.2) Of course, if an exception occurs in 2.3.1), cancelAcquire(Node node) is executed to
 *                cancel node's attempt to acquire the lock.
 */
final void lock() {
    if (compareAndSetState(0, 1))                        // if the CAS attempt succeeds
        setExclusiveOwnerThread(Thread.currentThread()); // set the current thread as the exclusive-owner thread
    else
        acquire(1);
}

Note: the comment on this method lays out the detailed process of a thread acquiring the lock; read it carefully alongside the code.

The methods and related fields called from NonfairSync:lock() are listed below.

3.2.1. AbstractQueuedSynchronizer: the state field (lock count) and related methods

/** The lock count */
private volatile int state;

/** Returns the lock count */
protected final int getState() {
    return state;
}

protected final void setState(int newState) {
    state = newState;
}

Note: state is declared volatile.

3.2.2. AbstractOwnableSynchronizer: the exclusiveOwnerThread field and setExclusiveOwnerThread(Thread t)

/** The thread that currently owns the exclusive lock */
private transient Thread exclusiveOwnerThread;

/** Sets the exclusive-owner thread to thread t */
protected final void setExclusiveOwnerThread(Thread t) {
    exclusiveOwnerThread = t;
}

3.2.3. AbstractQueuedSynchronizer: acquire(int arg)

/**
 * Acquires the lock.
 * @param arg
 */
public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();    // re-interrupt the current thread
}

Before walking through the method above, let's look at the overall structure of AbstractQueuedSynchronizer's inner class Node. The source code is as follows:

/**
 * A node in the synchronization wait queue (a doubly linked list).
 */
static final class Node {
    /** waitStatus value: the thread has been cancelled */
    static final int CANCELLED = 1;
    /**
     * waitStatus value: if the wait status of a node's predecessor is SIGNAL, the node will be woken
     * up later, so the node can safely be parked; otherwise it cannot be parked yet.
     */
    static final int SIGNAL = -1;
    /** waitStatus value: the thread is waiting on a condition */
    static final int CONDITION = -2;
    /** waitStatus value to indicate the next acquireShared should unconditionally propagate */
    static final int PROPAGATE = -3;

    /** Marker to indicate a node is waiting in shared mode */
    static final Node SHARED = new Node();
    /** Marker to indicate a node is waiting in exclusive-lock mode */
    static final Node EXCLUSIVE = null;

    /** One of the four values above (CANCELLED / SIGNAL / CONDITION / PROPAGATE), or 0 */
    volatile int waitStatus;

    /** The predecessor node */
    volatile Node prev;

    /** The successor node */
    volatile Node next;

    /** The thread that enqueued this node */
    volatile Thread thread;

    /**
     * Link to the next node waiting on a condition, or the special value SHARED.
     * Because condition queues are accessed only when holding in exclusive mode, we just need a
     * simple linked queue to hold nodes while they are waiting on conditions. They are then
     * transferred to the queue to re-acquire. And because conditions can only be exclusive, we
     * save a field by using a special value to indicate shared mode.
     */
    Node nextWaiter;

    /** Returns true if the node is waiting in shared mode */
    final boolean isShared() {
        return nextWaiter == SHARED;
    }

    /** Returns the predecessor of this node */
    final Node predecessor() throws NullPointerException {
        Node p = prev;
        if (p == null)
            throw new NullPointerException();
        else
            return p;
    }

    Node() {    // Used to establish initial head or SHARED marker
    }

    Node(Thread thread, Node mode) {     // Used by addWaiter
        this.nextWaiter = mode;
        this.thread = thread;
    }

    Node(Thread thread, int waitStatus) { // Used by Condition
        this.waitStatus = waitStatus;
        this.thread = thread;
    }
}

Note: the full Node class is shown here. Some of its fields and methods are only used in shared-lock mode; since ReentrantLock is an exclusive lock, you only need to pay attention to the parts related to exclusive mode (see the comments).

3.3. The methods used in the AbstractQueuedSynchronizer:acquire(int arg) method

3.3.1. NonfairSync:tryAcquire(int acquires)

/** Tries to acquire the lock */
protected final boolean tryAcquire(int acquires) {
    return nonfairTryAcquire(acquires);
}

Sync:

/** Called by tryAcquire of the unfair lock */
final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();   // get the current thread
    int c = getState();                              // get the lock count
    if (c == 0) {                                    // a count of 0 means the exclusive lock has been released and no thread is using it
        if (compareAndSetState(0, acquires)) {       // CAS state from 0 to 1 (the acquires passed in here is 1)
            setExclusiveOwnerThread(current);        // set the current thread as the exclusive-owner thread
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) { // check whether the current thread is already the exclusive-owner thread
        int nextc = c + acquires;                    // if so, the new lock count is the current count + 1
        if (nextc < 0)                               // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);                             // set the current lock count
        return true;
    }
    return false;
}

Note: this method covers steps "A / B / B1" of the "simplified steps". If none of these attempts succeed, the code below runs. In one sentence: after the request fails, the current thread is linked to the tail of the queue and parked, and then waits to be woken up. Keep that sentence in mind while reading the code below.

3.3.2. AbstractQueuedSynchronizer:addWaiter(Node mode)

/**
 * Adds the node to the wait queue.
 * 1) Try the fast enqueue; if it succeeds, return node.
 * 2) If the fast enqueue fails, fall back to the normal enqueue.
 * Note: compared with the fast enqueue, the normal enqueue only adds a check of whether the queue is
 * empty plus the initialization for the empty case.
 * @return the node that was just inserted (note: not its predecessor)
 */
private Node addWaiter(Node mode) {
    Node node = new Node(Thread.currentThread(), mode); // create the node
    /*
     * Fast enqueue
     */
    Node pred = tail;                  // the current tail node
    if (pred != null) {                // the tail node is not null
        node.prev = pred;              // make the old tail the predecessor of node, i.e. link node after the tail
        /*
         * CAS node in as the new tail. If the CAS fails, another thread replaced the tail between our
         * read of it and now.
         * Note: suppose the list is node1 --> node2 --> pred (a doubly linked list; drawing it helps).
         * The CAS swings the tail pointer from pred to node, and then "node.prev = pred" above together
         * with "pred.next = node" below stitch pred in front of node, giving the final list
         * node1 --> node2 --> pred --> node.
         * We do not set node2.next = pred or pred.prev = node2 here, because those links were already
         * established when pred itself was enqueued.
         */
        if (compareAndSetTail(pred, node)) {
            pred.next = node;          // link the old tail to node
            return node;
        }
    }
    enq(node);                         // normal enqueue
    return node;
}

AbstractQueuedSynchronizer:enq(final Node node)

/**
 * Normal enqueue; returns the node that was the tail before node joined the queue.
 * @param node
 * @return
 */
private Node enq(final Node node) {
    for (;;) {                        // infinite loop: keep trying until the node is enqueued
        Node t = tail;                // get the tail node
        if (t == null) {              // the tail is null: the wait queue is currently empty
            /*
             * Node h = new Node();   // dummy head
             * h.next = node;
             * node.prev = h;
             * if (compareAndSetHead(h)) { // effectively compareAndSetHead(null, h)
             *     tail = node;
             *     return h;
             * }
             *
             * Note: the code commented out above is from jdk1.6.45; in later versions it was replaced
             * by the code below.
             * Set a new node (a dummy node) as the head via CAS. If the current value in memory is not
             * null, another thread has already initialized the queue in the meantime.
             * Once the dummy node has been set as the head, it is also set as the tail, i.e. head and
             * tail both point to the dummy node. When a new node joins the queue, it is inserted after
             * the dummy node.
             */
            if (compareAndSetHead(new Node()))
                tail = head;
        } else {                      // this branch is exactly the same logic as the fast enqueue
            node.prev = t;
            if (compareAndSetTail(t, node)) { // try to set node as the tail
                t.next = node;        // link the old tail to node
                return t;
            }
        }
    }
}

Note: this is the full normal-enqueue logic. For the detailed reasoning, see the comments above and the corresponding part of the big comment on lock().

3.3.3. AbstractQueuedSynchronizer:acquireQueued(final Node node, int arg)

final boolean acquireQueued(final Node node, int arg) {
    try {
        boolean interrupted = false;
        /*
         * Infinite loop (effectively blocking) until all nodes before node's predecessor p have
         * finished, p becomes the head and node's request succeeds.
         */
        for (;;) {
            final Node p = node.predecessor();   // the node in front of node
            /*
             * Note:
             * 1. This is the only way out of the loop, unless an exception is thrown.
             * 2. If p == head && tryAcquire(arg) succeeds on the first iteration, interrupted is false
             *    and there is no need to re-interrupt the thread. If the thread was parked in a later
             *    iteration and then the request succeeds, interrupted is true and the caller should
             *    re-interrupt the thread.
             */
            if (p == head && tryAcquire(arg)) {
                setHead(node);        // the current node becomes the head node
                p.next = null;
                return interrupted;   // out of the loop
            }
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                interrupted = true;   // the thread was interrupted while parked
        }
    } catch (RuntimeException ex) {
        cancelAcquire(node);
        throw ex;
    }
}

AbstractQueuedSynchronizer:shouldParkAfterFailedAcquire(Node pred, Node node)

/**
 * Checks whether the current node can safely be parked (blocked).
 * @param pred the predecessor of the current node
 * @param node the current node
 */
private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
    int ws = pred.waitStatus;          // the wait status of the predecessor (the node in front of the current thread)
    if (ws == Node.SIGNAL)
        // The predecessor's wait status is SIGNAL, meaning the current node will be woken up later,
        // so the current node can safely be parked.
        return true;
    /*
     * 1) ws > 0 (i.e. CANCELLED == 1): the predecessor's thread has been cancelled. Remove the
     *    consecutive cancelled predecessors from the queue and return false (cannot park yet).
     * 2) Otherwise (ws is neither SIGNAL nor CANCELLED): try to CAS the predecessor's wait status to
     *    SIGNAL and return false.
     */
    if (ws > 0) {
        do {
            /*
             * node.prev = pred = pred.prev
             * is equivalent to the two statements:
             *     pred = pred.prev;
             *     node.prev = pred;
             */
            node.prev = pred = pred.prev;
        } while (pred.waitStatus > 0);
        pred.next = node;
    } else {
        /*
         * Try to set the wait status of the current node's predecessor to SIGNAL.
         * 1) Why use CAS? The node has already been enqueued and its predecessor is pred, and no thread
         *    other than node's should be touching this field, so why CAS instead of a plain assignment?
         *    (Because pred can change its own status to CANCELLED, i.e. pred's status may be operated on
         *    by two threads, pred's and node's, at the same time.)
         * 2) Since the predecessor is being set to SIGNAL, why return false?
         *    (Because the CAS may fail; false is returned regardless, and after the next call to this
         *    method pred's wait status will be SIGNAL.)
         */
        compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
    }
    return false;
}

AbstractQueuedSynchronizer:parkAndCheckInterrupt()

private final boolean parkAndCheckInterrupt() {
    LockSupport.park(this);        // park (suspend) the current thread
    return Thread.interrupted();   // return true if the current thread has been interrupted
}
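To make the park/unpark mechanics used here concrete, here is a minimal standalone LockSupport example (not part of the AQS source; the thread name is arbitrary):

import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            System.out.println("worker: parking");
            LockSupport.park();              // suspend until a permit is available (or the thread is interrupted)
            System.out.println("worker: woken up, interrupted = " + Thread.interrupted());
        }, "worker");
        worker.start();
        Thread.sleep(500);                   // give the worker time to park
        LockSupport.unpark(worker);          // grant the permit and wake the worker
        worker.join();
    }
}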

This is the whole process of a thread acquiring an unfair lock (lock ()).

4. lock() on a fair lock

The specific usage is the same as the unfair lock.

If you have mastered the unfair-lock process, the fair-lock process is very easy to follow; there are only two differences (summarized at the end).

Simplified steps (the core of the fair lock):

Read the lock count.

B1. If the lock count is 0 and the current thread is at the head of the wait queue, try to CAS state (the lock count) from 0 to 1. If that succeeds, set the current thread as the exclusive-owner thread.

B2. If the lock count is not 0, or the current thread is not at the head of the wait queue, or the CAS fails, check whether the current thread is already the exclusive-owner thread. If it is, increment the lock count by 1; if not, wrap the thread in a Node, append it to the wait queue, and wait to be woken up by the node in front of it.

Source code:

4.1. ReentrantLock:lock()

/**
 * Acquire the lock. Three situations:
 * 1. If the lock is not held by any thread (including the current thread), acquire it immediately,
 *    the lock count becomes 1, and the corresponding business logic is executed.
 * 2. If the current thread already holds the lock, the lock count is incremented by 1 and the
 *    corresponding business logic is executed.
 * 3. If the lock is held by another thread, the current thread sleeps until it is woken up and
 *    acquires the lock (lock count == 1), and then the corresponding business logic is executed.
 */
public void lock() {
    sync.lock();    // calls the lock() method of FairSync
}

4.2. FairSync:lock()

final void lock() {
    acquire(1);
}

AbstractQueuedSynchronizer:acquire(int arg) is the same method already shown for the unfair lock.

4.3.1. FairSync:tryAcquire(int acquires)

/**
 * The method to acquire the fair lock.
 * 1) Read the lock count c.
 *    1.1) If c == 0 and the current thread is at the head of the wait queue, CAS state (the lock
 *         count) from 0 to 1. If that succeeds, the current thread owns the lock. --> request succeeded.
 *    1.2) If c != 0, check whether the current thread is the current exclusive-owner thread. If so,
 *         increment the lock count state by 1 (this is where the name "reentrant lock" comes from).
 *         --> request succeeded.
 * Finally, if the request fails, the current thread is linked to the tail of the queue and parked,
 * and then waits to be woken up.
 */
protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (isFirst(current) &&
            compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

Finally, if the request fails, the current thread is linked to the tail of the queue and parked, and then waits to be woken up; that part of the code is identical to the unfair lock.

Summary: comparison between fair lock and unfair lock

FairSync:lock() lacks the initial queue-jump (the direct CAS of state from 0 to 1 that grabs the lock immediately).

FairSync:tryAcquire(int acquires) adds a check of whether the current thread is at the head of the wait queue (in effect it also drops the second queue-jump, although the acquisition itself is still done with a CAS). In later JDK versions this head-of-queue check is implemented with AbstractQueuedSynchronizer.hasQueuedPredecessors() rather than isFirst(current).

One last word.

ReentrantLock is implemented on top of AbstractQueuedSynchronizer. AQS can be used to build either an exclusive lock or a shared lock; ReentrantLock uses only the exclusive mode.
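To illustrate that last point, here is a minimal non-reentrant exclusive lock built directly on AbstractQueuedSynchronizer, adapted from the usage pattern described in the AQS documentation; it is an illustrative sketch, not the ReentrantLock source:

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // CAS state from 0 to 1: the same basic move as the "queue-jump" described above
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int releases) {
            if (getState() == 0)
                throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            setState(0);
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()        { sync.acquire(1); }   // queues and parks exactly as described above
    public void unlock()      { sync.release(1); }
    public boolean isLocked() { return sync.isHeldExclusively(); }
}

A fair variant would additionally check hasQueuedPredecessors() before the CAS in tryAcquire, mirroring the fair/unfair difference summarized above.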

There is a lot of code here and the logic is fairly involved; while reading, it helps to sketch the data structures, for example the queue as nodes are enqueued.

Be sure to remember the "simplified steps"; they are the core of both the unfair lock and the fair lock.

After reading the above, do you have a clear picture of how ReentrantLock works internally? If you want to dig deeper, keep reading the JDK source. Thank you for reading!
