What are the advantages of the concurrency mechanism of C #

2025-01-17 Update From: SLTechnology News&Howtos


This article introduces the advantages of C#'s concurrency mechanism for interested readers; I hope you gain a lot from it.

Can a single line of "useless" code improve efficiency?

Since the file-copy information I need to record is not displayed in the UI, I did not consider concurrency conflicts at first. In the initial version, I handled the FileSystemWatcher callback events directly, as follows:

```csharp
private void DeleteFileHandler(object sender, FileSystemEventArgs e)
{
    if (files.Contains(e.FullPath))
    {
        files.Remove(e.FullPath);
        // some other operations
    }
}
```

As for this version's efficiency: on an ordinary office PC, with 20 files being copied simultaneously, the USB-disk monitoring program's CPU utilization was about 0.7% during the copy.

But by chance, I routed the call through the Event/Delegate Invoke mechanism and found that this seemingly useless operation reduced the program's CPU usage to about 0.2%.

```csharp
private void UdiskWather_Deleted(object sender, FileSystemEventArgs e)
{
    if (this.InvokeRequired)
    {
        this.Invoke(new DeleteDelegate(DeleteFileHandler), new object[] { sender, e });
    }
    else
    {
        DeleteFileHandler(sender, e);
    }
}
```

In my initial understanding, the delegate mechanism in .NET has to box and unbox arguments during the call, so at best it should not slow things down, but the actual measurement showed the opposite.

What does the seemingly useless Invoke actually do?

Let's state the conclusion first: Invoke improves execution efficiency here because the cost of switching threads between cores is far higher than the cost of boxing and unboxing. The core of this program is operating on the shared variable files, and when files change in the monitored USB-disk directory, the callback notifications may arrive on different threads.

What the Invoke mechanism does behind the scenes is ensure that all operations on the shared variable files are performed by a single thread.
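To make the idea concrete, here is a minimal sketch in Python (not the WinForms API): every thread may post work to a queue, but only one worker thread ever touches the shared collection, just as Invoke posts work to the UI thread. The names files and delete_file_handler mirror the article's example and are purely illustrative.

```python
import queue
import threading

files = {"a.txt", "b.txt"}   # shared state, touched only by the worker thread
work_queue = queue.Queue()

def delete_file_handler(path):
    # Runs only on the worker thread, so no locking of `files` is needed.
    if path in files:
        files.discard(path)
        # some other operations

def worker():
    while True:
        item = work_queue.get()
        if item is None:         # sentinel: shut down
            break
        delete_file_handler(item)

t = threading.Thread(target=worker)
t.start()

# Any thread may *post* work; only the worker mutates `files`.
work_queue.put("a.txt")
work_queue.put(None)
t.join()
print(files)   # {'b.txt'}
```

The design choice is the same one the article describes: serialize all mutations through one consumer instead of letting many threads contend for the shared variable.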

The .NET code is now open source, so let's briefly walk through the Invoke call path. Both BeginInvoke and Invoke are actually completed by calling the MarshaledInvoke method, as follows:

```csharp
public IAsyncResult BeginInvoke(Delegate method, params Object[] args)
{
    using (new MultithreadSafeCallScope())
    {
        Control marshaler = FindMarshalingControl();
        return (IAsyncResult)marshaler.MarshaledInvoke(this, method, args, false);
    }
}
```

The main job of MarshaledInvoke is to create a ThreadMethodEntry object, put it in a queue, and then call PostMessage to send the relevant information to the target thread, as follows:

```csharp
private Object MarshaledInvoke(Control caller, Delegate method, Object[] args, bool synchronous)
{
    if (!IsHandleCreated)
    {
        throw new InvalidOperationException(SR.GetString(SR.ErrorNoMarshalingThread));
    }

    ActiveXImpl activeXImpl = (ActiveXImpl)Properties.GetObject(PropActiveXImpl);
    if (activeXImpl != null)
    {
        IntSecurity.UnmanagedCode.Demand();
    }

    // We don't want to wait if we're on the same thread, or else we'll deadlock.
    // It is important that syncSameThread always be false for asynchronous calls.
    //
    bool syncSameThread = false;
    int pid; // ignored
    if (SafeNativeMethods.GetWindowThreadProcessId(new HandleRef(this, Handle), out pid) ==
        SafeNativeMethods.GetCurrentThreadId())
    {
        if (synchronous)
            syncSameThread = true;
    }

    // Store the compressed stack information from the thread that is calling the Invoke()
    // so we can assign the same security context to the thread that will actually execute
    // the delegate being passed.
    //
    ExecutionContext executionContext = null;
    if (!syncSameThread)
    {
        executionContext = ExecutionContext.Capture();
    }
    ThreadMethodEntry tme = new ThreadMethodEntry(caller, this, method, args, synchronous, executionContext);

    lock (this)
    {
        if (threadCallbackList == null)
        {
            threadCallbackList = new Queue();
        }
    }

    lock (threadCallbackList)
    {
        if (threadCallbackMessage == 0)
        {
            threadCallbackMessage = SafeNativeMethods.RegisterWindowMessage(Application.WindowMessagesVersion + "_ThreadCallbackMessage");
        }
        threadCallbackList.Enqueue(tme);
    }

    if (syncSameThread)
    {
        InvokeMarshaledCallbacks();
    }
    else
    {
        UnsafeNativeMethods.PostMessage(new HandleRef(this, Handle), threadCallbackMessage, IntPtr.Zero, IntPtr.Zero);
    }

    if (synchronous)
    {
        if (!tme.IsCompleted)
        {
            WaitForWaitHandle(tme.AsyncWaitHandle);
        }
        if (tme.exception != null)
        {
            throw tme.exception;
        }
        return tme.retVal;
    }
    else
    {
        return (IAsyncResult)tme;
    }
}
```

The Invoke mechanism ensures that a shared variable is maintained by only one thread, which coincides with Go's design of using communication instead of shared memory: let a given piece of memory be operated on by only one thread at a time. This is closely tied to the multicore (SMP) architecture of modern CPUs.

First, some background on MESI, the protocol CPU cores use to keep their caches coherent. Modern CPUs are equipped with caches, and under the MESI multicore cache-coherence protocol each cache line is in one of four states, namely M (modified), E (exclusive), S (shared), and I (invalid), where:

M: the contents of the cache line have been modified, and the line is cached only by this CPU. In this state, the data in the cache line differs from the data in memory.

E: the corresponding memory is cached only by this CPU, and no other CPU caches that line. In this state, the data in the cache line matches the data in memory.

I: the contents of the cache line are invalid.

S: the data exists not only in the local CPU's cache but also in the caches of other CPUs, and it matches the data in memory. However, as soon as any CPU modifies the line, the copies held by the other CPUs change to state I.

The original article shows a state transition diagram of the four states here (not reproduced in this copy).

As mentioned earlier, different threads are very likely to run on different CPU cores. When different cores operate on the same memory, then from CPU0's point of view CPU1 keeps issuing remote writes, so the cache-line state keeps migrating between S and I, and every transition to I costs extra time to resynchronize the line.
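This ping-pong effect can be illustrated with a toy model (my own simplified sketch of MESI, not a full simulator): two cores share one cache line, and each write by one core invalidates the other core's copy. Counting invalidations shows why alternating writers are so much worse than a single writer.

```python
def count_invalidations(writers):
    """Toy two-core MESI model for a single cache line.

    writers: sequence of core ids (0 or 1) performing writes to the line.
    Returns how many times a write had to invalidate the peer's copy.
    """
    state = {0: "I", 1: "I"}   # per-core state of the line, initially invalid
    invalidations = 0
    for core in writers:
        other = 1 - core
        if state[other] in ("M", "E", "S"):
            state[other] = "I"       # remote copy must be invalidated first
            invalidations += 1
        state[core] = "M"            # the writer now holds the line Modified
    return invalidations

# Alternating writers ping-pong the line: all but the first write invalidate.
assert count_invalidations([0, 1] * 50) == 99
# A single writer settles into M and never invalidates anyone.
assert count_invalidations([0] * 100) == 0
```

One hundred writes cost 99 coherence invalidations when two cores alternate, and zero when a single thread owns the data, which is exactly the saving Invoke buys.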

So we can conclude that behind the seemingly insignificant line this.Invoke(new DeleteDelegate(DeleteFileHandler), new object[] { sender, e });, maintenance of the shared variable files quietly changed from multicore, multithreaded access into messages sent to the main thread, with all maintenance performed by that one thread, and that is what improved the final execution efficiency.

A deeper look: why two locks?

Under the current trend of using communication instead of shared memory, the locks are actually the most interesting part of the design.

We can see that the .NET Invoke implementation takes two locks: lock (this) and lock (threadCallbackList).

```csharp
lock (this)
{
    if (threadCallbackList == null)
    {
        threadCallbackList = new Queue();
    }
}

lock (threadCallbackList)
{
    if (threadCallbackMessage == 0)
    {
        threadCallbackMessage = SafeNativeMethods.RegisterWindowMessage(Application.WindowMessagesVersion + "_ThreadCallbackMessage");
    }
    threadCallbackList.Enqueue(tme);
}
```

A basic way to understand the lock keyword in .NET is that it provides mutual exclusion built on a Compare-And-Swap (CAS) primitive. CAS repeatedly compares an "expected value" with the "actual value"; when they are equal, it means the CPU that held the lock has released it, and the CPU trying to acquire the lock attempts to write the "new" value into *p (the swap), marking itself as the new owner of the spinlock. In pseudocode:

```
void CAS(int *p, int old, int new)
{
    if (*p != old)
        do nothing
    else
        *p ← new
}
```
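Here is a minimal Python sketch of a CAS-based spinlock. Python has no user-visible hardware CAS, so a small threading.Lock stands in for the atomicity the hardware instruction would provide; the point is the acquire loop, which keeps trying to swap 0 (free) for 1 (held) until it observes the lock free.

```python
import threading

_atomic = threading.Lock()   # stands in for hardware atomicity in this sketch

def compare_and_swap(cell, expected, new):
    """Atomically: if cell[0] == expected, store new. Return the value seen."""
    with _atomic:
        old = cell[0]
        if old == expected:
            cell[0] = new
        return old

class SpinLock:
    """Spinlock built on CAS: 0 means free, 1 means held."""
    def __init__(self):
        self._state = [0]

    def acquire(self):
        # Keep trying to swap 0 -> 1; success means we saw the lock free.
        while compare_and_swap(self._state, 0, 1) != 0:
            pass                      # busy-wait (the "spin")

    def release(self):
        self._state[0] = 0
```

Note that nothing in acquire orders the waiters: whichever spinning thread happens to run its CAS first after a release wins, which is precisely the unfairness discussed next.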

CAS-based locks perform fine, especially when there is no multicore contention, but their biggest problem is unfairness: if several CPUs request the lock at once, the CPU that just released it is very likely to win the next round and reacquire it, leaving one core busy while the others sit idle. The complaint about some multicore SoCs that "one core works hard while the other cores watch" is often caused by exactly this unfairness.

To solve the unfairness of CAS, the industry introduced the mechanism the author calls TAS (Test And Set); the scheme described below is usually known as a ticket lock, so it helps to read the T as "Ticket". It maintains a pair of indexes for the lock, composed of a "head" and a "tail":

```csharp
struct lockStruct
{
    int32 head;
    int32 tail;
}
```

"head" represents the head of the request queue and "tail" represents the tail; both start at 0.

At first, the first requesting CPU sees that the queue's tail is 0, so it acquires the lock directly and updates tail to 1; when it releases the lock, it updates head to 1.

In general, when the holding CPU releases the lock, it increments the queue's head by 1. A CPU trying to acquire the lock reads the current tail, keeps that value in its own register as its ticket, and writes tail + 1 back to the lock. It then loops, comparing the lock's current head with the ticket in its register; when they are equal, the lock has been acquired.

This is like visiting a government service hall: the first thing you do is take a number from the ticket machine, and when the staff call out the number matching the one in your hand, the counter is yours.
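The steps above can be sketched as a ticket lock in Python. As before, a small Lock stands in for the atomic fetch-and-add a real implementation would use when taking a ticket; the spin on head is the "waiting for your number to be called" part, and release simply calls the next number, so waiters are served strictly first-come-first-served.

```python
import threading

class TicketLock:
    """Ticket spinlock sketch: `tail` hands out tickets, `head` is the
    number now being served."""
    def __init__(self):
        self._head = 0
        self._tail = 0
        self._ticket_guard = threading.Lock()  # stands in for atomic fetch-and-add

    def acquire(self):
        with self._ticket_guard:       # atomically: my_ticket = tail; tail += 1
            my_ticket = self._tail
            self._tail += 1
        while self._head != my_ticket: # spin until our number is called
            pass
        return my_ticket

    def release(self):
        self._head += 1                # call the next number, FIFO order
```

Unlike the CAS spinlock, a late arrival can never jump the queue: its ticket is strictly larger, so it cannot match head before earlier waiters do.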

However, the ticket scheme has its own efficiency problem. Per the MESI protocol introduced above, the lock's head and tail indexes are shared among the CPUs, so frequent updates of tail and head still keep invalidating cache lines, which hurts efficiency considerably.

That is why, in the .NET implementation, the threadCallbackList queue is introduced directly: tme (ThreadMethodEntry) entries are appended at the tail of the queue, while the message-receiving side keeps taking entries from the head.

```csharp
lock (threadCallbackList)
{
    if (threadCallbackMessage == 0)
    {
        threadCallbackMessage = SafeNativeMethods.RegisterWindowMessage(Application.WindowMessagesVersion + "_ThreadCallbackMessage");
    }
    threadCallbackList.Enqueue(tme);
}
```

The message is posted only after this tme is enqueued, which is an implementation in the same spirit as an MCS-style queued lock. MCS actually gives each CPU its own queue node to spin on, which differs slightly from Invoke's single shared queue, but the basic idea is the same.

Thank you for reading. I hope this look at the advantages of C#'s concurrency mechanism has been helpful.
