2025-02-24 Update From: SLTechnology News & Howtos
This article shares the performance essentials and optimization suggestions for .NET programs. The advice is very practical; I hope you get something useful out of reading it.
These performance-optimization suggestions come from rewriting the C# and VB compilers in managed code, using real scenarios from writing the C# compiler to illustrate them. Developing applications on the .NET platform is extremely productive: powerful, safe programming languages and rich class libraries make building applications very effective. But with great power comes great responsibility. We should use the full capabilities of the .NET Framework, yet be prepared to tune our code when it must process large amounts of data such as files or databases.
Why the performance optimization experience from the new compiler also applies to your application
Microsoft rewrote the C# and Visual Basic compilers in managed code and provided a series of new APIs for code modeling and analysis and for building compilation tools, giving Visual Studio a much richer code-aware programming experience. The experience of rewriting the compilers and building Visual Studio on top of them yielded performance lessons that also apply to any large .NET application, or any app that processes large amounts of data. You do not need to know anything about compilers to benefit from the C# compiler examples.
Visual Studio uses the compiler APIs to implement its powerful IntelliSense features: keyword coloring, completion lists, error squiggles, parameter hints, code issues and fix suggestions, and so on, all of which developers love. As the developer types or edits code, Visual Studio dynamically compiles it to obtain the analysis behind these hints.
When users interact with an app, they expect it to be responsive. The interface should never block while they type or issue commands; help or hints should appear quickly and stop when the user keeps typing. A modern app must avoid blocking the UI thread with long computations, which makes the program feel sluggish.
To learn more about the new compiler, visit the .NET Compiler Platform ("Roslyn")
Basic essentials
When tuning the performance of .NET and developing responsive applications, consider the following basic points:
Point 1: don't optimize prematurely
Writing code is more complex than expected; code needs maintenance, debugging, and performance tuning. An experienced programmer naturally arrives at solutions and writes efficient code, but it is easy to fall into the trap of optimizing too early. For example, where a simple array would suffice, one reaches for a hash table; where a value could simply be recomputed, one builds a complex cache that may leak memory. When you find a problem, first measure the performance issue, then analyze the code.
Point 2: without measurement, you are guessing.
Analysis and measurement don't lie. Measurements show whether the CPU is fully loaded or you are blocked on disk I/O. They tell you what memory the application allocates, and how much, and whether the CPU is spending a lot of time in garbage collection.
You should set performance goals for key user experiences or scenarios and write tests to measure performance. Analyze substandard performance with the scientific method: use profiling reports as a guide, hypothesize what might be happening, then write experimental code or change the code to confirm or refute the hypothesis. If you set baseline performance metrics and test them frequently, you can catch changes that cause regressions and avoid wasting time on changes you don't need.
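A minimal sketch of such a measurement test, with a hypothetical ProcessData workload standing in for a real scenario, timed with System.Diagnostics.Stopwatch:

```csharp
using System;
using System.Diagnostics;
using System.Linq;

// Hypothetical workload standing in for a real scenario under test.
int[] ProcessData(int[] input) => input.OrderBy(x => x).ToArray();

var data = Enumerable.Range(0, 100_000).Reverse().ToArray();

var sw = Stopwatch.StartNew();
var result = ProcessData(data);
sw.Stop();

// Record the elapsed time and compare it against the scenario's goal.
Console.WriteLine($"ProcessData took {sw.ElapsedMilliseconds} ms");
```

Checking the result as well as the time keeps the benchmark honest: an optimization that breaks correctness is no optimization at all.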
Point 3: good tools are very important.
Good tools allow us to quickly locate the factors that affect performance (CPU, memory, disk) and help us locate the code that creates these bottlenecks. Microsoft has released many performance testing tools such as Visual Studio Profiler, Windows Phone Analysis Tool, and PerfView.
PerfView is a free and powerful tool that focuses on the deep issues that affect performance (disk I/O, GC events, memory); examples appear later. You can capture performance-related Event Tracing for Windows (ETW) events and view them at the application, process, stack, and thread level. PerfView shows how much the application allocates, what kinds of memory it allocates, and which functions and call stacks contribute to the allocations. For details, see the very thorough help, demos, and video tutorials released with the tool (for example, the tutorials on Channel 9).
Point 4: all are related to memory allocation
You might think that the key to a responsive .NET application is good algorithms, such as using quicksort instead of bubble sort, but that is not the case. The biggest factor in building a responsive app is memory allocation, especially when the app is very large or processes large amounts of data.
In building responsive IDE experiences on the new compiler APIs, most of the work went into avoiding allocations and managing caching strategies. PerfView traces showed that the performance of the new C# and VB compilers is essentially never CPU-bound. The compilers read hundreds of thousands of lines of code, read metadata, and produce compiled output. The delays on UI threads were almost entirely due to garbage collection. The .NET Framework has highly optimized garbage collection, and it can perform most collection work in parallel while application code runs. However, a single allocation can trigger an expensive collection in which the GC temporarily suspends all threads (such as a Generation 2 collection).
Common memory allocation and examples
The examples in this section involve seemingly trivial allocations. However, if a large application executes such small allocating expressions often enough, they can add up to hundreds of megabytes, even gigabytes, of allocation. For example, before the performance test team reduced the problem to the typing scenario, a one-minute test simulating a developer writing code in the editor allocated gigabytes of memory.
Boxing
Boxing occurs when a value type that would normally live on the thread stack or inline in a data structure must be wrapped in an object (for example, allocating an object to hold the value and returning a pointer to that object). The .NET Framework sometimes boxes value types automatically because of a method's signature or the type of a storage location. Wrapping a value type in a reference type causes an allocation. The .NET Framework and the languages try to avoid unnecessary boxing, but sometimes it happens without our noticing. Enough boxing operations allocate megabytes or gigabytes in an application, which means garbage collections are more frequent and take longer.
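A minimal illustration of the mechanics: assigning an int to an object-typed variable forces a heap allocation, and each such assignment produces a distinct box.

```csharp
using System;

int i = 42;

// Boxing: the int is copied into a new heap object.
object boxedA = i;
object boxedB = i;

// Unboxing: the value is copied back out of the box.
int unboxed = (int)boxedA;

Console.WriteLine(ReferenceEquals(boxedA, boxedB)); // two separate boxes
Console.WriteLine(unboxed);
```

Each box is a separate object, which is exactly why boxing inside a hot loop multiplies into large allocation volumes.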
To see the boxing operation in PerfView, you only need to turn on a trace, and then look at the GC Heap Alloc entry under the application name (remember, PerfView reports the resource allocation of all processes). If you see some value types such as System.Int32 and System.Char in the allocation phase, boxing occurs. Selecting a type displays the call stack and the function in which the boxing operation occurred.
Example 1 string method and its value type parameter
The following sample code demonstrates the potential for unnecessary boxing and frequent boxing operations in large systems.
```csharp
public class Logger
{
    public static void WriteLine(string s) { /* ... */ }
}

public class BoxingExample
{
    public void Log(int id, int size)
    {
        var s = string.Format("{0}:{1}", id, size);
        Logger.WriteLine(s);
    }
}
```
This is a basic logging class, so the app calls the Log function frequently, and the method may be called millions of times. The problem is that the call to string.Format resolves to the overload that accepts one string and two Objects:
String.Format Method (String, Object, Object)
This overload requires the .NET Framework to box the ints into objects to pass them to the method. The fix is to call id.ToString() and size.ToString() and pass the resulting strings to string.Format. Calling ToString() does allocate a string, but a string would be allocated inside string.Format anyway.
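A sketch of the fix. The hypothetical Format helper below shows the point: with string-typed arguments, the values passed as objects are already reference types, so nothing is boxed:

```csharp
using System;

string Format(int id, int size)
{
    // Passing strings avoids boxing the int arguments;
    // strings are reference types, so no box is allocated.
    return string.Format("{0}:{1}", id.ToString(), size.ToString());
}

Console.WriteLine(Format(7, 1024)); // "7:1024"
```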
You might think that this basic call to string.Format is just a string concatenation, so you might write code like this:
```csharp
var s = id.ToString() + ':' + size.ToString();
```
In fact, that line also boxes, because the statement compiles to a call to:
String.Concat (Object, Object, Object)
To call this overload, the .NET Framework must box the char constant.
Solution:
The complete fix is easy: replace the single quotes with double quotes, that is, replace the character constant with a string constant. Strings are already reference types, so no boxing occurs.
```csharp
var s = id.ToString() + ":" + size.ToString();
```
Example 2 boxing of enumerated types
The following example shows how the new C# and VB compilers allocated a lot of memory because of frequent use of enum types, especially as keys in Dictionary lookups.
```csharp
public enum Color
{
    Red, Green, Blue
}

public class BoxingExample
{
    private string name;
    private Color color;

    public override int GetHashCode()
    {
        return name.GetHashCode() ^ color.GetHashCode();
    }
}
```
The problem is well hidden. PerfView will tell you that enum.GetHashCode() boxes due to its internal implementation: it boxes the underlying representation of the enumeration type. If you look closely in PerfView, you will see two boxing operations per call to GetHashCode: one inserted by the compiler and one by the .NET Framework.
Solution:
This boxing operation can be avoided by casting the underlying representation of the enumeration when calling GetHashCode.
```csharp
((int)color).GetHashCode()
```
Another enum operation that frequently boxes is Enum.HasFlag: the argument passed to HasFlag must be boxed. In most cases, replacing repeated HasFlag calls with a simple bitwise test is just as clear and allocates nothing.
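A sketch of the bitwise alternative, using the framework's FileAttributes flags enum as a stand-in example:

```csharp
using System;
using System.IO;

FileAttributes attrs = FileAttributes.ReadOnly | FileAttributes.Hidden;

// HasFlag takes a System.Enum parameter, which historically
// forced the argument to be boxed on every call.
bool viaHasFlag = attrs.HasFlag(FileAttributes.Hidden);

// The bitwise test is equivalent and allocates nothing.
bool viaBitwise = (attrs & FileAttributes.Hidden) != 0;

Console.WriteLine(viaHasFlag == viaBitwise); // True
```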
Keep the fundamentals in mind, and don't optimize prematurely or rush off to rewrite all your code. Be aware of what boxing costs, and start changing code only after the tools have found and located the main problem.
String
String manipulation is one of the biggest culprits for allocation; it is usually among the top five allocation causes in PerfView. Applications use strings for serialization, for JSON and REST payloads, and to interact with other systems when enumerated types are not supported. When you determine that string operations are seriously affecting performance, look at uses of the string class methods such as Format(), Concat(), Split(), Join(), Substring(), and so on. Using a StringBuilder avoids the cost of creating many intermediate strings when concatenating, but StringBuilder creation itself also needs to be controlled to avoid becoming a bottleneck.
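A minimal sketch of the difference: repeated + concatenation creates an intermediate string per step, while a StringBuilder accumulates into one buffer:

```csharp
using System;
using System.Text;

// Each += here allocates a brand-new string of the combined length.
string Naive(string[] parts)
{
    string s = "";
    foreach (var p in parts) s += p;
    return s;
}

// StringBuilder appends into a growable buffer; only the final
// ToString() materializes a string.
string Built(string[] parts)
{
    var sb = new StringBuilder();
    foreach (var p in parts) sb.Append(p);
    return sb.ToString();
}

var parts = new[] { "a", "b", "c", "d" };
Console.WriteLine(Naive(parts) == Built(parts)); // True
```

Both produce the same result; the difference is only in how many intermediate strings are allocated along the way.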
Example 3 string operation
The C# compiler contains the following method, used to write out the XML doc comment text that appears before a method.
```csharp
public void WriteFormattedDocComment(string text)
{
    string[] lines = text.Split(new[] { "\r\n", "\r", "\n" },
                                StringSplitOptions.None);
    int numLines = lines.Length;
    bool skipSpace = true;
    if (lines[0].TrimStart().StartsWith("///"))
    {
        for (int i = 0; i < numLines; i++)
        {
            string trimmed = lines[i].TrimStart();
            if (trimmed.Length < 4 || !char.IsWhiteSpace(trimmed[3]))
            {
                skipSpace = false;
                break;
            }
        }
        int substringStart = skipSpace ? 4 : 3;
        for (int i = 0; i < numLines; i++)
            Console.WriteLine(lines[i].TrimStart().Substring(substringStart));
    }
    else
    {
        /* ... */
    }
}
```

As you can see, this code performs a lot of string manipulation. It uses library methods to split the text into lines, trim whitespace, check whether the text argument is an XML doc comment, and then extract substrings from each line.

Every time WriteFormattedDocComment is called, the first line allocates a three-element string array for the Split() call. The compiler must generate code to allocate that array on every call: it cannot know whether Split() stores the array somewhere that other code might modify, which would affect later calls to WriteFormattedDocComment. Each Split() call also allocates a string for every line in text, plus further memory to perform the split.

WriteFormattedDocComment calls TrimStart() three times, twice inside the inner loop, which is duplicated work and duplicated allocation. Worse still, the parameterless TrimStart() overload has this signature:

```csharp
namespace System
{
    public class String
    {
        public string TrimStart(params char[] trimChars);
    }
}
```

That signature means every call to TrimStart() allocates an empty array in addition to returning a string result.

Finally, the code calls Substring() once, which typically allocates a new string.

Solution:

Unlike the earlier examples, a small tweak cannot fix these allocations. We need to step back, look at the problem, and attack it differently. For instance, the argument to WriteFormattedDocComment() is a string that already contains all the information the method needs, so the code could do more indexing instead of allocating lots of little string pieces.

The following methods are not a complete solution, but they show the kind of technique involved; the C# compiler uses this approach to eliminate all of the extra allocations.

```csharp
private int IndexOfFirstNonWhiteSpaceChar(string text, int start)
{
    while (start < text.Length && char.IsWhiteSpace(text[start]))
        start++;
    return start;
}

private bool TrimmedStringStartsWith(string text, int start, string prefix)
{
    start = IndexOfFirstNonWhiteSpaceChar(text, start);
    int len = text.Length - start;
    if (len < prefix.Length) return false;
    for (int i = 0; i < prefix.Length; i++)
    {
        if (prefix[i] != text[start + i]) return false;
    }
    return true;
}
```

The first version of WriteFormattedDocComment() allocated an array, several substrings, a trimmed substring, and an empty params array, and it checked for "///". The revised code uses only indexing and allocates nothing: it finds the first non-whitespace character and then compares characters one by one to see whether the string starts with "///". Using IndexOfFirstNonWhiteSpaceChar to return the first non-whitespace position instead of TrimStart() removes all of the extra allocations from WriteFormattedDocComment().

Example 4 StringBuilder

This example uses a StringBuilder. The following function produces the full name of a generic type:

```csharp
public class Example
{
    // Constructs a name like "SomeType<T1, T2, T3>"
    public string GenerateFullTypeName(string name, int arity)
    {
        StringBuilder sb = new StringBuilder();
        sb.Append(name);
        if (arity != 0)
        {
            /* ... append the generic parameter list ... */
        }
        return sb.ToString();
    }
}
```

Focus on the creation of the StringBuilder instance. Calling sb.ToString() causes one allocation, and StringBuilder's internal implementation allocates too, but those allocations are unavoidable if we want a string result.

Solution:

To fix the StringBuilder allocation, use a cache. Even caching a single instance that might be discarded at any time can noticeably improve performance. Here is the new implementation of the function. Apart from the two lines shown, the code is identical:

```csharp
// Constructs a name like "Foo<T1, T2, T3>"
public string GenerateFullTypeName(string name, int arity)
{
    StringBuilder sb = AcquireBuilder();
    /* Use sb as before */
    return GetStringAndReleaseBuilder(sb);
}
```

The key parts are the new AcquireBuilder() and GetStringAndReleaseBuilder() methods:

```csharp
[ThreadStatic]
private static StringBuilder cachedStringBuilder;

private static StringBuilder AcquireBuilder()
{
    StringBuilder result = cachedStringBuilder;
    if (result == null)
    {
        return new StringBuilder();
    }
    result.Clear();
    cachedStringBuilder = null;
    return result;
}

private static string GetStringAndReleaseBuilder(StringBuilder sb)
{
    string result = sb.ToString();
    cachedStringBuilder = sb;
    return result;
}
```

These implementations use a thread-static field to cache the StringBuilder, because the new compilers are multithreaded (it would be easy to forget the ThreadStatic attribute). The thread-static field keeps one instance per thread that executes this code.

If a cached instance exists, AcquireBuilder() clears it, sets the field to null, and returns it; otherwise AcquireBuilder() creates a new instance and returns that.

When we are done with the StringBuilder, we call GetStringAndReleaseBuilder() to get the string result, stash the StringBuilder back in the field, and return the result. It is possible for this code to execute re-entrantly and create several StringBuilder objects, though that rarely happens; only the last one released is kept for later use. In the new compilers, this simple caching strategy greatly reduced unnecessary allocations. Parts of the .NET Framework and MSBuild use similar techniques to improve performance.

A simple caching strategy must still follow good cache design and have a size cap. Using a cache can mean more code and more maintenance than before, so adopt one only after you have established that there is a problem; here PerfView had shown that the StringBuilder allocations were a significant contributor.

LINQ and lambda expressions

LINQ and lambda expressions are a great showcase for C#'s productivity, but you may need to rewrite LINQ or lambda code if it executes many times.

Example 5 lambdas, List<T>, and IEnumerable<T>

This example uses LINQ and a functional style to find a symbol in the compiler's model, given a name:

```csharp
class Symbol
{
    public string Name { get; private set; }
    /* ... */
}

class Compiler
{
    private List<Symbol> symbols;

    public Symbol FindMatchingSymbol(string name)
    {
        return symbols.FirstOrDefault(s => s.Name == name);
    }
}
```
The new compiler and the IDE experiences built on it call FindMatchingSymbol very frequently, and this simple line of code hides the underlying allocation cost. To show the allocations, first split the one-line body into two lines:
```csharp
Func<Symbol, bool> predicate = s => s.Name == name;
return symbols.FirstOrDefault(predicate);
```
In the first line, the lambda expression s => s.Name == name closes over the local variable name. This means that in addition to allocating an object for the delegate stored in predicate, the code must allocate an instance of a compiler-generated class that holds the environment capturing the value of name. The compiler produces code like the following:
```csharp
// Compiler-generated class to hold the environment state for the lambda
private class Lambda1Environment
{
    public string capturedName;
    public bool Evaluate(Symbol s)
    {
        return s.Name == this.capturedName;
    }
}

// Expanded form of: Func<Symbol, bool> predicate = s => s.Name == name;
Lambda1Environment l = new Lambda1Environment() { capturedName = name };
var predicate = new Func<Symbol, bool>(l.Evaluate);
```
The two new operators (the first creating the environment class instance and the second creating the delegate) make the allocations explicit.
Now look at the call to FirstOrDefault, an extension method on IEnumerable<T>, which also allocates. Because FirstOrDefault takes an IEnumerable<T> as its first argument, the call expands to the following code:
```csharp
// Expanded form of: return symbols.FirstOrDefault(predicate);
IEnumerable<Symbol> enumerable = symbols;
IEnumerator<Symbol> enumerator = enumerable.GetEnumerator();
while (enumerator.MoveNext())
{
    if (predicate(enumerator.Current))
        return enumerator.Current;
}
return default(Symbol);
```
The symbols variable is of type List<Symbol>. The List<T> collection type implements IEnumerable<T> and cleverly defines its enumerator as a struct. Using a struct instead of a class means any managed-heap allocation is usually avoided, which in turn avoids garbage-collection cost. Enumerators are typically used implicitly by language-level foreach loops, which use the enumerator struct as it is returned on the call stack. Bumping the stack pointer to make room for the struct does not affect the GC the way a managed-heap object does.
In the expanded FirstOrDefault call above, the code calls GetEnumerator() through the IEnumerable<Symbol> interface. Assigning symbols to the enumerable variable of type IEnumerable<Symbol> loses the knowledge that the actual object is a List<Symbol>. That means that when the code fetches the enumerator via enumerable.GetEnumerator(), the .NET Framework must box the returned value (the struct enumerator) to assign it to the interface-typed, reference-typed enumerator variable.
Solution:
The solution is to rewrite FindMatchingSymbol, replacing its single statement with six lines of code that are still coherent, easy to read and understand, and easy to maintain.
```csharp
public Symbol FindMatchingSymbol(string name)
{
    foreach (Symbol s in symbols)
    {
        if (s.Name == name)
            return s;
    }
    return null;
}
```
This code uses no LINQ extension methods, no lambdas, and no iterator interfaces, and it incurs no extra allocations. There are no allocations because the compiler can see that symbols is a List<Symbol>, so it binds the returned struct enumerator directly to a local variable of the right type, avoiding the boxing that occurred with the IEnumerable<Symbol> interface. The original code was a fine showcase of C#'s expressiveness and the productivity of the .NET Framework; the replacement is more efficient yet still simple, adding no maintenance burden.
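A small sketch of why the foreach version allocates nothing: List<T>'s enumerator is a struct, and it only loses that advantage when viewed through the interface:

```csharp
using System;
using System.Collections.Generic;

var list = new List<int> { 1, 2, 3 };

// List<T>.GetEnumerator() returns a struct enumerator; no heap allocation.
bool enumeratorIsStruct = typeof(List<int>.Enumerator).IsValueType;

// Viewed through IEnumerable<int>, GetEnumerator() returns the interface
// type IEnumerator<int>, so the struct must be boxed onto the heap.
IEnumerable<int> asInterface = list;
IEnumerator<int> boxed = asInterface.GetEnumerator();

int sum = 0;
foreach (int x in list) sum += x; // foreach uses the struct enumerator directly

Console.WriteLine(enumeratorIsStruct); // True
Console.WriteLine(sum);                // 6
```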
Async
The next example shows a common problem when trying to cache the return value of a method:
Example 6 cache asynchronous method
The features of the Visual Studio IDE depend heavily on the new C# and VB compilers computing syntax trees, and the compilers use async to keep Visual Studio responsive. Here is a first version of the code that obtains a syntax tree:
```csharp
class Parser
{
    /* ... */
    public SyntaxTree Syntax { get; }
    public Task ParseSourceCode() { /* ... */ }
}

class Compilation
{
    /* ... */
    public async Task<SyntaxTree> GetSyntaxTreeAsync()
    {
        var parser = new Parser(); // allocation
        await parser.ParseSourceCode(); // expensive
        return parser.Syntax;
    }
}
```
Calling GetSyntaxTreeAsync() instantiates a Parser, parses the code, and returns a Task<SyntaxTree>. The expensive parts are allocating the Parser instance and parsing the code. The method returns a Task so the caller can await the parsing work and keep the UI thread free to respond to user input.
Because some features of Visual Studio may need to get the same syntax tree multiple times, it is usually possible to cache the parsing results to save time and memory allocation, but the following code may result in memory allocation:
```csharp
class Compilation
{
    /* ... */
    private SyntaxTree cachedResult;

    public async Task<SyntaxTree> GetSyntaxTreeAsync()
    {
        if (this.cachedResult == null)
        {
            var parser = new Parser(); // allocation
            await parser.ParseSourceCode(); // expensive
            this.cachedResult = parser.Syntax;
        }
        return this.cachedResult;
    }
}
```
The code has a field named cachedResult of type SyntaxTree. When the field is null, GetSyntaxTreeAsync() does the work and saves the result in the cache. GetSyntaxTreeAsync() returns a SyntaxTree object. The problem is that in an async method of type Task<SyntaxTree>, returning a SyntaxTree value makes the compiler emit code that allocates a Task to hold the result (using Task.FromResult()), marks it completed, and returns it immediately. Allocating a new Task object to hold an already-computed result happens on every call, so fixing this allocation can significantly improve responsiveness.
Solution:
To remove the allocation of the completed Task, cache the Task object that holds the completed result instead.
```csharp
class Compilation
{
    /* ... */
    private Task<SyntaxTree> cachedResult;

    public Task<SyntaxTree> GetSyntaxTreeAsync()
    {
        return this.cachedResult ??
               (this.cachedResult = GetSyntaxTreeUncachedAsync());
    }

    private async Task<SyntaxTree> GetSyntaxTreeUncachedAsync()
    {
        var parser = new Parser(); // allocation
        await parser.ParseSourceCode(); // expensive
        return parser.Syntax;
    }
}
```
The code changes the type of cachedResult to Task<SyntaxTree> and introduces an async helper that contains the body of the original GetSyntaxTreeAsync(). GetSyntaxTreeAsync() now uses the null-coalescing operator: it returns cachedResult directly when it is non-null; otherwise it calls GetSyntaxTreeUncachedAsync() and caches the resulting Task. Notice that GetSyntaxTreeAsync() does not await the call to GetSyntaxTreeUncachedAsync(). Not awaiting means GetSyntaxTreeAsync() can return the Task<SyntaxTree> immediately, and because that Task is now cached, no new Task is allocated when returning a cached result.
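A minimal sketch of the pattern outside the compiler, with a hypothetical ComputeUncachedAsync standing in for the parse; repeated calls return the very same cached Task instance:

```csharp
using System;
using System.Threading.Tasks;

Task<string> cachedResult = null;

// Returns the cached Task if present; otherwise starts the work
// once and caches the Task itself (not just the value).
Task<string> GetValueAsync() =>
    cachedResult ?? (cachedResult = ComputeUncachedAsync());

async Task<string> ComputeUncachedAsync()
{
    await Task.Delay(10); // stand-in for expensive work
    return "parsed";
}

var first = GetValueAsync();
var second = GetValueAsync();

Console.WriteLine(ReferenceEquals(first, second)); // True: same Task object
Console.WriteLine(await first);                    // "parsed"
```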
Other miscellaneous items that affect performance
There are a few other points that can cause potential performance problems in a large app, or in an app that handles large amounts of data.
Dictionaries
Dictionaries are used widely in many applications. Though convenient and efficient, they are often used inappropriately. Profiling Visual Studio and the new compilers showed that many dictionaries contained a single element or were empty. An empty Dictionary has ten fields and occupies 48 bytes on the managed heap on an x86 machine. Dictionaries are great when you need a mapping or associative data structure with constant-time lookup. But when there are only a few elements, a Dictionary wastes a lot of memory; a List of key-value pairs can offer the same convenience and, for small element counts, is just as efficient. If you only use a dictionary to load data and then look it up, an ordered array with O(log N) lookup can also be fast enough, depending on the element count.
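A sketch of the ordered-array alternative: load once, sort, then look up with Array.BinarySearch, which gives the same answers as a Dictionary for read-only data:

```csharp
using System;
using System.Collections.Generic;

// Load phase: pairs arrive in arbitrary order.
int[] keys = { 30, 10, 20 };
string[] values = { "thirty", "ten", "twenty" };
Array.Sort(keys, values); // sorts keys, reordering values alongside

// Lookup phase: O(log N) binary search, no per-entry bucket overhead.
string Lookup(int key)
{
    int i = Array.BinarySearch(keys, key);
    return i >= 0 ? values[i] : null;
}

var dict = new Dictionary<int, string> { [10] = "ten", [20] = "twenty", [30] = "thirty" };

Console.WriteLine(Lookup(20));             // "twenty"
Console.WriteLine(Lookup(20) == dict[20]); // True
```

Two parallel arrays carry none of the Dictionary's per-instance overhead, which is what matters when an application holds thousands of tiny maps.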
Classes and structs
Loosely speaking, classes and structs offer a classic space/time trade-off when optimizing an application. On an x86 machine, every class instance costs 12 bytes of overhead even if it has no fields, but passing a class between methods is cheap because only a pointer to the instance is passed. A struct incurs no managed-heap allocation as long as it is not boxed, but when a large struct is passed as an argument or returned by value, CPU time goes into copying it automatically; cache struct properties in locals to avoid excessive data copying.
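A sketch of the semantic difference behind the trade-off, using a value tuple (a struct) versus an array (a reference type); assigning the struct copies the data, while assigning the reference shares it:

```csharp
using System;

// Value semantics: assignment copies the whole struct.
var point = (X: 1, Y: 2);   // System.ValueTuple<int, int> is a struct
var copy = point;
copy.X = 99;                // mutates only the copy

// Reference semantics: assignment copies only the pointer.
int[] array = { 1, 2 };
int[] alias = array;
alias[0] = 99;              // visible through both variables

Console.WriteLine(point.X);  // 1: original struct untouched
Console.WriteLine(array[0]); // 99: shared object changed
```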
Caching
A common performance technique is to cache results. But a cache without a size cap or a good resource-release policy is a memory leak. When processing large amounts of data, holding on to too much cached data takes a lot of memory, and the garbage-collection overhead can outweigh the benefit of the cached lookups.
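A minimal sketch of a size-capped cache, evicting the oldest entry once the cap is reached (an illustrative policy, not the compilers' actual one):

```csharp
using System;
using System.Collections.Generic;

const int Cap = 3;
var cache = new Dictionary<string, int>();
var order = new Queue<string>(); // insertion order, for eviction

int GetOrCompute(string key, Func<string, int> compute)
{
    if (cache.TryGetValue(key, out int hit))
        return hit;
    if (cache.Count == Cap)
    {
        // Evict the oldest entry so the cache never outgrows its cap.
        cache.Remove(order.Dequeue());
    }
    int value = compute(key);
    cache[key] = value;
    order.Enqueue(key);
    return value;
}

foreach (var k in new[] { "a", "bb", "ccc", "dddd" })
    GetOrCompute(k, s => s.Length);

Console.WriteLine(cache.Count);            // 3: capped
Console.WriteLine(cache.ContainsKey("a")); // False: oldest entry evicted
```

The cap turns unbounded growth into a tunable memory budget; the right eviction policy depends on the access pattern.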
Conclusion
In large systems, or systems that process large amounts of data, watch for the symptoms that become performance bottlenecks and hurt your app's responsiveness: boxing, string manipulation, LINQ and lambda expressions, caching async methods, caches without size caps or good release policies, inappropriate use of Dictionary, and passing large structs around by value. When optimizing, keep the four facts mentioned earlier in mind:
Don't optimize prematurely-tune after you locate and find the problem.
Measurements don't lie; without measurement, you are guessing.
Good tools are important. Download PerfView, and then go to the usage tutorial.
Memory allocation determines the responsiveness of app. This is also where the new compiler performance team spends the most time.
references
If you want to watch a lecture on this topic, you can watch it on Channel 9.
VS Profiler basic http://msdn.microsoft.com/en-us/library/ms182372.aspx
.NET program performance analysis tools http://msdn.microsoft.com/en-us/library/hh256536.aspx
Windows Phone performance Analysis tool http://msdn.microsoft.com/en-us/magazine/hh781024.aspx
Some C # and VB performance optimization recommendations http://msdn.microsoft.com/en-us/library/ms173196(v=vs.110).aspx (Note: the link has no content in the original text, the connection address should be http://msdn.microsoft.com/en-us/library/ms173196(v=vs.100).aspx)
Some advanced optimization recommendations http://curah.microsoft.com/4604/improving-your-net-apps-startup-performance
That is the whole article. Much of it is actually quite basic, such as the differences between value types (structs) and reference types (classes) and when to use each, string operations, and boxing and unboxing, all of which are covered systematically in CLR via C#. It is worth emphasizing that most of the time we are unaware of boxing. For example, calling GetHashCode on an enum, as shown in this article, boxes. Similarly, when we use a value type as the key of a Dictionary, the Dictionary implementation calls the key's GetHashCode method to obtain the hash value, and the default implementation causes boxing.
Those are the performance essentials and optimization suggestions for .NET programs. Some of these points come up regularly in daily work; I hope you have learned something from this article.