2025-04-04 Update From: SLTechnology News&Howtos — Shulou (Shulou.com) 06/02 Report
Performance improvement 1: JSON serializer (Jil)
The default serializer in .NET, as you all know, is JavaScriptSerializer, and its performance is really poor. Later Json.NET appeared and eventually became the default serializer when creating a project; it is widely used by .NET developers, and there is no doubt about its strength and performance (it has now been updated to version 9.0). But in large projects, once the amount of data is huge, serializing with Json.NET gets slow. In that case we can try Jil, which has a rich enough API. Let's go through a few commonly used APIs and compare them with Json.NET:
Serialization comparison
In Json.NET, serialization looks like this:

JsonConvert.SerializeObject(obj)

In Jil, serializing data looks like this:

JSON.Serialize(obj)

Jil can return the serialized string in two forms:

(1) Receive it directly:

var obj = new { Foo = 123, Bar = "abc" };
string s = Jil.JSON.Serialize(obj);

(2) Pass a StringWriter to receive it:

var obj = new { Foo = 123, Bar = "abc" };
var t = new StringWriter();
JSON.SerializeDynamic(obj, t);
As mentioned above, Jil is more efficient than Json.NET when the amount of data is large. Let's verify this by serializing 10,000 records.
Serialization class:
public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}
Test data:
var list = new List<int>();
for (var i = 0; i < 10000; i++)
{
    list.Add(i);
}
var stop = new Stopwatch();
stop.Start();
var jil = SerializeList(list);
Console.WriteLine(stop.ElapsedMilliseconds);
stop.Stop();
var stop1 = new Stopwatch();
stop1.Start();
var json = JsonConvert.SerializeObject(list);
Console.WriteLine(stop1.ElapsedMilliseconds);
stop1.Stop();

The Jil serialization wrapper:

private static string SerializeList<T>(T obj)
{
    using (var output = new StringWriter())
    {
        JSON.Serialize(obj, output);
        return output.ToString();
    }
}

Let's look at the test case. Here Json.NET clearly beats Jil, but at 100,000 records Jil's time approaches Json.NET's, and once the data exceeds 1,000,000 records the difference becomes obvious: Jil serializes the data in under 1 second, while Json.NET needs nearly a full 3 seconds.

Test case update: with the code modified as follows, Jil also wins on small amounts of data, and the larger the data volume, the more obvious the performance gap. Thanks to fellow blogger [calvinK] for the reminder:

var list = new List<int>();
for (int i = 0; i < 10000; i++)
{
    list.Add(i);
}
var stop = new Stopwatch();
stop.Start();
for (var i = 0; i < 1000; i++)
{
    var jil = SerializeList(list);
}
Console.WriteLine(stop.ElapsedMilliseconds);
stop.Stop();
var stop1 = new Stopwatch();
stop1.Start();
for (var i = 0; i < 1000; i++)
{
    var json = JsonConvert.SerializeObject(list);
}
Console.WriteLine(stop1.ElapsedMilliseconds);
stop1.Stop();

The results are as follows:

Jil also offers JSON.SerializeDynamic for serializing types that cannot be predicted at compile time. Deserialization corresponds one-to-one with serialization.

Let's continue with other Jil features. When rendering only the data we need in a view, unnecessary fields on the entity should be filtered out; for that Jil provides an ignore directive:

[JilDirective(Ignore = true)]

Let's see it in action:

public class Person
{
    [JilDirective(Ignore = true)]
    public int Id { get; set; }
    public int Name { get; set; }
}

var jil = SerializeList(new Person() { Id = 1, Name = 123 });
Console.WriteLine(jil);

The most important setting in Jil is Options, which configures the date format of the output among other things. If it is not used, dates come back in the default format, e.g. [\/Date(143546676)\/]. Let's take a look:

var jil = SerializeList(new Person() { Id = 1, Name = "123", Time = DateTime.Now });

Apply the following setting:

JSON.Serialize(p, output, new Options(dateFormat: DateTimeFormat.ISO8601));

When serializing an inherited class we likewise need the following setting, otherwise serialization fails:

new Options(dateFormat: DateTimeFormat.ISO8601, includeInherited: true)

Jil's performance is decidedly better than Json.NET's. Jil keeps chasing serialization speed, so its API may be smaller or less flexible than Json.NET's, but it is enough for our needs.

Performance improvement 2: compression (Compress)

Compression method (1) [IIS settings]

Enable IIS dynamic content compression.

Compression method (2) [DotNetZip]

Use a ready-made wheel: just install the [DotNetZip] package. We need to compress the content after the action method has executed, so we override the [ActionFilterAttribute] filter and perform the compression there. As follows:

public class DeflateCompressionAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(HttpActionExecutedContext actionContext)
    {
        var content = actionContext.Response.Content;
        var bytes = content == null ? null : content.ReadAsByteArrayAsync().Result;
        var compressContent = bytes == null ? new byte[0] : CompressionHelper.DeflateByte(bytes);
        actionContext.Response.Content = new ByteArrayContent(compressContent);
        actionContext.Response.Content.Headers.Remove("Content-Type");
        if (string.Equals(actionContext.Request.Headers.AcceptEncoding.First().Value, "deflate"))
            actionContext.Response.Content.Headers.Add("Content-encoding", "deflate");
        else
            actionContext.Response.Content.Headers.Add("Content-encoding", "gzip");
        actionContext.Response.Content.Headers.Add("Content-Type", "application/json;charset=utf-8");
        base.OnActionExecuted(actionContext);
    }
}

Use DotNetZip for fast compression:

public class CompressionHelper
{
    public static byte[] DeflateByte(byte[] str)
    {
        if (str == null)
        {
            return null;
        }
        using (var output = new MemoryStream())
        {
            using (var compressor = new Ionic.Zlib.GZipStream(
                output, Ionic.Zlib.CompressionMode.Compress, Ionic.Zlib.CompressionLevel.BestSpeed))
            {
                compressor.Write(str, 0, str.Length);
            }
            return output.ToArray();
        }
    }
}

Let's compare the response time and content length before and after compression. The test method is as follows:

public async Task<IHttpActionResult> GetZipData()
{
    Dictionary<string, object> dict = new Dictionary<string, object>();
    List<Employee> li = new List<Employee>();
    li.Add(new Employee { Id = "2", Name = "xpy0928", Email = "a@gmail.com" });
    li.Add(new Employee { Id = "3", Name = "tom", Email = "b@mail.com" });
    li.Add(new Employee { Id = "4", Name = "jim", Email = "c@mail.com" });
    li.Add(new Employee { Id = "5", Name = "tony", Email = "d@mail.com" });
    dict.Add("Details", li);
    return Ok(dict);
}

The result is a runtime error. This looks like a serialization problem: some browsers get XML data back (I use the Sogou browser, and when I was learning WebAPI earlier it likewise returned XML). Let's try making it return JSON instead by removing the XML/HTML formatters:

var formatters = config.Formatters
    .Where(formatter => formatter.SupportedMediaTypes
        .Where(media => media.MediaType.ToString() == "application/xml"
                     || media.MediaType.ToString() == "text/html")
        .Count() > 0) // find the media type in the request header information
    .ToList();
foreach (var match in formatters)
{
    config.Formatters.Remove(match);
}
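For compression method (1), IIS dynamic content compression is usually switched on outside of code. A minimal sketch of the corresponding web.config fragment, assuming the "Dynamic Content Compression" IIS feature is installed (the same setting can be toggled in IIS Manager):

```xml
<!-- Sketch: enable static and dynamic compression for the site.
     Requires the Dynamic Content Compression feature to be installed in IIS. -->
<configuration>
  <system.webServer>
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration>
```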
The length of the response without compressing it is as follows:
After compression, the result is obviously improved.
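The size win from gzip/deflate on JSON payloads is easy to reproduce outside .NET as well. A minimal sketch in Python (the payload shape mirrors the Employee test data above; the values are illustrative):

```python
import gzip
import json
import zlib

# Build a JSON payload shaped like the Employee list used in the tests.
payload = json.dumps(
    [{"Id": str(i), "Name": "xpy0928", "Email": "a@gmail.com"} for i in range(1000)]
).encode("utf-8")

gzipped = gzip.compress(payload)   # what "Content-Encoding: gzip" would carry
deflated = zlib.compress(payload)  # zlib/deflate variant

print(len(payload), len(gzipped), len(deflated))
```

Repetitive JSON compresses very well, which is why the content length drops so sharply in the comparison above.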
Next, let's implement it ourselves using .NET's built-in compression classes.
Compression method (3) [Custom implementation]
Since the response body goes through HttpContent, besides overriding the ActionFilterAttribute filter we also need a custom HttpContent that compresses the data according to the compression format the browser supports and writes it to the response stream.
public class CompressContent : HttpContent
{
    private readonly string _encodingType;
    private readonly HttpContent _originalContent;

    public CompressContent(HttpContent content, string encodingType = "gzip")
    {
        _originalContent = content;
        _encodingType = encodingType.ToLowerInvariant();
        Headers.ContentEncoding.Add(encodingType);
    }

    protected override bool TryComputeLength(out long length)
    {
        length = -1;
        return false;
    }

    protected override Task SerializeToStreamAsync(Stream stream, TransportContext context)
    {
        Stream compressStream = null;
        switch (_encodingType)
        {
            case "gzip":
                compressStream = new GZipStream(stream, CompressionMode.Compress, true);
                break;
            case "deflate":
                compressStream = new DeflateStream(stream, CompressionMode.Compress, true);
                break;
            default:
                compressStream = stream;
                break;
        }
        return _originalContent.CopyToAsync(compressStream).ContinueWith(tsk =>
        {
            if (compressStream != null)
            {
                compressStream.Dispose();
            }
        });
    }
}
Override the filter attribute:
public class CompressContentAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(HttpActionExecutedContext context)
    {
        var acceptedEncoding = context.Response.RequestMessage.Headers.AcceptEncoding.First().Value;
        if (!acceptedEncoding.Equals("gzip", StringComparison.InvariantCultureIgnoreCase)
            && !acceptedEncoding.Equals("deflate", StringComparison.InvariantCultureIgnoreCase))
        {
            return;
        }
        context.Response.Content = new CompressContent(context.Response.Content, acceptedEncoding);
    }
}
I won't repeat the response comparison here; the results are consistent with those obtained above with DotNetZip.
While writing the compression code I noticed something that raised a question: why does the response carry two sets of headers, context.Response.Content.Headers and context.Response.Headers? I haven't researched the problem in detail, but let me share my personal thoughts.
What's the difference between context.Response.Content.Headers and context.Response.Headers?
Let's take a look at the definition in context.Response.Headers, which is summarized as follows:
// Summary:
//     Gets the collection of HTTP response headers.
//
// Returns:
//     Returns System.Net.Http.Headers.HttpResponseHeaders.
//     The collection of HTTP response headers.
The definition in context.Response.Content.Headers is summarized as follows:
// Summary:
//     Gets the HTTP content headers as defined in RFC 2616.
//
// Returns:
//     Returns System.Net.Http.Headers.HttpContentHeaders.
//     The content headers as defined in RFC 2616.
The headers in Content.Headers are defined by RFC 2616, i.e. the HTTP specification: they describe the body itself (Content-Type, Content-Encoding, Content-Length and so on). Presumably the purpose of the split is to isolate these specification-defined content headers, so that in custom code we can conveniently set information about the response body and have it written directly to the HTTP response stream; since it is mostly Content.Headers that we manipulate, the distinction may exist for that purpose. This is just my personal guess; fellow bloggers who know the real reason are welcome to give a proper explanation.
Performance improvement 3: caching (Cache, at a coarse granularity)
Caching is probably the most talked-about topic, and there are plenty of caching components available. Here I only discuss it at a coarse granularity, which still helps somewhat in small projects; large ones are another matter.
When we make a request, we can see a [Cache-Control] field in the response headers. If we do nothing, its value is [no-cache]: the data is never cached and is requested again every time. Other values of this directive include private/public, must-revalidate, and max-age. When max-age is not specified and the value is private, no-cache, or must-revalidate, the request always goes to the server for data. Let's first cover the HTTP protocol basics.
[1] If set to private, the cache cannot be shared: the page will not be cached for the proxy server (no shared copy is kept), and the cache is private to the user; each user's cache is independent and not shared with others. If set to public, every user can share the cache. For example, a news push on a blog site that is the same for everyone can be set to a public shared cache to make full use of caching.

[2] max-age is the expiration time of the cache: within that period the browser will not re-request data from the server but will fetch it directly from the local browser cache.

[3] must-revalidate literally means "must revalidate": once data is stale, fresh data must be fetched. When is it used? Ultimately must-revalidate works together with max-age: when max-age is set along with must-revalidate, then after the cache expires, must-revalidate forces the client back to the server for the latest data. In other words, setting max-age=0 with must-revalidate is effectively the same as no-cache.
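As a minimal sketch of how these directives interact (the is_fresh helper is illustrative, not part of any HTTP library):

```python
import time

# Sketch: the client-side freshness decision the directives above describe.
# A response fetched at `fetched_at` under "public, max-age=N, must-revalidate"
# may be served from cache while fewer than N seconds have elapsed; once it
# is stale, must-revalidate forces a trip back to the server.
def is_fresh(fetched_at, max_age, now=None):
    now = time.time() if now is None else now
    return (now - fetched_at) < max_age

fetched = 1000.0
print(is_fresh(fetched, max_age=100, now=1050.0))  # True: still within max-age
print(is_fresh(fetched, max_age=100, now=1200.0))  # False: stale, must revalidate
print(is_fresh(fetched, max_age=0, now=fetched))   # False: max-age=0 acts like no-cache
```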
Let's do cache control:
public class CacheFilterAttribute : ActionFilterAttribute
{
    public int CacheTimeDuration { get; set; }

    public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
    {
        actionExecutedContext.Response.Headers.CacheControl = new CacheControlHeaderValue
        {
            MaxAge = TimeSpan.FromSeconds(CacheTimeDuration),
            MustRevalidate = true,
            Public = true
        };
    }
}
Add the cache filter attribute to the action:
[HttpGet]
[CompressContent]
[CacheFilter(CacheTimeDuration = 100)]
public async Task<IHttpActionResult> GetZipData()
{
    var sw = new Stopwatch();
    sw.Start();
    Dictionary<string, object> dict = new Dictionary<string, object>();
    List<Employee> li = new List<Employee>();
    li.Add(new Employee { Id = "2", Name = "xpy0928", Email = "a@gmail.com" });
    li.Add(new Employee { Id = "3", Name = "tom", Email = "b@mail.com" });
    li.Add(new Employee { Id = "4", Name = "jim", Email = "c@mail.com" });
    li.Add(new Employee { Id = "5", Name = "tony", Email = "d@mail.com" });
    sw.Stop();
    dict.Add("Details", li);
    dict.Add("Time", sw.Elapsed.Milliseconds);
    return Ok(dict);
}
The results are as follows:
Performance improvement 4: async/await (asynchronous methods)
When concurrency arises in large projects, for example several users registering at the same time, synchronous handling can block the current requests, leave pages unresponsive, and eventually bring the server down. To solve this we use asynchronous methods, so that when multiple requests arrive the thread pool can allocate enough threads to handle them, improving thread-pool utilization. As follows:
public async Task<IHttpActionResult> Register(Employee model)
{
    var result = await UserManager.CreateAsync(model);
    return Ok(result);
}