2025-01-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
This article walks through a few small examples of `IEnumerable<T>`. These patterns may not be familiar to everyone, so they are collected below in one place; hopefully you will take something away from them.
Everything below is source code. Each test method whose name starts with TXX is one example; it is recommended to read them from top to bottom.
using System;
using System.Collections.Generic;
using System.Linq;
using FluentAssertions;
using Xunit;
using Xunit.Abstractions;

namespace Try_More_On_IEnumerable
{
    public class EnumerableTests2
    {
        private readonly ITestOutputHelper _testOutputHelper;

        public EnumerableTests2(ITestOutputHelper testOutputHelper)
        {
            _testOutputHelper = testOutputHelper;
        }

        [Fact]
        public void T11MergeArrays()
        {
            var array1 = new[] {0, 1, 2, 3, 4};
            var array2 = new[] {5, 6, 7, 8, 9};

            // Merge the two arrays into one using a hand-written iterator
            var result1 = ConcatArray(array1, array2).ToArray();

            // Use Concat from Linq to merge two IEnumerable objects
            var result2 = array1.Concat(array2).ToArray();

            // Use SelectMany from Linq to flatten the "two-dimensional" data into one array
            var result3 = new[] {array1, array2}.SelectMany(x => x).ToArray();

            /**
             * Use Enumerable.Range to generate the expected array, 0 through 9,
             * which should match the results of the three approaches above.
             */
            var result = Enumerable.Range(0, 10).ToArray();
            result1.Should().Equal(result);
            result2.Should().Equal(result);
            result3.Should().Equal(result);

            IEnumerable<int> ConcatArray(IEnumerable<int> source1, IEnumerable<int> source2)
            {
                foreach (var item in source1)
                {
                    yield return item;
                }

                foreach (var item in source2)
                {
                    yield return item;
                }
            }
        }

        [Fact]
        public void T12FlattenTripleLoop()
        {
            /**
             * Get 1000 numbers, 0 to 999, from a local function.
             * The data is produced by a triple loop inside GetSomeData.
             * Notably, GetSomeData hides the details of the triple loop from the caller.
             */
            var result1 = GetSomeData(10, 10, 10).ToArray();

            /**
             * Compared with GetSomeData, the "traversal" and "processing" logic are separated here.
             * "Traversal" refers to the triple loop itself.
             * "Processing" refers to the arithmetic in the innermost loop.
             * The "processing" part is extracted into a Select call.
             * This is the same idea as separating conditions with Where in T03.
             */
            var result2 = GetSomeData2(10, 10, 10)
                .Select(tuple => tuple.i * 100 + tuple.j * 10 + tuple.k)
                .ToArray();

            // Generate the expected array, 0-999
            var result = Enumerable.Range(0, 1000).ToArray();
            result1.Should().Equal(result);
            result2.Should().Equal(result);

            IEnumerable<int> GetSomeData(int maxI, int maxJ, int maxK)
            {
                for (var i = 0; i < maxI; i++)
                {
                    for (var j = 0; j < maxJ; j++)
                    {
                        for (var k = 0; k < maxK; k++)
                        {
                            yield return i * 100 + j * 10 + k;
                        }
                    }
                }
            }

            IEnumerable<(int i, int j, int k)> GetSomeData2(int maxI, int maxJ, int maxK)
            {
                for (var i = 0; i < maxI; i++)
                {
                    for (var j = 0; j < maxJ; j++)
                    {
                        for (var k = 0; k < maxK; k++)
                        {
                            yield return (i, j, k);
                        }
                    }
                }
            }
        }

        private class TreeNode
        {
            public TreeNode()
            {
                Children = Enumerable.Empty<TreeNode>();
            }

            /// <summary>
            /// The value of the current node
            /// </summary>
            public int Value { get; set; }

            /// <summary>
            /// The children of the current node
            /// </summary>
            public IEnumerable<TreeNode> Children { get; set; }
        }

        [Fact]
        public void T13遍历树()
        {
            /**
             * The tree looks like this:
             * └─0
             *   ├─1
             *   │ └─3
             *   └─2
             */
            var tree = new TreeNode
            {
                Value = 0,
                Children = new[]
                {
                    new TreeNode
                    {
                        Value = 1,
                        Children = new[]
                        {
                            new TreeNode {Value = 3},
                        }
                    },
                    new TreeNode {Value = 2},
                }
            };

            // Expected result of a depth-first traversal
            var dftResult = new[] {0, 1, 3, 2};

            // Depth-first traversal implemented with an iterator
            var dft = DFTByEnumerable(tree).ToArray();
            dft.Should().Equal(dftResult);

            // Depth-first traversal implemented with a stack and a loop
            var dftList = DFTByStack(tree).ToArray();
            dftList.Should().Equal(dftResult);

            // Depth-first traversal implemented with recursion
            var dftByRecursion = DFTByRecursion(tree).ToArray();
            dftByRecursion.Should().Equal(dftResult);

            // Expected result of a breadth-first traversal
            var bdfResult = new[] {0, 1, 2, 3};

            /**
             * Breadth-first traversal implemented with an iterator.
             * The "queue plus loop" and "recursive" breadth-first variants are not shown
             * here for comparison; readers can try them on their own.
             */
            var bft = BFT(tree).ToArray();
            bft.Should().Equal(bdfResult);

            // Iterator-based depth-first traversal
            IEnumerable<int> DFTByEnumerable(TreeNode root)
            {
                yield return root.Value;
                foreach (var child in root.Children)
                {
                    foreach (var item in DFTByEnumerable(child))
                    {
                        yield return item;
                    }
                }
            }

            // Depth-first traversal with a stack and a loop
            IEnumerable<int> DFTByStack(TreeNode root)
            {
                var result = new List<int>();
                var stack = new Stack<TreeNode>();
                stack.Push(root);
                while (stack.TryPop(out var node))
                {
                    result.Add(node.Value);
                    foreach (var nodeChild in node.Children.Reverse())
                    {
                        stack.Push(nodeChild);
                    }
                }

                return result;
            }

            // Depth-first traversal with recursion
            IEnumerable<int> DFTByRecursion(TreeNode root)
            {
                var list = new List<int> {root.Value};
                foreach (var rootChild in root.Children)
                {
                    list.AddRange(DFTByRecursion(rootChild));
                }

                return list;
            }

            // Breadth-first traversal implemented with an iterator
            IEnumerable<int> BFT(TreeNode root)
            {
                yield return root.Value;
                foreach (var bftChild in BFTChildren(root.Children))
                {
                    yield return bftChild;
                }

                IEnumerable<int> BFTChildren(IEnumerable<TreeNode> children)
                {
                    var tempList = new List<TreeNode>();
                    foreach (var treeNode in children)
                    {
                        tempList.Add(treeNode);
                        yield return treeNode.Value;
                    }

                    foreach (var bftChild in tempList.SelectMany(treeNode => BFTChildren(treeNode.Children)))
                    {
                        yield return bftChild;
                    }
                }
            }
        }

        [Fact]
        public void T14SearchTree()
        {
            /**
             * "Searching" a tree here means adding a final filter condition on top of the traversal.
             * Because a tree search is usually meant to find the first item that matches a
             * condition, it differs from a plain traversal.
             * The tree looks like this:
             * └─0
             *   ├─1
             *   │ └─3
             *   └─5
             *     └─2
             */
            var tree = new TreeNode
            {
                Value = 0,
                Children = new[]
                {
                    new TreeNode
                    {
                        Value = 1,
                        Children = new[]
                        {
                            new TreeNode {Value = 3},
                        }
                    },
                    new TreeNode
                    {
                        Value = 5,
                        Children = new[]
                        {
                            new TreeNode {Value = 2},
                        }
                    },
                }
            };

            // With a depth-first traversal in place, adding a condition yields a depth-first search
            var result = DFS(tree, x => x >= 3 && x % 2 == 1);

            /**
             * The search result is 3.
             * With a breadth-first search the result would be 5 instead.
             * Readers can apply the same condition in FirstOrDefault to the breadth-first
             * traversal from T13 and try it out for themselves.
             */
            result.Should().Be(3);

            int DFS(TreeNode root, Func<int, bool> predicate)
            {
                var re = DFTByEnumerable(root).FirstOrDefault(predicate);
                return re;
            }

            // Iterator-based depth-first traversal
            IEnumerable<int> DFTByEnumerable(TreeNode root)
            {
                yield return root.Value;
                foreach (var child in root.Children)
                {
                    foreach (var item in DFTByEnumerable(child))
                    {
                        yield return item;
                    }
                }
            }
        }

        [Fact]
        public void T15Pagination()
        {
            var arraySource = new[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};

            // Page with an iterator, 3 items per page
            var enumerablePagedResult = PageByEnumerable(arraySource, 3).ToArray();
            // 4 pages in total
            enumerablePagedResult.Should().HaveCount(4);
            // The last page contains only one number, 9
            enumerablePagedResult.Last().Should().Equal(9);

            // Paging with the usual Skip and Take is the most common approach.
            // The result should be the same as the paging result above.
            var result3 = NormalPage(arraySource, 3).ToArray();
            result3.Should().HaveCount(4);
            result3.Last().Should().Equal(9);

            IEnumerable<IEnumerable<int>> PageByEnumerable(IEnumerable<int> source, int pageSize)
            {
                var onePage = new LinkedList<int>();
                foreach (var i in source)
                {
                    onePage.AddLast(i);
                    if (onePage.Count != pageSize)
                    {
                        continue;
                    }

                    yield return onePage;
                    onePage = new LinkedList<int>();
                }

                // The last page should also be returned when it holds less than a full page of data
                if (onePage.Count > 0)
                {
                    yield return onePage;
                }
            }

            IEnumerable<IEnumerable<int>> NormalPage(IReadOnlyCollection<int> source, int pageSize)
            {
                var pageCount = Math.Ceiling(1.0 * source.Count / pageSize);
                for (var i = 0; i < pageCount; i++)
                {
                    var offset = i * pageSize;
                    var onePage = source
                        .Skip(offset)
                        .Take(pageSize);
                    yield return onePage;
                }
            }

            /**
             * In terms of style, NormalPage is clearly easier for most readers to accept.
             * PageByEnumerable only shows a performance advantage in a few special cases,
             * and it is less readable than NormalPage.
             */
        }

        [Fact]
        public void T16分页与多级缓存()
        {
            /**
             * Fetch 5 pages of data, 2 items per page.
             * Data comes from memory, Redis, ElasticSearch and the database, in that order.
             * Memory is consulted first; if it cannot fill a page, Redis is consulted.
             * If Redis still cannot fill the page, ElasticSearch is consulted next, and so
             * on until the page is full or there is no more data.
             */
            const int pageSize = 2;
            const int pageCount = 5;
            var emptyData = Enumerable.Empty<int>().ToArray();

            /**
             * Initialize the data sources: only memory has data, the others are empty
             */
            var memoryData = new[] {0, 1, 2};
            var redisData = emptyData;
            var elasticSearchData = emptyData;
            var databaseData = emptyData;

            var result = GetSourceData()
                // ToPagination is an extension method. It is used here instead of a local
                // function to show the readability of chained calls.
                .ToPagination(pageCount, pageSize)
                .ToArray();
            result.Should().HaveCount(2);
            result[0].Should().Equal(0, 1);
            result[1].Should().Equal(2);

            /**
             * Re-initialize the data sources: every source now has some data
             */
            memoryData = new[] {0, 1, 2};
            redisData = new[] {3, 4, 5};
            elasticSearchData = new[] {6, 7, 8};
            databaseData = Enumerable.Range(9, 100).ToArray();

            var result2 = GetSourceData()
                .ToPagination(pageCount, pageSize)
                .ToArray();
            result2.Should().HaveCount(5);
            result2[0].Should().Equal(0, 1);
            result2[1].Should().Equal(2, 3);
            result2[2].Should().Equal(4, 5);
            result2[3].Should().Equal(6, 7);
            result2[4].Should().Equal(8, 9);

            IEnumerable<int> GetSourceData()
            {
                // Join the data of all the sources together
                var data = GetDataSource()
                    .SelectMany(x => x);
                return data;

                // Get the data sources
                IEnumerable<IEnumerable<int>> GetDataSource()
                {
                    yield return GetFromMemory();
                    yield return GetFromRedis();
                    yield return GetFromElasticSearch();
                    yield return GetFromDatabase();
                }
            }

            IEnumerable<int> GetFromMemory()
            {
                _testOutputHelper.WriteLine("getting data from memory");
                return memoryData;
            }

            IEnumerable<int> GetFromRedis()
            {
                _testOutputHelper.WriteLine("getting data from Redis");
                return redisData;
            }

            IEnumerable<int> GetFromElasticSearch()
            {
                _testOutputHelper.WriteLine("getting data from ElasticSearch");
                return elasticSearchData;
            }

            IEnumerable<int> GetFromDatabase()
            {
                _testOutputHelper.WriteLine("getting data from database");
                return databaseData;
            }

            /**
             * It is worth noting that, because Enumerable iterates on demand, if the number
             * of pages requested for result2 is changed to just 1, the console no longer
             * prints the messages about fetching from Redis, ElasticSearch and the database.
             * In other words, those operations are simply never performed. Readers can
             * modify the code above to see this for themselves.
             */
        }
    }

    public static class EnumerableExtensions
    {
        /// <summary>
        /// Pages the source data
        /// </summary>
        /// <param name="source">the data source</param>
        /// <param name="pageCount">the number of pages</param>
        /// <param name="pageSize">the page size</param>
        public static IEnumerable<IEnumerable<int>> ToPagination(this IEnumerable<int> source,
            int pageCount, int pageSize)
        {
            var maxCount = pageCount * pageSize;
            var countNow = 0;
            var onePage = new LinkedList<int>();
            foreach (var i in source)
            {
                onePage.AddLast(i);
                countNow++;
                // Stop iterating once the number taken reaches the total needed for all pages
                if (countNow == maxCount)
                {
                    break;
                }

                if (onePage.Count != pageSize)
                {
                    continue;
                }

                yield return onePage;
                onePage = new LinkedList<int>();
            }

            // The last page should also be returned when it holds less than a full page of data
            if (onePage.Count > 0)
            {
                yield return onePage;
            }
        }
    }
}

After reading the above, do you have a better understanding of these small examples of IEnumerable? If you would like to learn more, please follow the industry information channel. Thank you for your support.
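The on-demand iteration that T16's closing note describes can be shown in a minimal standalone sketch. All names here (DeferredDemo, Touched, FromMemory, FromDatabase) are illustrative, not from the original tests: a method wrapped in yield return is only executed as far as the consumer actually pulls.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class DeferredDemo
{
    // Records which "data sources" were actually evaluated.
    public static readonly List<string> Touched = new List<string>();

    static IEnumerable<int> FromMemory()
    {
        Touched.Add("memory");
        return new[] { 0, 1, 2 };
    }

    static IEnumerable<int> FromDatabase()
    {
        Touched.Add("database");
        return new[] { 3, 4, 5 };
    }

    static IEnumerable<IEnumerable<int>> Sources()
    {
        // Each yield runs only when the consumer asks for the next source.
        yield return FromMemory();
        yield return FromDatabase();
    }

    public static void Main()
    {
        // Take only two items: memory alone can satisfy this,
        // so FromDatabase is never called.
        var firstTwo = Sources().SelectMany(x => x).Take(2).ToArray();
        Console.WriteLine(string.Join(",", firstTwo)); // 0,1
        Console.WriteLine(string.Join(",", Touched));  // memory
    }
}
```

Raising the Take count past what memory holds would make SelectMany pull the next source and "database" would then appear in Touched, mirroring the multi-level cache behaviour in T16.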