Today I would like to share some knowledge about how Netty's pooled ByteBuf allocation hits the thread-local cache. The content is detailed and the logic is laid out step by step; I hope you get something out of this article. Let's take a look.
Before analyzing the logic, we first introduce the data structure of the cache object.
Reviewing the previous section: PoolThreadCache maintains three cache arrays (actually six; here we only take the Direct type as an example, as the logic for the Heap type is the same): tinySubPageDirectCaches, smallSubPageDirectCaches, and normalDirectCaches, the cache arrays for the tiny, small, and normal size types respectively.
These three arrays are saved in the member variables of PoolThreadCache:
private final MemoryRegionCache<ByteBuffer>[] tinySubPageDirectCaches;
private final MemoryRegionCache<ByteBuffer>[] smallSubPageDirectCaches;
private final MemoryRegionCache<ByteBuffer>[] normalDirectCaches;
It is initialized in the constructor:
tinySubPageDirectCaches = createSubPageCaches(
        tinyCacheSize, PoolArena.numTinySubpagePools, SizeClass.Tiny);
smallSubPageDirectCaches = createSubPageCaches(
        smallCacheSize, directArena.numSmallSubpagePools, SizeClass.Small);
normalDirectCaches = createNormalCaches(
        normalCacheSize, maxCachedBufferCapacity, directArena);

Taking the tiny type as an example, we follow into the createSubPageCaches method:

private static <T> MemoryRegionCache<T>[] createSubPageCaches(
        int cacheSize, int numCaches, SizeClass sizeClass) {
    if (cacheSize > 0) {
        @SuppressWarnings("unchecked")
        MemoryRegionCache<T>[] cache = new MemoryRegionCache[numCaches];
        for (int i = 0; i < cache.length; i++) {
            cache[i] = new SubPageMemoryRegionCache<T>(cacheSize, sizeClass);
        }
        return cache;
    } else {
        return null;
    }
}

As analyzed in the previous section, a cache array is created here. Its length, numCaches, differs by type: 32 for tiny, 4 for small, and 3 for normal.

We know that each element of a cache array is a cache object which maintains a queue; the queue size is determined by the tinyCacheSize, smallCacheSize, and normalCacheSize properties of PooledByteBufAllocator, dissected in an earlier section.

Within each cache object, the ByteBufs cached in its queue all have a fixed size. Netty divides each buffer type into size classes of different lengths; every ByteBuf cached in one object's queue has the length of the same size class, and the length of the cache array equals the number of size classes.

For example, for the tiny type Netty defines 32 size classes, each a multiple of 16: 0B, 16B, 32B, 48B, 64B, 80B, 96B ... 496B, 32 classes in total. In the cache array tinySubPageDirectCaches, each size class is the size of the ByteBufs cached by one array element. Take tinySubPageDirectCaches[1] as an example (we pick index 1 because index 0 corresponds to the 0B class, which is effectively an empty cache): the ByteBufs cached there are all 16B long; those in tinySubPageDirectCaches[2] are all 32B; and so on, up to tinySubPageDirectCaches[31], whose ByteBufs are 496B.

The size classes per type are:

tiny: 32 classes, all multiples of 16: 0B, 16B, 32B, 48B, 64B, 80B, 96B ... 496B
small: 4 classes: 512B, 1KB, 2KB, 4KB
normal: 3 classes: 8KB, 16KB, 32KB

This is the data structure of the cache arrays in PoolThreadCache.

With a rough picture of the cache arrays in mind, we continue to dissect the logic of allocating memory from the cache. Back in the allocate method of PoolArena:

private void allocate(PoolThreadCache cache, PooledByteBuf<T> buf, final int reqCapacity) {
    // normalize the requested capacity
    final int normCapacity = normalizeCapacity(reqCapacity);
    if (isTinyOrSmall(normCapacity)) {
        int tableIdx;
        PoolSubpage<T>[] table;
        // is it tiny?
        boolean tiny = isTiny(normCapacity);
        if (tiny) { // < 512
            // try to allocate from the cache
            if (cache.allocateTiny(this, buf, reqCapacity, normCapacity)) {
                return;
            }
            // get tableIdx via tinyIdx
            tableIdx = tinyIdx(normCapacity);
            // the subpage array
            table = tinySubpagePools;
        } else {
            if (cache.allocateSmall(this, buf, reqCapacity, normCapacity)) {
                return;
            }
            tableIdx = smallIdx(normCapacity);
            table = smallSubpagePools;
        }
        // get the corresponding node
        final PoolSubpage<T> head = table[tableIdx];
        synchronized (head) {
            final PoolSubpage<T> s = head.next;
            // by default, head's next points to itself
            if (s != head) {
                assert s.doNotDestroy && s.elemSize == normCapacity;
                long handle = s.allocate();
                assert handle >= 0;
                s.chunk.initBufWithSubpage(buf, handle, reqCapacity);
                if (tiny) {
                    allocationsTiny.increment();
                } else {
                    allocationsSmall.increment();
                }
                return;
            }
        }
        allocateNormal(buf, reqCapacity, normCapacity);
        return;
    }
    // ... (normal and huge allocations follow)
}

The first step, normalizeCapacity(reqCapacity), normalizes the requested size. We follow into it:

int normalizeCapacity(int reqCapacity) {
    if (reqCapacity >= chunkSize) {
        return reqCapacity;
    }
    // if beyond tiny
    if (!isTiny(reqCapacity)) { // >= 512
        // find a power of 2 that is greater than or equal to reqCapacity
        int normalizedCapacity = reqCapacity;
        normalizedCapacity --;
        normalizedCapacity |= normalizedCapacity >>>  1;
        normalizedCapacity |= normalizedCapacity >>>  2;
        normalizedCapacity |= normalizedCapacity >>>  4;
        normalizedCapacity |= normalizedCapacity >>>  8;
        normalizedCapacity |= normalizedCapacity >>> 16;
        normalizedCapacity ++;

        if (normalizedCapacity < 0) {
            normalizedCapacity >>>= 1;
        }
        return normalizedCapacity;
    }
    // if it is already a multiple of 16, return it as is
    if ((reqCapacity & 15) == 0) {
        return reqCapacity;
    }
    // not a multiple of 16: round up to the next multiple of 16
    return (reqCapacity & ~15) + 16;
}
if (!isTiny(reqCapacity)) handles sizes beyond the tiny range, that is, 512B and above: the cascade of shifts and ORs smears the highest set bit downwards, so after the final increment the result is the smallest power of 2 greater than or equal to reqCapacity. If the size is tiny, execution falls through to the multiple-of-16 logic.
(reqCapacity & 15) == 0 checks whether the size is already a multiple of 16; if so, it is returned directly. If it is not a multiple of 16, (reqCapacity & ~15) + 16 is returned, that is, the size is rounded up to the smallest multiple of 16 greater than the current value.
From the above normalization logic we can see that every requested size is rounded up to a fixed size class, which guarantees that all ByteBufs cached by one cache object have the same capacity.
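To make the rounding concrete, here is a minimal standalone sketch that mirrors the two rules above (the chunkSize short-circuit is omitted; this is an illustration, not Netty's code), with a few sample inputs:

public class NormalizeDemo {
    // Mirrors the rounding rules: tiny (< 512B) rounds up to a multiple
    // of 16, everything else rounds up to the next power of two.
    static int normalize(int reqCapacity) {
        if (reqCapacity >= 512) {
            int n = reqCapacity - 1;      // smear the highest set bit downwards
            n |= n >>> 1;
            n |= n >>> 2;
            n |= n >>> 4;
            n |= n >>> 8;
            n |= n >>> 16;
            return n + 1;                 // smallest power of two >= reqCapacity
        }
        if ((reqCapacity & 15) == 0) {    // already a multiple of 16
            return reqCapacity;
        }
        return (reqCapacity & ~15) + 16;  // round up to the next multiple of 16
    }

    public static void main(String[] args) {
        System.out.println(normalize(20));   // 32   (tiny: next multiple of 16)
        System.out.println(normalize(48));   // 48   (already a multiple of 16)
        System.out.println(normalize(600));  // 1024 (next power of two)
    }
}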
Go back to the allocate method
if (isTinyOrSmall(normCapacity)) then determines, based on the normalized size, whether this is a tiny or small allocation. We follow the method:
boolean isTinyOrSmall(int normCapacity) {
    return (normCapacity & subpageOverflowMask) == 0;
}
This checks whether normCapacity is smaller than a page, that is, 8KB; if so, the allocation is in fact tiny or small. subpageOverflowMask is ~(pageSize - 1), so the masked bits are all zero exactly when normCapacity < pageSize.
Continue to look at the allocate method:
If the current size is tiny or small, isTiny(normCapacity) then determines whether it is of type tiny. We follow in:
static boolean isTiny(int normCapacity) {
    return (normCapacity & 0xFFFFFE00) == 0;
}
0xFFFFFE00 masks off everything below 512, so the expression is 0 exactly when normCapacity is less than 512; anything below 512B is considered tiny.
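A small standalone sketch (assuming Netty's default 8KB page size) shows that both mask checks are just fast range comparisons:

public class SizeClassDemo {
    static final int PAGE_SIZE = 8192;                   // default page size
    static final int SUBPAGE_OVERFLOW_MASK = ~(PAGE_SIZE - 1);

    static boolean isTinyOrSmall(int normCapacity) {
        // zero masked bits <=> normCapacity < PAGE_SIZE
        return (normCapacity & SUBPAGE_OVERFLOW_MASK) == 0;
    }

    static boolean isTiny(int normCapacity) {
        // zero high bits <=> normCapacity < 512
        return (normCapacity & 0xFFFFFE00) == 0;
    }

    public static void main(String[] args) {
        System.out.println(isTinyOrSmall(496));   // true  (tiny range)
        System.out.println(isTinyOrSmall(4096));  // true  (small range)
        System.out.println(isTinyOrSmall(8192));  // false (normal range)
        System.out.println(isTiny(496));          // true
        System.out.println(isTiny(512));          // false
    }
}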
Let's move on to the allocate method:
If it is tiny, allocation is first attempted on the cache through cache.allocateTiny(this, buf, reqCapacity, normCapacity).
Let's take the tiny type as an example and analyze the process of allocating a ByteBuf from the cache.
allocateTiny is the entry point for cache allocation. We follow into the allocateTiny method of PoolThreadCache:
boolean allocateTiny(PoolArena<?> area, PooledByteBuf<?> buf, int reqCapacity, int normCapacity) {
    return allocate(cacheForTiny(area, normCapacity), buf, reqCapacity);
}
Here cacheForTiny(area, normCapacity) finds the matching cache object in the tiny cache array based on normCapacity.
We follow up on cacheForTiny:
private MemoryRegionCache<?> cacheForTiny(PoolArena<?> area, int normCapacity) {
    int idx = PoolArena.tinyIdx(normCapacity);
    if (area.isDirect()) {
        return cache(tinySubPageDirectCaches, idx);
    }
    return cache(tinySubPageHeapCaches, idx);
}
PoolArena.tinyIdx(normCapacity) computes the index into the tiny cache array.
Continue to follow tinyIdx:
static int tinyIdx(int normCapacity) {
    return normCapacity >>> 4;
}
Here normCapacity is simply divided by 16 via an unsigned right shift by 4 bits. From the previous content we know that the sizes in the tiny cache array are all multiples of 16, so the index can be computed this way: 16B maps to the element at index 1, 32B to index 2, and so on up to 496B at index 31.
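As a quick sanity check, a standalone snippet (not Netty code) reproduces that mapping:

public class TinyIdxDemo {
    static int tinyIdx(int normCapacity) {
        return normCapacity >>> 4; // divide by 16
    }

    public static void main(String[] args) {
        // each tiny size class lands on its own cache-array slot
        System.out.println(tinyIdx(16));  // 1
        System.out.println(tinyIdx(32));  // 2
        System.out.println(tinyIdx(496)); // 31
    }
}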
Go back to the cacheForTiny method.
if (area.isDirect()) determines whether direct (off-heap) memory is being allocated; since we are using direct memory as our example, this is true here.
We then go into the cache(tinySubPageDirectCaches, idx) method:
private static <T> MemoryRegionCache<T> cache(MemoryRegionCache<T>[] cache, int idx) {
    if (cache == null || idx > cache.length - 1) {
        return null;
    }
    return cache[idx];
}
Here the cache object is fetched from the array directly by index, guarded by a null and bounds check.
Go back to the allocateTiny method of PoolThreadCache:
boolean allocateTiny(PoolArena<?> area, PooledByteBuf<?> buf, int reqCapacity, int normCapacity) {
    return allocate(cacheForTiny(area, normCapacity), buf, reqCapacity);
}
After getting the cache object, we go into the allocate(cacheForTiny(area, normCapacity), buf, reqCapacity) call:
private boolean allocate(MemoryRegionCache<?> cache, PooledByteBuf buf, int reqCapacity) {
    if (cache == null) {
        return false;
    }
    boolean allocated = cache.allocate(buf, reqCapacity);
    if (++ allocations >= freeSweepAllocationThreshold) {
        allocations = 0;
        trim();
    }
    return allocated;
}
Here the allocation continues through cache.allocate(buf, reqCapacity). Note the bookkeeping around it: every call increments allocations, and once the counter reaches freeSweepAllocationThreshold it is reset and trim() is called to free cached entries that have not been reused.
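To illustrate just that "sweep after N allocations" pattern, here is a hedged sketch; the names and the trim policy are illustrative assumptions, not Netty's actual implementation:

import java.util.ArrayDeque;
import java.util.Queue;

// Minimal model of a cache that periodically sweeps itself.
class SweepingCache<T> {
    private final Queue<T> queue = new ArrayDeque<>();
    private final int freeSweepAllocationThreshold;
    private int allocations;

    SweepingCache(int freeSweepAllocationThreshold) {
        this.freeSweepAllocationThreshold = freeSweepAllocationThreshold;
    }

    T allocate() {
        T value = queue.poll();           // null on a cache miss
        if (++allocations >= freeSweepAllocationThreshold) {
            allocations = 0;
            trim();                       // release entries that were not reused
        }
        return value;
    }

    void release(T value) {
        queue.offer(value);
    }

    private void trim() {
        // simplistic policy for the sketch: drop everything still queued
        queue.clear();
    }
}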
Continue to follow in the allocate (PooledByteBuf buf, int reqCapacity) method of the inner class MemoryRegionCache:
public final boolean allocate(PooledByteBuf<T> buf, int reqCapacity) {
    Entry<T> entry = queue.poll();
    if (entry == null) {
        return false;
    }
    initBuf(entry.chunk, entry.handle, buf, reqCapacity);
    entry.recycle();
    ++ allocations;
    return true;
}
Here we first poll an Entry from the queue via queue.poll(). As analyzed in the previous section, MemoryRegionCache maintains a queue, and each element in that queue is an Entry.
Let's take a brief look at the Entry class:

static final class Entry<T> {
    final Handle<Entry<?>> recyclerHandle;
    PoolChunk<T> chunk;
    long handle = -1;

    Entry(Handle<Entry<?>> recyclerHandle) {
        this.recyclerHandle = recyclerHandle;
    }

    void recycle() {
        chunk = null;
        handle = -1;
        recyclerHandle.recycle(this);
    }
}

An Entry holds the chunk the cached memory belongs to and the handle that locates the memory region inside that chunk. On a cache hit, initBuf(entry.chunk, entry.handle, buf, reqCapacity) initializes the ByteBuf with exactly that region, and recycle() then clears the fields and returns the Entry shell to the Recycler object pool for reuse.
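To tie the pieces together, here is a minimal, self-contained model of this cache-hit path; all names and types are illustrative assumptions, not Netty's implementation:

import java.util.ArrayDeque;
import java.util.Queue;

// Poll an entry from the queue, hand its (chunk, handle) pair to the
// buffer, then recycle the entry shell -- the shape of the hit path above.
class RegionCacheSketch {
    static final class Chunk {
        final byte[] memory = new byte[16 * 1024];
    }

    static final class Entry {
        Chunk chunk;
        long handle = -1;   // encodes where the region lives inside the chunk

        void recycle() {    // clear and return the shell for reuse
            chunk = null;
            handle = -1;
        }
    }

    static final class Buf {
        Chunk chunk;
        long handle;
        int capacity;

        void init(Chunk chunk, long handle, int capacity) {
            this.chunk = chunk;
            this.handle = handle;
            this.capacity = capacity;
        }
    }

    private final Queue<Entry> queue = new ArrayDeque<>();

    boolean allocate(Buf buf, int reqCapacity) {
        Entry entry = queue.poll();
        if (entry == null) {
            return false;   // cache miss: caller falls back to the arena
        }
        buf.init(entry.chunk, entry.handle, reqCapacity);
        entry.recycle();
        return true;
    }
}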