What is the recycling logic used by Netty distributed ByteBuf

2025-01-24 Update From: SLTechnology News&Howtos > Development


Shulou(Shulou.com)06/01 Report--

This article introduces the recycling logic used by Netty's pooled ByteBuf. Taking PooledUnsafeDirectByteBuf as an example, it walks through how a ByteBuf's memory is released and how the object itself is returned to a pool for reuse.

ByteBuf recycling

As we mentioned in the previous section, off-heap memory is not controlled by the JVM garbage collector, so when we allocate a piece of off-heap memory for a ByteBuf, we must release it explicitly after use. In this section, we take PooledUnsafeDirectByteBuf as an example to explain the memory release logic.

The entry method for memory release in PooledUnsafeDirectByteBuf is the release method in its parent class AbstractReferenceCountedByteBuf:

```java
@Override
public boolean release() {
    return release0(1);
}
```

release0 is called here:

```java
private boolean release0(int decrement) {
    for (;;) {
        int refCnt = this.refCnt;
        if (refCnt < decrement) {
            throw new IllegalReferenceCountException(refCnt, -decrement);
        }
        if (refCntUpdater.compareAndSet(this, refCnt, refCnt - decrement)) {
            if (refCnt == decrement) {
                deallocate();
                return true;
            }
            return false;
        }
    }
}
```

The check `if (refCnt == decrement)` determines whether the current ByteBuf has no remaining references; if so, it is released through the deallocate() method. Since we are using PooledUnsafeDirectByteBuf as our example, this calls the deallocate method of its parent class PooledByteBuf:

```java
protected final void deallocate() {
    if (handle >= 0) {
        final long handle = this.handle;
        this.handle = -1;
        memory = null;
        chunk.arena.free(chunk, handle, maxLength, cache);
        recycle();
    }
}
```
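The retry loop in release0 is a classic compare-and-swap pattern. The following standalone sketch (not Netty's actual class; the class and field names here are illustrative) shows the same idea with AtomicIntegerFieldUpdater: the count is decremented atomically, and the stand-in for deallocate() runs exactly once, when the count reaches zero.

```java
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

// Minimal sketch of the CAS retry loop used by release0.
public class RefCountSketch {
    private static final AtomicIntegerFieldUpdater<RefCountSketch> UPDATER =
            AtomicIntegerFieldUpdater.newUpdater(RefCountSketch.class, "refCnt");

    private volatile int refCnt = 1; // a new buffer starts with one reference
    boolean deallocated = false;

    public RefCountSketch retain() {
        UPDATER.incrementAndGet(this);
        return this;
    }

    public boolean release() {
        for (;;) {
            int refCnt = this.refCnt;
            if (refCnt < 1) {
                throw new IllegalStateException("refCnt: " + refCnt);
            }
            // CAS fails if another thread changed the count concurrently; retry.
            if (UPDATER.compareAndSet(this, refCnt, refCnt - 1)) {
                if (refCnt == 1) {
                    deallocated = true; // stands in for deallocate()
                    return true;
                }
                return false;
            }
        }
    }

    public static void main(String[] args) {
        RefCountSketch buf = new RefCountSketch();
        buf.retain();                       // refCnt is now 2
        System.out.println(buf.release());  // false: still referenced
        System.out.println(buf.release());  // true: last reference released
        System.out.println(buf.deallocated);
    }
}
```

The loop rather than a plain decrement is what makes the release safe under concurrent retain/release calls from multiple threads.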

this.handle = -1 indicates that the current ByteBuf no longer points to any block of memory.

memory = null clears the reference to the underlying memory as well.

chunk.arena.free(chunk, handle, maxLength, cache) is the step that actually releases the ByteBuf's memory.

recycle() returns the ByteBuf object to the object pool so it can be reused.

We first analyze the free method:

```java
void free(PoolChunk chunk, long handle, int normCapacity, PoolThreadCache cache) {
    // is it unpooled?
    if (chunk.unpooled) {
        int size = chunk.chunkSize();
        destroyChunk(chunk);
        activeBytesHuge.add(-size);
        deallocationsHuge.increment();
    } else {
        // determine which size class this capacity belongs to
        SizeClass sizeClass = sizeClass(normCapacity);
        // try to add to the thread-local cache
        if (cache != null && cache.add(this, chunk, handle, normCapacity, sizeClass)) {
            return;
        }
        // mark the memory segment as no longer in use
        freeChunk(chunk, handle, sizeClass);
    }
}
```

First, the method checks whether the chunk is unpooled. Ours is pooled, so execution goes to the else block:

sizeClass(normCapacity) determines which size class the capacity belongs to; we analyze the tiny level here.

cache.add(this, chunk, handle, normCapacity, sizeClass) caches the current ByteBuf.

As we said before, when a ByteBuf is reallocated it is taken from the cache first; this step is how freed memory gets into that cache:
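The size-class decision the free path makes can be sketched as follows. This is a standalone illustration, not Netty's code, and it assumes the default 8 KB page size: capacities below 512 bytes are Tiny, below one page are Small, and everything else is Normal.

```java
public class SizeClassSketch {
    enum SizeClass { Tiny, Small, Normal }

    static final int PAGE_SIZE = 8192; // Netty's default page size

    // Mirrors the decision sizeClass(normCapacity) makes.
    static SizeClass sizeClass(int normCapacity) {
        if (normCapacity < 512) {
            return SizeClass.Tiny;
        }
        if (normCapacity < PAGE_SIZE) {
            return SizeClass.Small;
        }
        return SizeClass.Normal;
    }

    public static void main(String[] args) {
        System.out.println(sizeClass(16));    // Tiny
        System.out.println(sizeClass(4096));  // Small
        System.out.println(sizeClass(16384)); // Normal
    }
}
```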

```java
boolean add(PoolArena area, PoolChunk chunk, long handle, int normCapacity, SizeClass sizeClass) {
    // get the MemoryRegionCache node
    MemoryRegionCache cache = cache(area, normCapacity, sizeClass);
    if (cache == null) {
        return false;
    }
    // wrap chunk and handle into an Entry and add it to the queue
    return cache.add(chunk, handle);
}
```

First, the cache node of the matching type is obtained; different memory sizes map to different cache objects. To briefly review: each cache object contains a queue, each node in the queue is an Entry, and each Entry holds a chunk and a handle, which together point to a unique piece of contiguous memory.

We follow into the cache method:

```java
private MemoryRegionCache cache(PoolArena area, int normCapacity, SizeClass sizeClass) {
    switch (sizeClass) {
    case Normal:
        return cacheForNormal(area, normCapacity);
    case Small:
        return cacheForSmall(area, normCapacity);
    case Tiny:
        return cacheForTiny(area, normCapacity);
    default:
        throw new Error();
    }
}
```

Assuming that we are of type tiny, we will go to the cacheForTiny (area, normCapacity) method and follow in:

```java
private MemoryRegionCache cacheForTiny(PoolArena area, int normCapacity) {
    int idx = PoolArena.tinyIdx(normCapacity);
    if (area.isDirect()) {
        return cache(tinySubPageDirectCaches, idx);
    }
    return cache(tinySubPageHeapCaches, idx);
}
```

We analyzed this method before: it locates the right cache among several cache arrays according to size, and after computing the index, uses cache() to fetch the corresponding cache object:

```java
private static MemoryRegionCache cache(MemoryRegionCache[] cache, int idx) {
    if (cache == null || idx > cache.length - 1) {
        return null;
    }
    return cache[idx];
}
```

As we can see, the cache object is fetched directly by index.
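The index arithmetic behind PoolArena.tinyIdx can be illustrated with a small standalone sketch (this mirrors the shift Netty performs, not Netty's class itself). Tiny sizes are multiples of 16 bytes below 512, so dividing the normalized capacity by 16 yields the position in the tiny cache array:

```java
public class TinyIdxSketch {
    // Tiny sizes are multiples of 16 bytes below 512, so shifting the
    // normalized capacity right by 4 (i.e. dividing by 16) gives the
    // index into the tiny cache array.
    static int tinyIdx(int normCapacity) {
        return normCapacity >>> 4;
    }

    public static void main(String[] args) {
        System.out.println(tinyIdx(16));  // 1
        System.out.println(tinyIdx(32));  // 2
        System.out.println(tinyIdx(496)); // 31, the largest tiny size
    }
}
```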

Go back to the add method:

```java
boolean add(PoolArena area, PoolChunk chunk, long handle, int normCapacity, SizeClass sizeClass) {
    // get the MemoryRegionCache node
    MemoryRegionCache cache = cache(area, normCapacity, sizeClass);
    if (cache == null) {
        return false;
    }
    // wrap chunk and handle into an Entry and add it to the queue
    return cache.add(chunk, handle);
}
```

The cache object here calls an add method, which encapsulates chunk and handle into an entry and adds it to the queue.

We follow the add method:

```java
public final boolean add(PoolChunk chunk, long handle) {
    Entry entry = newEntry(chunk, handle);
    boolean queued = queue.offer(entry);
    if (!queued) {
        entry.recycle();
    }
    return queued;
}
```

As we mentioned earlier, when allocation is served from the cache, the Entry popped from the queue is put into an object pool. Here, Entry entry = newEntry(chunk, handle) takes an Entry object from that object pool and assigns chunk and handle to it.

The Entry is then added to the queue through queue.offer(entry).
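The pattern in add() can be sketched with standard collections. This is an illustration of the idea, not Netty's Recycler: take an Entry from an object pool, fill in chunk and handle, offer it to a bounded queue, and recycle the Entry back into the pool if the queue is full.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;

// Sketch of the Entry pooling pattern used when caching freed memory.
public class EntryCacheSketch {
    static class Entry {
        Object chunk;
        long handle;
    }

    private final Queue<Entry> pool = new ArrayDeque<>(); // reusable Entry objects
    private final Queue<Entry> queue;                     // cached memory regions

    EntryCacheSketch(int capacity) {
        queue = new ArrayBlockingQueue<>(capacity);
    }

    private Entry newEntry(Object chunk, long handle) {
        Entry e = pool.poll();          // reuse a pooled Entry if available
        if (e == null) {
            e = new Entry();
        }
        e.chunk = chunk;
        e.handle = handle;
        return e;
    }

    public boolean add(Object chunk, long handle) {
        Entry entry = newEntry(chunk, handle);
        boolean queued = queue.offer(entry);
        if (!queued) {
            recycle(entry);             // queue full: return the Entry to the pool
        }
        return queued;
    }

    private void recycle(Entry entry) {
        entry.chunk = null;
        entry.handle = -1;
        pool.offer(entry);
    }

    public static void main(String[] args) {
        EntryCacheSketch cache = new EntryCacheSketch(1);
        System.out.println(cache.add(new Object(), 42L)); // true: queued
        System.out.println(cache.add(new Object(), 43L)); // false: full, recycled
    }
}
```

Recycling the Entry on a failed offer is what keeps the cache from allocating garbage even when it is full.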

Let's go back to the free method:

```java
void free(PoolChunk chunk, long handle, int normCapacity, PoolThreadCache cache) {
    // is it unpooled?
    if (chunk.unpooled) {
        int size = chunk.chunkSize();
        destroyChunk(chunk);
        activeBytesHuge.add(-size);
        deallocationsHuge.increment();
    } else {
        // determine which size class this capacity belongs to
        SizeClass sizeClass = sizeClass(normCapacity);
        // try to add to the thread-local cache
        if (cache != null && cache.add(this, chunk, handle, normCapacity, sizeClass)) {
            return;
        }
        freeChunk(chunk, handle, sizeClass);
    }
}
```

If adding to the cache succeeds, the method returns; otherwise freeChunk(chunk, handle, sizeClass) is called, which marks the memory segment originally allocated to the ByteBuf as unused.

Let's follow into freeChunk for a brief analysis:

```java
void freeChunk(PoolChunk chunk, long handle, SizeClass sizeClass) {
    final boolean destroyChunk;
    synchronized (this) {
        switch (sizeClass) {
        case Normal:
            ++deallocationsNormal;
            break;
        case Small:
            ++deallocationsSmall;
            break;
        case Tiny:
            ++deallocationsTiny;
            break;
        default:
            throw new Error();
        }
        destroyChunk = !chunk.parent.free(chunk, handle);
    }
    if (destroyChunk) {
        destroyChunk(chunk);
    }
}
```

Let's follow up on the free method:

```java
boolean free(PoolChunk chunk, long handle) {
    chunk.free(handle);
    if (chunk.usage() < minUsage) {
        remove(chunk);
        return move0(chunk);
    }
    return true;
}
```

chunk.free(handle) releases a contiguous block of memory through the chunk. Following into that free method:

```java
void free(long handle) {
    int memoryMapIdx = memoryMapIdx(handle);
    int bitmapIdx = bitmapIdx(handle);
    if (bitmapIdx != 0) {
        PoolSubpage subpage = subpages[subpageIdx(memoryMapIdx)];
        assert subpage != null && subpage.doNotDestroy;
        PoolSubpage head = arena.findSubpagePoolHead(subpage.elemSize);
        synchronized (head) {
            if (subpage.free(head, bitmapIdx & 0x3FFFFFFF)) {
                return;
            }
        }
    }
    freeBytes += runLength(memoryMapIdx);
    setValue(memoryMapIdx, depth(memoryMapIdx));
    updateParentsFree(memoryMapIdx);
}
```

The `if (bitmapIdx != 0)` check determines whether the buffer was allocated at the Page level or the Subpage level. For a Subpage allocation, the relevant Subpage is located and its bitmap bit is cleared to 0; otherwise the memory is marked as unused by reversing the markings made at allocation time. Readers can work through this logic on their own; with a solid grasp of the earlier allocation material it is not difficult. Back in PooledByteBuf's deallocate method:

```java
protected final void deallocate() {
    if (handle >= 0) {
        final long handle = this.handle;
        this.handle = -1;
        memory = null;
        chunk.arena.free(chunk, handle, maxLength, cache);
        recycle();
    }
}
```

This concludes our study of the recycling logic used by Netty's pooled ByteBuf; pairing the theory with hands-on practice is the best way to consolidate it.
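The subpage-level free can be sketched with a plain long used as a bitmap, where each bit marks one fixed-size element of the page as used (1) or free (0). This is a standalone illustration, not Netty's PoolSubpage; the handle value below is made up purely to show the 0x3FFFFFFF mask stripping the handle's high flag bits so that only the element index remains:

```java
// Sketch of how a subpage free clears one bit in a usage bitmap.
public class SubpageBitmapSketch {
    private long bitmap; // one bit per element; up to 64 elements here

    void allocate(int bitIdx) {
        bitmap |= 1L << bitIdx;    // mark the element as used
    }

    void free(long handle) {
        int bitIdx = (int) (handle & 0x3FFFFFFF); // keep only the index bits
        bitmap &= ~(1L << bitIdx); // mark the element as free again
    }

    boolean isUsed(int bitIdx) {
        return (bitmap & (1L << bitIdx)) != 0;
    }

    public static void main(String[] args) {
        SubpageBitmapSketch page = new SubpageBitmapSketch();
        page.allocate(3);
        System.out.println(page.isUsed(3)); // true
        page.free(0x40000003L);             // high flag bit masked off, frees bit 3
        System.out.println(page.isUsed(3)); // false
    }
}
```

Freeing is therefore just the inverse of allocation: the same bit that was set when the element was handed out is cleared, which is what makes the reverse marking in the real free method cheap.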
