This article looks at how Java's HashMap is sized: its default capacity, load factor, tree-conversion thresholds, and how keys are mapped to buckets. The ideas are simple and practical, so let's walk through them.
1. The default capacity of HashMap is 16, and it can be set explicitly. If you know roughly how many entries you will store, specifying an initial capacity reduces the number of dynamic resizes and improves performance. Whatever value you pass is rounded up to a power of two: ask for 500, for example, and the actual table size will be 512.
Why must the capacity be a power of two? It is tied to how the index is computed; see point 4 below.
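As a rough illustration (not the JDK's actual code), the hypothetical helper nextPowerOfTwo below mirrors the idea behind HashMap's internal tableSizeFor, which rounds a requested capacity up to the next power of two:

import java.util.HashMap;
import java.util.Map;

public class InitialCapacityDemo {
    // Hypothetical helper: round a requested capacity up to the next power of two,
    // in the same spirit as HashMap's internal tableSizeFor.
    static int nextPowerOfTwo(int requested) {
        int n = requested - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return n + 1;
    }

    public static void main(String[] args) {
        // Asking for 500 buckets actually yields a table of 512 (2^9).
        System.out.println(nextPowerOfTwo(500)); // 512

        // The constructor argument is only a hint; the real table size is a power of two.
        Map<String, String> map = new HashMap<>(500);
    }
}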
2. The default load factor is 0.75. When the number of entries exceeds capacity × load factor, the table is resized, and each resize doubles the capacity.
Why 0.75? It is a trade-off: at 0.75 the table's space utilization is reasonably high while hash collisions remain rare enough that the chains (or red-black trees) in each bucket stay short, balancing space usage against lookup time.
Why double the capacity on each resize? Again, see point 4.
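A back-of-the-envelope sketch of the default numbers: with capacity 16 and load factor 0.75, the resize threshold is 12, so once the number of entries exceeds 12 the table doubles to 32.

public class ResizeThresholdDemo {
    public static void main(String[] args) {
        int capacity = 16;          // default initial table size
        float loadFactor = 0.75f;   // default load factor
        int threshold = (int) (capacity * loadFactor);
        System.out.println(threshold);      // 12: a resize happens once size exceeds this
        System.out.println(capacity * 2);   // 32: each resize doubles the table
    }
}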
3. Collisions are resolved by chaining, and JDK 1.8 added red-black trees: when a bucket's chain grows longer than 8 it is converted to a red-black tree, and when it shrinks below 6 the tree is converted back to a linked list. For short chains a red-black tree is not worth it, because it has to maintain its balance and its performance advantage over a plain list is not significant.
Why convert back at 6 rather than 8? Suppose both conversions happened at the single threshold 8. A HashMap whose bucket sizes hover around 8 under continuous inserts and deletes would then flip between a list and a tree constantly, which is very inefficient. Leaving a gap between 6 and 8 damps this back-and-forth.
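For reference, these thresholds exist as named constants in OpenJDK's java.util.HashMap (JDK 8 and later); the small class below simply restates them:

// The JDK's own thresholds, as defined in OpenJDK's java.util.HashMap (JDK 8+),
// restated here for reference.
public class TreeifyThresholds {
    static final int TREEIFY_THRESHOLD = 8;   // a bin is converted to a tree once its chain reaches this length
    static final int UNTREEIFY_THRESHOLD = 6; // a tree bin is converted back to a list when it shrinks to this size

    public static void main(String[] args) {
        // The gap between 8 and 6 prevents a bin hovering around one value
        // from flipping back and forth between a linked list and a red-black tree.
        System.out.println("treeify above " + TREEIFY_THRESHOLD
                + ", untreeify at " + UNTREEIFY_THRESHOLD);
    }
}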
4. The bucket index for a key is computed in two steps:
// 1. Compute the spread hash value of the key
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

// 2. On insert/lookup, map the hash value to a slot in the table
int index = hash(key) & (capacity - 1);
The hashCode() method returns the hash code of a Java object, a 32-bit int. Why shift this value right by 16 bits and XOR it with itself? Because when the capacity is small, only the low bits of the hash actually take part in the index calculation. If the high and low bits were not mixed, two hash codes that differ only in their high bits would map to the same bucket. XORing the high 16 bits into the low 16 bits lets changes in the high bits influence the index, preserving the randomness of the hash.
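A minimal sketch of why the mixing step matters: the two hash codes below are hypothetical values that differ only in their high 16 bits, so without mixing they land in the same bucket of a 16-slot table, while with mixing they do not.

public class HashMixingDemo {
    // The same spreading step used by HashMap.hash(): XOR the high 16 bits into the low 16.
    static int mix(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int capacity = 16;
        int mask = capacity - 1;

        // Two hypothetical hash codes that differ only in their high 16 bits.
        int h1 = 0x10000;
        int h2 = 0x20000;

        // Without mixing, both collapse onto bucket 0 because only the low bits are used.
        System.out.println((h1 & mask) + " " + (h2 & mask));           // 0 0

        // With mixing, the high-bit difference reaches the low bits and the buckets differ.
        System.out.println((mix(h1) & mask) + " " + (mix(h2) & mask)); // 1 2
    }
}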
Why not compute the index as hash(key) % capacity? Because a bitwise AND is faster than a remainder operation. And why does hash(key) & (capacity - 1) give the same result? Because when B is a power of two, A % B == A & (B - 1). Taking A modulo B keeps the part of A that is not divisible by B; in binary, that is just the low bits of A. B - 1 acts as a "low-bit mask": ANDing with it zeroes the high bits of the hash and keeps only the low bits, which are exactly the remainder. For example, let A = 24 and B = 16, so A % B = 8. In binary, A = 11000 and B = 10000; the part of A not divisible by B is 1000. To keep that part, we AND A with the mask 01111, which leaves 1000 as the index value, and 01111 is exactly B - 1. So A & (B - 1) == A % B.
The precondition is that B is a power of two, and HashMap guarantees this: whatever initial size you set is rounded up to a power of two, and every resize doubles the capacity, so the table size always stays a power of two.
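A quick check of the identity using the article's own numbers, plus a counterexample with a non-power-of-two divisor (the counterexample is an added illustration):

public class PowerOfTwoModDemo {
    public static void main(String[] args) {
        int a = 24;       // 0b11000
        int b = 16;       // 0b10000, a power of two
        System.out.println(a % b);        // 8
        System.out.println(a & (b - 1));  // 8: b - 1 = 0b01111 acts as a low-bit mask

        // The identity only holds when the divisor is a power of two, which is
        // exactly why HashMap keeps its table size at a power of two.
        int c = 12;       // not a power of two
        System.out.println(a % c);        // 0
        System.out.println(a & (c - 1));  // 24 & 11 = 8, not equal to 24 % 12
    }
}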
At this point you should have a clearer picture of how a Java HashMap is sized; the best way to consolidate it is to try these numbers out in practice.