This article explains why HashMap's load factor is 0.75. The editor finds it quite practical and shares it here in the hope that you will get something out of it.
Why does HashMap need a load factor?
HashMap is backed by a hash table, a structure that stores key-value pairs. Some computation is needed to decide where each entry is placed in the table:
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

// AbstractMap
public int hashCode() {
    int h = 0;
    Iterator<Entry<K, V>> i = entrySet().iterator();
    while (i.hasNext())
        h += i.next().hashCode();
    return h;
}
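The spread hash alone does not pick a bucket; HashMap combines it with the table length, computing the index as (n - 1) & hash, which works because the table length n is always a power of two. The sketch below illustrates this; the class name and demo values are illustrative, not JDK source:

public class IndexDemo {
    // Same spread function as HashMap.hash(): XOR the high 16 bits into the low 16 bits
    // so that small tables still see the influence of the upper bits of hashCode().
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16;                             // table length, always a power of two
        int index = (n - 1) & hash("example");  // HashMap-style bucket choice: (n - 1) & hash
        System.out.println("bucket index = " + index);
    }
}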
Most data structures are either fast to query or fast to insert; HashMap is a structure with comparatively slow insertion but fast lookup.
However, this kind of structure is prone to two problems: ① if space utilization is high, the hash calculation will often land on a storage location that already holds data (a hash collision); ② if the array capacity is enlarged to avoid collisions, space utilization becomes low.
The load factor represents how full the hash table is.
Load factor = number of elements filled in the table / length of hash table
The larger the load factor, the more elements are filled in and the higher the space utilization, but the greater the chance of a collision.
The smaller the load factor, the fewer elements are filled in and the lower the chance of a collision, but more space is wasted and the number of rehash operations during expansion increases.
The greater the chance of a collision, the more expensive lookups become. A balance therefore has to be struck between "chance of collision" and "space utilization".
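As a concrete illustration of the formula above: HashMap's defaults are a capacity of 16 and a load factor of 0.75, so the resize threshold is 16 × 0.75 = 12, and the table is resized once a 13th entry is inserted. A minimal sketch of that arithmetic (class name and printout are illustrative):

public class LoadFactorDemo {
    public static void main(String[] args) {
        int capacity = 16;          // HashMap's default initial capacity
        float loadFactor = 0.75f;   // HashMap's default load factor
        int threshold = (int) (capacity * loadFactor); // 16 * 0.75 = 12
        // HashMap resizes (doubles the table and rehashes) once its size
        // exceeds this threshold, i.e. when the 13th entry is inserted.
        System.out.println("resize is triggered after " + threshold + " entries");
    }
}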
From this we can also see the main factors that affect lookup efficiency:
Does the hash function distribute the data evenly across the hash table?
How are collisions handled?
How is the hash table's load factor chosen?
This article focuses mainly on the latter two questions.
How can collisions be resolved? 1. Open addressing
Hi = (H(key) + di) MOD m, i = 1, 2, …, k (k ≤ m − 1), where H(key) is the hash function, m is the length of the hash table, and di is the increment sequence.
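Note that HashMap itself resolves collisions by chaining the entries in each bucket; the open addressing formula above describes an alternative strategy. As an illustration only, here is a hypothetical, minimal linear-probing insert (di = 0, 1, 2, …), not HashMap code:

import java.util.Arrays;

public class LinearProbingDemo {
    // Hi = (H(key) + di) MOD m: di = 0 is the home slot H(key),
    // di = 1, 2, 3, ... are the linear probes tried after a collision.
    static int insert(Object[] table, Object key) {
        int m = table.length;
        int h = Math.abs(key.hashCode() % m);    // H(key)
        for (int di = 0; di < m; di++) {
            int hi = (h + di) % m;               // next candidate slot
            if (table[hi] == null) {
                table[hi] = key;
                return hi;                       // index where the key was placed
            }
        }
        return -1;                               // table is full
    }

    public static void main(String[] args) {
        Object[] table = new Object[8];
        insert(table, "a");
        insert(table, "b");
        System.out.println(Arrays.toString(table));
    }
}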