2025-01-19 Update — Shulou (Shulou.com), SLTechnology News & Howtos > Development
This article explains what Java's HashMap and Hashtable are for and how they work. The material is straightforward; follow it step by step to understand both classes and the differences between them.
HashMap
HashMap is one of the collections we use most. It is a hash-table-based implementation of the Map interface and stores data as key-value pairs. HashMap always treats a key and its value as a unit: the system computes the storage location of each pair from the key's hash, which is what lets us store and retrieve values quickly by key.
Definition
HashMap implements the Map interface and extends AbstractMap. The Map interface defines the rules for mapping keys to values, while AbstractMap provides a skeletal implementation of Map to minimize the work needed to implement the interface. Strictly speaking, AbstractMap already implements Map, so repeating `implements Map` on HashMap is redundant; the JDK authors presumably declare it again to make the relationship clearer.
public class HashMap<K,V> extends AbstractMap<K,V>
        implements Map<K,V>, Cloneable, Serializable

HashMap provides three constructors:

HashMap(): constructs an empty HashMap with the default initial capacity of 16 and the default load factor of 0.75.

HashMap(int initialCapacity): constructs an empty HashMap with the specified initial capacity and the default load factor of 0.75.

HashMap(int initialCapacity, float loadFactor): constructs an empty HashMap with the specified initial capacity and load factor.
Two parameters are mentioned here: the initial capacity and the load factor. These are the two parameters that most affect HashMap's performance. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the table is created. The load factor measures how full the hash table is allowed to get before its capacity is automatically increased: the higher the load factor, the fuller the table, and vice versa.
For a hash table that resolves collisions by chaining, the average time to find an element is O(1 + α), where α is the load factor. A larger load factor uses space more fully but degrades lookup performance; a load factor that is too small leaves the table sparse and wastes space. The system's default load factor is 0.75, and in general there is no need to change it.
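As a small illustration of the capacity/load-factor relationship, the sketch below pre-sizes a HashMap so that an expected number of entries fits under the 0.75 threshold without triggering a rehash after the initial allocation. The class and helper name are hypothetical, not part of the JDK:

```java
import java.util.HashMap;
import java.util.Map;

public class LoadFactorDemo {
    // Pre-size a HashMap so `expected` entries fit without a rehash,
    // assuming the default load factor of 0.75.
    static <K, V> Map<K, V> withExpectedSize(int expected) {
        return new HashMap<>((int) (expected / 0.75f) + 1);
    }

    public static void main(String[] args) {
        Map<String, Integer> m = withExpectedSize(100);
        for (int i = 0; i < 100; i++)
            m.put("key" + i, i);
        System.out.println(m.size()); // 100
    }
}
```

Guava's `Maps.newHashMapWithExpectedSize` uses essentially the same idea.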
HashMap is a data structure built for fast access; to understand its performance, we must first understand its data structure.
Data structure
We know that the two most basic building blocks in Java are arrays and references (simulated pointers), and almost all data structures can be built by combining them. HashMap is no exception: it is effectively a "chained hash", and its data structure is shown below:
HashMap data structure diagram
Each cell of the table array shown above is a bucket. The load factor measures how full the map may get before it grows. For example, with a load factor of 0.75 and an initial capacity (number of buckets) of 16, the maximum number of elements that can be loaded is 16 × 0.75 = 12. This value is called the threshold, and it is defined as a field in the map: once the threshold is exceeded, the map automatically expands its capacity.
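The threshold arithmetic above can be checked directly. The class and method names here are hypothetical, used only to mirror the formula threshold = capacity × load factor:

```java
public class ThresholdDemo {
    // threshold = capacity * loadFactor; a resize is triggered
    // once the number of entries exceeds this value
    static int threshold(int capacity, float loadFactor) {
        return (int) (capacity * loadFactor);
    }

    public static void main(String[] args) {
        // with the defaults (capacity 16, load factor 0.75),
        // the map expands after the 12th entry
        System.out.println(threshold(16, 0.75f)); // 12
        System.out.println(threshold(32, 0.75f)); // 24
    }
}
```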
From the figure above, we can see that the underlying implementation of HashMap is still an array, but each item of the array is a chain. The parameter initialCapacity represents the length of the array. The following is the source code of the HashMap constructor:
public HashMap(int initialCapacity, float loadFactor) {
    // the initial capacity cannot exceed the maximum capacity of HashMap, which is 2^30
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    // the load factor must be a positive number
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal load factor: " + loadFactor);
    // ... (the remainder of the constructor is truncated in the original)
}

Before storing or retrieving an entry, HashMap (JDK 1.6) computes a supplemental hash from the key's hashCode to spread the high bits downward:

static int hash(int h) {
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}
We know that the data in HashMap's table should be distributed evenly (ideally each slot holds only one element, so a lookup finds it directly): a table that is too tight slows queries, and one that is too loose wastes space. After computing the hash value, how do we map it evenly onto the table? The natural idea is to take the hash modulo the table length, but the modulo operation is relatively expensive, so HashMap instead calls the indexFor method:
static int indexFor(int h, int length) {
    return h & (length - 1);
}
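It is easy to convince ourselves that the bit mask really does replace the modulo, provided the length is a power of two. A minimal sketch (class name is hypothetical):

```java
public class IndexForDemo {
    // HashMap's index computation: behaves like h mod length,
    // but only when length is a power of two
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int length = 16; // power of two, as HashMap guarantees
        for (int h : new int[]{0, 1, 15, 16, 17, 12345, -7}) {
            // Math.floorMod gives the non-negative remainder,
            // which is what the mask produces
            if (indexFor(h, length) != Math.floorMod(h, length))
                throw new AssertionError("mismatch at h=" + h);
        }
        System.out.println("h & (length-1) matches floorMod for power-of-two lengths");
    }
}
```

For a non-power-of-two length the equivalence breaks down, which is one reason HashMap rounds every requested capacity up to a power of two.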
The length of HashMap's underlying array is always a power of two; the constructor guarantees this by rounding the requested capacity up to the next power of two, which is exactly what makes the `h & (length - 1)` trick equivalent to a modulo. In JDK 1.8, insertion is handled by putVal. The excerpt below (the opening of the method is truncated in the original) shows the chain walk and the tree conversion:

for (int binCount = 0; ; ++binCount) {
    if ((e = p.next) == null) {
        p.next = newNode(hash, key, value, null);
        // once a bin's linked list holds more than 8 nodes,
        // convert it to a red-black tree
        if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
            treeifyBin(tab, hash);
        break;
    }
    if (e.hash == hash &&
        ((k = e.key) == key || (key != null && key.equals(k))))
        break;
    p = e;
}
if (e != null) { // existing mapping for key
    V oldValue = e.value;
    if (!onlyIfAbsent || oldValue == null)
        e.value = value;
    afterNodeAccess(e);
    return oldValue;
}
++modCount;
if (++size > threshold)
    resize();
afterNodeInsertion(evict);
return null;
In JDK 1.8, a bin's linked list is converted to a red-black tree when its length exceeds 8. The conversion method is as follows:
final void treeifyBin(Node<K,V>[] tab, int hash) {
    int n, index; Node<K,V> e;
    // if the table is still smaller than the tree-ification threshold,
    // resize instead of converting the bin
    if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
        resize();
    else if ((e = tab[index = (n - 1) & hash]) != null) {
        TreeNode<K,V> hd = null, tl = null;
        do {
            // this method directly returns a red-black tree node
            TreeNode<K,V> p = replacementTreeNode(e, null);
            if (tl == null)
                hd = p;
            else {
                // link the nodes one by one, starting from the head of the list
                p.prev = tl;
                tl.next = p;
            }
            tl = p;
        } while ((e = e.next) != null);
        if ((tab[index] = hd) != null)
            hd.treeify(tab);
    }
}

// For treeifyBin
TreeNode<K,V> replacementTreeNode(Node<K,V> p, Node<K,V> next) {
    return new TreeNode<>(p.hash, p.key, p.value, next);
}

Expansion

When the number of entries exceeds the threshold, resize doubles the table:

final Node<K,V>[] resize() {
    Node<K,V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    if (oldCap > 0) {
        // if the old capacity is already at the maximum, set the threshold
        // to the largest int value; the table can grow no further
        if (oldCap >= MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        // otherwise double the capacity, and double the threshold with it
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr << 1;
    }
    else if (oldThr > 0) // initial capacity was placed in threshold
        newCap = oldThr;
    else {               // zero initial threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    @SuppressWarnings({"rawtypes","unchecked"})
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    table = newTab;
    if (oldTab != null) {
        for (int j = 0; j < oldCap; ++j) {
            Node<K,V> e;
            if ((e = oldTab[j]) != null) {
                oldTab[j] = null;
                if (e.next == null)
                    // a lone node: recompute its index (the hash is unchanged,
                    // only the index changes) and insert it directly
                    newTab[e.hash & (newCap - 1)] = e;
                else if (e instanceof TreeNode)
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                else { // preserve order
                    // walk the chain from head to tail and split it, keeping
                    // relative order, into a "low" list (same index) and a
                    // "high" list (index + oldCap)
                    Node<K,V> loHead = null, loTail = null;
                    Node<K,V> hiHead = null, hiTail = null;
                    Node<K,V> next;
                    do {
                        next = e.next;
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null) loHead = e;
                            else loTail.next = e;
                            loTail = e;
                        }
                        else {
                            if (hiTail == null) hiHead = e;
                            else hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}

Read implementation: get(key)

Compared with storing, fetching from HashMap is simpler: find the Entry at the computed index of the table array via the key's hash value, then return that key's value. The JDK 1.6 implementation:

public V get(Object key) {
    // a null key is handled separately by getForNullKey
    if (key == null)
        return getForNullKey();
    // compute the supplemental hash from the key's hashCode
    int hash = hash(key.hashCode());
    // walk the chain at the computed index of the table array
    for (Entry<K,V> e = table[indexFor(hash, table.length)];
         e != null;
         e = e.next) {
        Object k;
        // if a key on the chain equals the key we are looking for,
        // return the corresponding value
        if (e.hash == hash && ((k = e.key) == key || key.equals(k)))
            return e.value;
    }
    return null;
}
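The getForNullKey branch mentioned above means HashMap accepts a single null key. A minimal sketch (class and method names are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class NullKeyDemo {
    // HashMap stores at most one null key; get(null) takes the
    // dedicated getForNullKey path rather than the hash path
    static String nullKeyRoundTrip() {
        Map<String, String> m = new HashMap<>();
        m.put(null, "value-for-null");
        return m.get(null);
    }

    public static void main(String[] args) {
        System.out.println(nullKeyRoundTrip()); // value-for-null
    }
}
```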
HashMap can fetch a value quickly by key not only because of its data structure but also because of how Entry works. As mentioned earlier, HashMap never stores key and value separately: each pair is treated as a unit, the Entry object. The value is essentially an attachment to its key. When storing, the system determines the Entry's slot in the table array from the key's hashCode, and when retrieving, it locates the Entry by the same hashCode.
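Because the bucket is chosen from the key's hashCode and the match is confirmed with equals, any key type must keep those two methods consistent: equal keys must produce equal hash codes. A minimal sketch with a hypothetical key class:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class KeyHashDemo {
    // a hypothetical key type: equal keys must produce equal hash codes,
    // otherwise get() would search the wrong bucket
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        @Override public boolean equals(Object o) {
            return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
        }
        @Override public int hashCode() { return Objects.hash(x, y); }
    }

    static String lookup() {
        Map<Point, String> m = new HashMap<>();
        m.put(new Point(1, 2), "stored");
        // a distinct but equal instance hashes to the same bucket
        // and compares equal, so the lookup succeeds
        return m.get(new Point(1, 2));
    }

    public static void main(String[] args) {
        System.out.println(lookup()); // stored
    }
}
```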
Java provides two general-purpose hash table classes. Both combine a key and a value into a key-value pair saved via the put(key, value) method, and the corresponding value is later retrieved via the get(key) method.
HashTable
One is the HashMap discussed above; the other is Hashtable, which we turn to now. Hashtable's implementation is largely similar to HashMap's, so a good understanding of HashMap goes a long way toward understanding Hashtable. The handful of differences between them are explained below.
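The shared put/get contract described above can be sketched with one routine that works against either implementation (class and method names are hypothetical):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class PutGetDemo {
    // both Map implementations expose the same put/get contract,
    // so the same code runs against either one
    static int sumAges(Map<String, Integer> ages) {
        ages.put("alice", 30);
        ages.put("bob", 25);
        return ages.get("alice") + ages.get("bob");
    }

    public static void main(String[] args) {
        System.out.println(sumAges(new HashMap<>()));   // 55
        System.out.println(sumAges(new Hashtable<>())); // 55
    }
}
```

Coding against the Map interface, as here, is what makes the two classes interchangeable for basic use.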
The definition of Hashtable in Java is as follows:

public class Hashtable<K,V> extends Dictionary<K,V>
        implements Map<K,V>, Cloneable, java.io.Serializable

We can see that Hashtable extends the Dictionary class and implements the Map interface. Dictionary is the abstract parent of any class, such as Hashtable, that maps keys to values; each key and each value is an object, and in any Dictionary object each key is associated with at most one value. Map is the key-value pair interface.

Hashtable implements the hash table with the chaining ("zipper") method and defines several important fields: table, count, threshold, loadFactor, and modCount.

table: an Entry[] array. Each Entry is a node on a "zipper" and represents one key-value pair; all of the hash table's key-value pairs are stored in this array.

count: the size of the Hashtable. Note that this is not the capacity of the table but the number of Entry key-value pairs it contains.

threshold: the threshold of the Hashtable, used to decide whether its capacity needs to be adjusted; threshold = capacity × load factor.

loadFactor: the load factor.

modCount: used to implement the fail-fast mechanism. In a collection, if another thread makes a structural change while an iterator is iterating, the iterator detects it immediately and throws a ConcurrentModificationException at once, instead of letting you discover the problem only after the iteration completes.

Hashtable provides four constructors, through which we can build the Hashtable we want:

public Hashtable() {
    this(11, 0.75f);
}

The default constructor, with a capacity of 11 and a load factor of 0.75.
public Hashtable(int initialCapacity) {
    this(initialCapacity, 0.75f);
}

Constructs a new, empty hash table with the specified initial capacity and the default load factor.

public Hashtable(int initialCapacity, float loadFactor) {
    // validate the initial capacity
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal Capacity: " + initialCapacity);
    // validate the load factor
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal Load: " + loadFactor);
    // ... (the remainder of the constructor is truncated in the original)
}
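Two of the behaviors described above, constructor validation and modCount-based fail-fast iteration, are easy to observe directly. A minimal sketch (class and method names are hypothetical):

```java
import java.util.ConcurrentModificationException;
import java.util.Hashtable;
import java.util.Map;

public class HashtableBehaviorDemo {
    // a negative initial capacity fails the constructor's validation check
    static boolean rejectsNegativeCapacity() {
        try {
            new Hashtable<String, String>(-1);
            return false;
        } catch (IllegalArgumentException e) {
            return true;
        }
    }

    // structurally modifying the table while iterating a collection view
    // trips the modCount check and throws ConcurrentModificationException
    static boolean failsFast() {
        Map<String, Integer> m = new Hashtable<>();
        m.put("a", 1);
        m.put("b", 2);
        try {
            for (String k : m.keySet())
                m.put("c", 3); // structural modification during iteration
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(rejectsNegativeCapacity()); // true
        System.out.println(failsFast());               // true
    }
}
```

Note that Hashtable's collection-view iterators are fail-fast, while its older Enumeration views are not.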