What are the Java interview questions?

2025-04-05 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

This article walks through common Java collection interview questions. The explanations are kept simple, quick, and practical; let's go through them one by one.

What are the commonly used collection classes? ***

The Map interface and the Collection interface are the parent interfaces of the collection framework. In the usual class diagram (not reproduced in this text version), lines between classes denote class inheritance, lines between interfaces denote interface inheritance, and a line from a class to an interface means the class implements the interface. The key implementation classes to know are HashMap, LinkedList, HashTable, ArrayList, HashSet, Stack, TreeSet, and TreeMap. Note: the Collection interface is not the parent interface of Map.

What is the difference among List, Set, and Map? ***

List: an ordered collection (ordered means elements are retrieved in the order they were stored, not sorted by some property of the elements); it can store duplicate elements and multiple nulls.

Set: an unordered collection (elements are not necessarily retrieved in the order they were stored); it cannot store duplicate elements and can hold at most one null.

Map: stores elements as key-value pairs. Keys are unordered and unique; values need not be unique, so different keys can map to the same value.
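A minimal sketch of these differences; the class and variable names are illustrative only:

```java
import java.util.*;

public class CollectionBasics {
    public static void main(String[] args) {
        // List: keeps insertion order, allows duplicates and multiple nulls
        List<String> list = new ArrayList<>();
        list.add("a"); list.add("a"); list.add(null); list.add(null);
        System.out.println(list.size()); // 4

        // Set: no duplicates, at most one null
        Set<String> set = new HashSet<>();
        set.add("a"); set.add("a"); set.add(null); set.add(null);
        System.out.println(set.size()); // 2

        // Map: unique keys; different keys may map to the same value
        Map<String, Integer> map = new HashMap<>();
        map.put("x", 1); map.put("y", 1); map.put("x", 2); // second put on "x" overwrites
        System.out.println(map.size()); // 2
        System.out.println(map.get("x")); // 2
    }
}
```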

The underlying data structures of the common collection classes **

List:

ArrayList: array

LinkedList: doubly linked list

Set:

HashSet: implemented on top of HashMap; HashSet stores and reads its elements the same way HashMap handles its keys.

TreeSet: red-black tree

Map:

HashMap: before JDK1.8, HashMap consisted of array + linked list; since then it is array + linked list / red-black tree. When a list grows longer than 8 it is converted into a red-black tree, and when it shrinks below 6 it is converted back into a linked list. This improves HashMap's performance, because looking up an element in a red-black tree takes far less time than in a linked list.

HashTable: array + linked list

TreeMap: red-black tree

Which collection classes are thread safe? ***

Vector: equivalent to ArrayList with synchronization mechanism

Stack: stack

HashTable

Enumeration: a legacy interface for traversing thread-safe collections such as Vector and HashTable
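Besides the legacy classes above, the standard library offers synchronized wrappers and the java.util.concurrent containers. A brief sketch (the variable names are illustrative):

```java
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

public class ThreadSafeCollections {
    public static void main(String[] args) {
        // Legacy synchronized classes
        List<Integer> vector = new Vector<>();
        Map<String, Integer> hashtable = new Hashtable<>();

        // Wrapper that adds a synchronized mutex around a plain collection
        List<Integer> syncList = Collections.synchronizedList(new ArrayList<>());

        // Modern concurrent alternative, usually preferred in new code
        Map<String, Integer> chm = new ConcurrentHashMap<>();

        vector.add(1);
        syncList.add(1);
        chm.put("a", 1);
        System.out.println(vector.size() + syncList.size() + chm.size()); // 3
    }
}
```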

What is the iterator Iterator?

Iterator is Java's basic iteration mechanism. It is not a collection itself but a means of accessing one; the Iterator interface provides a uniform way to traverse any Collection.

What is the quick failure mechanism "fail-fast" and the security failure mechanism "fail-safe" of Java collections? ***

Fail-fast

Java's fail-fast mechanism is an error-detection mechanism in the collection framework: when multiple threads modify the contents of a collection at the same time, a ConcurrentModificationException may be thrown. In fact it is not limited to multithreading: even in a single thread, calling the collection's own modification methods while traversing it with an enhanced for loop throws ConcurrentModificationException. Look at the following code:

import java.util.*;

public class Main {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        list.add(1);
        list.add(2);
        list.add(3);
        for (Integer i : list) {
            list.remove(i); // throws ConcurrentModificationException at runtime
        }
    }
}

The correct approach is to use the iterator's remove() method, which works properly:

import java.util.*;

public class Main {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        list.add(1);
        list.add(2);
        Iterator<Integer> it = list.iterator();
        while (it.hasNext()) {
            it.next();   // advance first; remove() without a prior next() throws IllegalStateException
            it.remove(); // removes the last element returned by next()
        }
    }
}

What is the reason for this? Careful readers may have noticed that the two remove() calls are different: one takes an argument and the other does not. Looking into the ArrayList source code reveals the code that throws the exception.

final void checkForComodification() {
    if (modCount != expectedModCount)
        throw new ConcurrentModificationException();
}

You can see from the above code that if the variables modCount and expectedModCount are not equal, a ConcurrentModificationException exception will be thrown. So what are these two variables? Continue to look at the source code

protected transient int modCount = 0; // defined in AbstractList

int expectedModCount = modCount; // defined in ArrayList's inner class Itr

As the code above shows, modCount starts at 0 and expectedModCount starts equal to modCount. In other words, calling the collection's own remove() method while traversing makes modCount unequal to expectedModCount and triggers the ConcurrentModificationException, while the iterator's remove() method does not have this problem. To see why, we have to look at the source of the remove() methods.

public E remove(int index) {
    rangeCheck(index);
    modCount++;
    E oldValue = elementData(index);
    int numMoved = size - index - 1;
    if (numMoved > 0)
        System.arraycopy(elementData, index + 1, elementData, index, numMoved);
    elementData[--size] = null; // clear to let GC do its work
    return oldValue;
}

From the code above you can see there is a modCount++, but expectedModCount is never touched. On each iteration the iterator compares expectedModCount with modCount, so after the collection's remove() is called, modCount no longer equals expectedModCount and a ConcurrentModificationException is reported. But why does the remove() method of the iterator not throw? Because the remove() the iterator calls is not the same method as the remove() above. The iterator's remove() looks like this:

public void remove() {
    if (lastRet < 0)
        throw new IllegalStateException();
    checkForComodification();
    try {
        ArrayList.this.remove(lastRet);
        cursor = lastRet;
        lastRet = -1;
        expectedModCount = modCount; // this line keeps expectedModCount equal to modCount
    } catch (IndexOutOfBoundsException ex) {
        throw new ConcurrentModificationException();
    }
}

From the code above you can see the line expectedModCount = modCount: the iterator's remove() keeps expectedModCount and modCount equal, which in turn guarantees that modifying the collection through the iterator inside an enhanced for loop does not raise ConcurrentModificationException.

The above covers only the single-threaded case, where calling remove() through the iterator works fine. What about multiple threads? The answer is that under multithreading, even calling remove() through the iterator still throws ConcurrentModificationException. Why? Again start from the two variables: modCount is defined in AbstractList, while expectedModCount is defined in ArrayList's inner iterator class, so modCount is a shared variable while each iterator has its own expectedModCount. Simply put, thread 1 updates modCount and its own expectedModCount, but from thread 2's point of view only modCount has changed while its expectedModCount has not, so a ConcurrentModificationException is thrown.

Fail-safe

A container with the fail-safe mechanism does not traverse the collection's contents directly; it first copies the original contents and iterates over the copy. Modifications made to the original collection during traversal therefore cannot be detected by the iterator, so no ConcurrentModificationException is thrown. The drawback is that the iterator only sees the snapshot taken when iteration began; changes made to the original collection during traversal are invisible to it. The containers in the java.util.concurrent package are fail-safe and can be used concurrently by multiple threads.

How do I remove elements from a Collection while traversing it? ***

As the fail-fast discussion above shows, calling remove() directly on the collection while traversing it throws ConcurrentModificationException, so call remove() through the iterator instead.

What is the difference between Array and ArrayList? ***

Array can hold both primitive types and object types; ArrayList can hold only object types.

The size of an Array is fixed; an ArrayList grows dynamically. (ArrayList's growth mechanism is a common interview question.)

Compared with Array, ArrayList has more built-in methods, such as addAll() and removeAll().

For primitive data, ArrayList relies on autoboxing, which reduces coding effort; but when handling a fixed number of primitives, this is comparatively slow, and Array should be used instead.

What is the difference between Comparable and Comparator? **
The Comparable interface lives in java.lang and can be thought of as an internal comparator: a class that implements Comparable can compare itself with other instances via the compareTo(Object obj) method. compareTo returns an int with three cases: a positive integer (this object is greater than the argument), 0 (equal), or a negative integer (this object is less than the argument).

The Comparator interface lives in java.util; it has a compare(Object obj1, Object obj2) method used for ordering, which likewise returns an int with the same three cases as compareTo.

The difference between them: many wrapper classes, such as Integer and String, implement Comparable, so Collections.sort() works on them directly. If you are not satisfied with a class's natural ordering and cannot modify its source code, Comparator is the better fit. Using a Comparator also avoids coupling extra code to the target class, and it lets you define multiple orderings, which Comparable cannot do. In terms of flexibility and extensibility Comparator is superior, so when facing custom sorting requirements, consider the Comparator interface first.

What is the difference between Collection and Collections? **

Collection is a collection interface. It declares the generic methods for basic operations on collection objects.

Collections is a utility class. It contains static polymorphic methods for operating on collections, such as the commonly used sort(). It cannot be instantiated; like a toolbox, it serves the Java Collection framework.

List: what are the different ways to traverse a List? **

First, elements are stored in memory in two main ways:

Random access: adjacent data elements are also adjacent in memory, so an element can be read by its position (such as an index in ArrayList).

Sequential access: each data element holds the memory address of the next one and need not be adjacent in memory, as in LinkedList.

There are three main ways to traverse:

for-loop traversal: the caller maintains a counter outside the collection and reads the element at each position in turn.

Iterator traversal: for a random-access collection, the Iterator can read elements directly by position; for a linked collection, the Iterator keeps the current position and moves the pointer forward or backward from there.

foreach traversal: foreach is implemented with an Iterator internally, but you do not need to declare the Iterator explicitly.

How should you choose among the three? The collection framework provides the RandomAccess interface, which has no methods and serves only as a marker that a List implementation supports random access. So before traversing, check support with list instanceof RandomAccess: if it is supported, a for loop works well; otherwise prefer Iterator or foreach.

ArrayList's growth mechanism ***

The conclusion to remember for interviews: ArrayList's initial capacity is 10, and when growing, the new capacity is the old capacity plus the old capacity shifted right by one bit (a bit operation equivalent to dividing by 2, but faster), so each growth is 1.5 times the old capacity.

For the details, see the ArrayList source code.

What is the difference between ArrayList and LinkedList? ***

Thread safety: neither ArrayList nor LinkedList guarantees thread safety.

Underlying implementation: ArrayList is backed by an array; LinkedList is backed by a doubly linked list.

Memory footprint: ArrayList wastes some space because each growth is 1.5 times the previous capacity, while each LinkedList element stores its direct successor and predecessor as well as its data, so each element costs more space than in ArrayList.

Use cases: because ArrayList is array-backed, the cost of inserting and deleting depends on position, with an average time complexity of O(n), while reading by index is position-independent at O(1); so ArrayList suits read-heavy workloads with few insertions and deletions. LinkedList is backed by a doubly linked list, so inserting and deleting at the ends are position-independent at O(1); inserting at a specified position is O(n), because the position must be found first, and reading averages O(n). So LinkedList suits workloads with many insertions and deletions and few reads.

What is the difference between ArrayList and Vector? ***
Similarities:

Both implement the List interface.

Both are backed by an array.

Differences:

Thread safety: Vector uses synchronized for thread synchronization, so it is thread-safe, while ArrayList is not.

Performance: because Vector uses synchronized locking, its performance is worse than ArrayList's.

Growth: both ArrayList and Vector resize dynamically as needed, but ArrayList grows to 1.5 times the old capacity each time, while Vector grows to 2 times.

Summarize the storage performance and characteristics of ArrayList, Vector, and LinkedList ***

ArrayList is backed by an array: fast reads, slow insertions and deletions, not thread-safe.

LinkedList is backed by a doubly linked list: fast insertions and deletions, slow reads, not thread-safe.

Vector is backed by an array and uses synchronized for thread safety: poor performance, but thread-safe.

Set: explain how HashSet is implemented ***

HashSet is backed by a HashMap; the default constructor builds a HashMap with an initial capacity of 16 and a load factor of 0.75. HashSet stores its values as keys of the HashMap, and every HashMap value is the same constant PRESENT.

How does HashSet check for duplicates? (How does HashSet guarantee that data is not duplicated?) ***

This involves two methods: hashCode() and equals().

equals()

First look at the equals method as overridden in String:

public boolean equals(Object anObject) {
    if (this == anObject) {
        return true;
    }
    if (anObject instanceof String) {
        String anotherString = (String) anObject;
        int n = value.length;
        if (n == anotherString.value.length) {
            char v1[] = value;
            char v2[] = anotherString.value;
            int i = 0;
            while (n-- != 0) {
                if (v1[i] != v2[i])
                    return false;
                i++;
            }
            return true;
        }
    }
    return false;
}

From the source we can see: equals first compares memory addresses; if they are the same, it returns true immediately. If the addresses differ, it compares the object types; if the types differ, it returns false. If the types match, it compares the values, returning true if they are equal and false otherwise. In summary, equals compares memory address, object type, and value: the same memory address guarantees true, and the same type and value also guarantee true.

If equals is not overridden, it behaves the same as ==, comparing object addresses.

hashCode()

The hashCode method returns the object's hash code as an int. The hash code determines the object's index position in a hash table.

Some conventions about hashCode:

If two objects are equal, their hashCode values must be the same.

If two objects have the same hashCode value, they are not necessarily equal.

By default, hashCode() produces a distinct value for each object on the heap, so without overriding hashCode(), two objects of the same class will generally have different hash codes.

With equals() and hashCode() covered, back to how HashSet checks for duplicates. HashSet stores elements unordered and unique. When an object is added, its hashCode value is computed first to determine its storage position. If no other object occupies that position, the object is stored there directly. If the position is already occupied (the new object and the stored object have the same hashCode), equals is called to decide whether the two objects are the same: if they are, the add fails; if not, the object is rehashed to another position.

What is the difference between HashSet and HashMap? ***

HashMap implements the Map interface; HashSet implements the Set interface.

HashMap stores key-value pairs; HashSet stores objects.

In HashMap the key is unique and the value need not be; in HashSet the stored objects are unique.

HashMap computes the hashcode from the key; HashSet computes it from the member object.

HashMap is comparatively fast; HashSet is comparatively slow.

Map: how does HashMap differ between JDK1.7 and JDK1.8? What is HashMap's underlying implementation? ***

JDK1.7's underlying data structure: array + linked list.

JDK1.8's underlying data structure: array + linked list / red-black tree.

JDK1.7's hash function:

static final int hash(int h) {
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}

Hash function of JDK1.8

static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

JDK1.8's hash function performs one shift and one XOR, two perturbations in total, while JDK1.7 performs four shifts and five XOR operations, nine perturbations in total. A brief explanation of JDK1.8's hash function, which interviews often ask about: the two perturbations are taking key.hashCode() and XORing it with itself shifted right by 16 bits. The purpose is to keep the high 16 bits unchanged while XORing the low 16 bits with the high 16 bits, so that both the high and low bits take part in the hash computation and collisions become less likely. Without this perturbation, since the hash value has 32 bits and the array length is small, only a few low bits of the hash value would take part in the index calculation.
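A small demonstration of why the perturbation matters; `hash` below is a reimplementation of the JDK1.8 function shown above, and the sample values are chosen so that their hashCodes differ only in the high 16 bits:

```java
public class HashPerturbation {
    // Reimplementation of the JDK 1.8 hash() described above
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int capacity = 16; // HashMap's default table length
        int h1 = 0x10000;  // hashCode with only high bits set
        int h2 = 0x20000;
        // Without perturbation, both keys land in bucket 0
        System.out.println((h1 & (capacity - 1)) + " " + (h2 & (capacity - 1))); // 0 0
        // With perturbation, the high bits influence the bucket index
        System.out.println((hash(h1) & (capacity - 1)) + " " + (hash(h2) & (capacity - 1))); // 1 2
    }
}
```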

What are the differences between HashMap in JDK1.7 and JDK1.8:

Underlying structure: JDK1.7 uses array + linked list; JDK1.8 uses array + linked list / red-black tree (a list longer than 8 becomes a tree). JDK1.8's advantage: it avoids overly long single lists and improves query efficiency.

Hash value computation: JDK1.7 uses 9 perturbations (4 shifts + 5 XOR operations); JDK1.8 uses 2 perturbations (1 shift + 1 XOR operation).

Insertion: JDK1.7 uses head insertion (the entries already at the slot are moved back, then the new entry is placed at the front); JDK1.8 uses tail insertion (the new entry goes straight to the tail of the linked list / red-black tree). JDK1.8's advantage: it avoids the problem head insertion causes under multithreaded resizing.

Position after resize: JDK1.7 recomputes each entry's position; in JDK1.8 an entry either stays at its original index or moves to the original index plus the old capacity, without recomputing the hash. JDK1.8's advantage: previously colliding nodes are spread evenly across the new buckets (see the resize section below for details).

Why is the length of HashMap a power of 2?

Because HashMap determines the storage position from the key's hash value, but hash values range from -2147483648 to 2147483647, it is impossible to build an array large enough to cover them all. So after computing the hash, HashMap reduces it by the array length. When the array length is a power of 2, (length - 1) & hash is equivalent to the remainder hash % length, so the bit operation (length - 1) & hash can replace the % remainder operation to improve performance.
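A quick check of that equivalence; note that for negative hashes the mask matches Java's Math.floorMod rather than the % operator, which can return a negative result:

```java
public class PowerOfTwoIndex {
    public static void main(String[] args) {
        int length = 16; // a power of two, as HashMap guarantees
        int[] hashes = {7, 16, 35, 123456789, -2};
        for (int h : hashes) {
            // For a power-of-two length, (length - 1) & h equals Math.floorMod(h, length)
            System.out.println(((length - 1) & h) + " == " + Math.floorMod(h, length));
        }
    }
}
```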

What is the specific flow of HashMap's put method? **

The main flow of HashMap's put method is usually shown as a flow chart (not reproduced in this text version); the logic is quite clear.
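Since the flow chart does not survive here, a rough outline of the JDK1.8 put logic follows; treat it as a sketch of the commonly described steps, not the exact source:

```
put(key, value):
  1. Compute hash = spread(key.hashCode()) as described above.
  2. If the table is null or empty, call resize() to create it.
  3. Compute the index i = (n - 1) & hash.
     - If table[i] is empty, place a new Node there.
     - Else if the first node's hash and key match, remember it for value replacement.
     - Else if the bin is a TreeNode, insert into the red-black tree.
     - Else walk the linked list and append at the tail; if the list length
       exceeds 8, treeify the bin (or resize instead if the table is shorter than 64).
  4. If an existing node matched, overwrite its value and return the old value.
  5. Otherwise increment modCount and size; if size exceeds the threshold, resize().
```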

How is the expansion of HashMap implemented? ***

The initial capacity is 16, the load factor is 0.75, and the threshold is load factor * capacity.

HashMap calls the resize() method to grow when the number of key-value pairs exceeds the threshold, and also on initialization.

With each resize, the new capacity is twice the old one.

During a resize, each entry is tested with e.hash & oldCap, which checks the one extra hash bit that the doubled capacity brings into the index. If the result is 0, the entry stays at its original index; otherwise it moves to the original index plus the old capacity.
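A small demonstration of that split; the two hashes are chosen to collide in the old table but separate in the new one:

```java
public class ResizeSplit {
    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;
        int[] hashes = {5, 21}; // 21 = 16 + 5: same old bucket, different new buckets
        for (int h : hashes) {
            int oldIndex = h & (oldCap - 1);
            int newIndex = h & (newCap - 1);
            boolean stays = (h & oldCap) == 0; // the bit HashMap tests during resize
            System.out.println("hash=" + h + " old=" + oldIndex
                    + " new=" + newIndex + " stays=" + stays);
        }
        // hash=5  stays at index 5; hash=21 moves to 5 + 16 = 21
    }
}
```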

The source code is as follows

final Node<K,V>[] resize() {
    Node<K,V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    if (oldCap > 0) {
        if (oldCap >= MAXIMUM_CAPACITY) {
            // if the old capacity already exceeds the maximum, set the threshold to the integer maximum
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr << 1; // double threshold
    }
    else if (oldThr > 0) // initial capacity was placed in threshold
        newCap = oldThr;
    else {               // zero initial threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    @SuppressWarnings({"rawtypes","unchecked"})
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    table = newTab;
    if (oldTab != null) {
        for (int j = 0; j < oldCap; ++j) {
            Node<K,V> e;
            if ((e = oldTab[j]) != null) {
                oldTab[j] = null;
                if (e.next == null)
                    newTab[e.hash & (newCap - 1)] = e;
                else if (e instanceof TreeNode)
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                else {
                    Node<K,V> loHead = null, loTail = null; // loHead/loTail: nodes staying at the original index
                    Node<K,V> hiHead = null, hiTail = null; // hiHead/hiTail: nodes moving to original index + old capacity
                    Node<K,V> next;
                    do {
                        next = e.next;
                        if ((e.hash & oldCap) == 0) { // zero means stay; nonzero means move
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        }
                        else {
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead; // loHead goes to the original index
                    }
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead; // hiHead goes to original index + old capacity
                    }
                }
            }
        }
    }
    return newTab;
}

Why does the default load factor of HashMap choose 0.75?

This is mainly a trade-off between space utilization and query cost. If the load factor is too high, space utilization rises but so does the probability of hash conflicts; if it is too low, resizing happens frequently and the probability of hash conflicts drops, but space utilization falls. The exact value 0.75, rather than 0.74 or 0.76, is a conclusion drawn from mathematical analysis (the Poisson distribution) and engineering convention.

Why set the threshold of the linked list to the red-black tree to 8? Why not use red-black trees directly in the first place?

Many people ask: since the red-black tree performs so well, why not use it from the start instead of a linked list that is only converted to a red-black tree once the list grows longer than 8?

Because a red-black tree node takes up twice the space of an ordinary linked-list node, but has lower search time complexity, the red-black tree only pays off when a bin holds many nodes. As for why the threshold is 8: it is the result of statistical analysis; the probability of a list reaching length 8 is very low, so weighing the respective advantages of linked lists and red-black trees, lists longer than 8 are converted into red-black trees.

Converting a linked list into a red-black tree requires not only that the list be longer than 8, but also that the array length in HashMap be at least 64. That is, if the array length is less than 64 and a list grows longer than 8, the map is resized instead of converting the list into a red-black tree.

How does HashMap resolve hash conflicts? ***

Hash conflict: when storing an element, HashMap first computes the key's hash value to determine the storage position. Because the final step reduces the key's hash by the array length, different keys can produce the same index; this is a hash conflict. The linked list / red-black tree in HashMap's underlying structure exists to solve this problem.

Hash conflict resolution in HashMap can be considered mainly from three aspects (with JDK1.8 as the background)

Separate chaining (the zipper method)

The data structure in HashMap is array + linked list / red-black tree. When different keys produce the same index, the Node entries (the conflicting keys and their corresponding values) are chained off that array slot in the form of a linked list.

Hash function

The key's hash value goes through two perturbations: the key's hashCode is XORed with itself shifted right by 16 bits, and the result is then reduced by the array length (in practice with a bit operation for performance, but the effect is the same as taking the remainder). This lets the high bits of the hashCode also take part in the computation, further reducing the probability of hash conflicts and spreading the data more evenly.

Red-black tree

With separate chaining, if hash conflicts are particularly severe, the lists hanging off the array become too long and performance degrades; so when a list grows longer than 8, converting it into a red-black tree speeds up searching it.

Why doesn't HashMap use the hash value produced by hashCode() directly as the table index? ***

The range of raw hashCode() values is too large; an array covering it cannot be built in memory.

Can I use any class as the key of Map? ***

Yes, but note the following two points:

If the class overrides the equals() method, it should also override the hashCode() method.

It is best for the key class to be immutable, so that the hashCode() value of a key can be cached for better performance; this is why String is particularly suitable as a HashMap key.

Why are wrapper classes such as String and Integer in HashMap suitable for Key? ***

These wrapper classes are declared final and are immutable, which guarantees that keys cannot change and that the hash value is the same when putting and when fetching.

They also override hashCode(), equals(), and related methods internally.
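To see why immutability matters, here is a sketch of what goes wrong with a mutable key; MutableKey is a hypothetical class written for this illustration:

```java
import java.util.*;

public class MutableKeyHazard {
    // A hypothetical mutable key class, for illustration only
    static class MutableKey {
        int id;
        MutableKey(int id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof MutableKey && ((MutableKey) o).id == id;
        }
        @Override public int hashCode() { return id; }
    }

    public static void main(String[] args) {
        Map<MutableKey, String> map = new HashMap<>();
        MutableKey k = new MutableKey(1);
        map.put(k, "value");
        k.id = 2; // mutating the key changes its hash after insertion
        System.out.println(map.get(k));                 // null: lookup probes the wrong bucket
        System.out.println(map.get(new MutableKey(1))); // null: equals() fails at the old bucket
    }
}
```

The entry is still in the map, but it can no longer be found by either the old or the new key state, which is exactly the failure mode immutable keys such as String rule out.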

What should I do if I use Object as the Key for HashMap? **

Override the hashCode() method, because the hash value is needed to determine the storage position.

Override the equals() method, because it is needed to guarantee key uniqueness.
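Putting both overrides together, a minimal sketch; Point is a hypothetical key class written for this example:

```java
import java.util.*;

public class ObjectAsKey {
    // A hypothetical immutable key that overrides both hashCode() and equals()
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Point)) return false;
            Point p = (Point) o;
            return x == p.x && y == p.y;
        }
        @Override public int hashCode() { return Objects.hash(x, y); }
    }

    public static void main(String[] args) {
        Map<Point, String> map = new HashMap<>();
        map.put(new Point(1, 2), "a");
        // A distinct but equal instance finds the same entry
        System.out.println(map.get(new Point(1, 2)));      // a
        System.out.println(map.containsKey(new Point(3, 4))); // false
    }
}
```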

Why multithreaded HashMap can cause an infinite loop *

Because JDK1.7's HashMap uses head insertion when resolving hash conflicts, resizing under multiple threads can produce a cycle in a bucket's linked list and hence an infinite loop; JDK1.8 changed to tail insertion and this particular problem no longer exists. Note, however, that HashMap in JDK1.8 is still not thread-safe, and thread-safety problems still occur when it is used from multiple threads. Saying this much is usually enough in an interview, as the exact sequence of events is hard to narrate; those interested can look up write-ups of the HashMap infinite loop.

Do you know the underlying implementation of ConcurrentHashMap? **

JDK1.7

In JDK1.7, ConcurrentHashMap is implemented as a Segment array plus HashEntry arrays. Segment extends ReentrantLock, so a Segment acts as a lock, while HashEntry stores the key-value pairs. A ConcurrentHashMap contains one Segment array; each Segment contains a HashEntry array, and each HashEntry heads a linked list. To access the elements in a HashEntry, a thread must first acquire the lock of the corresponding Segment.

JDK1.8

In JDK1.8, it is no longer the structure of Segment+HashEntry, but a structure similar to HashMap, Node array + linked list / red-black tree, using CAS+synchronized to ensure thread safety. When the length of the linked list is greater than 8, the linked list is converted to a red-black tree. In JDK1.8, synchronized only locks the head node of linked list or red-black tree, which is a more fine-grained lock than segment, and the lock competition becomes less, so it is more efficient.

To sum up:

In JDK1.7 the underlying mechanism is ReentrantLock + Segment + HashEntry; in JDK1.8 it is synchronized + CAS + linked list / red-black tree.

JDK1.7 uses segmented locks, where one lock guards several HashEntry buckets at once; JDK1.8 locks individual Node head nodes, so as long as there is no hash conflict there is no lock contention. Compared with JDK1.7, JDK1.8 therefore provides finer-grained locking, reduces lock contention, and improves ConcurrentHashMap's concurrency.
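A small demonstration of ConcurrentHashMap in use; merge() performs an atomic per-key update, so concurrent increments are not lost the way they can be with a plain HashMap:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ChmDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Integer, Integer> map = new ConcurrentHashMap<>();
        Runnable writer = () -> {
            for (int i = 0; i < 1000; i++) {
                map.merge(0, 1, Integer::sum); // atomic read-modify-write on key 0
            }
        };
        Thread t1 = new Thread(writer), t2 = new Thread(writer);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(map.get(0)); // 2000: no lost updates
    }
}
```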

Do you know the underlying implementation of HashTable?

The underlying data structure of HashTable is array + linked list, where the list mainly resolves hash conflicts; its methods are modified with synchronized, so HashTable is thread-safe. But the lock granularity (the whole table) is too coarse, lock contention is fierce, and efficiency is very low.

The difference between HashMap, ConcurrentHashMap and Hashtable

Underlying implementation: HashMap (JDK1.8) uses array + linked list / red-black tree; ConcurrentHashMap (JDK1.8) uses array + linked list / red-black tree; Hashtable uses array + linked list.

Thread safety: HashMap is not thread-safe; ConcurrentHashMap is thread-safe (synchronized on the Node head); Hashtable is thread-safe (synchronized on the entire table).

Efficiency: HashMap is high; ConcurrentHashMap is high; Hashtable is low.

Capacity: HashMap and ConcurrentHashMap start at 16 and double on each resize (2n); Hashtable starts at 11 and resizes to 2n+1.

Null support: HashMap allows one null key and null values; ConcurrentHashMap and Hashtable support neither null keys nor null values.

Several common methods of Java collections **

These common methods are worth memorizing; they rarely come up directly in interviews, but written tests and interview coding questions use them all the time.

Common methods of Collection

boolean add(E e): appends an element to the collection.

boolean remove(Object o): if the collection contains an element equal to o, removes it and returns true.

void clear(): removes all elements.

boolean contains(Object o): returns whether the collection contains the element.

boolean isEmpty(): returns whether the collection is empty.

int size(): returns the number of elements.

boolean addAll(Collection c): adds all elements of one collection to another.

Object[] toArray(): returns an array containing all elements of the collection, of type Object[].

boolean equals(Object c): tests for equality.

int hashCode(): returns the hash code.

List-specific methods

void add(int index, Object obj): inserts an element at the specified position.

Object remove(int index): removes the element at the specified position and returns it.

Object set(int index, Object obj): replaces the element at the specified index with the given value and returns the previous value.

int indexOf(Object o): returns the index of the first occurrence of the element in the collection.

Object get(int index): returns the element at the specified position.

List subList(int fromIndex, int toIndex): returns a sublist view (inclusive of fromIndex, exclusive of toIndex).

LinkedList-specific methods

addFirst(): adds an element at the head.

addLast(): adds an element at the tail.

removeFirst(): removes the head element.

removeLast(): removes the tail element.

getFirst(): returns the head element.

getLast(): returns the tail element.

Map methods

void clear(): removes all entries.

boolean containsKey(Object key): returns true if the Map contains the specified key.

Set entrySet(): returns a Set of the key-value pairs in the Map; each element is a Map.Entry object.

Object get(Object key): returns the value for the key, or null if the Map does not contain the key.

boolean isEmpty(): returns true if the Map is empty.

Set keySet(): returns the Set of all keys in the Map.

Object put(Object key, Object value): adds a key-value pair; if an equal key already exists, the new pair overwrites the old one and the previous value is returned, otherwise null.

void putAll(Map m): copies all key-value pairs from the given Map into this Map.

Object remove(Object key): removes the entry for the specified key and returns its value, or null if the key does not exist.

int size(): returns the number of key-value pairs in the Map.

Collection values(): returns a Collection of all values in the Map.

boolean containsValue(Object value): returns whether the Map contains the specified value.

Stack methods

boolean empty(): tests whether the stack is empty.

E peek(): returns the object at the top of the stack without removing it.

E pop(): removes the object at the top of the stack and returns it.

E push(E item): pushes the item onto the top of the stack.

int search(Object o): returns the 1-based position of the object in the stack.

Queue methods

boolean add(E e): inserts the element at the tail of the queue (throws an exception if the queue is full).

boolean offer(E e): inserts the element at the tail of the queue (returns false if the queue is full).

E remove(): removes and returns the head element (throws an exception if the queue is empty).

E poll(): removes and returns the head element (returns null if the queue is empty).

E element(): returns the head element without removing it (throws an exception if the queue is empty).

E peek(): returns the head element without removing it (returns null if the queue is empty).

At this point, I believe everyone has a deeper understanding of these Java collection interview questions; try some hands-on practice as well.
