Two-column collection: Map

Keywords: Java, data structure, linked list, Set, Map


A Map is a two-column (double-column) collection that stores key-value pairs: key → value.

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

public class MapDDDDemo {
    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();
        map.put(1, "Hello");
        map.put(2, "world");
        map.put(3, "hello");
        System.out.println("map size: " + map.size());
        System.out.println("map contains key 1: " + map.containsKey(1));
        System.out.println("value for key 2: " + map.get(2));
        System.out.println("map is empty: " + map.isEmpty());
        System.out.println("All keys: " + map.keySet());
        System.out.println("All values: " + map.values());
        // Remove the entry with the given key / clear the whole map
//        map.remove(3);
//        map.clear();

        // Traversal method 1 -- keySet
        System.out.println("-------------Traverse via keySet-------------");
        Set<Integer> keySet = map.keySet();
        Iterator<Integer> iterator = keySet.iterator();
        while (iterator.hasNext()) {
            Integer key = iterator.next();
            String value = map.get(key);      // look up the value by its key
            System.out.println(key + "--------" + value);
        }

        // Traversal method 2 -- entrySet
        System.out.println("-------------Traverse via entrySet-------------");
        Set<Map.Entry<Integer, String>> entrySet = map.entrySet();
        Iterator<Map.Entry<Integer, String>> it = entrySet.iterator();
        while (it.hasNext()) {
            Map.Entry<Integer, String> entry = it.next();
            Integer key = entry.getKey();
            String value = entry.getValue();  // the entry carries both key and value
            System.out.println(key + "--------" + value);
        }
    }
}

Output:
map size: 3
map contains key 1: true
value for key 2: world
map is empty: false
All keys: [1, 2, 3]
All values: [Hello, world, hello]
-------------Traverse via keySet-------------
1--------Hello
2--------world
3--------hello
-------------Traverse via entrySet-------------
1--------Hello
2--------world
3--------hello

HashMap

  • Initial capacity: 16
  • Load factor: 0.75
  • Underlying structure: an array of linked-list buckets

Fast lookup -- array

Fast insertion and deletion -- linked list

map.put(key,value)

  1. Check whether the table (array) is empty; if it is, initialize it.
  2. If it is not empty, compute the hash from key.hashCode() and convert the hash into an array index.
  3. If the bucket at that index is empty, place the new node there.
  4. If the bucket already holds a linked list (the hashes collide), traverse the list.
  5. If a node on the list has a key equal to the new key (checked with equals()), its value is overwritten.
  6. If no node has the same key, append the new node to the list.
  7. After the insertion, check whether the number of entries exceeds the threshold; if it does, resize the array to twice its size.
  • If a list grows beyond 8 nodes and the array length is at least 64, the list is converted to a red-black tree.
  • If the number of nodes falls below 6, it is converted back to a linked list.

Purpose: keeping 7 as a gap between the two thresholds prevents frequent conversion back and forth between linked list and tree. A sketch of the index computation and these thresholds follows.
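
A minimal sketch of the index computation and the thresholds above. The class and method names here are illustrative; the constants mirror the values in the JDK source (TREEIFY_THRESHOLD, UNTREEIFY_THRESHOLD, MIN_TREEIFY_CAPACITY) but are redefined locally.

public class PutIndexSketch {
    static final int TREEIFY_THRESHOLD = 8;     // list -> tree above this many nodes in a bucket
    static final int UNTREEIFY_THRESHOLD = 6;   // tree -> list below this many nodes
    static final int MIN_TREEIFY_CAPACITY = 64; // treeify only when the table is at least this long

    // Spread the high bits of hashCode() into the low bits, as HashMap.hash() does.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int capacity = 16;                           // table length, always a power of 2
        int index = (capacity - 1) & hash("Hello");  // bucket index used by put/get
        System.out.println("bucket index for \"Hello\": " + index);
    }
}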

map initialization

  • The default capacity is 16 and the load factor is 0.75.
  • If you pass in an initial capacity k, the actual capacity is the smallest power of 2 that is greater than or equal to k, as sketched below.
HashMap<Object, Object> hashMap1 = new HashMap<>(10); // capacity becomes 16
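
A small sketch of that rounding, using a hypothetical helper nextPowerOfTwo; the real work is done internally by HashMap.tableSizeFor.

public class CapacityRounding {
    // Round a requested capacity up to the next power of two (illustrative helper).
    static int nextPowerOfTwo(int k) {
        int n = 1;
        while (n < k) {
            n <<= 1;            // 1, 2, 4, 8, 16, ...
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(nextPowerOfTwo(10)); // 16, so new HashMap<>(10) gets a table of length 16
    }
}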

map expansion

  • threshold = capacity * loadFactor

When the array size is 16 and the number of elements exceeds 16 * 0.75 = 12, the array is resized to twice its size, i.e. 32.

  • Load factor: a measure of how full the hash table is allowed to get.
  • The larger the load factor, the fuller the hash table and the lower the lookup efficiency.

HashMap optimization: specify a suitable initial capacity up front, or tune the load factor (see the sketch below).
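
A sketch of that pre-sizing idea, assuming a hypothetical expected entry count: choose an initial capacity large enough that threshold = capacity * loadFactor is never crossed, so no resize happens while filling the map.

import java.util.HashMap;

public class PresizeSketch {
    public static void main(String[] args) {
        int expectedEntries = 100;                                     // assumed workload
        float loadFactor = 0.75f;
        // Smallest requested capacity whose threshold covers the expected entries.
        int capacity = (int) Math.ceil(expectedEntries / loadFactor);  // 134, rounded up to 256 internally
        HashMap<String, String> map = new HashMap<>(capacity, loadFactor);
        System.out.println("requested capacity: " + capacity);
    }
}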

After JDK1.8

1. HashMap changed from array + linked list to array + linked list / red-black tree -- when hash collisions make a list too long, the tree keeps the lookup time complexity down.

2. Linked-list insertion changed from head insertion to tail insertion -- prevents the formation of a cycle during resizing.

3. The resize check now happens after insertion -- prevents unnecessary resizing.

4. In 1.7 every entry had to be rehashed to its position in the new array; in 1.8 an entry either keeps its index or moves to index + old capacity, as sketched below.
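
A sketch of the rule in item 4: during a resize, one bit of the hash (hash & oldCap) decides whether an entry keeps its index or moves by exactly the old capacity. Class and variable names are illustrative.

public class ResizeIndexSketch {
    public static void main(String[] args) {
        int oldCap = 16;
        int h = "world".hashCode();
        int hash = h ^ (h >>> 16);                  // same spreading as HashMap.hash()
        int oldIndex = (oldCap - 1) & hash;
        int newIndex = ((hash & oldCap) == 0)
                ? oldIndex                          // bit is 0: index unchanged
                : oldIndex + oldCap;                // bit is 1: index + old capacity
        System.out.println(oldIndex + " -> " + newIndex + " (new capacity " + (oldCap * 2) + ")");
    }
}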

Extended questions

1. Why are wrapper classes such as String and Integer suitable as keys?

——————Because of their immutability (and because hashCode() and equals() are already overridden), the hash of a key never changes after insertion, which keeps lookups reliable and makes hash problems less likely.

2. Can a custom object be used as a key?

——————Yes, if the custom object is immutable (and overrides hashCode() and equals()), because it cannot change after it is created. A sketch follows.
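
A minimal sketch of such a key, using a hypothetical immutable UserId class: a final field plus consistent equals() and hashCode() overrides.

import java.util.HashMap;
import java.util.Objects;

public class CustomKeyDemo {
    // Immutable key: the field is final and equals()/hashCode() depend only on it.
    static final class UserId {
        private final int id;
        UserId(int id) { this.id = id; }

        @Override public boolean equals(Object o) {
            return o instanceof UserId && ((UserId) o).id == this.id;
        }
        @Override public int hashCode() {
            return Objects.hash(id);
        }
    }

    public static void main(String[] args) {
        HashMap<UserId, String> map = new HashMap<>();
        map.put(new UserId(1), "Xiao Ming");
        System.out.println(map.get(new UserId(1))); // Xiao Ming -- a new but equal key still finds the entry
    }
}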

3. Why can both key and value be null?

——————When the key is null, its hash is treated as 0 by default, so at most one key can be null. Values are not restricted, so multiple null values are allowed (demonstrated below).
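
A short demonstration of that rule; the class name is illustrative.

import java.util.HashMap;

public class NullKeyDemo {
    public static void main(String[] args) {
        HashMap<Integer, String> map = new HashMap<>();
        map.put(null, "first");
        map.put(null, "second");   // overwrites the single null-key entry
        map.put(1, null);
        map.put(2, null);          // multiple null values are allowed
        System.out.println(map);   // {null=second, 1=null, 2=null}
    }
}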

4. Why must the capacity of a HashMap be a power of 2?

——————The index is computed as (n - 1) & hash. When n is a power of 2, (n - 1) is all 1-bits, so the hash is distributed uniformly over the buckets.

5. Where is HashMap thread-unsafe?

——————Concurrent puts can overwrite each other's values; modifying the map while iterating throws ConcurrentModificationException; and before JDK 1.8, concurrent resizing could form a cycle (an infinite loop). The fail-fast behaviour is demonstrated below.
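
A sketch of that fail-fast behaviour: removing through the map during iteration throws ConcurrentModificationException, while Iterator.remove() is the safe route. The class name is illustrative.

import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();
        map.put(1, "Hello");
        map.put(2, "world");
        try {
            for (Integer key : map.keySet()) {
                map.remove(key);               // structural modification during iteration
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast: " + e);
        }

        Iterator<Map.Entry<Integer, String>> it = map.entrySet().iterator();
        while (it.hasNext()) {
            it.next();
            it.remove();                        // safe removal through the iterator itself
        }
        System.out.println(map);                // {}
    }
}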

6. Why does JDK 1.7 have the infinite-loop problem?

——————1.7 uses head insertion, so nodes at the same index end up in reverse order after resizing; under concurrency this can create a cycle. Tail insertion (1.8) preserves the order and avoids it.

Two traversal methods

Method 1: look up each value by its key (keySet)

Method 2: read the key-value pairs from the entrySet

import java.util.Collection;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

public class HashMapDemo {
    public static void main(String[] args) {
        HashMap<Integer, String> hashMap = new HashMap<>();
        hashMap.put(1, "Xiao Ming");
        hashMap.put(2, "Xiaohua");
        hashMap.put(3, "Xiao Zhang");
        hashMap.put(3, "Xiao Hong");   // same key: the value "Xiao Zhang" is overwritten
        // Basic queries
        System.out.println(hashMap);
        System.out.println("value for key 2: " + hashMap.get(2));
        System.out.println(hashMap.size());
        System.out.println(hashMap.containsKey(3));
        System.out.println(hashMap.containsValue("Xiaohua"));
        Collection<String> values = hashMap.values();   // all values, not used further here

        // Method 1: look up each value by its key
        System.out.println("========Traverse keySet========");
        Set<Integer> keySet = hashMap.keySet();
        for (Integer key : keySet) {
            System.out.println(key + ": " + hashMap.get(key));
        }
        System.out.println("========Traverse keySet (iterator)========");
        Iterator<Integer> it = keySet.iterator();
        while (it.hasNext()) {
            Integer key = it.next();
            String value = hashMap.get(key);
            System.out.println(key + ": " + value);
        }

        // Method 2: read the key-value pairs from the entrySet
        System.out.println("========Traverse entrySet========");
        Set<Map.Entry<Integer, String>> entrySet = hashMap.entrySet();
        for (Map.Entry<Integer, String> entry : entrySet) {
            Integer key = entry.getKey();
            String value = entry.getValue();
            System.out.println(key + "=======" + value);
        }
        HashMap<Object, Object> hashMap1 = new HashMap<>(10); // initial capacity 10 is rounded up to 16
    }
}

Output:
{1=Xiao Ming, 2=Xiaohua, 3=Xiao Hong}
value for key 2: Xiaohua
3
true
true
========Traverse keySet========
1: Xiao Ming
2: Xiaohua
3: Xiao Hong
========Traverse keySet (iterator)========
1: Xiao Ming
2: Xiaohua
3: Xiao Hong
========Traverse entrySet========
1=======Xiao Ming
2=======Xiaohua
3=======Xiao Hong

HashTable

  • Initial capacity: 11
  • Load factor: 0.75
  • Underlying structure: an array of linked-list buckets
import java.util.Hashtable;
import java.util.Iterator;
import java.util.Set;

public class HashTableDemo {
    public static void main(String[] args) {
        Hashtable<Integer, String> hashtable = new Hashtable<>();
        hashtable.put(1, "Hello");
        hashtable.put(2, "world");
        hashtable.put(3, "");          // an empty string is allowed; null is not
        Set<Integer> keySet = hashtable.keySet();
        Iterator<Integer> iterator = keySet.iterator();
        while (iterator.hasNext()) {
            Integer key = iterator.next();
            System.out.println(key + "------------" + hashtable.get(key));
        }
    }
}

Output:
3------------
2------------world
1------------Hello
Iteration follows the bucket structure (here the keys come out in reverse insertion order), and neither null keys nor null values are allowed.

Differences between HashTable and HashMap

  1. Null values (contrasted in the sketch after this list)
  • HashMap: allows one null key and multiple null values
  • Hashtable: neither null keys nor null values are allowed
  2. Different parent classes, but both implement the Map interface
  • HashMap extends the AbstractMap class
  • Hashtable extends the Dictionary class (an obsolete class)
  3. The order of insertion and the resize check differs
  • HashMap inserts first, then checks whether it needs to resize
  • Hashtable checks whether it needs to resize before inserting
  4. Thread safety
  • HashMap is not thread-safe -- its Iterator is fail-fast: if another thread changes the structure of the map (adds or removes elements) during iteration, a ConcurrentModificationException is thrown.
  • Hashtable is thread-safe -- many methods (put, get) are synchronized. Its Enumeration is not fail-fast, and removing an element through the iterator's own remove() does not throw a ConcurrentModificationException.
  5. Different capacity requirements
  • Hashtable does not require the capacity of the underlying array to be a power of 2
  • HashMap requires a power of 2
  6. Different resize sizes
  • Hashtable: the new capacity is twice the old capacity plus one (2n + 1)
  • HashMap: the new capacity is twice the old capacity (2n)
  7. Different ways of computing the hash value
  8. Different ways of resolving hash collisions
  • HashMap: a bucket's list is converted to a red-black tree above 8 nodes and back to a list below 6
  • Hashtable: always linked-list storage
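
A short sketch contrasting the null handling in item 1; the class name is illustrative.

import java.util.HashMap;
import java.util.Hashtable;

public class NullContrastDemo {
    public static void main(String[] args) {
        HashMap<Integer, String> hashMap = new HashMap<>();
        hashMap.put(null, "ok");              // HashMap allows one null key
        hashMap.put(1, null);                 // ...and null values
        System.out.println(hashMap);          // {null=ok, 1=null}

        Hashtable<Integer, String> hashtable = new Hashtable<>();
        try {
            hashtable.put(null, "boom");      // Hashtable rejects null keys
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null keys");
        }
        try {
            hashtable.put(1, null);           // ...and null values
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null values");
        }
    }
}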

Single-threaded code: HashMap is generally used
Multi-threaded code: ConcurrentHashMap is generally used

ConcurrentHashMap

JDK1.7

  • Structure: array + linked list (a Segment array of bucket arrays).
  • Segment locking: a thread that accesses a Segment locks only that Segment, so operations on other Segments are not affected.

With the default of 16 Segments, the concurrency level is 16: 16 threads can operate on 16 Segments at the same time, and the map remains thread-safe.

put: the thread first tries to acquire the Segment lock; if that fails it spins, and eventually blocks until the lock is obtained, so the put is guaranteed to complete.

get: the key's hash first locates the Segment (array index) and then the specific element. Because the value field of HashEntry is declared volatile, memory visibility is guaranteed and every read sees the latest value, so get needs no lock.

JDK1.8

  • Structure: array + linked list / red-black tree

  • The original Segment lock is abandoned; CAS + synchronized are used to guarantee concurrency safety.

  • The former HashEntry is renamed Node with the same role; value and next are declared volatile to guarantee visibility. Red-black trees are also introduced: a bucket's list is converted once it exceeds a threshold (8 by default). A small usage sketch follows.
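
A minimal usage sketch (illustrative class name): ConcurrentHashMap offers atomic per-key operations such as merge, which rely on the CAS + synchronized scheme described above, so several threads can update it without external locking.

import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentMapDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                counts.merge("hits", 1, Integer::sum);   // atomic read-modify-write for one key
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(counts.get("hits"));          // 2000
    }
}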

CAS is an implementation of optimistic locking, a lightweight lock.

  • Steps: the thread reads the data without locking; when it is about to write back, it compares the current value with the value it originally read. If no other thread has modified it, the write succeeds; if it has been modified, the read-and-retry cycle starts again.

——- The ABA problem: CAS cannot detect it.
——- Explanation: the original value is A; one thread changes it to B, and another thread changes it back to A. The thread doing the comparison still sees A, so it cannot tell whether the value was ever changed.

In practice the modification history often matters (for example with funds, every modification should be recorded so it can be traced back).

————How to solve the ABA problem?
————Use a version number: read the version number together with the original value, compare both on every update, and increment the version number when the update succeeds, as sketched below.
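
A sketch of that version-number idea using java.util.concurrent.atomic.AtomicStampedReference, which pairs the value with a stamp and compares both in a single CAS. The class name is illustrative.

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);

        // Another thread changes A -> B -> A, incrementing the stamp each time.
        ref.compareAndSet("A", "B", 0, 1);
        ref.compareAndSet("B", "A", 1, 2);

        // A plain value comparison would still see "A", but the stale stamp exposes the change.
        boolean updated = ref.compareAndSet("A", "C", 0, 1);
        System.out.println("updated: " + updated + ", stamp: " + ref.getStamp()); // updated: false, stamp: 2
    }
}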

TreeMap

Similar to TreeSet: the keys are kept sorted, and the ordering can be customized with a Comparator (see the sketch below).
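
A short sketch of that custom ordering, passing a Comparator to the constructor (descending keys chosen as an example); the class name is illustrative.

import java.util.Comparator;
import java.util.TreeMap;

public class TreeMapDemo {
    public static void main(String[] args) {
        // A Comparator supplied to the constructor replaces the keys' natural ordering.
        TreeMap<Integer, String> treeMap = new TreeMap<>(Comparator.reverseOrder());
        treeMap.put(1, "Hello");
        treeMap.put(2, "world");
        treeMap.put(3, "hello");
        System.out.println(treeMap); // {3=hello, 2=world, 1=Hello}
    }
}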
