Difference between HashMap and LinkedHashMap in Java


HashMap and LinkedHashMap are two of the most commonly used Map implementations in Java. The main difference between HashMap and LinkedHashMap is that LinkedHashMap maintains the insertion order of keys, i.e. the order in which keys were inserted into the map, while HashMap doesn't maintain any order of keys or values. In terms of performance there is not much difference between HashMap and LinkedHashMap, but LinkedHashMap has a larger memory footprint than HashMap because it maintains a doubly linked list to keep track of the insertion order of keys. Sometimes you may notice that HashMap also returns elements in an apparent order, e.g. before Java 8, iterating over a HashMap with Integer keys would often return entries in a particular order, but that order is not guaranteed. Any code that depends on the ordering provided by HashMap is likely to break in a future release when that behavior changes.

LinkedHashMap and HashMap in Java – Similarities

There are a lot of similarities between LinkedHashMap and HashMap in Java, as they both implement the Map interface. Let's have a look:

1) Neither LinkedHashMap nor HashMap is synchronized, so both are subject to race conditions if shared between multiple threads without proper synchronization. Use Collections.synchronizedMap() to make them synchronized.

2) Iterators returned by HashMap and LinkedHashMap are fail-fast in nature.

3) Performance of HashMap and LinkedHashMap is also similar.

Difference between LinkedHashMap and HashMap in Java

Now let’s see some differences between LinkedHashMap and HashMap in Java:

1) The first and foremost difference between LinkedHashMap and HashMap is order: HashMap doesn't maintain any order, while LinkedHashMap maintains the insertion order of elements.

2) LinkedHashMap also requires more memory than HashMap because of this ordering feature. As said before, LinkedHashMap uses a doubly linked list to keep the order of entries.

3) LinkedHashMap actually extends HashMap, so it implements the Map interface as well.
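The ordering difference above is easy to see with a short program (the class name OrderDemo is just for illustration): the same keys inserted into both maps come back in insertion order from LinkedHashMap, while HashMap's iteration order depends on hash codes and capacity.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> hashMap = new HashMap<>();
        Map<String, Integer> linkedMap = new LinkedHashMap<>();
        for (String key : new String[] {"one", "two", "three", "four"}) {
            hashMap.put(key, key.length());
            linkedMap.put(key, key.length());
        }
        // LinkedHashMap iterates in insertion order: one, two, three, four
        System.out.println("LinkedHashMap: " + linkedMap.keySet());
        // HashMap order depends on hash codes and capacity -- no guarantee
        System.out.println("HashMap:       " + hashMap.keySet());
    }
}
```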

A few things to note while using LinkedHashMap in Java

1) The default ordering provided by LinkedHashMap is the order in which keys are inserted, known as insertion order, but a LinkedHashMap can also be created with another ordering called access order, in which entries are ordered by when they were last accessed.

2) Re-inserting a mapping doesn't alter the insertion order of a LinkedHashMap. For example, if you already have a mapping for a key and update its value by calling put(key, newValue), the insertion order of the LinkedHashMap remains the same.

3) Access order is affected by calling get(key), put(key, value) or putAll(). When a particular entry is accessed, it moves towards the end of the doubly linked list maintained by LinkedHashMap.

4) LinkedHashMap can be used to create an LRU cache in Java, since in an LRU (Least Recently Used) cache the oldest non-accessed entry is removed, and that entry is the head of the doubly linked list maintained by LinkedHashMap.

5) The Iterator of a LinkedHashMap returns elements in its defined order, i.e. either insertion order or access order.

6) LinkedHashMap also provides a protected method called removeEldestEntry(), whose default implementation returns false. An override can return true to remove the eldest entry whenever a new entry is added.
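Points 1, 4, and 6 combine naturally into a small LRU cache. Here is a minimal sketch, assuming a class name LruCache and a maxEntries field of my own choosing: the three-argument LinkedHashMap constructor turns on access order, and removeEldestEntry() evicts the head once the cache is over capacity.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        // accessOrder = true: get() and put() move the touched entry to the tail
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // evict the head (least recently accessed entry) once over capacity
        return size() > maxEntries;
    }
}
```

For example, with a cache of size 2: put "a", put "b", get "a", then put "c" evicts "b", because "b" is the least recently accessed entry at that point.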

Given the insertion-order guarantee of LinkedHashMap, it is a good compromise between HashMap and TreeMap in Java, because with TreeMap you pay the increased cost of keeping keys sorted, and performance drops from constant time to log(n) level. That's all about the difference between LinkedHashMap and HashMap in Java.

How does HashMap work internally?

Java HashMap

HashMap is a hash table based implementation of Map. This is the reason why interviewers always ask about the difference between HashMap and Hashtable. HashMap is mostly equal to Hashtable except for the two differences below.

  1. HashMap is unsynchronized while Hashtable is synchronized.
  2. HashMap permits null keys and values while Hashtable doesn't.

Important Properties of HashMap

DEFAULT_INITIAL_CAPACITY  Default initial capacity (a power of 2). The number of buckets the HashMap starts with.
MAXIMUM_CAPACITY  Maximum capacity of HashMap (a power of 2, i.e. 2^30).
loadFactor  Defines the threshold of the HashMap, i.e. when resizing of the HashMap will occur.
DEFAULT_LOAD_FACTOR  Used when no load factor is given in the constructor of HashMap.
size  The number of key-value mappings the HashMap contains.

Creation of HashMap

When no parameters are given while creating a HashMap, the default initial capacity (16) and default load factor (0.75) are used. Such a HashMap has 16 buckets, and resizing occurs when the 13th element is inserted. This is because the load factor is 75% (.75), so the threshold is 16 × 0.75 = 12, which is crossed when you add the 13th element (12 + 1).

You can also provide an initial capacity and load factor, but the initial capacity cannot be more than the maximum capacity (2^30) and the load factor cannot be zero or a negative number.
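The threshold arithmetic above can be checked in a few lines (the class name CapacityDemo is illustrative only):

```java
import java.util.HashMap;
import java.util.Map;

public class CapacityDemo {
    public static void main(String[] args) {
        // default constructor: capacity 16, load factor 0.75
        Map<Integer, String> map = new HashMap<>();     // same as new HashMap<>(16, 0.75f)

        // resize is triggered while adding the entry that exceeds the threshold
        int threshold = (int) (16 * 0.75f);             // 12
        System.out.println("resize happens on insert #" + (threshold + 1)); // 13

        // custom sizing: capacity is rounded up to a power of two internally
        Map<Integer, String> sized = new HashMap<>(64, 0.5f); // threshold = 32
        sized.put(1, "one");
    }
}
```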

Addition of element in HashMap

In order to add any element you need to provide two things: a key and a value.

Key: the key with which the specified value will be associated; null is allowed.

Value: the value to be associated with the specified key.

First, HashMap generates a hash code for the given key and locates the bucket. Then it checks whether a value is already associated with that key. If yes, it replaces the old value and returns it; otherwise it adds the new key-value pair to the HashMap and returns null.
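That return behavior of put() is easy to verify (the class name PutDemo and the example keys are mine):

```java
import java.util.HashMap;
import java.util.Map;

public class PutDemo {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();

        // no previous mapping: put() stores the entry and returns null
        Integer previous = ages.put("alice", 30);
        System.out.println(previous);          // null

        // existing key: the value is replaced and the old value is returned
        previous = ages.put("alice", 31);
        System.out.println(previous);          // 30

        // HashMap permits one null key and any number of null values
        ages.put(null, 0);
        System.out.println(ages.get(null));    // 0
    }
}
```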

Bullet Points

  1. HashMap doesn't give any guarantee on the order of elements in the Map (the order can even change over time).
  2. HashMap provides constant-time performance for get and put operations (if a proper hashing algorithm is used).
  3. The time required to iterate over a HashMap is proportional to its "capacity" (the number of buckets) plus its size (the number of elements it currently holds).
  4. If iteration performance is important, it is advisable not to set the initial capacity too high or the load factor too low, since iteration cost grows with capacity.
    • Capacity is the number of buckets in the hash table.
    • Initial capacity (default value 16) is simply the capacity at the time the hash table is created.
    • The load factor (default value .75) is a measure of how full the hash table is allowed to get before its capacity is automatically increased.
    • When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, its internal data structures are rebuilt).
  5. Use the Collections.synchronizedMap() method to make a Map synchronized.
  6. Iterators returned by the HashMap class are "fail-fast".
  7. HashMap is backed by an array of buckets; each bucket holds a linked list of key-value entries (and, since Java 8, a balanced tree once a bucket grows large).
  8. HashMap uses the key's hash code to identify the exact location where an object should be placed or retrieved.
  9. In the end, the hash code determines the exact location (index) in the backing array.
  10. The backing array has a fixed size, so whenever the number of keys in the map reaches its threshold, a new array with double the capacity is created and all elements are moved to this new array.
  11. The hash code is used in both cases (adding and retrieving an object), while the equals() method is used to distinguish keys that land in the same bucket.
  12. The best candidate for a key in a HashMap is an immutable class with properly implemented equals() and hashCode() methods (for example, the String class).
  13. The better the hashCode() and equals() implementations, the better the performance of the HashMap will be.
  14. That is why String and the wrapper classes of all primitives are great candidates for keys in a HashMap.
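Point 12 is worth demonstrating: with a mutable key (the class MutableKey below is a deliberately bad, hypothetical example), mutating the key after insertion changes its hash code, so the entry can no longer be found through either the mutated key or a fresh key with the original state.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// A deliberately mutable key -- a bad candidate for HashMap keys.
class MutableKey {
    int id;
    MutableKey(int id) { this.id = id; }
    @Override public boolean equals(Object o) {
        return o instanceof MutableKey && ((MutableKey) o).id == id;
    }
    @Override public int hashCode() { return Objects.hash(id); }
}

public class MutableKeyDemo {
    public static void main(String[] args) {
        Map<MutableKey, String> map = new HashMap<>();
        MutableKey key = new MutableKey(1);
        map.put(key, "value");

        key.id = 2; // hashCode() changes -> the entry sits in the "wrong" bucket
        System.out.println(map.get(key));               // null: looks in another bucket
        System.out.println(map.get(new MutableKey(1))); // null: right bucket, equals() fails
        System.out.println(map.size());                 // 1: the entry is stranded
    }
}
```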

What is Rehashing?

Every HashMap has a predefined size (initial capacity) and a rule for growing it (the load factor), applied whenever required (when the threshold limit is crossed).

Example :

Create a HashMap with the below configuration:

Initial Capacity = 16 (Default Initial Capacity)

Load Factor : .75 (Default load factor)

The moment you add the 13th element to this HashMap, its threshold limit is crossed, and a new backing array is created (its size will be double that of the previous array). The exact bucket for each element has to be calculated again, and all elements from the old array are copied into the new one. This whole process is called rehashing, because the bucket index is computed again for each element.

Because a HashMap may be rehashed over time, its iteration order can change.
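This is observable by starting with a tiny capacity and forcing several resizes (the class name RehashOrderDemo and the key pattern are arbitrary): the relative order of the first keys printed before and after the extra inserts may differ.

```java
import java.util.HashMap;
import java.util.Map;

public class RehashOrderDemo {
    public static void main(String[] args) {
        // start tiny so resizes happen quickly
        Map<Integer, Integer> map = new HashMap<>(4);
        for (int i = 0; i < 3; i++) map.put(i * 7, i);
        System.out.println("before more inserts: " + map.keySet());

        for (int i = 3; i < 40; i++) map.put(i * 7, i);
        // after several rehashes, the relative order of the first keys
        // may differ from what was printed above -- never rely on it
        System.out.println("after rehashing:     " + map.keySet());
    }
}
```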

Difference between HashMap and ConcurrentHashMap in Java Collections

ConcurrentHashMap was introduced in Java as an alternative to Hashtable, the legacy synchronized collection class. The main difference between HashMap and ConcurrentHashMap is that HashMap is non-synchronized and not thread-safe, so it is not for use in a concurrent, multi-threaded environment, while ConcurrentHashMap is a thread-safe collection intended as the primary Map implementation for multi-threaded and concurrent environments. Apart from thread-safety, there are some subtle differences between HashMap and ConcurrentHashMap, which we will see in this article. By the way, the difference between HashMap and ConcurrentHashMap, as well as ConcurrentHashMap vs Hashtable, are two popular core Java interview questions, mostly asked of senior-level Java programmers.

Difference between HashMap and ConcurrentHashMap in Java

In this section, we will see some more details about HashMap and ConcurrentHashMap and compare them on various parameters like thread-safety, synchronization, performance, ease of use etc.

1) As said earlier, the first significant difference between HashMap and ConcurrentHashMap is that the latter is thread-safe and can be used in a concurrent environment without external synchronization. Though it doesn't provide the same level of synchronization as Hashtable, it's enough for most practical purposes.

2) You can make a HashMap synchronized by wrapping it in Collections.synchronizedMap(hashMap), which returns a collection almost equivalent to Hashtable, where every modification operation locks the whole Map. In the case of ConcurrentHashMap, thread-safety was originally achieved by dividing the whole Map into partitions (segments) based on the concurrency level and locking only the relevant partition instead of the whole Map; since Java 8 it uses per-bucket locking and CAS operations instead.
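The two approaches look like this in code (the class name ConcurrentDemo and the "hits" counter are illustrative only); note that ConcurrentHashMap also offers atomic compound operations such as merge(), which a synchronized wrapper cannot provide without external locking:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentDemo {
    public static void main(String[] args) {
        // one lock for the whole map: every call synchronizes on the wrapper
        Map<String, Integer> syncMap =
                Collections.synchronizedMap(new HashMap<>());
        syncMap.put("hits", 0);

        // finer-grained locking internally, plus atomic compound operations
        ConcurrentHashMap<String, Integer> concurrentMap = new ConcurrentHashMap<>();
        concurrentMap.put("hits", 0);
        // atomic read-modify-write without external synchronization
        concurrentMap.merge("hits", 1, Integer::sum);
        System.out.println(concurrentMap.get("hits")); // 1
    }
}
```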

3) ConcurrentHashMap is more scalable and performs better than a synchronized HashMap in a multi-threaded environment, while in a single-threaded environment both HashMap and ConcurrentHashMap give comparable performance, with HashMap only slightly better.

In summary, the main differences between ConcurrentHashMap and HashMap turn out to be thread-safety, scalability, and synchronization. ConcurrentHashMap is a better choice than a synchronized HashMap if you are using the map as a cache, which is the most popular use case of a Map in a Java application. ConcurrentHashMap is more scalable and outperforms a synchronized HashMap when reader threads outnumber writer threads.