Java 8 Functional Interfaces – When & How To Use Them?

Functional interfaces, lambda expressions and the Stream API – these three features of Java 8 have turned Java programming toward a new style called functional-style programming. Java is still an object-oriented programming language, but since Java 8 much of the programming is done with functions in mind rather than objects. In this article, we will look at Java 8 functional interfaces, the @FunctionalInterface annotation, the java.util.function package and how to use the new functional interfaces to compose lambda expressions, with some simple examples.

Java 8 Functional Interfaces

1) Definition

Functional interfaces are interfaces that have exactly one abstract method. They may have any number of default methods, but only one abstract method. A functional interface therefore exposes exactly one piece of functionality to implement.

Functional interfaces existed before Java 8; they are not a concept introduced only in Java 8. Runnable, ActionListener, Callable and Comparator are some old functional interfaces that existed well before Java 8.

A new set of functional interfaces was introduced in Java 8 to make the programmer's job easier when writing lambda expressions. A lambda expression must implement one of these functional interfaces. The new interfaces are organised in the java.util.function package.

2) @FunctionalInterface Annotation

The @FunctionalInterface annotation was introduced in Java 8 to mark functional interfaces. It is not compulsory to annotate a functional interface with it, but if you do use @FunctionalInterface, the interface must contain exactly one abstract method. If you try to declare more than one abstract method, the compiler will report an error.
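
For illustration, here is a minimal sketch with a hypothetical Calculator interface (not part of the JDK):

@FunctionalInterface
interface Calculator
{
    int calculate(int a, int b);   // the single abstract method

    default int twice(int a)       // default methods are allowed
    {
        return calculate(a, a);
    }

    // Declaring a second abstract method here would be a compile-time error
    // because of @FunctionalInterface.
}

// A lambda expression provides the implementation of calculate():
Calculator sum = (a, b) -> a + b;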

3) java.util.function package

All Java 8 functional interfaces are organised in the java.util.function package. Each functional interface in this package represents an operation that can be performed by a lambda expression.

The table below lists the Java 8 functional interfaces along with their abstract methods, the operations they represent and when to use them.

[Table: Java 8 functional interfaces]

4) How To Use Java 8 Functional Interfaces In Real Time?

Let’s define a Student class as shown below. We will be using this class in the subsequent examples.

class Student
{
    int id;
    
    String name;
    
    double percentage;
    
    String specialization;
    
    public Student(int id, String name, double percentage, String specialization)
    {
        this.id = id;
        
        this.name = name;
        
        this.percentage = percentage;
        
        this.specialization = specialization;
    }
    
    public int getId() {
        return id;
    }
    public String getName() {
        return name;
    }
    public double getPercentage() {
        return percentage;
    }
    public String getSpecialization() {
        return specialization;
    }
    @Override
    public String toString()
    {
        return id+"-"+name+"-"+percentage+"-"+specialization;
    }
}

Let listOfStudents be a list of 10 students.

List<Student> listOfStudents = new ArrayList<Student>();
        
listOfStudents.add(new Student(111, "John", 81.0, "Mathematics"));
        
listOfStudents.add(new Student(222, "Harsha", 79.5, "History"));
        
listOfStudents.add(new Student(333, "Ruth", 87.2, "Computers"));
        
listOfStudents.add(new Student(444, "Aroma", 63.2, "Mathematics"));
        
listOfStudents.add(new Student(555, "Zade", 83.5, "Computers"));
        
listOfStudents.add(new Student(666, "Xing", 58.5, "Geography"));
        
listOfStudents.add(new Student(777, "Richards", 72.6, "Banking"));
        
listOfStudents.add(new Student(888, "Sunil", 86.7, "History"));
        
listOfStudents.add(new Student(999, "Jordan", 58.6, "Finance"));
        
listOfStudents.add(new Student(101010, "Chris", 89.8, "Computers"));

Let’s see how to use four important functional interfaces – Predicate, Consumer, Function and Supplier – using the above listOfStudents.

a) Predicate – Tests an object

Predicate<T> represents an operation that takes an argument of type T and returns a boolean. Use this functional interface if you want to define a lambda expression that performs some test on an argument and returns true or false depending on the outcome of the test.

For example,

Imagine an operation where you want only a list of “Mathematics” students from the above listOfStudents. Let’s see how to do it using Predicate.

Lambda expression implementing Predicate : Checking specialization of a Student

Predicate<Student> mathematicsPredicate = (Student student) -> student.getSpecialization().equals("Mathematics");
        
List<Student> mathematicsStudents = new ArrayList<Student>();
        
for (Student student : listOfStudents)
{
    if (mathematicsPredicate.test(student))
    {
        mathematicsStudents.add(student);
    }
}

b) Consumer – Consumes an object

Consumer<T> represents an operation that takes an argument and returns nothing. Use this functional interface if you want to compose a lambda expression that performs some operation on an object.

For example, displaying all students with their percentage.

Lambda expression implementing Consumer : Displaying all students with their percentage

Consumer<Student> percentageConsumer = (Student student) -> {
        System.out.println(student.getName()+" : "+student.getPercentage());
    };
        
for (Student student : listOfStudents)
{
    percentageConsumer.accept(student);
}

c) Function – Applies to an object

Function<T, R> represents an operation that takes an argument of type T and returns a result of type R. Use this functional interface if you want to extract or derive some data from existing data.

For example, extracting only the names from listOfStudents.

Lambda expression implementing Function : Extracting only the names of all students

Function<Student, String> nameFunction = (Student student) -> student.getName();
        
List<String> studentNames = new ArrayList<String>();
        
for (Student student : listOfStudents)
{
    studentNames.add(nameFunction.apply(student));
}

d) Supplier – Supplies the objects

Supplier<R> represents an operation that takes no argument and returns a result of type R. Use this functional interface when you want to create new objects.

Lambda expression implementing Supplier : Creating a new Student

Supplier<Student> studentSupplier = () -> new Student(111111, "New Student", 92.9, "Java 8");
        
listOfStudents.add(studentSupplier.get());

5) Functional Interfaces Supporting Primitive Type

Java 8 has also introduced functional interfaces that support primitive types, for example IntPredicate, DoublePredicate, LongConsumer etc. (see the table above).

If an input or output is a primitive type, using these functional interfaces will enhance the performance of your code. For example, if the input to a predicate is a primitive int, then using IntPredicate instead of Predicate<Integer> avoids unnecessary boxing of the input.
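
As a small sketch (the interfaces come from java.util.function, but the variable names and threshold are ours), compare a predicate on a primitive int with its boxed equivalent:

import java.util.function.IntPredicate;
import java.util.function.Predicate;

IntPredicate isDistinction = marks -> marks >= 75;              // works on primitive int, no boxing
Predicate<Integer> isDistinctionBoxed = marks -> marks >= 75;   // boxes every int into an Integer

System.out.println(isDistinction.test(81));        // true
System.out.println(isDistinctionBoxed.test(63));   // false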

Thread Livelock

A livelock is a situation where two or more threads keep repeating a particular piece of logic forever. The intended logic is typically to give the other thread an opportunity to proceed in favor of ‘this’ thread.

A real-world example of livelock occurs when two people meet in a narrow corridor, and each tries to be polite by moving aside to let the other pass, but they end up swaying from side to side without making any progress because they both repeatedly move the same way at the same time.
From Oracle reference docs:

 A thread often acts in response to the action of another thread. If the other thread’s action is also a response to the action of another thread, then livelock may result. As with deadlock, livelocked threads are unable to make further progress. However, the threads are not blocked – they are simply too busy responding to each other to resume work.

For example consider a situation where two threads want to access a shared common resource via a Worker object but when they see that other Worker (invoked on another thread) is also ‘active’, they attempt to hand over the resource to other worker and wait for it to finish. If initially we make both workers active they will suffer from livelock.

The Common Resource Class

public class CommonResource {
    private Worker owner;

    public CommonResource (Worker d) {
        owner = d;
    }

    public Worker getOwner () {
        return owner;
    }

    public synchronized void setOwner (Worker d) {
        owner = d;
    }
}

The Worker Class

public class Worker {
    private String name;
    private boolean active;

    public Worker (String name, boolean active) {
        this.name = name;
        this.active = active;
    }

    public String getName () {
        return name;
    }

    public boolean isActive () {
        return active;
    }

    public synchronized void work (CommonResource commonResource, Worker otherWorker) {
        while (active) {
            // wait for the resource to become available.
            if (commonResource.getOwner() != this) {
                try {
                    wait(10);
                } catch (InterruptedException e) {
                   //ignore
                }
                continue;
            }

            // If the other worker is also active, let it do its work first
            if (otherWorker.isActive()) {
                System.out.println(getName() +
                            " : handing over the resource to the worker: " +
                            otherWorker.getName());
                commonResource.setOwner(otherWorker);
                continue;
            }

            //now use the commonResource
            System.out.println(getName() + ": working on the common resource");
            active = false;
            commonResource.setOwner(otherWorker);
        }
    }
}

The main class

public class Livelock {

    public static void main (String[] args) {
        final Worker worker1 = new Worker("Worker 1 ", true);
        final Worker worker2 = new Worker("Worker 2", true);

        final CommonResource s = new CommonResource(worker1);

        new Thread(() -> {
            worker1.work(s, worker2);
        }).start();

        new Thread(() -> {
            worker2.work(s, worker1);
        }).start();
    }
}

Output:

The following output repeats endlessly:

Worker 1  : handing over the resource to the worker: Worker 2
Worker 2 : handing over the resource to the worker: Worker 1
Worker 1  : handing over the resource to the worker: Worker 2
Worker 2 : handing over the resource to the worker: Worker 1
Worker 1  : handing over the resource to the worker: Worker 2
Worker 2 : handing over the resource to the worker: Worker 1
    ........

Avoiding Livelock

In the above example, we can fix the issue by processing the common resource sequentially rather than from different threads simultaneously.

Just like deadlock, there is no general guideline for avoiding livelock, but we have to be careful in scenarios where we change the state of common objects that are also being used by other threads, for example the Worker object in the above scenario. Another option is to break the symmetry between the threads, as in the sketch below.
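
As a minimal sketch of one such fix (not part of the original example), the hand-over inside Worker.work() could be made conditional on a random check so that the two threads stop mirroring each other; this assumes java.util.concurrent.ThreadLocalRandom is imported in the Worker class:

// Inside Worker.work(): hand over the resource only some of the time,
// so the two threads eventually stop mirroring each other's moves.
if (otherWorker.isActive() && ThreadLocalRandom.current().nextBoolean()) {
    System.out.println(getName() +
                " : handing over the resource to the worker: " +
                otherWorker.getName());
    commonResource.setOwner(otherWorker);
    continue;
}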

Deep, Shallow and Lazy Copy with Java Examples

In object-oriented programming, object copying is creating a copy of an existing object; the resulting object is called an object copy, or simply a copy, of the original object. There are several ways to copy an object, most commonly with a copy constructor or by cloning.

We can define cloning as “creating a copy of an object”. Shallow, deep and lazy copy are all related to the cloning process; they are simply different ways of creating that copy.

Shallow Copy

  • Whenever we use the default implementation of the clone() method, we get a shallow copy of the object: it creates a new instance, copies all the fields of the object into that new instance and returns it as type Object, which we need to cast back explicitly to our original type.
  • The clone() method of the Object class supports shallow copying. If the object contains reference-type fields as well as primitives, the cloned object refers to the same objects that the original refers to, because only the references are copied, not the referred objects themselves.
  • Hence the name shallow copy, or shallow cloning, in Java. If there are only primitive fields or immutable objects, there is no practical difference between a shallow and a deep copy.
// Code illustrating shallow copy
import java.util.Arrays;

public class Ex {
 
    private int[] data;
 
    // makes a shallow copy of values
    public Ex(int[] values) {
        data = values;
    }
 
    public void showData() {
        System.out.println( Arrays.toString(data) );
    }
}

The above code shows shallow copying: data simply ends up referring to the same array that was passed in as values.

This can lead to unpleasant side effects if the elements of values are changed via some other reference.

public class UsesEx{
 
    public static void main(String[] args) {
        int[] vals = {3, 7, 9};
        Ex e = new Ex(vals);
        e.showData(); // prints out [3, 7, 9]
        vals[0] = 13;
        e.showData(); // prints out [13, 7, 9]
 
        // Very confusing, because we didn't
        // intentionally change anything about 
        // the object e refers to.
    }
}
Output 1 : [3, 7, 9]
Output 2 : [13, 7, 9]

Deep Copy

  • Whenever we need our own copy, rather than the default shallow behaviour, we call it a deep copy; a deep copy has to be implemented explicitly according to our needs.
  • For a deep copy we need to ensure that all member classes also implement the Cloneable interface and override the clone() method of the Object class, as sketched below.
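
A minimal sketch of a clone()-based deep copy, using hypothetical Course and StudentRecord classes (not part of the examples below):

class Course implements Cloneable {
    String name;

    Course(String name) {
        this.name = name;
    }

    @Override
    public Course clone() throws CloneNotSupportedException {
        return (Course) super.clone();
    }
}

class StudentRecord implements Cloneable {
    String name;
    Course course;   // reference-type member

    StudentRecord(String name, Course course) {
        this.name = name;
        this.course = course;
    }

    @Override
    public StudentRecord clone() throws CloneNotSupportedException {
        StudentRecord copy = (StudentRecord) super.clone(); // field-by-field (shallow) copy first
        copy.course = course.clone();                       // then deep-copy the reference-type member
        return copy;
    }
}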

For the array example from the shallow copy section, a deep copy means actually creating a new array and copying over the values.

// Code explaining deep copy
import java.util.Arrays;

public class Ex {
     
    private int[] data;
 
    // altered to make a deep copy of values
    public Ex(int[] values) {
        data = new int[values.length];
        for (int i = 0; i < data.length; i++) {
            data[i] = values[i];
        }
    }
 
    public void showData() {
        System.out.println(Arrays.toString(data));
    }
}

public class UsesEx{
 
    public static void main(String[] args) {
        int[] vals = {3, 7, 9};
        Ex e = new Ex(vals);
        e.showData(); // prints out [3, 7, 9]
        vals[0] = 13;
        e.showData(); // prints out [3, 7, 9]
 
       // changes in array values will not be 
       // shown in data values. 
    }
}
Output 1 : [3, 7, 9]
Output 2 : [3, 7, 9]

Changes to the array vals will not result in changes to the array data.

When to Use What
There is no hard and fast rule for choosing between shallow copy and deep copy, but as a rule of thumb: if an object has only primitive fields, a shallow copy is sufficient; if the object has references to other objects, then choose a shallow or deep copy based on the requirement. If the referenced objects are never modified, there is little point in initiating a deep copy.

Lazy Copy
A lazy copy can be defined as a combination of shallow copy and deep copy. The mechanism follows a simple approach: initially, a shallow copy is used, and a counter keeps track of how many objects share the data. When the program wants to modify the original object, it checks whether the data is shared; if it is, the deep copy mechanism is initiated.
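
A minimal copy-on-write sketch of this idea, with a hypothetical LazyIntArray class and a simple boolean flag standing in for the counter described above:

class LazyIntArray {
    private int[] data;
    private boolean shared;   // stands in for the sharing counter

    LazyIntArray(int[] values) {
        this.data = values;
        this.shared = false;
    }

    // Shallow copy: both instances point at the same array and are marked as shared.
    LazyIntArray lazyCopy() {
        LazyIntArray copy = new LazyIntArray(data);
        this.shared = true;
        copy.shared = true;
        return copy;
    }

    // The deep copy happens only when a write is requested on shared data.
    void set(int index, int value) {
        if (shared) {
            data = data.clone();   // copy-on-write
            shared = false;
        }
        data[index] = value;
    }

    int get(int index) {
        return data[index];
    }
}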

Summary
In a shallow copy, primitive fields are copied and only the references to other objects are copied, not the referenced objects themselves. A deep copy copies the primitive data as well as the objects behind the references. There is no hard and fast rule as to when to do a shallow copy and when to do a deep copy. Lazy copy is a combination of both approaches.


Variance in Java

The other day I came across this post describing the pros and cons of using Go after 8 months. I mostly agree after working full-time with Go for a comparable duration.

Despite that preamble, this is a post about variance in Java, where my goal is to refresh my understanding of what variance is and some of the nuances of its implementation in Java.

ProTip: You’ll need to know this for your OCJP certificate exam.

I will write down my thoughts on using Go in a later post.

What Is Variance?

The Wikipedia article on variance says:

Variance refers to how subtyping between more complex types relates to subtyping between their components.

“More complex types” here refers to higher level structures like containers and functions. So, variance is about the assignment compatibility between containers and functions composed of parameters that are connected via a Type Hierarchy. It allows the safe integration of parametric and subtype polymorphism1. For example, can I assign the result of a function that returns a list of cats to a variable of type “list of animals”? Can I pass in a list of Audi cars to a method that accepts a list of cars? Can I insert a wolf in this list of animals?

In Java, variance is defined at the use-site 2.

Four Kinds of Variance

Paraphrasing the Wiki article, a type constructor is:

  • Covariant if it accepts subtypes but not supertypes
  • Contravariant if it accepts supertypes but not subtypes
  • Bivariant if it accepts both supertypes and subtypes
  • Invariant if it accepts neither supertypes nor subtypes

(Obviously, the declared type parameter is accepted in all cases.)

Invariance in Java

The use-site must have no open bounds on the type parameter.

If A is a supertype of B, then GenericType<A> is not a supertype of GenericType<B>, and vice versa.

This means these two types have no relation to each other and neither can be exchanged for the other under any circumstance.

Invariant Containers

In Java, invariant containers are likely the first examples of generics you’ll encounter, and they are the most intuitive: all methods of the type parameter are accessible and usable as one would expect.

They cannot be exchanged:

// Type hierarchy: Person :> Joe :> JoeJr
List<Person> p = new ArrayList<Joe>(); // COMPILE ERROR (a bit counterintuitive, but remember List<Person> is invariant)
List<Joe> j = new ArrayList<Person>(); // COMPILE ERROR

You can add objects to them:

// Type hierarchy: Person :> Joe :> JoeJr
List<Person> p = new ArrayList<>();
p.add(new Person()); // ok
p.add(new Joe()); // ok
p.add(new JoeJr()); // ok

You can read objects from them:

// Type hierarchy: Person :> Joe :> JoeJr
List<Joe> joes = new ArrayList<>();
Joe j = joes.get(0); // ok
Person p = joes.get(0); // ok

Covariance in Java

The use-site must have an open lower bound on the type parameter.

If B is a subtype of A, then GenericType<B> is a subtype of GenericType<? extends A>.

Arrays in Java Have Always Been Covariant

Before generics were introduced in Java 1.5, arrays were the only generic containers available. They have always been covariant, e.g. Integer[] is a subtype of Object[]. The danger has always been that if you pass your Integer[] to a method that accepts Object[], that method can literally put anything in there. It’s a risk you take — no matter how small — when using third-party code.
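
A short example of that risk (not from the original post); the assignment compiles, but the write fails at run time:

Integer[] integers = {1, 2, 3};
Object[] objects = integers;        // allowed, because Integer[] is a subtype of Object[]
objects[0] = "not an integer";      // compiles, but throws ArrayStoreException at run time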

Covariant Containers

Java allows subtyping (covariant) generic types but it places restrictions on what can “flow into and out of” these generic types in accordance with the Principle of Least Astonishment3. In other words, methods with return values of the type parameter are accessible, while methods with input arguments of the type parameter are inaccessible.

You can exchange the supertype for the subtype:

// Type hierarchy: Person :> Joe :> JoeJr
List<? extends Joe> joes = new ArrayList<Joe>(); // ok
List<? extends Joe> joes = new ArrayList<JoeJr>(); // ok
List<? extends Joe> joes = new ArrayList<Person>(); // COMPILE ERROR

Reading from them is intuitive:

// Type hierarchy: Person :> Joe :> JoeJr
List<? extends Joe> joes = new ArrayList<>();
Joe j = joes.get(0); // ok
Person p = joes.get(0); // ok
JoeJr jr = joes.get(0); // compile error (you don't know what subtype of Joe is in the list)

Writing to them is prohibited (counterintuitive) to guard against the pitfalls with arrays described above. In the example code below, the caller/owner of a List<Joe> would be astonished if someone else’s method with covariant arg List<? extends Person> added a Jill.

// Type hierarchy: Person > Joe > JoeJr
List<? extends Joe> joes = new ArrayList<>();
joes.add(new Joe());  // compile error (you don't know what subtype of Joe is in the list)
joes.add(new JoeJr()); // compile error (ditto)
joes.add(new Person()); // compile error (intuitive)
joes.add(new Object()); // compile error (intuitive)

Contravariance in Java

The use-site must have an open upper bound on the type parameter.

If A is a supertype of B, then GenericType<A> is a supertype of GenericType<? super B>.

Contravariant Containers

Contravariant containers behave counterintuitively: contrary to covariant containers, methods with return values of the type parameter are inaccessible, while methods with input arguments of the type parameter are accessible:

You can exchange the subtype for the supertype:

// Type hierarchy: Person > Joe > JoeJr
List<? super Joe> joes = new ArrayList<Joe>();  // ok
List<? super Joe> joes = new ArrayList<Person>(); // ok
List<? super Joe> joes = new ArrayList<JoeJr>(); // COMPILE ERROR

But you cannot capture a specific type when reading from them:

// Type hierarchy: Person > Joe > JoeJr
List<? super Joe> joes = new ArrayList<>();
Joe j = joes.get(0); // compile error (could be Object or Person)
Person p = joes.get(0); // compile error (ditto)
Object o = joes.get(0); // allowed because everything IS-A Object in Java

You can add subtypes of the “lower bound”:

// Type hierarchy: Person > Joe > JoeJr
List<? super Joe> joes = new ArrayList<>();
joes.add(new JoeJr()); // allowed

But you cannot add supertypes:

// Type hierarchy: Person > Joe > JoeJr
List<? super Joe> joes = new ArrayList<>();
joes.add(new Person()); // compile error (again, could be a list of Object or Person or Joe)
joes.add(new Object()); // compile error (ditto)

Bi-variance in Java

The use-site must declare an unbounded wildcard on the type parameter.

A generic type with an unbounded wildcard is a supertype of all bounded variations of the same generic type. For example, GenericType<?> is a supertype of GenericType<String>. Since the unbounded type is the root of this hierarchy of parameterizations, the only methods you can call on its elements are those inherited from java.lang.Object.

Think of GenericType<?> as GenericType<Object>.
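
A small sketch in the style of the earlier examples (not from the original post):

// Type hierarchy: Person > Joe > JoeJr
List<?> anything = new ArrayList<Joe>();    // ok
anything = new ArrayList<Person>();         // ok
anything = new ArrayList<String>();         // ok, any parameterization is accepted

Object o = anything.get(0);                 // reading gives you Object at best
anything.add(new Joe());                    // compile error (only null may be added)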

Variance of Structures With N-Type Parameters

What about more complex types such as Functions? The same principles apply; you just have more type parameters to consider:

// Type hierarchy: Person > Joe > JoeJr
// Invariance
Function<Person, Joe> personToJoe = null;
Function<Joe, JoeJr> joeToJoeJr = null;
personToJoe = joeToJoeJr; // COMPILE ERROR (personToJoe is invariant)
// Covariance
Function<? extends Person, ? extends Joe> personToJoe = null; // covariant
Function<Joe, JoeJr> joeToJoeJr = null;
personToJoe = joeToJoeJr;  // ok
// Contravariance
Function<? super Joe, ? super JoeJr> joeToJoeJr = null; // contravariant
Function<? super Person, ? super Joe> personToJoe = null;
joeToJoeJr = personToJoe; // ok

Variance and Inheritance

Java allows overriding methods with covariant return types and exception types:

interface Person {
  Person get();
  void fail() throws Exception;
}
interface Joe extends Person {
  JoeJr get();
  void fail() throws IOException;
}
class JoeImpl implements Joe {
  public JoeJr get() {} // overridden
  public void fail() throws IOException {} // overridden
}

But attempting to override methods with covariant arguments results in merely an overload:

interface Person {
  void add(Person p);
}
interface Joe extends Person {
  void add(Joe j);
}
class JoeImpl implements Joe {
  public void add(Person p) {}  // overloaded
  public void add(Joe j) {} // overloaded
 }

Final Thoughts

Variance introduces additional complexity to Java. While the typing rules around variance are easy to understand, the rules regarding accessibility of methods of the type parameter are counterintuitive. Understanding them isn’t just “obvious” — it requires pausing to think through the logical consequences.

However, my daily experience has been that the nuances generally stay out of the way:

  • I cannot recall an instance where I had to declare a contravariant argument, and I rarely encounter them (although they do exist).
  • Covariant arguments seem slightly more common (example4), but they’re easier to reason about (fortunately).

Covariance is its strongest virtue considering that subtyping is a fundamental technique of object-oriented programming (case in point: see note 4).

Conclusion: variance provides moderate net benefits in my daily programming, particularly when compatibility with subtypes is required (which is a regular occurrence in OOP).

Immutable Data Structures in Java

As part of some of the coding interviews I’ve been conducting recently, the topic of immutability sometimes comes up. I’m not overly dogmatic about it myself, but whenever there’s no need for mutable state, I try to get rid of code that makes state mutable, which is often most visible in data structures. However, there seems to be a bit of a misunderstanding of the concept of immutability, where developers often believe that having a final reference, or val in Kotlin or Scala, is enough to make an object immutable. This blog post dives a bit deeper into immutable references and immutable data structures.

Benefits of Immutable Data Structures

Immutable data structures have significant benefits, such as:

  • No invalid state
  • Thread safety
  • Easier-to-understand code
  • Easier-to-test code
  • Can be used for value types

No Invalid State

When an object is immutable, it’s hard to have the object in an invalid state. The object can only be instantiated through its constructor, which will enforce the validity of objects. This way, the required parameters for a valid state can be enforced. An example:

Address address = new Address();
address.setCity("Sydney");
// address is in invalid state now, since the country hasn’t been set.
Address address = new Address("Sydney", "Australia");
// Address is valid and doesn’t have setters, so the address object is always valid.

Thread Safety

Since the object cannot be changed, it can be shared between threads without having race conditions or data mutation issues.

Easier-to-Understand Code

Similar to the code example of the invalid state, it’s generally easier to use a constructor than initialization methods. This is because the constructor enforces the required arguments, while setter or initializer methods are not enforced at compile time.

Easier-to-Test Code

Since objects are more predictable, it’s not necessary to test all permutations of the initializer methods, i.e. when calling the constructor of a class, the object is either valid or invalid. Other parts of the code that are using these classes become more predictable, having fewer chances of NullPointerExceptions. Sometimes, when passing objects around, there are methods that could potentially mutate the state of the object. For example:

public boolean isOverseas(Address address) {
    if(address.getCountry().equals("Australia") == false) {
        address.setOverseas(true); // address has now been mutated!
        return true;
    } else {
        return false;
    }
}

The above code, in general, is bad practice. It returns a boolean as well as potentially changing the state of the object. This makes the code harder to understand and to test. A better solution would be to remove the setter from the Address class, and return a boolean by testing for the country name. An even better way would be to move this logic to the Address class itself (address.isOverseas()). When state really needs to be set, make a copy of the original object without mutating the input.

Can Be Used for Value Types

Imagine a money amount, say 10 dollars. 10 dollars will always be 10 dollars. In code, this could look like  public Money(final BigInteger amount, final Currency currency). As you can see in this code, it’s not possible to change the value of 10 dollars to anything other than that, and thus, the above can be used safely for value types.
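
A minimal sketch of such a Money class, built around the constructor shown above (the getters and field names are assumed):

import java.math.BigInteger;
import java.util.Currency;

public final class Money {
    private final BigInteger amount;     // BigInteger and Currency are themselves immutable
    private final Currency currency;

    public Money(final BigInteger amount, final Currency currency) {
        this.amount = amount;
        this.currency = currency;
    }

    public BigInteger getAmount() {
        return amount;
    }

    public Currency getCurrency() {
        return currency;
    }
}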

Final References Don’t Make Objects Immutable

As mentioned before, one of the issues I regularly encounter with developers is that a large portion of these developers don’t fully understand the difference between final references and immutable objects. It seems that the common understanding of these developers is that the moment a variable becomes final, the data structure becomes immutable. Unfortunately, it’s not that simple, and I’d like to get this misunderstanding out of the world once and for all:

A final reference does not make your objects immutable!

In other words, the following code does not make your objects immutable:

final Person person = new Person("John");

Why not? Well, while person is a final reference and cannot be reassigned, the Person class might have a setter method or other mutator methods, making an action like the following possible:

person.setName("Cindy");

This is quite an easy thing to do, regardless of the final modifier. Alternatively, the Person class might expose a list of addresses like this. Accessing this list allows you to add an address to it and, therefore, mutate the person object like so:

person.getAddresses().add(new Address("Sydney"));

Again, our final reference didn’t stop us from mutating the person object.

OK, now that we’ve got that out the way, let’s dive a little bit into how we can make a class immutable. There are a couple of things that we need to keep in mind while designing our classes:

  • Don’t expose internal state in a mutable way
  • Don’t change the state internally
  • Make sure subclasses don’t override the above behaviour

With the following guidelines in place, let’s design a better version of our Person class.

public final class Person {// final class, can’t be overridden by subclasses
    private final String name;     // final for safe publication in multithreaded applications
    private final List<Address> addresses;
    public Person(String name, List<Address> addresses) {
        this.name = name;
        this.addresses = List.copyOf(addresses);   // makes a copy of the list to protect from outside mutations (Java 10+).
                // Otherwise, use Collections.unmodifiableList(new ArrayList<>(addresses));
    }
    public String getName() {
        return this.name;   // String is immutable, okay to expose
    }
    public List<Address> getAddresses() {
        return addresses; // Address list is immutable
    }
}
public final class Address {    // final class, can’t be overridden by subclasses
    private final String city;   // only immutable classes
    private final String country;
    public Address(String city, String country) {
        this.city = city;
        this.country = country;
    }
    public String getCity() {
        return city;
    }
    public String getCountry() {
        return country;
    }
}

Now, the following code can be used like this:

import java.util.List;

final Person person = new Person("John", List.of(new Address("Sydney", "Australia")));

The above object is now immutable thanks to the design of the Person and Address classes, and because the reference is final it is also impossible to reassign the person variable to anything else.

Update: As some people mentioned, an earlier version of this code was still mutable because I didn’t make a copy of the list of addresses in the constructor. Without copying the list in the constructor, it is still possible to do the following:

final List<Address> addresses = new ArrayList<>();
addresses.add(new Address("Sydney", "Australia"));
final Person person = new Person("John", addresses);
addresses.clear();

However, since a copy is now made in the constructor, the above code no longer affects the copied address list inside the Person class, making the code safe. Thanks all for spotting!

I hope the above helps in understanding the differences between final and immutability. If you have any comments or feedback, please let me know in the comments below.

Java Garbage Collection Basics

Objects dynamically created using the new operator are deallocated automatically. The technique that accomplishes this is called garbage collection. It works like this: when no references to an object exist, that object is assumed to be no longer needed, and the memory occupied by the object can be reclaimed.
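
For example (a trivial sketch):

StringBuilder buffer = new StringBuilder("temporary data");
buffer = null;   // no reference points to the StringBuilder any more, so it is eligible for garbage collection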

How Does Automatic Garbage Collection Work?

Automatic garbage collection works by looking at heap memory, identifying which objects are being referenced and which are not, and deleting the unused objects. This is widely known as the “Mark and Sweep Algorithm.”

The process of deallocating memory is handled automatically by the garbage collector in two steps:

Step 1: Marking

This is the first step, where the garbage collector identifies which pieces of memory are in use and which are not, and marks the objects that are no longer referenced.

Garbage Collection Marking

Source: Oracle Java Docs

Step 2: Deleting

In this step, all the objects marked in Step 1 as no longer referenced are deleted and their memory is reclaimed.

This can be done in two ways:

2a) Normal Deletion

In normal deletion, the memory allocator keeps references to the blocks of free space where new objects can be allocated, as shown in the figure below.

Garbage Collection-Normal Deletion

Source: Oracle Java Docs

2b) Deletion With Compacting

In this process, in addition to deleting unreferenced objects, the remaining referenced objects are compacted. Moving the referenced objects together makes new memory allocation much easier and faster.

Garbage Collection- Compact Deletion

Source: Oracle Java Docs

For further information on how garbage collection works, please refer to the Oracle Java documentation.

One Method to Rule Them All: Map.merge()

I don’t often explain a single method of the JDK, but when I do, it’s about Map.merge(). This is probably the most versatile operation in the key-value universe. But it’s also rather obscure and rarely used.

merge() can be explained as follows: it either puts a new value under the given key (if absent) or updates the existing key with the given value (UPSERT). Let’s start with the most basic example: counting word occurrences. Pre-Java 8 (read: pre-2014!) code was quite messy, and the essence was lost in implementation details:

var map = new HashMap<String, Integer>();
words.forEach(word -> {
    var prev = map.get(word);
    if (prev == null) {
        map.put(word, 1);
    } else {
        map.put(word, prev + 1);
    }
});

However, it works, and for the given input, it produces the desired output:

var words = List.of("Foo", "Bar", "Foo", "Buzz", "Foo", "Buzz", "Fizz", "Fizz");
//...
{Bar=1, Fizz=2, Foo=3, Buzz=2}

OK, but let’s try to refactor it to avoid conditional logic:

words.forEach(word -> {
    map.putIfAbsent(word, 0);
    map.put(word, map.get(word) + 1);
});

That’s nice!

putIfAbsent() is a necessary evil; otherwise, the code breaks on the first occurrence of a previously unknown word. Also, I find map.get(word) inside map.put() to be a bit awkward. Let’s get rid of it as well!

words.forEach(word -> {
    map.putIfAbsent(word, 0);
    map.computeIfPresent(word, (w, prev) -> prev + 1);
});

computeIfPresent() invokes the given transformation only if the key in question (word) exists. Otherwise, it does nothing. We made sure the key exists by initializing it to zero, so incrementation always works. Can we do better? Well, we can cut the extra initialization, but I wouldn’t recommend it:

words.forEach(word ->
        map.compute(word, (w, prev) -> prev != null ? prev + 1 : 1)
);

compute() is like computeIfPresent(), but it is invoked irrespective of the existence of the given key. If the value for the key does not exist, the prev argument is null. Moving a simple if to a ternary expression hidden in a lambda is far from optimal. This is where the merge() operator shines. Before I show you the final version, let’s see a slightly simplified default implementation of Map.merge():

default V merge(K key, V value, BiFunction<V, V, V> remappingFunction) {
    V oldValue = get(key);
    V newValue = (oldValue == null) ? value :
               remappingFunction.apply(oldValue, value);
    if (newValue == null) {
        remove(key);
    } else {
        put(key, newValue);
    }
    return newValue;
}

The code snippet is worth a thousand words. merge() works in two scenarios. If the given key is not present, it simply becomes put(key, value). However, if the said key already holds some value, our remappingFunction may merge (duh!) the old and the new one. This function is free to:

  • overwrite old value by simply returning the new one: (old, new) -> new
  • keep the old value by simply returning the old one: (old, new) -> old
  • somehow merge the two, e.g.: (old, new) -> old + new
  • or even remove old value: (old, new) -> null

As you can see, merge() is quite versatile. So, what does our academic problem look like with merge()? It’s quite pleasing:

words.forEach(word ->
        map.merge(word, 1, (prev, one) -> prev + one)
);

You can read it as follows: put 1 under the word key if absent; otherwise, add 1 to the existing value. I named one of the parameters “one” because in our example it’s always… 1.

Sadly, the remappingFunction takes two parameters, where the second one is the value we are about to upsert (insert or update). Technically, we already know this value, so something like (word, 1, prev -> prev + 1) would be easier to digest. But there’s no such API.

All right, but is merge() really useful? Imagine you have an account operation (constructor, getters, and other useful properties omitted):

class Operation {
    private final String accNo;
    private final BigDecimal amount;
}

And a bunch of operations for different accounts:

var operations = List.of(
    new Operation("123", new BigDecimal("10")),
    new Operation("456", new BigDecimal("1200")),
    new Operation("123", new BigDecimal("-4")),
    new Operation("123", new BigDecimal("8")),
    new Operation("456", new BigDecimal("800")),
    new Operation("456", new BigDecimal("-1500")),
    new Operation("123", new BigDecimal("2")),
    new Operation("123", new BigDecimal("-6.5")),
    new Operation("456", new BigDecimal("-600"))
);

We would like to compute the balance (the total of the operations’ amounts) for each account. Without merge(), this is quite cumbersome:

var balances = new HashMap<String, BigDecimal>();
operations.forEach(op -> {
    var key = op.getAccNo();
    balances.putIfAbsent(key, BigDecimal.ZERO);
    balances.computeIfPresent(key, (accNo, prev) -> prev.add(op.getAmount()));
});

But with a little help of merge():

operations.forEach(op ->
        balances.merge(op.getAccNo(), op.getAmount(), 
                (soFar, amount) -> soFar.add(amount))
);

Do you see a method reference opportunity here?

operations.forEach(op ->
        balances.merge(op.getAccNo(), op.getAmount(), BigDecimal::add)
);

I find this astoundingly readable. For each operation, add the given amount to the given accNo. The results are as expected:

{123=9.5, 456=-100}

ConcurrentHashMap

Map.merge() shines even brighter when you realize it’s properly implemented in ConcurrentHashMap. This means we can atomically perform an insert-or-update operation — single line and thread-safe.

ConcurrentHashMap is obviously thread-safe, but not across many operations, e.g. get() and then put(). However, merge() makes sure no updates are lost.
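
As a quick sketch (reusing the words list from the earlier example), the word count can be made concurrent like this; ConcurrentHashMap.merge() performs the whole insert-or-update atomically:

import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

var words = List.of("Foo", "Bar", "Foo", "Buzz", "Foo", "Buzz", "Fizz", "Fizz");
var concurrentCounts = new ConcurrentHashMap<String, Integer>();

words.parallelStream().forEach(word ->
        concurrentCounts.merge(word, 1, Integer::sum)   // the whole upsert is atomic, safe from any thread
);
// resulting counts: Foo=3, Buzz=2, Fizz=2, Bar=1 (iteration order is not guaranteed)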