Abstraction in Java

If you have started to learn Java, then you must have come across the term object-oriented programming, or the OOP concept. There are four pillars in OOP: abstraction, polymorphism, encapsulation, and inheritance. In this article, we will discuss one of these four pillars: abstraction.

Abstraction is essentially the art of hiding implementation details from the user and providing only what they need. Let's try to understand this with a real-world example. Most of us are quite fond of owning a car. When we place an order for a car, we are not really interested in the fine implementation details of every component inside it, such as the engine or the gearbox. We leave those technical details to the manufacturing engineers and mechanics; we are simply interested in the car. The manufacturer feels the same way: they want to provide us exactly what we want while hiding the fine implementation details from us. Likewise, there are tons of real-world examples where abstraction is in play, whether it's the smartphone you are using or the smart television you are watching; all of them implement abstraction in one way or another.

Coming back to Java, or any object-oriented programming language to be more precise, the same principle applies: the code's implementation details are hidden, and only the necessary functionality is provided to the user. There are two ways to achieve abstraction in Java:

  1. By using interfaces
  2. By using abstract classes

Interfaces - Consider a television remote, which only contains the functionality to operate a television and serves no other purpose; you won't be able to operate a refrigerator with a television remote. Here, the remote acts as an interface between you and the television. It exposes all the functionality you require while hiding the implementation details from you. In Java, interfaces are similar to classes, except that they contain empty methods (and can also contain variables). By empty methods, we mean that they don't provide any implementation details; it's for the classes or clients to provide the implementation of those methods when they implement the interface.

Syntax:

public interface XYZ {
    public void method();
}

Example :

public interface TelevisionRemote {
    public void turnOnTelevision();
    public void turnOffTelevision();
}

A Java class just uses the implements keyword to implement an interface and provides implementations for the interface's methods.

Example: –

public class Television implements TelevisionRemote {

    @Override
    public void turnOnTelevision() {
        //method implementation details
    }

    @Override
    public void turnOffTelevision() {
        //method implementation details
    }
}
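To see the abstraction at work, client code can hold the Television behind the TelevisionRemote type. Below is a minimal, self-contained sketch; the on flag and isOn() method are added here only to make the effect observable and are not part of the example above:

```java
interface TelevisionRemote {
    void turnOnTelevision();
    void turnOffTelevision();
}

class Television implements TelevisionRemote {
    private boolean on = false; // simple state, so the effect is observable

    @Override
    public void turnOnTelevision() { on = true; }

    @Override
    public void turnOffTelevision() { on = false; }

    boolean isOn() { return on; }
}

public class RemoteDemo {
    public static void main(String[] args) {
        // The client depends only on the interface, not on the concrete class
        TelevisionRemote remote = new Television();
        remote.turnOnTelevision();
        System.out.println("Television on? " + ((Television) remote).isOn());
    }
}
```

The caller never needs to know how Television implements the two methods; swapping in another TelevisionRemote implementation would not change the client code.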

Interfaces provide a contract specification, or a set of rules, for the classes implementing them: they tell those classes what to do, not how to do it. If a class does not provide implementations for all the methods of an interface, the class must be declared abstract (we will cover abstract classes later). Interfaces provide total abstraction, which means all their methods are empty and their field variables are implicitly public, static, and final. Interfaces serve several purposes:

1. They provide total abstraction.

2. They help to achieve what we call multiple inheritance: Java doesn't support multiple inheritance between classes, but one class can implement several interfaces.

3. They help to achieve loose coupling when implementing design patterns.

Abstract classes

Abstract classes are just like normal Java classes, except that they use the abstract keyword in the class declaration and in abstract method declarations.

Syntax: –

public abstract class XYZ {
    public abstract void methodName();
}

For example :

public abstract class Automobile {

    public abstract void engine();

    public void gearBoxGearOne() {
        //method implementation
    }
}

Abstract classes are created using the abstract keyword, and they may or may not contain method implementations. If a method is declared abstract, its implementation has to be provided by the class extending the abstract class. An abstract class can exist without any abstract methods, and it can also contain final methods. A class that extends an abstract class is a child class of that abstract class and has to provide implementations for the abstract methods declared in it.

Example :-

public class Car extends Automobile {

    @Override
    public void engine() {
        //Method implementation
    }
}
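Putting the two classes together, a small self-contained sketch shows how a child class is used through its abstract parent. The methods return strings here, rather than being void, only so the behavior is observable:

```java
abstract class Automobile {
    public abstract String engine(); // abstract: subclasses must implement this

    public String gearBoxGearOne() { // concrete: inherited as-is
        return "first gear engaged";
    }
}

class Car extends Automobile {
    @Override
    public String engine() {
        return "car engine started";
    }
}

public class AutomobileDemo {
    public static void main(String[] args) {
        // new Automobile() would not compile: abstract classes cannot be instantiated
        Automobile car = new Car();
        System.out.println(car.engine());          // the subclass implementation
        System.out.println(car.gearBoxGearOne());  // the inherited implementation
    }
}
```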

The question must now arise: why do we have both interfaces and abstract classes? There are a few key differences worth noting:

1. Interfaces are implicitly abstract and (prior to Java 8's default methods) cannot have implementations. Abstract classes can have method implementations.

2. Variables of interfaces are final by default. Abstract classes may or may not have final variables.

3. Interface methods are public, whereas abstract classes can use all access modifiers for their members: public, protected, and private.

4. An interface can extend interfaces only, while a class can implement multiple interfaces but extend only one class.

Thus, both abstract classes and interfaces are used to achieve abstraction, and both have their place when designing a Java solution, but the preferred choice for most developers is interfaces, as they provide complete abstraction. I hope this article helps to clear up your doubts regarding abstraction.

Kafka Consumer Overview

This article is a continuation of Part 1 – Kafka Technical Overview, Part 2 – Kafka Producer Overview, and Part 3 – Kafka Producer Delivery Semantics. Let's look at Kafka consumer groups, consumers, and the protocol used, in detail.

Consumer Role

Like a Kafka producer that optimizes writes to Kafka, a consumer is used for optimal consumption of Kafka data. The primary role of a Kafka consumer is to take a Kafka connection and consumer properties and read records from the appropriate Kafka broker. Complexities of concurrent consumption by multiple applications, offset management, delivery semantics, and a lot more are taken care of by the Consumer APIs.

Properties

Some of the consumer properties are: bootstrap.servers, fetch.min.bytes, max.partition.fetch.bytes, fetch.max.bytes, enable.auto.commit, and many more. We will discuss some of these properties in the next part of this article series.
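As a sketch, such properties are typically gathered into a java.util.Properties object before a consumer is constructed. The property names below are standard Kafka consumer configs; the broker address and group name are purely illustrative:

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    public static Properties consumerProperties() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // broker(s) to bootstrap from (illustrative)
        props.put("group.id", "invoice-app");             // consumer group name (illustrative)
        props.put("fetch.min.bytes", "1");                // minimum data the broker returns per fetch
        props.put("enable.auto.commit", "true");          // commit offsets automatically
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProperties());
    }
}
```

This Properties object would then be passed to the Kafka consumer constructor provided by the client library.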

Role of Kafka Consumers

Multi-App Consumption

Multiple applications can consume records from the same Kafka topic, as shown in the diagram below. Each application that consumes data from Kafka gets its own copy and can read at its own speed. In other words, the offsets consumed by one application could be different from those of another application. Kafka keeps track of the offsets consumed by each application in an internal __consumer_offset topic.

Kafka multi app consumption

Consumer Group and Consumer

Each application consuming data from Kafka is treated as a consumer group. For example, if two applications are consuming the same topic from Kafka, then, internally, Kafka creates two consumer groups. Each consumer group can have one or more consumers. If a topic has three partitions and an application consumes it, then a consumer group would be created and a consumer in the consumer group will consume all partitions of the topic. The diagram below depicts a consumer group with a single consumer.

Kafka multi partition single consumer

When an application wants to increase its processing speed and process partitions in parallel, it can add more consumers to the consumer group. Kafka takes care of keeping track of the offsets consumed per consumer in a consumer group, rebalancing the consumers in the group when a consumer is added or removed, and a lot more.

Kafka multi partition multi consumer

When there are multiple consumers in a consumer group, each consumer in the group is assigned one or more partitions. Each consumer in the group will process records in parallel from each leader partition of the brokers. A consumer can read from more than one partition.

Kafka consumer and multi partition consumption

It’s very important to understand that no single partition will be assigned to two consumers in the same consumer group; in other words, the same partition will not be processed by two consumers as shown in the diagram below.

Kafka same partition multiple consumer

When there are more consumers in a consumer group than partitions in a topic, the over-allocated consumers in the group will be unused.

Kafka unused consumer
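The assignment rules described above can be sketched in plain Java. This is a simplified round-robin assignor for illustration only, not Kafka's actual implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PartitionAssignmentSketch {
    // Simplified round-robin assignment: each partition goes to exactly one
    // consumer in the group; extra consumers end up with no partitions.
    static Map<String, List<Integer>> assign(List<String> consumers, int partitions) {
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        for (String consumer : consumers) {
            assignment.put(consumer, new ArrayList<>());
        }
        for (int p = 0; p < partitions; p++) {
            assignment.get(consumers.get(p % consumers.size())).add(p);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // 4 consumers but only 3 partitions: consumer c3 is left unused
        System.out.println(assign(Arrays.asList("c0", "c1", "c2", "c3"), 3));
        // prints {c0=[0], c1=[1], c2=[2], c3=[]}
    }
}
```

Note how no partition is ever assigned to two consumers, and the over-allocated consumer receives an empty assignment.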

When you have multiple topics and multiple applications consuming the data, consumer groups and consumers of Kafka will look similar to the diagram shown below.

Multiple application and multiple Kafka topic

Coordinator and Leader Discovery

In order to manage the handshake between Kafka and the application that forms the consumer group and its consumers, a coordinator (on the Kafka side) and a leader (one of the consumers in the consumer group) are elected. The first consumer that initiates the process is automatically elected as the leader of the consumer group. As explained in the diagram below, for a consumer to join a consumer group, the following handshake steps take place:

  • Find coordinator
  • Join group
  • Sync group
  • Heartbeat
  • Leave group

Kafka consumer and coordinator protocol

Coordinator

In order to create or join a group, a consumer first has to find the coordinator on the Kafka side that manages the consumer group. The consumer makes a “find coordinator” request to one of the bootstrap servers. If a coordinator doesn’t already exist, one is identified based on a hashing formula and returned in the response to the “find coordinator” request.

Join Group

Once the coordinator is identified, the consumer makes a “join group” request to the coordinator. The coordinator returns the consumer group leader and metadata details. If a leader doesn’t already exist, the first consumer of the group is elected as leader. The consuming application can also control which leader the coordinator node elects.

Kafka consumer join group

Sync Group

After the leader details are received in response to the “join group” request, the consumer makes a “sync group” request to the coordinator. This request triggers the rebalancing process across the consumers in the consumer group, as the partitions assigned to the consumers will change after the “sync group” request.

Kafka consumer sync group

Rebalance

When a consumer is added or removed, or a “sync group” request is sent, all consumers in the consumer group receive the updated partition assignments that they need to consume. Data consumption by all consumers in the group is halted until the rebalance process is complete.

Kafka consumer rebalance group

Heartbeat

Each consumer in the consumer group periodically sends a heartbeat signal to its group coordinator. In the case of heartbeat timeout, the consumer is considered lost and rebalancing is initiated by the coordinator.

Kafka consumer heartbeat

Leave Group

A consumer can choose to leave the group anytime by sending a “leave group” request. The coordinator will acknowledge the request and initiate a rebalance. In case the leader node leaves the group, a new leader is elected from the group and a rebalance is initiated.

Kafka consumer leave group

Summary

As explained in Part 1 of this series, partitions are the unit of parallelism. As consumers in a consumer group are limited by the number of partitions in a topic, it's very important to decide on your partitions based on your SLA and to scale your consumers accordingly. Consumer offsets are managed and stored by Kafka in an internal __consumer_offset topic. Each consumer in a consumer group follows the find coordinator, join group, sync group, heartbeat, and leave group protocols. In the next article in this series, we'll look into Kafka consumer properties and delivery semantics.

 

Java Concurrency: Thread Confinement

Thread Confinement

Most concurrency problems occur only when we want to share a mutable variable, or mutable state, between threads. If a mutable state is shared between multiple threads, then all of them will be able to read and modify the value of the state, thus resulting in incorrect or unexpected behavior. One way to avoid this problem is to simply not share the data between the threads. This technique is known as thread confinement and is one of the simplest ways of achieving thread safety in our application.

The Java language, in itself, does not have any way of enforcing thread confinement. Thread confinement is achieved by designing your program in a way that does not allow your state to be used by multiple threads and is, thus, enforced by the implementation. There are a few types of thread confinement, as described below.

Ad-Hoc Thread Confinement

Ad-hoc thread confinement describes a form of thread confinement where it is the total responsibility of the developer, or the group of developers working on the program, to ensure that the use of an object is restricted to a single thread. This approach is very fragile and should be avoided in most cases.

One special case of ad-hoc thread confinement applies to volatile variables. It is safe to perform read-modify-write operations on a shared volatile variable as long as you ensure that the variable is only written from a single thread. In this case, you are confining the modification to a single thread to prevent race conditions, and the visibility guarantees of volatile variables ensure that other threads see the most up-to-date value.
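A minimal sketch of this single-writer pattern follows; the class and field names are made up for illustration:

```java
public class VolatileSingleWriter {
    // volatile gives visibility; confining writes to one thread gives safety
    static volatile int progress = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                progress++; // read-modify-write: safe only because this is the sole writer
            }
        });
        writer.start();
        writer.join();

        // Reader threads only read; volatile guarantees they see the latest value
        Thread reader = new Thread(() ->
                System.out.println("progress seen by reader: " + progress));
        reader.start();
        reader.join();
    }
}
```

If a second thread also performed progress++, the increments could be lost, because volatile does not make read-modify-write atomic; the single-writer discipline is what keeps this correct.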

Stack Confinement

Stack confinement is confining a variable, or an object, to the stack of the thread. This is much stronger than Ad-hoc thread confinement, as it is limiting the scope of the object even more, by defining the state of the variable in the stack itself. For example, consider the following piece of code:

private long numberOfPeopleNamedJohn(List<Person> people) {
  List<Person> localPeople = new ArrayList<>();
  localPeople.addAll(people);
  return localPeople.stream().filter(person -> person.getFirstName().equals("John")).count();
}

In the above code, we are passed a list of Person objects but do not use it directly. Instead, we create our own list, which is local to the currently executing thread, and add all the elements of people to localPeople. Since we define our list inside the numberOfPeopleNamedJohn method only, the variable localPeople is stack-confined: it exists on the stack of one thread and cannot be accessed by any other thread. This makes localPeople thread-safe. The only thing we need to take care of is that localPeople must not escape the scope of the method, so that it stays stack-confined. This should also be documented or commented when defining the variable, because, generally, it's only in the current developer's mind not to let it escape, and in the future another developer may mess it up.

ThreadLocal

ThreadLocal allows you to associate a per-thread value with a value-holding object. It lets you store different objects for different threads and maintains which object corresponds to which thread. It has set and get accessor methods that maintain a separate copy of the value for each thread that uses it. The get() method always returns the most recent value passed to set() from the currently executing thread. Let's look at an example:

public class ThreadConfinementUsingThreadLocal {
    public static void main(String[] args) {
        ThreadLocal<String> stringHolder = new ThreadLocal<>();
        Runnable runnable1 = () -> {
            stringHolder.set("Thread in runnable1");
            try {
                Thread.sleep(5000);
                System.out.println(stringHolder.get());
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        };
        Runnable runnable2 = () -> {
            stringHolder.set("Thread in runnable2");
            try {
                Thread.sleep(2000);
                stringHolder.set("string in runnable2 changed");
                Thread.sleep(2000);
                System.out.println(stringHolder.get());
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        };
        Runnable runnable3 = () -> {
            stringHolder.set("Thread in runnable3");
            try {
                Thread.sleep(5000);
                System.out.println(stringHolder.get());
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        };
        Thread thread1 = new Thread(runnable1);
        Thread thread2 = new Thread(runnable2);
        Thread thread3 = new Thread(runnable3);
        thread1.start();
        thread2.start();
        thread3.start();
    }
}

In the above example, we execute three threads using the same ThreadLocal object, stringHolder. As you can see, we first set one string in each thread via the stringHolder object, so it holds three strings, one per thread. Then, after some pause, we change the value from just the second thread. Below is the output of the program:

string in runnable2 changed
Thread in runnable1
Thread in runnable3

As you can see in the above output, the string for thread 2 changed, but the strings for threads 1 and 3 were unaffected. If we do not set any value before getting the value from ThreadLocal on a specific thread, it returns null. After a thread is terminated, its thread-specific objects in ThreadLocal become ready for garbage collection.
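Relatedly, if returning null is undesirable, a ThreadLocal can be created with a per-thread default via ThreadLocal.withInitial. A small sketch:

```java
public class ThreadLocalWithDefault {
    // withInitial supplies a default per thread, so get() never returns null
    static final ThreadLocal<String> holder = ThreadLocal.withInitial(() -> "default");

    public static void main(String[] args) throws InterruptedException {
        System.out.println(holder.get()); // "default": nothing was set yet
        holder.set("main thread value");

        // A new thread still sees its own initial value, not the main thread's
        Thread t = new Thread(() -> System.out.println(holder.get()));
        t.start();
        t.join();

        System.out.println(holder.get()); // the main thread keeps its own value
    }
}
```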

Java Streams Overview, Part II

In my previous article, I wrote about the fundamentals of streams in Java 8. Now, let's augment our skills with some additional information about streams, like how we can chain them and how we can use them to access files.

Chaining Streams

When working with streams, they are often chained together.

Let us see what are the advantages of using chained streams:

  • One stream instance leverages another stream instance.
  • This creates a higher level of functionality. We can have one stream accessing the data, then we have another stream that takes the results of that and processes more complex functionality.
  • This simplifies reusability because you can organize your streams in a way that allows each of them to perform a specific job. That way, they do not need to know each other’s inner workings.

We perform chaining using a constructor: we construct a higher-level stream instance and pass it an instance of a lower-level stream.

A good example of a chained stream is the InputStreamReader class, which is what we talked about in my previous article.

This class leverages chaining by providing reader behavior over an InputStream: it translates a binary stream into a character stream.

Let us see how it works.

void doChain(InputStream in) throws IOException {
    int length;
    char[] buffer = new char[128];
    try (InputStreamReader rdr = new InputStreamReader(in)) {
        while ((length = rdr.read(buffer)) >= 0) {
            //do something
        }
    }
}

As you can see, we do not need to care about how the InputStream works; whether it is backed by a file or the network does not matter.

The only thing we know is that it gives us binary data; we pass it to our InputStreamReader, which converts it so we can work with it as character data.

Notice that we use try-with-resources here as well. If we close the InputStreamReader, it automatically closes the InputStream too. This is a very powerful concept that you should know about.

File and Buffered Streams

We often use streams for accessing files.

There are several classes for that in the java.io package to use, like:

  • FileReader
  • FileWriter 
  • FileInputStream 
  • FileOutputStream 

The thing is, these file stream classes are now considered legacy; the java.nio.file API (covered below) is generally preferred. Despite that, they are still widely used in existing code, so you will probably encounter them in the near future, and they are worth a mention.

Let us look at new ways to interact with files.

Buffered Streams

Buffered streams improve on direct use of the file stream classes. Note that the buffered stream classes themselves are part of the java.io package; it is the newer file-access API, discussed later, that lives under java.nio.file.

Buffering was introduced because direct file access can be inefficient, and buffered streams can significantly improve efficiency with the following:

  • Buffer content in memory
  • Perform reads/writes in large chunks
  • Reduce underlying stream interaction

Buffering is available for all four stream types:

  • BufferedReader
  • BufferedWriter
  • BufferedInputStream
  • BufferedOutputStream

Using them is very straightforward.

try(BufferedReader br = new BufferedReader(new FileReader("file.txt"))){
int value;
while((value = br.read()) >= 0) {
char charValue = (char)value;
//do something
}
}

Additional benefits of using buffered streams include:

  • They handle line breaks for the various platforms, like Windows or Unix
  • They use the correct line separator for the current platform
  • BufferedWriter has a newLine() method, which starts a new line with the appropriate character(s)
  • BufferedReader has a method for line-based reading: readLine()

Let us see how they work.

BufferedWriter:

void writeData(String[] data) throws IOException {
    try (BufferedWriter bw = new BufferedWriter(new FileWriter("file.txt"))) {
        for (String str : data) {
            bw.write(str);
            bw.newLine();
        }
    }
}

BufferedReader:

void readData() throws IOException {
    try (BufferedReader br = new BufferedReader(new FileReader("file.txt"))) {
        String inValue;
        while ((inValue = br.readLine()) != null) {
            System.out.println(inValue);
        }
    }
}

The code above will write out the file’s content line by line.

Accessing Files With the java.nio.file package

Since Java 7, there has been a newer package for handling file access: the java.nio.file package. While the java.io.FileXXX streams are not formally deprecated, the java.nio.file API is the preferred way to work with files.

The package has several benefits over java.io:

  • Better exception reporting
  • Greater scalability: it works much better with large files
  • More file system feature support
  • Simplifies common tasks

Below, we will talk about the most fundamental types in this new package.

Paths and Path Types

Path

  • Used to locate a file system item
  • It can be a file or directory

Paths

  • Used to get the Path objects through static Path factory methods
  • It translates a string-based hierarchical path or URI to Path.

Example: Path p = Paths.get("\\documents\\foo.txt");

Files Type

  • Static methods for interacting with files
  • Create, copy, delete, etc…
  • Open files streams
    • newBufferedReader 
    • newBufferedWriter 
    • newInputStream 
    • newOutputStream 
  • Read/Write file contents
    • readAllLines
    • write 

Reading Lines With BufferedReader

Let us see some quick example of how you can use it.

void readData() throws IOException {
    try (BufferedReader br = Files.newBufferedReader(Paths.get("data.txt"))) {
        String inValue;
        while ((inValue = br.readLine()) != null) {
            System.out.println(inValue);
        }
    }
}

Read All lines

void readAllLines() throws IOException {
    List<String> lines = Files.readAllLines(Paths.get("data.txt"));
    for (String line : lines) {
        System.out.println(line);
    }
}

File Systems

When we work with files from a Java program, those files are contained within a file system. Most commonly, we use the computer’s default file system.

Java also supports specialized file systems, such as the Zip file system.

Path instances are tied to a file system, and the Paths class works only for the default one, so we need another solution. Fortunately, the java.nio.file package gives us the tools to deal with this.

File System Types

FileSystem

  • Represents an individual file system
  • Factory for Path instances

FileSystems

  • Used to get the FileSystem objects through static FileSystem factory methods
  • Open or create a file system
    •  newFileSystem

Accessing File Systems

File systems are identified by URIs:

  • Specifics of URI vary greatly among the file systems
  • Zip file system uses “jar:file” scheme
    • jar:file:/documents/data.zip

File systems support custom properties

  • Different for each file system type
  • Examples: String encoding, whether to create if it does not exist

Creating a Zip Filesystem

public static void main(String[] args) throws FileNotFoundException, IOException {
    //pass the Path where we would like to create our file system
    try (FileSystem zipFileSystem = openZip(Paths.get("data.zip"))) {
    } catch (Exception e) {
        System.out.println(e.getClass().getSimpleName() + " - " + e.getLocalizedMessage());
    }
}

private static FileSystem openZip(Path path) throws URISyntaxException, IOException {
    Map<String, String> properties = new HashMap<>();
    properties.put("create", "true"); //set the property to allow creation
    URI zipUri = new URI("jar:file", path.toUri().getPath(), null); //make a new URI from the path
    FileSystem zipFileSystem = FileSystems.newFileSystem(zipUri, properties); //create the file system
    return zipFileSystem;
}

After the code above, you should see your data.zip file in your directory.

Copying Files to Zip Filesystem

Let us augment the above example with a File copy operation.

In this example, I created a file called file.txt in my project library. We will copy this file to our data.zip Filesystem.

public static void main(String[] args) throws FileNotFoundException, IOException {
    try (FileSystem zipFileSystem = openZip(Paths.get("data.zip"))) {
        copyFileToZip(zipFileSystem); //Here we call the file copy
    } catch (Exception e) {
        System.out.println(e.getClass().getSimpleName() + " - " + e.getLocalizedMessage());
    }
}

private static FileSystem openZip(Path path) throws URISyntaxException, IOException {
    Map<String, String> properties = new HashMap<>();
    properties.put("create", "true");
    URI zipUri = new URI("jar:file", path.toUri().getPath(), null);
    FileSystem zipFileSystem = FileSystems.newFileSystem(zipUri, properties);
    return zipFileSystem;
}

static void copyFileToZip(FileSystem zipFileSystem) throws IOException {
    Path sourceFile = FileSystems.getDefault().getPath("file.txt"); //the file to copy
    Path destFile = zipFileSystem.getPath("/fileCopied.txt"); //the path of the new file inside the zip
    Files.copy(sourceFile, destFile); //copy the file into our zip file system
}

After you run the code, you should see fileCopied.txt in our zip file. Its content should be the same as our file.txt.

Summary

In this article, we went further into streams in Java 8. I demonstrated how stream chaining works, as well as how you can deal with files through the new java.nio package. We also touched on why you should use more up-to-date, buffered versions of the Filestreams.

Hope you enjoyed!

Using Cache in Spring Boot

Let’s imagine a web application where, for each request received, it must read some configuration data from a database. That data doesn’t usually change, but the application, on each request, must connect to the database, execute the right instructions to read the data, fetch it over the network, and so on. Imagine also that the database is very busy or the connection is slow. What would happen? We would have a slow application, because it is continuously reading data that hardly ever changes.

A solution to that problem could be to use a cache, but how do you implement one? In this article, I explain how to use a basic cache in Spring Boot.

A Little Theory

Caching applies to functions where, for the same input value, we expect the same return value. That’s why a cached function should always have at least one input parameter and a return value.

A typical example will be this:

@Cacheable(cacheNames="headers")
public int cachedFunction(int value) {
    ..... complicated and difficult calculations ....
    return N;
}

And now, let’s suppose we have the next code for calling that function:

int value=cachedFunction(1);
int otherValue=cachedFunction(2);
int thirdValue=cachedFunction(1);

When executing the program, at the first line Spring will execute the function and save the result it returns. At the second line, it doesn’t yet know the value it must return for the input “2,” so the function is executed again. Nevertheless, at the third line, Spring will detect that a function tagged as @Cacheable with the name “headers” was already called with the value “1”; it won’t execute the function and will simply return the value it saved on the first call.

The cache’s name is important because, among other things, it permits us to have different independent caches, which we could clean to instruct Spring Boot to execute the functions again.

So, the idea is that for each call to a function tagged as @Cacheable, Spring saves the return value for the given input in an internal table, in such a way that if it already has a return value for that input, it doesn’t call the function.
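That internal table behaves much like simple memoization. Here is a plain-Java sketch of the idea (an illustration only, not Spring's actual cache implementation):

```java
import java.util.HashMap;
import java.util.Map;

public class CacheSketch {
    private final Map<Integer, Integer> cache = new HashMap<>();
    int calls = 0; // counts how many times the real computation ran

    // Mimics @Cacheable: the computation only runs on a cache miss
    int cachedFunction(int value) {
        return cache.computeIfAbsent(value, v -> {
            calls++;
            return v * v; // stand-in for the "complicated and difficult calculations"
        });
    }

    public static void main(String[] args) {
        CacheSketch c = new CacheSketch();
        c.cachedFunction(1); // executes the computation
        c.cachedFunction(2); // executes the computation
        c.cachedFunction(1); // cache hit: returns the saved value
        System.out.println("Real executions: " + c.calls); // 2
    }
}
```

Spring adds a lot on top of this (named caches, conditions, eviction, pluggable cache providers), but the hit/miss behavior is the same.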

The Practice

And now, let’s get to the practice.

An example project can be found here.

First, we must include the following dependency in our project.

<dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-cache</artifactId>
</dependency>

Now, we can use the annotations that will allow us to use the cache in our application.

The first annotation to set is @EnableCaching. With this label, we tell Spring that it must set up the support for caching. If we do not add it, it will simply not use the cache, regardless of whether we then mark functions with caching annotations.

@SpringBootApplication
@EnableCaching
public class CacheExampleApplication {
    public static void main(String[] args) {
          SpringApplication.run(CacheExampleApplication.class, args);
    }
}

In this example, we read the data from a database using REST requests.

The data is read in the CacheDataImpl.java class, which is in the com.profesorp.cacheexample.impl package.

The function that reads the data is the following:

@Cacheable(cacheNames="headers", condition="#id > 1")
public DtoResponse getDataCache(int id) {         
    try {
        Thread.sleep(500);
    } catch (InterruptedException e) {
    }                              
    DtoResponse requestResponse=new DtoResponse();                     
    Optional<Invoiceheader> invoice=invoiceHeaderRepository.findById(id);
    .....MORE CODE WITHOUT IMPORTANCE ...
}

As can be seen, we have the tag  @Cacheable(cacheNames="headers", condition="#id > 1") 

With this, we told Spring two things:

  1. We want to cache the result of this function.
  2. We put it as a condition that it must store the result in the cache only if the input is greater than one.

Later, in the flushCache function, we put the @CacheEvict annotation, which cleans the indicated cache. In this case, we also tell it to delete all the entries it has in the cache.

@CacheEvict(cacheNames="headers", allEntries=true)
public void flushCache() { }

In the update function, we update the database, and with the @CachePut label we inform Spring that it should update the cached data for the value given in dtoRequest.id.

Of course, this function must return an object of the same type as the function labeled with @Cacheable, and we must indicate the input value for which we want to update the cached data.

Running

To understand the application better, we will run it and issue some requests.

The application starts with four invoices in the invoiceHeader table. You can see how the table is filled in the data.sql file.

Let’s run the get function of the PrincipalController class. For this, we write:

> curl -s http://localhost:8080/2

The application will return the following:

{"interval":507,"httpStatus":"OK","invoiceHeader":{"id":2,"active":"N","yearFiscal":2019,"numberInvoice":2,"customerId":2}}

The field interval is the time, in milliseconds, that the application took to serve the request. As can be seen, it has taken more than half a second, because the getDataCache function of CacheDataImpl.java contains a Thread.sleep(500) instruction.

Now, we execute the call again:

> curl -s http://localhost:8080/2
{"interval":1,"httpStatus":"OK","invoiceHeader":{"id":2,"activo":"N","yearFiscal":2019,"numberInvoice":2,"customerId":2}}

Now the call has taken 1 millisecond, because Spring hasn’t executed the code of the function; it has simply returned the value it had cached.

However, if we request id 1, which we told Spring not to cache, the function is always executed, and therefore the time will always exceed 500 milliseconds:

>curl -s http://localhost:8080/1
{"interval":503,"httpStatus":"OK","invoiceHeader":{"id":1,"activo":"S","yearFiscal":2019,"numberInvoice":1,"customerId":1}}
>curl -s http://localhost:8080/1
{"interval":502,"httpStatus":"OK","invoiceHeader":{"id":1,"activo":"S","yearFiscal":2019,"numberInvoice":1,"customerId":1}}
>curl -s http://localhost:8080/1
{"interval":503,"httpStatus":"OK","invoiceHeader":{"id":1,"activo":"S","yearFiscal":2019,"numberInvoice":1,"customerId":1}}

If we call the flushCache function, we’ll clear the cache, and therefore the next call to the function will execute its code.

> curl -s http://localhost:8080/flushcache
Cache Flushed!
> curl -s http://localhost:8080/2
{"interval":508,"httpStatus":"OK","invoiceHeader":{"id":2,"activo":"N","yearFiscal":2019,"numberInvoice":2,"customerId":2}}
> curl -s http://localhost:8080/2
{"interval":0,"httpStatus":"OK","invoiceHeader":{"id":2,"activo":"N","yearFiscal":2019,"numberInvoice":2,"customerId":2}}

Finally, we will see that if we change the value of the field active, since the function that makes the change is annotated with @CachePut, it will update the value in the cache, but the getDataCache function won’t be executed on the next call.

> curl -X PUT   http://localhost:8080/   -H "Content-Type: application/json"   -d "{\"id\": 2, \"active\": \"N\"}"
>curl -s http://localhost:8080/2
{"interval":0,"httpStatus":"OK","invoiceHeader":{"id":2,"activo":"N","yearFiscal":2019,"numberInvoice":2,"customerId":2}}

Conclusions

Spring lets us cache the results of functions with very little effort. However, you have to take into account that this default cache is very basic and lives in memory. Spring Boot also lets us plug in external libraries that can persist the cached data to disk or to a database.

In the documentation, you can find the different cache implementations that Spring Boot supports. One of them is EhCache, with which you can use different kinds of backends for the data, specify validity (time-to-live) periods for the data, and more.

How Much Memory Does a Java Thread Take?

The memory taken by all Java threads is a significant part of the total memory consumption of your application. There are a few techniques to limit the number of created threads, depending on whether your application is CPU-bound or IO-bound. If your application is rather IO-bound, you will very likely need a thread pool with a significant number of threads, since they can be blocked on IO operations (in a blocked/waiting state, reading from a DB, sending an HTTP request).

However, if your app rather spends time on computing tasks, you can, for instance, use an HTTP server (e.g. Netty) with a lower number of threads and save a lot of memory. Let’s look at an example of how much memory we need to sacrifice to create a new thread.

Thread memory contains stack frames, local variables, method parameters, and so on. The thread stack size can be configured, with these defaults (in kilobytes):

$ java -XX:+PrintFlagsFinal -version | grep ThreadStackSize 
intx CompilerThreadStackSize    = 1024  {pd product} {default}
intx ThreadStackSize            = 1024  {pd product} {default}
intx VMThreadStackSize          = 1024  {pd product} {default}
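Besides the global -Xss / -XX:ThreadStackSize flag, a stack size can also be requested for an individual thread through the four-argument Thread constructor. A minimal sketch (note that the Javadoc says the JVM is free to round the value or ignore it entirely):

```java
public class StackSizeDemo {
    public static void main(String[] args) throws InterruptedException {
        final boolean[] ran = {false};
        // Request a 256 KB stack for this one thread instead of the
        // default ThreadStackSize (1024 KB in the output above).
        Thread t = new Thread(null, () -> ran[0] = true, "small-stack", 256 * 1024);
        t.start();
        t.join();
        System.out.println("thread ran: " + ran[0]);
    }
}
```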

Thread Memory Consumption on Java 8

$ java -XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=summary \
  -XX:+PrintNMTStatistics -version
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_212-b03)
OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.212-b03, mixed mode)
Native Memory Tracking:
Total: reserved=6621614KB, committed=545166KB
- Java Heap (reserved=5079040KB, committed=317440KB)
  (mmap: reserved=5079040KB, committed=317440KB) 
-  Class (reserved=1066074KB, committed=13786KB)
    (classes #345)
    (malloc=9306KB #126) 
    (mmap: reserved=1056768KB, committed=4480KB) 
-  Thread (reserved=19553KB, committed=19553KB)
   (thread #19)
    (stack: reserved=19472KB, committed=19472KB)
    (malloc=59KB #105) 
    (arena=22KB #34)

We can see two types of memory:

  • Reserved — the size which the host’s OS guarantees to be available (but which is still not allocated and cannot yet be accessed by the JVM) — it’s just a promise
  • Committed — already taken, accessible, and allocated by the JVM

In the Thread section, we can spot the same number in reserved and committed memory, which is very close to the number of threads * 1MB. The reason is that the JVM aggressively allocates the maximum available memory for thread stacks from the very beginning.

Thread Memory Consumption on Java 11

$ java -XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=summary \
  -XX:+PrintNMTStatistics -version
openjdk version "11.0.2" 2019-01-15
OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.2+9)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.2+9, mixed mode)
Native Memory Tracking:
Total: reserved=6643041KB, committed=397465KB
-  Java Heap (reserved=5079040KB, committed=317440KB)
    (mmap: reserved=5079040KB, committed=317440KB)
-  Class (reserved=1056864KB, committed=4576KB)
    (classes #426)
    (  instance classes #364, array classes #62)
    (malloc=96KB #455)
    (mmap: reserved=1056768KB, committed=4480KB)
    (  Metadata:   )
    (    reserved=8192KB, committed=4096KB)
    (    used=2849KB)
    (    free=1247KB)
    (    waste=0KB =0,00%)
    (  Class space:)
    (    reserved=1048576KB, committed=384KB)
    (    used=270KB)
    (    free=114KB)
    (    waste=0KB =0,00%)
-  Thread (reserved=15461KB, committed=613KB)
    (thread #15)
    (stack: reserved=15392KB, committed=544KB)
    (malloc=52KB #84)
    (arena=18KB #28)

You may notice that we are saving a lot of memory just because we are using Java 11, which no longer aggressively commits memory up to the reserved size at the time of thread creation. Of course, this is just the java -version command, but if you try it out on a real application, you will definitely notice a big improvement.

 

Introduction to Lombok

Java is often criticized for being unnecessarily verbose when compared with other languages. Lombok provides a bunch of annotations that generate boilerplate code in the background, removing it from your classes, and, therefore, helping to keep your code clean. Less boilerplate means more concise code that’s easier to read and maintain. In this post, I’ll cover the Lombok features I use more regularly and show you how they can be used to produce cleaner, more concise code.

Local Variable Type Inference: val and var

Lots of languages infer the local variable type by looking at the expression on the right-hand side of the equals. Although this is now supported in Java 10+, it wasn’t previously possible without the help of Lombok. The snippet below shows how you have to explicitly specify the local type:

final Map<String, Integer> map = new HashMap<>();
map.put("Joe", 21);

In Lombok, we can shorten this by using val as follows:

val valMap = new HashMap<String, Integer>();
valMap.put("Sam", 30);

Note that, under the covers, val creates a local variable that is also final. If you need a non-final local variable, you can use var instead.
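For comparison, on Java 10+ the built-in var achieves the same inference without Lombok (though unlike val, a var variable stays reassignable):

```java
import java.util.HashMap;

public class VarDemo {
    public static void main(String[] args) {
        // The compiler infers HashMap<String, Integer> from the right-hand side.
        var map = new HashMap<String, Integer>();
        map.put("Joe", 21);
        System.out.println(map.get("Joe"));
    }
}
```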

@NonNull

It’s generally not a bad idea to null check method arguments, especially if the method forms an API being used by other devs. While these checks are straightforward, they can become verbose, especially when you have multiple arguments. As shown below, the added bloat doesn’t help readability and can become a distraction from the main purpose of the method.

public void notNullDemo(Employee employee, Account account){
  if(employee == null){
    throw new IllegalArgumentException("employee is marked @NonNull but is null");
  }
  if(account == null){
    throw new IllegalArgumentException("account is marked @NonNull but is null");
  }
  // do stuff
}

Ideally, you want the null check without all the noise. That’s where @NonNull comes into play. By marking your parameters with @NonNull, Lombok generates a null check for each such parameter on your behalf. Your method suddenly becomes much cleaner, but without losing those defensive null checks.

public void notNullDemo(@NonNull Employee employee, @NonNull Account account){
      // just do stuff
}

By default, Lombok will throw a NullPointerException, but if you want, you can configure Lombok to throw an IllegalArgumentException instead. I personally prefer the IllegalArgumentException, as I think it’s a better fit if you go to the bother of checking the arguments.
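The generated check is roughly equivalent to the following plain Java, hand-rolled here with Objects.requireNonNull (Lombok’s actual message has the form "x is marked non-null but is null"):

```java
import java.util.Objects;

public class NonNullDemo {
    // Roughly what Lombok generates for two @NonNull parameters.
    static void notNullDemo(Object employee, Object account) {
        Objects.requireNonNull(employee, "employee is marked non-null but is null");
        Objects.requireNonNull(account, "account is marked non-null but is null");
        // do stuff
    }

    public static void main(String[] args) {
        try {
            notNullDemo("emp", null); // second argument is null, so this throws
        } catch (NullPointerException e) {
            System.out.println(e.getMessage());
        }
    }
}
```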

Cleaner Data Classes

Data classes are an area where Lombok can really help reduce boilerplate code. Before we look at the options, let’s consider what kinds of boilerplate we typically have to deal with. A data class typically includes one or all of the following:

  • A constructor (with or without arguments)
  • Getter methods for private member variables
  • Setter methods for private non-final member variables
  • A toString method to help with logging
  • equals and hashCode methods (dealing with equality/collections)

The above can be generated by your IDE, so the issue isn’t with the time taken to write them. The problem is that a simple class with a handful of member variables can quickly become very verbose. Let’s see how Lombok can help to reduce clutter by helping with each of the above.

@Getter and @Setter

Consider the Car class below. When we generate getters and setters, we end up with nearly 50 lines of code to describe a class with 5 member variables.

public class Car {
  private String make;
  private String model;
  private String bodyType;
  private int yearOfManufacture;
  private int cubicCapacity;
  public String getMake() {
    return make;
  }
  public void setMake(String make) {
    this.make = make;
  }
  public String getModel() {
    return model;
  }
  public void setModel(String model) {
    this.model = model;
  }
  public String getBodyType() {
    return bodyType;
  }
  public void setBodyType(String bodyType) {
    this.bodyType = bodyType;
  }
  public int getYearOfManufacture() {
    return yearOfManufacture;
  }
  public void setYearOfManufacture(int yearOfManufacture) {
    this.yearOfManufacture = yearOfManufacture;
  }
  public int getCubicCapacity() {
    return cubicCapacity;
  }
  public void setCubicCapacity(int cubicCapacity) {
    this.cubicCapacity = cubicCapacity;
  }
}

Lombok can help by generating the getter and setter boilerplate on your behalf. By annotating each member variable with @Getter and @Setter, you end up with an equivalent class that looks like this:

public class Car {
  @Getter @Setter
  private String make;
  @Getter @Setter
  private String model;
  @Getter @Setter
  private String bodyType;
  @Getter @Setter
  private int yearOfManufacture;
  @Getter @Setter
  private int cubicCapacity;
}

Note that you can only use @Setter on non-final member variables. Using it on final member variables will result in a compilation error.

@AllArgsConstructor

Data classes commonly include a constructor that takes a parameter for each member variable. An IDE generated constructor for the Car class is shown below:

public class Car {
  @Getter @Setter
  private String make;
  @Getter @Setter
  private String model;
  @Getter @Setter
  private String bodyType;
  @Getter @Setter
  private int yearOfManufacture;
  @Getter @Setter
  private int cubicCapacity;
  public Car(String make, String model, String bodyType, int yearOfManufacture, int cubicCapacity) {
    super();
    this.make = make;
    this.model = model;
    this.bodyType = bodyType;
    this.yearOfManufacture = yearOfManufacture;
    this.cubicCapacity = cubicCapacity;
  }
}

We can achieve the same thing using the @AllArgsConstructor annotation. Like @Getter and @Setter, @AllArgsConstructor reduces boilerplate and keeps the class cleaner and more concise.

@AllArgsConstructor
public class Car {
  @Getter @Setter
  private String make;
  @Getter @Setter
  private String model;
  @Getter @Setter
  private String bodyType;
  @Getter @Setter
  private int yearOfManufacture;
  @Getter @Setter
  private int cubicCapacity;
}

There are other options for generating constructors. @RequiredArgsConstructor will create a constructor with one argument per final member variable and @NoArgsConstructor will create a constructor with no arguments.
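Hand-rolled, the "required" variant looks roughly like this sketch (the Car here is a trimmed, hypothetical two-field version, not the article's full class):

```java
public class CtorDemo {
    static class Car {
        private final String make; // final: picked up by @RequiredArgsConstructor
        private String model;      // non-final: not a "required" argument

        // Roughly what @RequiredArgsConstructor generates: one parameter
        // per final (or @NonNull) member variable.
        Car(String make) {
            this.make = make;
        }
    }

    public static void main(String[] args) {
        System.out.println(new Car("Ford").make);
    }
}
```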

@ToString

It’s good practice to override the toString method on your data classes to help with logging. An IDE-generated toString method for the Car class looks like this:

@AllArgsConstructor
public class Car {
  @Getter @Setter
  private String make;
  @Getter @Setter
  private String model;
  @Getter @Setter
  private String bodyType;
  @Getter @Setter
  private int yearOfManufacture;
  @Getter @Setter
  private int cubicCapacity;
  @Override
  public String toString() {
    return "Car [make=" + make + ", model=" + model + ", bodyType=" + bodyType + ", yearOfManufacture="
        + yearOfManufacture + ", cubicCapacity=" + cubicCapacity + "]";
  }
}

We can do away with this by using the @ToString annotation as follows:

@ToString
@AllArgsConstructor
public class Car {
  @Getter @Setter
  private String make;
  @Getter @Setter
  private String model;
  @Getter @Setter
  private String bodyType;
  @Getter @Setter
  private int yearOfManufacture;
  @Getter @Setter
  private int cubicCapacity;
}
By default, Lombok generates a toString method that includes all member variables. This behavior can be overridden with the exclude attribute, e.g. @ToString(exclude = {"someField", "someOtherField"}).

@EqualsAndHashCode

If you’re doing any kind of object comparison with your data classes, you’ll need to override the equals and hashCode methods. Object equality is something you’ll define based on some business rules. For example, in my Car class, I might consider two objects equal if they have the same make, model, and body type. If I use the IDE to generate an equals method that checks the make, model, and body type, it will look something like this:

@Override
public boolean equals(Object obj) {
  if (this == obj)
    return true;
  if (obj == null)
    return false;
  if (getClass() != obj.getClass())
    return false;
  Car other = (Car) obj;
  if (bodyType == null) {
    if (other.bodyType != null)
      return false;
  } else if (!bodyType.equals(other.bodyType))
    return false;
  if (make == null) {
    if (other.make != null)
      return false;
  } else if (!make.equals(other.make))
    return false;
  if (model == null) {
    if (other.model != null)
      return false;
  } else if (!model.equals(other.model))
    return false;
  return true;
}

The equivalent hashCode implementation looks like this:

@Override
public int hashCode() {
  final int prime = 31;
  int result = 1;
  result = prime * result + ((bodyType == null) ? 0 : bodyType.hashCode());
  result = prime * result + ((make == null) ? 0 : make.hashCode());
  result = prime * result + ((model == null) ? 0 : model.hashCode());
  return result;
}
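The null-safe chains above can be written more compactly with java.util.Objects, and a quick check confirms the contract: two cars sharing make, model, and body type are equal and hash alike even with different years. (The Car below is reimplemented inline so the snippet stands alone.)

```java
import java.util.Objects;

public class EqualsContractDemo {
    static class Car {
        String make, model, bodyType;
        int yearOfManufacture;

        Car(String make, String model, String bodyType, int year) {
            this.make = make;
            this.model = model;
            this.bodyType = bodyType;
            this.yearOfManufacture = year;
        }

        // Equality defined on make, model, and bodyType only,
        // using null-safe Objects.equals instead of manual null checks.
        @Override public boolean equals(Object obj) {
            if (this == obj) return true;
            if (!(obj instanceof Car)) return false;
            Car other = (Car) obj;
            return Objects.equals(make, other.make)
                && Objects.equals(model, other.model)
                && Objects.equals(bodyType, other.bodyType);
        }

        // hashCode must use the same fields as equals.
        @Override public int hashCode() {
            return Objects.hash(make, model, bodyType);
        }
    }

    public static void main(String[] args) {
        // Same make/model/bodyType but different year: still "equal".
        Car a = new Car("Ford", "Mustang", "Coupe", 1967);
        Car b = new Car("Ford", "Mustang", "Coupe", 2019);
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode());
    }
}
```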

Although the IDE takes care of the heavy lifting, we still end up with considerable boilerplate code in the class. Lombok allows us to achieve the same thing using the @EqualsAndHashCode class annotation as shown below.

@ToString
@AllArgsConstructor
@EqualsAndHashCode(exclude = { "yearOfManufacture", "cubicCapacity" })
public class Car {
  @Getter @Setter
  private String make;
  @Getter @Setter
  private String model;
  @Getter @Setter
  private String bodyType;
  @Getter @Setter
  private int yearOfManufacture;
  @Getter @Setter
  private int cubicCapacity;
}

By default, @EqualsAndHashCode  will create equals and hashCode methods that include all member variables. The exclude option can be used to tell Lombok to exclude certain member variables. In the code snippet above, I’ve excluded yearOfManufacture and cubicCapacity from the generated equals and hashCode methods.

@Data

If you want to keep your data classes as lean as possible, you can make use of the @Data annotation. @Data is a shortcut for @Getter, @Setter, @ToString, @EqualsAndHashCode, and @RequiredArgsConstructor.

@ToString
@RequiredArgsConstructor
@EqualsAndHashCode(exclude = { "yearOfManufacture", "cubicCapacity" })
public class Car {
  @Getter @Setter
  private String make;
  @Getter @Setter
  private String model;
  @Getter @Setter
  private String bodyType;
  @Getter @Setter
  private int yearOfManufacture;
  @Getter @Setter
  private int cubicCapacity;  
}

By using @Data, we can reduce the class above to the following:

@Data
public class Car {
  private String make;
  private String model;
  private String bodyType;
  private int yearOfManufacture;
  private int cubicCapacity;  
}

Object Creation With @Builder

The builder design pattern describes a flexible approach to the creation of objects. Lombok helps you implement this pattern with minimal effort. Let’s look at an example using the simple Car class. Suppose we want to be able to create a variety of Car objects, but we want flexibility in terms of the attributes that we set at creation time.

@AllArgsConstructor
public class Car {
  private String make;
  private String model;
  private String bodyType;
  private int yearOfManufacture;
  private int cubicCapacity;  
  private List<LocalDate> serviceDate;
}

Let’s say we want to create a Car, but we only want to set the make and model. Using the standard all-arguments constructor on Car means we’d supply only make and model and pass placeholder values (null, or zero for the int fields) for the rest.

Car car = new Car("Ford", "Mustang", null, 0, 0, null);

This works, but it’s not ideal that we have to pass placeholder values for the arguments we’re not interested in. We could get around this by creating a constructor that takes only make and model. This is a reasonable solution, but it’s not very flexible. What if we have lots of different permutations of fields that we might use to create a new Car? We’d end up with a bunch of different constructors representing all the possible ways we could instantiate a Car.

A clean, flexible way to solve this problem is with the builder pattern. Lombok helps you implement the builder pattern via the @Builder annotation. When you annotate the Car class with @Builder, Lombok does the following:

  • Adds a private constructor to Car
  • Creates a static CarBuilder class
  • Creates a setter-style method on CarBuilder for each member variable in Car
  • Adds a build method on CarBuilder that creates a new instance of Car

Each setter style method on CarBuilder returns an instance of itself (CarBuilder). This allows you to chain method calls and provides you with a nice fluent API for object creation. Let’s see it in action.

Car muscleCar = Car.builder().make("Ford")
                             .model("mustang")
                             .bodyType("coupe")
                             .build();

Creating a Car with just make and model is now much cleaner than before. We simply call the generated builder method on Car to get an instance of CarBuilder, then call whatever setter-style methods we’re interested in. Finally, we call build to create a new instance of Car.
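Under the covers, the generated code looks roughly like this hand-rolled sketch, trimmed to two fields (an approximation, not Lombok's exact output):

```java
public class BuilderDemo {
    static class Car {
        private final String make;
        private final String model;

        // Private constructor: instances are created only through the builder.
        private Car(String make, String model) {
            this.make = make;
            this.model = model;
        }

        static CarBuilder builder() {
            return new CarBuilder();
        }

        static class CarBuilder {
            private String make;
            private String model;

            // Each setter-style method returns the builder itself,
            // which is what enables the fluent chaining.
            CarBuilder make(String make) { this.make = make; return this; }
            CarBuilder model(String model) { this.model = model; return this; }

            Car build() {
                return new Car(make, model);
            }
        }

        @Override public String toString() {
            return "Car[make=" + make + ", model=" + model + "]";
        }
    }

    public static void main(String[] args) {
        Car car = Car.builder().make("Ford").model("Mustang").build();
        System.out.println(car);
    }
}
```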

Another handy annotation worth mentioning is @Singular. By default, Lombok creates a standard setter-style method for collections that takes a collection argument. In the example below, we create a new Car and set a list of service dates.

Car muscleCar = Car.builder().make("Ford")
                   .model("mustang")
                   .serviceDate(Arrays.asList(LocalDate.of(2016, 5, 4)))
                   .build();

Adding @Singular to a collection member variable gives you an extra method that allows you to add a single item to the collection.

@Builder
public class Car {
  private String make;
  private String model;
  private String bodyType;
  private int yearOfManufacture;
  private int cubicCapacity;  
  @Singular("serviceDate") // explicit name: Lombok can't auto-singularize "serviceDate"
  private List<LocalDate> serviceDate;
}

We can now add a single service date as follows:

Car muscleCar3 = Car.builder()
                    .make("Ford")
                    .model("mustang")
                    .serviceDate(LocalDate.of(2016, 5, 4))
                    .build();

This is a nice convenience method that helps keep our code clean when dealing with collections during object creation.

Logging

Another great Lombok feature is loggers. Without Lombok, to instantiate a standard SLF4J logger, you typically have something like this:

public class SomeService {
  private static final org.slf4j.Logger log = org.slf4j.LoggerFactory.getLogger(SomeService.class);
  public void doStuff(){
    log.debug("doing stuff....");
  }
}

These loggers are clunky and add unnecessary clutter to every class that requires logging. Thankfully, Lombok provides an annotation that creates the logger for you. All you have to do is add the annotation to the class and you’re good to go.

@Slf4j
public class SomeService {
  public void doStuff(){
    log.debug("doing stuff....");
  }
}

I’ve used the @Slf4j annotation here, but Lombok can generate loggers for most common Java logging frameworks. For more logger options, see the documentation.

Lombok Gives You Control

One of the things I really like about Lombok is that it’s unintrusive. If you decide to provide your own implementation of a method that the likes of @Getter, @Setter, or @ToString would otherwise generate, your method always takes precedence over the generated one. This is nice because it allows you to use Lombok most of the time, but still take control when you need to.

Write Less, Do More

I’ve used Lombok on pretty much every project I’ve worked on for the past 4 or 5 years. I like it because it reduces clutter and you end up with cleaner, more concise code that’s easier to read. It won’t necessarily save you a lot of time, as most of the code it generates can be auto-generated by your IDE. With that said, I think the benefits of cleaner code more than justify adding it to your Java stack.

Further Reading

I’ve covered the Lombok features that I use regularly, but there are a bunch more that I haven’t touched on. If you like what you’ve seen so far and want to find out more, head on over and have a look at the Lombok docs.