Java Concurrency: Thread Confinement

Thread Confinement

Most concurrency problems occur only when we want to share mutable state between threads. If mutable state is shared between multiple threads, all of them can read and modify the value of the state, resulting in incorrect or unexpected behavior. One way to avoid this problem is to simply not share data between threads. This technique is known as thread confinement and is one of the simplest ways of achieving thread safety in an application.

The Java language, in itself, does not have any way of enforcing thread confinement. Thread confinement is achieved by designing your program in a way that does not allow your state to be used by multiple threads and is, thus, enforced by the implementation. There are a few types of thread confinement, as described below.

Ad-Hoc Thread Confinement

Ad-hoc thread confinement describes a form of thread confinement where it is entirely the responsibility of the developer, or the group of developers working on the program, to ensure that the use of an object is restricted to a single thread. This approach is very fragile and should be avoided in most cases.

One special case of ad-hoc thread confinement applies to volatile variables. It is safe to perform read-modify-write operations on a shared volatile variable as long as you ensure that the variable is only written from a single thread. In this case, you are confining the modification to a single thread to prevent race conditions, and the visibility guarantees for volatile variables ensure that other threads see the most up-to-date value.
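A minimal sketch of this single-writer pattern, assuming a hypothetical counter class:

public class SingleWriterCounter {
    // Written by only one designated writer thread; volatile makes each update visible to readers.
    private volatile long count = 0;

    // Must be called from the single writer thread only; the read-modify-write is then safe.
    public void increment() {
        count = count + 1;
    }

    // May be called from any thread and always observes the latest written value.
    public long current() {
        return count;
    }
}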

Stack Confinement

Stack confinement is confining a variable, or an object, to the stack of a thread. This is much stronger than ad-hoc thread confinement, as it limits the scope of the object even further by keeping the variable's state on the stack itself. For example, consider the following piece of code:

private long numberOfPeopleNamedJohn(List<Person> people) {
  // localPeople never escapes this method, so it stays confined to this thread's stack
  List<Person> localPeople = new ArrayList<>();
  localPeople.addAll(people);
  return localPeople.stream().filter(person -> person.getFirstName().equals("John")).count();
}

In the above code, we pass in a list of Person objects but do not use it directly. Instead, we create our own list, which is local to the currently executing thread, and add every person in people to localPeople. Since we define our list inside the numberOfPeopleNamedJohn method only, the variable localPeople is stack confined: it exists on the stack of one thread and cannot be accessed by any other thread. This makes localPeople thread safe. The only thing we need to take care of here is not to allow localPeople to escape the scope of this method, to keep it stack confined. This should also be documented or commented when defining the variable, because usually it exists only in the current developer's mind not to let it escape, and in the future another developer may mess it up.

ThreadLocal

ThreadLocal allows you to associate a per-thread value with a value-holding object. It lets you store different objects for different threads and maintains which object corresponds to which thread. It has set and get accessor methods that maintain a separate copy of the value for each thread that uses it. The get() method always returns the most recent value passed to set() from the currently executing thread. Let's look at an example:

public class ThreadConfinementUsingThreadLocal {
    public static void main(String[] args) {
        ThreadLocal<String> stringHolder = new ThreadLocal<>();
        Runnable runnable1 = () -> {
            stringHolder.set("Thread in runnable1");
            try {
                Thread.sleep(5000);
                System.out.println(stringHolder.get());
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        };
        Runnable runnable2 = () -> {
            stringHolder.set("Thread in runnable2");
            try {
                Thread.sleep(2000);
                stringHolder.set("string in runnable2 changed");
                Thread.sleep(2000);
                System.out.println(stringHolder.get());
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        };
        Runnable runnable3 = () -> {
            stringHolder.set("Thread in runnable3");
            try {
                Thread.sleep(5000);
                System.out.println(stringHolder.get());
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        };
        Thread thread1 = new Thread(runnable1);
        Thread thread2 = new Thread(runnable2);
        Thread thread3 = new Thread(runnable3);
        thread1.start();
        thread2.start();
        thread3.start();
    }
}

In the above example, we run three threads that use the same ThreadLocal object, stringHolder. Each thread first sets one string in stringHolder, so it holds three strings, one per thread. Then, after some pause, we change the value from just the second thread. Below is the output of the program:

string in runnable2 changed
Thread in runnable1
Thread in runnable3

As you can see in the above output, the string for thread 2 changed, but the strings for thread 1 and thread 3 were unaffected. If we do not set any value on a specific thread before getting it, ThreadLocal returns null. After a thread terminates, its thread-specific objects in ThreadLocal become eligible for garbage collection.
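If you want a non-null default instead, ThreadLocal.withInitial (available since Java 8) lets you supply one. A minimal sketch:

// Each thread that calls get() without a prior set() receives the supplied default.
ThreadLocal<String> holder = ThreadLocal.withInitial(() -> "default");
new Thread(() -> System.out.println(holder.get())).start(); // prints "default"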

Top 10 Spring Boot Interview Questions

In this article, we will discuss the top 10 interview questions in Spring Boot. These questions are a bit tricky and are trending heavily in the job market nowadays.

1) What does the @SpringBootApplication annotation do internally?

As per the Spring Boot documentation, the @SpringBootApplication annotation is equivalent to using @Configuration, @EnableAutoConfiguration, and @ComponentScan with their default attributes. Spring Boot enables the developer to use a single annotation instead of three. But, as we know, Spring provides these as loosely coupled features, so we can still use each individual annotation as per our project needs.

2) How to exclude any package without using the basePackages filter?

There are different ways you can filter packages, but Spring Boot provides a handy trick for achieving this without touching the component scan: you can use the exclude attribute of the @SpringBootApplication annotation. See the following code snippet:

@SpringBootApplication(exclude = {Employee.class})
public class FooAppConfiguration {}

3) How to disable a specific auto-configuration class?

You can use the exclude attribute of @EnableAutoConfiguration if you find specific auto-configuration classes that you do not want to be applied.

// By using "exclude"
@EnableAutoConfiguration(exclude = {DataSourceAutoConfiguration.class})

On the other hand, if the class is not on the classpath, you can use the excludeName attribute of the annotation and specify the fully qualified name instead.

// By using "excludeName" (takes the fully qualified class name as a String)
@EnableAutoConfiguration(excludeName = {"org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration"})

Also, Spring Boot lets you control the list of auto-configuration classes to exclude through the spring.autoconfigure.exclude property. You can add it to application.properties, and you can list multiple classes, comma separated.

//By using property file
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration

4) What is Spring Actuator? What are its advantages?

This is one of the most common interview questions in Spring Boot. As per the Spring doc:

“An actuator is a manufacturing term that refers to a mechanical device for moving or controlling something. Actuators can generate a large amount of motion from a small change.”

As we know, Spring Boot provides lots of auto-configuration features that help developers quickly build production-ready components. But think about debugging: if something goes wrong, we always need to analyze the logs and dig through the data flow of our application to check what's going on. So, the Spring Actuator provides easy access to those kinds of features. It provides many insights, e.g. which beans are created, the mappings in the controller, CPU usage, etc. Automatically gathered health information and metrics can then be applied to your application.

It provides a very easy way to access a few production-ready REST endpoints and fetch all kinds of information from the web. By using these endpoints, you can do many things; see the endpoint docs. There is no need to worry about security: if Spring Security is present, these endpoints are secured by default using Spring Security's content-negotiation strategy. Otherwise, we can configure custom security with the help of a RequestMatcher.

5) How to enable/disable the Actuator?

Enabling/disabling the actuator is easy; the simplest way to enable its features is to add a dependency (Maven/Gradle) on spring-boot-starter-actuator. If you don't want the actuator to be enabled, simply don't add the dependency.

Maven dependency:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
</dependencies>
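The Gradle equivalent would look something like this (a sketch, assuming the Spring Boot Gradle plugin's dependency management supplies the version):

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
}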

6) What is the Spring Initializer?

This may not be a difficult question, but the interviewer is checking the candidate's knowledge of the subject. You can't always expect only the questions you have prepared for. However, this is a very common question, asked almost all of the time.

The Spring Initializer is a web application that generates a Spring Boot project with everything you need to start quickly. As always, we need a good skeleton for the project; it helps you create the project structure/skeleton properly. You can learn more about the Initializer here.

7) What is a shutdown in the actuator?

Shutdown is an endpoint that allows the application to be gracefully shut down. This feature is not enabled by default. You can enable it by setting management.endpoint.shutdown.enabled=true in your application.properties file. But be careful when using it.
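A minimal application.properties sketch; note that on Spring Boot 2.x the endpoint must also be exposed over the web, which the second property below assumes:

management.endpoint.shutdown.enabled=true
management.endpoints.web.exposure.include=shutdown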

8) Is it possible to change the port of the embedded Tomcat server in Spring Boot?

Yes, it's possible to change the port. You can use the application.properties file to change it, by setting "server.port" (i.e. server.port=8081). Make sure you have application.properties in your project classpath; Spring will take care of the rest. If you set server.port=0, it will automatically assign any available port.
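For example, in application.properties:

# run the embedded server on port 8081
server.port=8081
# or let it pick any free port
# server.port=0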

9) Can we override or replace the Embedded Tomcat server in Spring Boot?

Yes, we can replace the embedded Tomcat with any other server by using the Starter dependencies. You can use spring-boot-starter-jetty or spring-boot-starter-undertow as a dependency for each project as you need; the Maven sketch below shows the Tomcat-to-Jetty swap.
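A hedged Maven sketch: exclude the Tomcat starter that spring-boot-starter-web pulls in, then add the Jetty starter:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jetty</artifactId>
</dependency>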

10) Can we disable the default web server in the Spring Boot application?

A major strong point of Spring is the flexibility to build your application loosely coupled. Spring provides a feature to disable the web server with a quick configuration: yes, we can use application.properties to configure the web application type, i.e. spring.main.web-application-type=none.

All the best!

For more: https://www.bipinwebacademy.com/2019/04/spring-boot-interview-questions.html

Java Streams Overview, Part II

In my previous article, I wrote about the fundamentals of streams in Java 8. Now, let's augment our skills with some additional information about streams, like how we can chain them and how we can use them to access files.

Chaining Streams

When we work with streams, we often chain them together.

Let us see the advantages of using chained streams:

  • One stream instance leverages another stream instance.
  • This creates a higher level of functionality. We can have one stream accessing the data, then we have another stream that takes the results of that and processes more complex functionality.
  • This simplifies reusability because you can organize your streams in a way that allows each of them to perform a specific job. In this way, they do not need to know each other's inner workings.

We perform chaining using a constructor: we construct a higher-level instance of a stream and pass it an instance of a lower-level stream.

A good example of a chained stream is the InputStreamReader class, which is what we talked about in my previous article.

This class leverages chaining by providing reader behavior over an InputStream: it translates binary data into characters.

Let us see how it works.

void doChain(InputStream in) throws IOException {
    int length;
    char[] buffer = new char[128];
    try (InputStreamReader rdr = new InputStreamReader(in)) {
        while ((length = rdr.read(buffer)) >= 0) {
            // do something with the first `length` chars in buffer
        }
    }
}

As you can see, we do not need to care about how the  InputStream works. Whether it is backed by a file or network, it does not matter.

The only thing we know is that it gives us binary data; we pass it to our InputStreamReader, which converts it so we can work with it as character data.

Notice that we use try-with-resources here as well. If we close the InputStreamReader, it automatically closes the InputStream as well. This is a very powerful concept that you should know about.

File and Buffered Streams

We often use streams for accessing files.

There are several classes for that in the java.io package, like:

  • FileReader
  • FileWriter 
  • FileInputStream 
  • FileOutputStream 

The thing is that these file stream classes are now considered legacy. Despite that, they are still widely used in existing code, so you will probably come across them in the near future; they are worth a mention.

Let us look at new ways to interact with files.

Buffered Streams

Buffered streams were introduced to improve on direct use of the file stream classes. Like the file streams, they live in the java.io package; they wrap an underlying stream and add in-memory buffering.

This is necessary because direct file access can be inefficient; buffered streams can significantly improve efficiency in the following ways:

  • Buffer content in memory
  • Perform reads/writes in large chunks
  • Reduce underlying stream interaction

Buffering is available for all four stream types:

  • BufferedReader 
  • BufferedWriter 
  • BufferedInputStream 
  • BufferedOutputStream 

Using them is very straightforward.

try (BufferedReader br = new BufferedReader(new FileReader("file.txt"))) {
    int value;
    while ((value = br.read()) >= 0) {
        char charValue = (char) value;
        // do something with charValue
    }
}

Additional benefits of using buffered streams include:

  • They handle line breaks for various platforms, like Windows or Unix, using the correct value for the current platform.
  • BufferedWriter has a newLine() method, which writes a line break using the appropriate characters.
  • BufferedReader has a method for line-based reading: readLine().

Let us see how they work.

BufferedWriter:

void writeData(String[] data) throws IOException {
    try (BufferedWriter bw = new BufferedWriter(new FileWriter("file.txt"))) {
        for (String str : data) {
            bw.write(str);
            bw.newLine();
        }
    }
}

BufferedReader:

void readData(String[] data) throws IOException {
    try (BufferedReader br = new BufferedReader(new FileReader("file.txt"))) {
        String inValue;
        while ((inValue = br.readLine()) != null) {
            System.out.println(inValue);
        }
    }
}

The code above will write out the file’s content line by line.

Accessing Files With the java.nio.file package

With Java 8, the java.io.FileXXX streams are considered legacy. There is a newer package to handle file access: the java.nio.file package.

The package has several benefits over java.io:

  • Better exception reporting
  • Greater scalability, they work much better with large files
  • More file system feature support
  • Simplifies common tasks

Below, we will talk about the most fundamental types in this new package.

Paths and Path Types

Path

  • Used to locate a file system item
  • It can be a file or directory

Paths

  • Used to get Path objects through static factory methods
  • It translates a string-based hierarchical path or URI to Path.

Example: Path p = Paths.get("\\documents\\foo.txt")

Files Type

  • Static methods for interacting with files
  • Create, copy, delete, etc.
  • Open file streams
    • newBufferedReader 
    • newBufferedWriter 
    • newInputStream 
    • newOutputStream 
  • Read/write file contents (see the sketch below)
    • readAllLines
    • write 
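A minimal sketch of the write side, assuming an out.txt file may be created in the working directory:

void writeAllLines() throws IOException {
    List<String> lines = Arrays.asList("first line", "second line");
    Files.write(Paths.get("out.txt"), lines); // creates or truncates out.txt
}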

Reading Lines With BufferedReader

Let us see a quick example of how you can use it.

void readData(String[] data) throws IOException {
    try (BufferedReader br = Files.newBufferedReader(Paths.get("data.txt"))) {
        String inValue;
        while ((inValue = br.readLine()) != null) {
            System.out.println(inValue);
        }
    }
}

Read All lines

void readAllLines(String[] data) throws IOException {
    List<String> lines = Files.readAllLines(Paths.get("data.txt"));
    for (String line : lines) {
        System.out.println(line);
    }
}

File Systems

When we work with files from a Java program, those files are contained within a file system. Most commonly, we use the computer’s default file system.

Java also supports specialized file systems, such as the Zip file system.

Path instances are tied to a file system, and the Paths class works only for the default one. So, we need another solution. Fortunately, the java.nio package gives us the tools to deal with this.

File System Types

FileSystem

  • Represents an individual file system
  • Factory for Path instances

FileSystems

  • Used to get FileSystem objects through static factory methods
  • Open or create a file system
    •  newFileSystem

Accessing File Systems

File systems are identified by URIs:

  • Specifics of URI vary greatly among the file systems
  • Zip file system uses “jar:file” scheme
    • jar:file:/documents/data.zip

File systems support custom properties

  • Different for each file system type
  • Examples: String encoding, whether to create if it does not exist

Creating a Zip Filesystem

public static void main(String[] args) throws FileNotFoundException, IOException {
    // pass the Path where we would like to create our FileSystem
    try (FileSystem zipFileSystem = openZip(Paths.get("data.zip"))) {
    } catch (Exception e) {
        System.out.println(e.getClass().getSimpleName() + " - " + e.getLocalizedMessage());
    }
}

private static FileSystem openZip(Path path) throws URISyntaxException, IOException {
    Map<String, String> properties = new HashMap<>();
    properties.put("create", "true"); // set the property to allow creating
    URI zipUri = new URI("jar:file", path.toUri().getPath(), null); // make a new URI from the path
    FileSystem zipFileSystem = FileSystems.newFileSystem(zipUri, properties); // create the file system
    return zipFileSystem;
}

After running the code above, you should see the data.zip file in your directory.

Copying Files to Zip Filesystem

Let us augment the above example with a File copy operation.

In this example, I created a file called file.txt in my project library. We will copy this file to our data.zip Filesystem.

public static void main(String[] args) throws FileNotFoundException, IOException {
    try (FileSystem zipFileSystem = openZip(Paths.get("data.zip"))) {
        copyFileToZip(zipFileSystem); // here we call the file copy
    } catch (Exception e) {
        System.out.println(e.getClass().getSimpleName() + " - " + e.getLocalizedMessage());
    }
}

private static FileSystem openZip(Path path) throws URISyntaxException, IOException {
    Map<String, String> properties = new HashMap<>();
    properties.put("create", "true");
    URI zipUri = new URI("jar:file", path.toUri().getPath(), null);
    FileSystem zipFileSystem = FileSystems.newFileSystem(zipUri, properties);
    return zipFileSystem;
}

static void copyFileToZip(FileSystem zipFileSystem) throws IOException {
    Path sourceFile = FileSystems.getDefault().getPath("file.txt"); // the file to copy
    Path destFile = zipFileSystem.getPath("/fileCopied.txt"); // the path of the new file inside the zip
    Files.copy(sourceFile, destFile); // copy the file into our zip FileSystem
}

After you run the code, you should see fileCopied.txt in our zip file. Its content should be the same as our file.txt.

Summary

In this article, we went further into streams in Java 8. I demonstrated how stream chaining works, as well as how you can deal with files through the new java.nio package. We also touched on why you should use the more up-to-date, buffered versions of the file streams.

Hope you enjoyed!

Using Cache in Spring Boot

Let's imagine a web application where, for each request received, it must read some configuration data from a database. That data doesn't usually change, but the application, on each request, must connect, execute the correct instructions to read the data, pick it up from the network, etc. Imagine also that the database is very busy or the connection is slow. What would happen? We would have a slow application because it is continuously reading data that hardly ever changes.

A solution to that problem could be using a cache, but how do you implement it? In this article, I explain how to use a basic cache in Spring Boot.

A Little Theory

Caching applies to functions where, for the same input value, we expect the same return value. That's why we always have at least one input parameter and one return value.

A typical example will be this:

@Cacheable(cacheNames = "headers")
public int cachedFunction(int value) {
    // ..... complicated and difficult calculations ....
    return N;
}

And now, let's suppose we have the following code calling that function:

int value = cachedFunction(1);
int otherValue = cachedFunction(2);
int thirdValue = cachedFunction(1);

When the program executes the first line, Spring will execute the function and save the result it returns. In the second line, it doesn't yet know the value to return for the input "2," so it executes the function again and caches that result too. However, in the third line, Spring will detect that a function tagged as @Cacheable with the name "headers" was already called with the value "1." It won't execute the function; it will simply return the value it saved on the first call.

The cache's name is important because, among other things, it permits us to have different independent caches, which we can clear to instruct Spring to execute the functions again.

So, the idea is that on each call to a function tagged as @Cacheable, Spring saves the return value for that input in an internal table, in such a way that if it already has a return value for an input, it doesn't call the function.

The Practice

And now, let’s get to the practice.

An example project can be found here.

First, we must include the following dependency in our project.

<dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-cache</artifactId>
</dependency>

Now, we can use the annotations that will allow us to use the cache in our application.

The first annotation to set is @EnableCaching. With this label, we tell Spring that it must prepare caching support. If we do not add it, it simply will not use the cache, regardless of whether we mark functions with cache annotations.

@SpringBootApplication
@EnableCaching
public class CacheExampleApplication {
    public static void main(String[] args) {
          SpringApplication.run(CacheExampleApplication.class, args);
    }
}

In this example, we read the data of a database using REST requests.

The data is read in the CacheDataImpl.java class, which is in the package com.profesorp.cacheexample.impl.

The function that reads the data is the following:

@Cacheable(cacheNames = "headers", condition = "#id > 1")
public DtoResponse getDataCache(int id) {
    try {
        Thread.sleep(500);
    } catch (InterruptedException e) {
    }
    DtoResponse requestResponse = new DtoResponse();
    Optional<Invoiceheader> invoice = invoiceHeaderRepository.findById(id);
    // ..... MORE CODE WITHOUT IMPORTANCE ...
}

As can be seen, we have the annotation @Cacheable(cacheNames="headers", condition="#id > 1").

With this, we told Spring two things:

  1. We want to cache the result of this function.
  2. We set a condition: it must store results in the cache only if the input id is greater than one.

Later, in the flushCache function, we add the @CacheEvict annotation, which cleans the indicated cache. In this case, we also tell it to delete all the entries it holds in the cache.

@CacheEvict(cacheNames="headers", allEntries=true)
public void flushCache() { }

In the update function, we update the database, and with the @CachePut annotation we inform Spring that it should update the cached data for the value given in dtoRequest.id.

Of course, this function must return an object of the same type as the function labeled with @Cacheable, and we must indicate the input value for which we want to update the data.
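The update function itself is not shown in full in this article; a hedged sketch of its shape, assuming a DtoRequest with an id field:

@CachePut(cacheNames = "headers", key = "#dtoRequest.id")
public DtoResponse update(DtoRequest dtoRequest) {
    // ... update the row in the database ...
    DtoResponse response = new DtoResponse();
    // ... fill the response with the updated data ...
    return response; // Spring stores this as the new cached value for this id
}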

Running

To understand the application better, we will execute it and send it a request.

The application starts with four invoices in the invoiceHeader table. You can see how the table is filled in the data.sql file.

Let's run the get function of the PrincipalController class. To do this, we write:

> curl -s http://localhost:8080/2

The application will return the following:

{"interval":507,"httpStatus":"OK","invoiceHeader":{"id":2,"active":"N","yearFiscal":2019,"numberInvoice":2,"customerId":2}}

The interval field is the time in milliseconds that the application has taken to serve the request. As can be seen, it has taken more than half a second, because in the getDataCache function of CacheDataImpl.java we have a Thread.sleep(500) instruction.

Now, we execute the call again:

> curl -s http://localhost:8080/2
{"interval":1,"httpStatus":"OK","invoiceHeader":{"id":2,"activo":"N","yearFiscal":2019,"numberInvoice":2,"customerId":2}}

Now the call has taken 1 millisecond, because Spring hasn't executed the code of the function; it has simply returned the cached value.

However, if we request id 1, we have indicated (condition="#id > 1") that this value should not be cached, so the function always executes, and the time will therefore exceed 500 milliseconds:

>curl -s http://localhost:8080/1
{"interval":503,"httpStatus":"OK","invoiceHeader":{"id":1,"activo":"S","yearFiscal":2019,"numberInvoice":1,"customerId":1}}
>curl -s http://localhost:8080/1
{"interval":502,"httpStatus":"OK","invoiceHeader":{"id":1,"activo":"S","yearFiscal":2019,"numberInvoice":1,"customerId":1}}
>curl -s http://localhost:8080/1
{"interval":503,"httpStatus":"OK","invoiceHeader":{"id":1,"activo":"S","yearFiscal":2019,"numberInvoice":1,"customerId":1}}

If we call the flushcache function, we clean the cache, and therefore the next call to the function will execute its code.

> curl -s http://localhost:8080/flushcache
Cache Flushed!
> curl -s http://localhost:8080/2
{"interval":508,"httpStatus":"OK","invoiceHeader":{"id":2,"activo":"N","yearFiscal":2019,"numberInvoice":2,"customerId":2}}
> curl -s http://localhost:8080/2
{"interval":0,"httpStatus":"OK","invoiceHeader":{"id":2,"activo":"N","yearFiscal":2019,"numberInvoice":2,"customerId":2}}

Finally, we will see that if we change the value of the field active to N, since the function that makes the change is labeled with @CachePut, it will update the value in the cache, but the getDataCache function still won't execute on the next call.

> curl -X PUT   http://localhost:8080/   -H "Content-Type: application/json"   -d "{\"id\": 2, \"active\": \"N\"}"
>curl -s http://localhost:8080/2
{"interval":0,"httpStatus":"OK","invoiceHeader":{"id":2,"activo":"N","yearFiscal":2019,"numberInvoice":2,"customerId":2}}

Conclusions

Spring makes it easy to cache the results of functions. However, you have to take into account that this cache is very basic and lives in memory. Spring Boot also permits us to use external libraries that allow us to save the data on disk or in a database.

In the documentation, you can find the different cache implementations that Spring Boot supports, one of which is EhCache, with which you can use different kinds of backends for the data, as well as specify validity times for the data, and more.

How Much Memory Does a Java Thread Take?

The memory taken by all Java threads is a significant part of the total memory consumption of your application. There are a few techniques for limiting the number of created threads, depending on whether your application is CPU-bound or IO-bound. If your application is rather IO-bound, you will very likely need to create a thread pool with a significant number of threads, which can be bound to some IO operations (in blocked/waiting state, reading from a DB, sending an HTTP request).

However, if your app rather spends time on some computing task, you can, for instance, use an HTTP server (e.g. Netty) with a lower number of threads and save a lot of memory. Let's look at an example of how much memory we need to sacrifice to create a new thread.

Thread memory contains stack frames, local variables, method parameters, etc., and the thread stack size can be configured; the defaults look like this (in kilobytes):

$ java -XX:+PrintFlagsFinal -version | grep ThreadStackSize 
intx CompilerThreadStackSize    = 1024  {pd product} {default}
intx ThreadStackSize            = 1024  {pd product} {default}
intx VMThreadStackSize          = 1024  {pd product} {default}
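If the defaults do not suit your application, the per-thread stack size can be changed with the -Xss flag (MyApp below is a hypothetical main class):

$ java -Xss512k MyApp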

Thread Memory Consumption on Java 8

$ java -XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=summary \
-XX:+PrintNMTStatistics -version
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_212-b03)
OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.212-b03, mixed mode)
Native Memory Tracking:
Total: reserved=6621614KB, committed=545166KB
- Java Heap (reserved=5079040KB, committed=317440KB)
  (mmap: reserved=5079040KB, committed=317440KB) 
-  Class (reserved=1066074KB, committed=13786KB)
    (classes #345)
    (malloc=9306KB #126) 
    (mmap: reserved=1056768KB, committed=4480KB) 
-  Thread (reserved=19553KB, committed=19553KB)
   (thread #19)
    (stack: reserved=19472KB, committed=19472KB)
    (malloc=59KB #105) 
    (arena=22KB #34)

We can see two types of memory:

  • Reserved: the size that the host OS guarantees to be available (but it is still not allocated and cannot be accessed by the JVM); it's just a promise
  • Committed: already taken, accessible, and allocated by the JVM

In the Thread section, we can spot the same number for reserved and committed memory, which is very close to the number of threads * 1 MB. The reason is that the JVM aggressively allocates the maximum available memory for the thread stacks from the very beginning.

Thread Memory Consumption on Java 11

$ java -XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=summary \
-XX:+PrintNMTStatistics -version
openjdk version "11.0.2" 2019-01-15
OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.2+9)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.2+9, mixed mode)
Native Memory Tracking:
Total: reserved=6643041KB, committed=397465KB
-   Java Heap (reserved=5079040KB, committed=317440KB)
    (mmap: reserved=5079040KB, committed=317440KB) 
-   Class (reserved=1056864KB, committed=4576KB)
    (classes #426)
    (  instance classes #364, array classes #62)
    (malloc=96KB #455) 
    (mmap: reserved=1056768KB, committed=4480KB) 
    (  Metadata:   )
    (    reserved=8192KB, committed=4096KB)
    (    used=2849KB)
    (    free=1247KB)
    (    waste=0KB =0,00%)
    (  Class space:)
    (    reserved=1048576KB, committed=384KB)
    (    used=270KB)
    (    free=114KB)
    (    waste=0KB =0,00%)
-   Thread (reserved=15461KB, committed=613KB)
    (thread #15)
    (stack: reserved=15392KB, committed=544KB)
    (malloc=52KB #84) 
    (arena=18KB #28)

You may notice that we save a lot of memory just by using Java 11, which no longer aggressively allocates up to the reserved memory at the time of thread creation. Of course, this was just the java -version command, but if you try it out with your application, you will definitely notice a big improvement.

 

Java Streams Overview

For a long time, I had a gap in my knowledge of Java streams. On a basic level, I could use them, but I did not have a deep understanding of them. So, I decided to do a quick overview of Java streams.

In this article, I will cover everything from the fundamentals to chaining and clean-up.

Hope this helps you broaden your knowledge of Java streams.

What Are Streams?

A stream is an ordered sequence of data that…

  • Provides a common I/O model
  • Abstracts details from an underlying source or destination

Whether you use streams to take data from memory, storage, or your network, it will hide the implementation details from you. The details are abstracted away, so in every situation, you can look at it as an ordered sequence of data.

  • Stream types are uni-directional

This means that when you create an instance of a Java stream, you decide whether you would like to write to it or read from it. You can't do both at the same time on a single stream.

Read / Write

We can divide the streams into two categories:

  • Byte streams – Interact with the data as binary
  • Text streams – Interact with the data as Unicode characters

The general interaction is the same for both Java stream types.

Reading With Streams

As we mentioned, each stream is used either to read from or write to.

Firstly, let us see how we can read data from Java streams.


The base class for reading binary data in Java is the InputStream class, and the base class for reading text data is called the Reader class.

Both classes have almost the same two methods:

  •  int read()
  •  int read(byte/char[] buff)

Notice that in both scenarios, they return an integer value. These are interpreted values: an integer is a 32-bit container, so it works in both cases.

The difference between the two:

  • The InputStream works with bytes, which are 8-bits.
  • The Reader works with unicode characters, which are 16-bits.

Read Bytes With InputStream

InputStream input = // create input stream
int result;
// read() indicates end-of-stream with a return value of -1
while ((result = input.read()) >= 0) {
    byte byteVal = (byte) result;
    // do something with byteVal
}

Read Text with Reader

Reader reader = // create reader
int result;
// read() indicates end-of-stream with a return value of -1
while ((result = reader.read()) >= 0) {
    char charVal = (char) result;
    // do something with charVal
}

Note that if you would like to retrieve the value, you simply need to cast the result to the appropriate type, in this case byte or char.

Writing With Streams

To write data, there are two base classes similar to the read streams.

  •  OutputStream (for bytes)
  •  Writer (for text)
Writing with Java streams is more straightforward than reading from them. Both classes have a few write methods with a void return type.

Writing Bytes With OutputStream

To write with OutputStream, you can pass a single byte to the write method, or you can pass a byte array as well.

OutputStream output = // create output stream
byte byteVal = 100;
output.write(byteVal);

byte[] byteBuff = {0, 10, 20};
output.write(byteBuff);

Writing Characters With Writer

To write with the Writer class, you can pass a single character, a character array, or a String to its write method.

Writer writer = // create writer
char charVal = 'c';
writer.write(charVal);

char[] charArray = {'c', 'h', 'a', 'r'};
writer.write(charArray);

String stringVal = "String";
writer.write(stringVal);

As you can see, you need much less code to write than to read.

Common Java Stream Classes

Above, I wrote about the base stream classes. Now, let us go a bit deeper and talk about the different implementations.

Common Input/OutputStream Derived Classes

  •  ByteArrayInputStream /  ByteArrayOutputStream – Enables us to create a stream over a byte array
  •  PipedInputStream / PipedOutputStream – This is much like the producer-consumer concept. A piped output stream can be connected to a piped input stream to create a communications pipe: the piped output stream is the sending end of the pipe. Typically, data is written to a PipedOutputStream object by one thread, and data is read from the connected PipedInputStream by some other thread (see the sketch after this list).
  •  FileInputStream /  FileOutputStream – These allow us to create streams over files.
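A minimal sketch of the piped pair described above: a writer thread pushes bytes into the pipe while the current thread reads them.

void pipeDemo() throws IOException {
    PipedInputStream in = new PipedInputStream();
    PipedOutputStream out = new PipedOutputStream(in); // connect the two ends

    new Thread(() -> {
        try {
            out.write(new byte[]{10, 20, 30});
            out.close(); // closing signals end-of-stream to the reader
        } catch (IOException e) {
            e.printStackTrace();
        }
    }).start();

    int value;
    while ((value = in.read()) >= 0) { // -1 once the writer has closed the pipe
        System.out.println(value);
    }
    in.close();
}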

Common Reader/Writer Derived Classes

Above are examples of Reader/Writer stream implementations.
  •  CharArrayReader / Writer – Allow creating streams over character arrays
  •  StringReader / Writer – Allow creating streams over Strings
  •  PipedReader / Writer – Allow creating streams in a producer/consumer relationship over text, similar to the piped byte streams
  •  InputStreamReader / OutputStreamWriter – Allow us to create a reader or writer over an InputStream or OutputStream
  •  FileReader / Writer – These are derived from the classes mentioned above; they allow us to create streams over text files.

Stream Errors and Clean-Up

So far, we have looked at the general features of streams, but we have not considered all the realities of working with them.

Stream Realities


Let us see the two main groups here.

Clean-Up

Problems

  • Streams are backed by physical storage, which often exists outside the Java runtime, like files or network connections.
  • Hence, Java may not reliably clean them up, so we need to do our own reliable clean-up: we need to close streams when we are done with them.

Solutions

  • Streams implement the Closeable interface, which declares a single close method, so it is our responsibility to call it.

Let us see a simple solution:

Reader reader = null;
try {
    reader = // create reader;
    // do something with reader
} catch (IOException e) {
    // handle exception
} finally {
    if (reader != null)
        reader.close(); // note: close() itself can throw IOException
}

The problem with the above example is that you always need to write it out. We use streams frequently, so closing them should happen automatically. Let us see how we can achieve that.

Automating Clean-Up

  •  AutoCloseable interface
    • One method: close
    • The base interface of the Closeable interface, so every stream supports it
    • Provides support for try-with-resources

Try-With-Resources

  • Automates the clean-up of one or more resources
    • A "resource" is any type that implements AutoCloseable
  • Syntax is similar to the traditional try statement
  • Optionally includes catch block(s)
    • They handle exceptions from the try body
    • They handle exceptions from the close method call

Working With Try-With-Resources

Here is a simple example of how we can use the automatic closing of streams with a try-with-resources block.

I will use it with a FileInputStream, one of the file stream classes mentioned above.

try (FileInputStream input = new FileInputStream("file1.txt")) {
    int data = input.read();
    while (data != -1) {
        System.out.print((char) data);
        data = input.read();
    }
}

With the above approach, you do not need to do any further work to close your streams.

Summary

In this article, we talked about the fundamentals of Java streams: what they are, how they work, and how you can use them. In my next article, I will dig a bit deeper and write about more advanced stream topics like chaining, buffered streams, and how to use file systems. So stay tuned!

See more about streams in the official Java documentation here.

RESTful API Design Principle: Deciding Levels of Granularity

Granularity is an essential principle of REST API design. As we understand it, business functions divided into many small actions are fine-grained, and business functions divided into large operations are coarse-grained.

However, discussions about what level of granularity APIs need may vary; we will get different suggestions and even end up in debates. Regardless, it's best to decide based upon business functions and their use cases, as granularity decisions will undoubtedly vary case by case.

This article discusses a few points on how API designers would need to choose their RESTful service granularity levels.


Coarse-Grained and Fine-Grained APIs

In some cases, calls across the network may be expensive, so to minimize them, coarse-grained APIs may be the best fit, as each request from the client forces a lot of work at the server side; with fine-grained APIs, many calls are required to do the same amount of work at the client side.

Example: Consider a service that returns customer orders in a single call. In the fine-grained case, it returns only the customer IDs, and for each customer ID, the client needs to make an additional request to get details, so the client must make n+1 calls. Those round trips may be expensive in terms of performance and response times over the network.

In a few other cases, APIs should be designed at the lowest practical level of granularity, because combining them is possible and allowed in ways that suit customer needs.

Example: An electronic form submission may need to collect addresses as well as, say, tax information. In this case, there are two functions: one is the collection of the applicant's whereabouts, and another is the collection of tax details. Each task needs to be addressed with a distinct API and requires a separate service, because an address change is logically a different event, not related to tax-time reporting; i.e., why should one need to submit the tax information again for an address change?

Levels of Granularity

The level of granularity should satisfy the specific needs of business functions or use cases. While the goal is to minimize calls across the network for better performance, we must understand the set of operations that API consumers require; that gives a better idea of the "correctly-grained" APIs in our designs.

At times, it may be appropriate for the API design to support both coarse-grained and fine-grained endpoints, giving API developers the flexibility to choose the right APIs for their use cases.

Guidelines

The following points may serve as basic guidelines for readers to decide their APIs' granularity levels in their API modeling.

  • In general, consider that the services may be coarse-grained, and APIs are fine-grained.
  • Maintain a balance between the amount of response data and the number of resources required to provide that data. It will help decide the granularity.
  • The types of performed operations on the data should also be considered as part of the design when defining the granularity.
  • Read requests are normally coarse-grained; returning all the information required to render the page won't hurt as much as two separate API calls in some cases.
  • On the other hand, write requests must be fine-grained. Find out the everyday operations clients need, and provide a specific API for that use case.
  • At times, you should use medium-grained, i.e., neither fine-grained nor coarse-grained. An example could be a design where nested resources go at most two levels deep, as sketched below.
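For instance (hypothetical paths standing in for the sample referenced above), a medium-grained design might expose nested resources at most two levels deep:

GET /customers/{customerId}             // one customer, with summary fields
GET /customers/{customerId}/orders      // orders nested one level under the customer
GET /customers/{customerId}/orders/{id} // a single order, two levels deep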

While the above guidelines may understandably lead to more API deployment units, which can cause annoyances down the line, there are patterns, especially the API Gateway, that bring better orchestration to those numerous APIs. Orchestrating the APIs with optimized endpoints, request collapsing, and much more helps address the granularity challenges.