5 Hidden Secrets in Java

As programming languages grow, it is inevitable that hidden features begin to appear and constructs that were never intended by the founders begin to creep into common usage. Some of these features rear their heads as idioms and become accepted parlance in the language, while others become anti-patterns and are relegated to the dark corners of the language community. In this article, we will take a look at five Java secrets that are often overlooked by the larger population of Java developers (some for good reason). With each description, we will look at the use cases and rationale that brought the feature into existence, along with some examples of when it may be appropriate to use it.

The reader should note that these features are not truly hidden in the language, but they are often unused in daily programming. While some may be very useful at appropriate times, others are almost always a poor idea and are shown in this article to pique the interest of the reader (and possibly give him or her a good laugh). The reader should use his or her judgment when deciding when to use the features described in this article: Just because something can be done does not mean it should be.

1. Annotation Implementation

Since Java Development Kit (JDK) 5, annotations have been an integral part of many Java applications and frameworks. In the vast majority of cases, annotations are applied to language constructs, such as classes, fields, methods, etc., but there is another case in which annotations can be applied: as implementable interfaces. For example, suppose we have the following annotation definition:



public @interface Test {
    String name();
}

Normally, we would apply this annotation to a method, as in the following:

public class MyTestFixture {

    @Test(name = "givenFooWhenBarThenBaz")
    public void givenFooWhenBarThenBaz() {
        // ...
    }
}

We can then process this annotation, as described in Creating Annotations in Java. If we also wanted to create an interface that allows for tests to be created as objects, we would have to create a new interface, naming it something other than Test:

public interface TestInstance {
    public String getName();
}

Then we could instantiate a TestInstance object:

public class FooTestInstance implements TestInstance {

    public String getName() {
        return "Foo";
    }
}

TestInstance myTest = new FooTestInstance();

While our annotation and interface are nearly identical, with very noticeable duplication, there does not appear to be a way to merge these two constructs. Fortunately, looks are deceiving and there is a technique for merging these two constructs: Implement the annotation:

public class FooTest implements Test {

    public String name() {
        return "Foo";
    }

    public Class<? extends Annotation> annotationType() {
        return Test.class;
    }
}

Note that we must implement the annotationType method and return the type of the annotation as well, since this is implicitly part of the Annotation interface. Although in nearly every case implementing an annotation is not a sound design decision (the Java compiler will show a warning when an annotation is implemented), it can be useful in a select few circumstances, such as within annotation-driven frameworks.
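
As a rough sketch of that framework use case (the TestRunner class and its register method below are invented for illustration and are not part of any real framework), code written against the Test annotation type can accept both reflected annotation instances and hand-implemented ones:

public class TestRunner {

    // Accepts any Test, whether obtained reflectively from an annotated method
    // (which would require runtime retention on the annotation) or implemented
    // directly as a class such as FooTest
    public void register(Test test) {
        System.out.println("Registering test: " + test.name());
    }

    public static void main(String... args) {
        new TestRunner().register(new FooTest());
    }
}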

2. Instance Initialization

In Java, as with most object-oriented programming languages, objects are almost exclusively instantiated using a constructor (with some notable exceptions, such as Java object deserialization). Even when we create static factory methods to create objects, we are simply wrapping a call to the constructor of an object to instantiate it. For example:

public class Foo {

    private final String name;

    private Foo(String name) {
        this.name = name;
    }

    public static Foo withName(String name) {
        return new Foo(name);
    }
}

Foo foo = Foo.withName("Bar");

Therefore, when we wish to initialize an object, we consolidate the initialization logic into the constructor of the object. For example, we set the name field of the Foo class within its parameterized constructor. While it may appear to be a sound assumption that all of the initialization logic is found in the constructor or set of constructors for a class, this is not the case in Java. Instead, we can also use instance initialization to execute code when an object is created:

public class Foo {

    {
        System.out.println("Foo:instance 1");
    }

    public Foo() {
        // constructor body runs after the instance initializer
    }
}

Instance initializers are specified by adding initialization logic within a set of braces inside the definition of a class. When the object is instantiated, its instance initializers are called first, followed by its constructors. Note that more than one instance initializer may be specified, in which case each is called in the order it appears within the class definition. Apart from instance initializers, we can also create static initializers, which are executed when the class is initialized by the JVM. To create a static initializer, we simply prefix an initializer with the keyword static:

public class Foo {

    {
        System.out.println("Foo:instance 1");
    }

    static {
        System.out.println("Foo:static 1");
    }

    public Foo() {
        // constructor body runs after the instance initializer
    }
}

When all three initialization techniques (constructors, instance initializers, and static initializers) are present in a class, the static initializers are always executed first (when the class is initialized), in the order they are declared, followed by the instance initializers in the order they are declared, and lastly by the constructors. When a superclass is introduced, the order of execution changes slightly:

1. Static initializers of the superclass, in order of their declaration
2. Static initializers of the subclass, in order of their declaration
3. Instance initializers of the superclass, in order of their declaration
4. Constructor of the superclass
5. Instance initializers of the subclass, in order of their declaration
6. Constructor of the subclass

For example, we can create the following application:

public abstract class Bar {

    private String name;

    static {
        System.out.println("Bar:static 1");
    }

    {
        System.out.println("Bar:instance 1");
    }

    static {
        System.out.println("Bar:static 2");
    }

    public Bar() {
        System.out.println("Bar:constructor()");
    }

    {
        System.out.println("Bar:instance 2");
    }

    public Bar(String name) {
        this.name = name;
        System.out.println("Bar:constructor(String)");
    }
}


public class Foo extends Bar {

    static {
        System.out.println("Foo:static 1");
    }

    {
        System.out.println("Foo:instance 1");
    }

    static {
        System.out.println("Foo:static 2");
    }

    public Foo() {
        System.out.println("Foo:constructor()");
    }

    public Foo(String name) {
        super(name);
        System.out.println("Foo:constructor(String)");
    }

    {
        System.out.println("Foo:instance 2");
    }

    public static void main(String... args) {
        new Foo();
        new Foo("Baz");
    }
}



If we execute this code, we receive the following output:

Bar:static 1
Bar:static 2
Foo:static 1
Foo:static 2
Bar:instance 1
Bar:instance 2
Bar:constructor()
Foo:instance 1
Foo:instance 2
Foo:constructor()
Bar:instance 1
Bar:instance 2
Bar:constructor(String)
Foo:instance 1
Foo:instance 2
Foo:constructor(String)


Note that the static initializers were executed only once, even though two Foo objects were created. While instance and static initializers can be useful, initialization logic is generally best placed in constructors, and methods (or static methods) should be used when complex logic is required to initialize the state of an object.
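
As a contrived sketch of a case where an instance initializer pulls its weight (the AuditedAccount class below is illustrative only, not from the article), the same setup logic runs before the body of whichever constructor is chosen, without each constructor delegating to a shared helper:

import java.util.ArrayList;
import java.util.List;

public class AuditedAccount {

    private final List<String> auditLog = new ArrayList<>();

    // Instance initializer: runs before the body of every constructor
    {
        auditLog.add("account created");
    }

    public AuditedAccount() {
        auditLog.add("opened with default balance");
    }

    public AuditedAccount(double openingBalance) {
        auditLog.add("opened with balance " + openingBalance);
    }

    public List<String> getAuditLog() {
        return auditLog;
    }
}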

3. Double-Brace Initialization

Many programming languages include some syntactic mechanism to quickly and concisely create a list or map (or dictionary) without verbose boilerplate code. For example, C++ includes brace initialization, which allows developers to quickly create a list of enumerated values, or even initialize entire objects if the constructor for the object supports this functionality. Unfortunately, prior to JDK 9, no such feature was included in Java (we will touch on this inclusion shortly). In order to naively create a list of objects, we would do the following:

List<Integer> myInts = new ArrayList<>();
myInts.add(1);
myInts.add(2);
myInts.add(3);

While this accomplishes our goal of creating a new list initialized with three values, it is overly verbose, requiring the developer to repeat the name of the list variable for each addition. In order to shorten this code, we can use double-brace initialization to add the same three elements:

List<Integer> myInts = new ArrayList<>() {{
    add(1);
    add(2);
    add(3);
}};

Double-brace initialization–which earns its name from the set of two open and closed curly braces–is actually a composite of multiple syntactic elements. First, we create an anonymous inner class that extends the ArrayList class. Since ArrayList has no abstract methods, we can create an empty body for the anonymous implementation:

List<Integer> myInts = new ArrayList<>() {};

Using this code, we essentially create an anonymous subclass of ArrayList that behaves exactly like a plain ArrayList. One important difference, however, is that our inner class holds an implicit reference to the containing class (in the form of a captured this variable), since we are creating a non-static inner class. This allows us to write some interesting, if convoluted, logic, such as adding the enclosing instance (captured as Foo.this) to the anonymous, double-brace initialized list:

public class Foo {

    public List<Foo> getListWithMeIncluded() {
        return new ArrayList<Foo>() {{
            add(Foo.this);
        }};
    }

    public static void main(String... args) {
        Foo foo = new Foo();
        List<Foo> fooList = foo.getListWithMeIncluded();
        System.out.println(fooList.contains(foo));   // true
    }
}


If this inner class were statically defined, we would not have access to Foo.this. For example, the following code, which statically creates the named FooArrayList inner class, does not have access to the Foo.this reference and is therefore not compilable:

public class Foo {

    public List<Foo> getListWithMeIncluded() {
        return new FooArrayList();
    }

    private static class FooArrayList extends ArrayList<Foo> {{
        add(Foo.this);   // does not compile: no enclosing instance of Foo is accessible
    }}
}

Resuming the construction of our double-brace initialized ArrayList: once we have created the non-static inner class, we then use instance initialization, as we saw above, to execute the addition of the three initial elements when the anonymous inner class is instantiated. Since anonymous inner classes are immediately instantiated and only one object of the anonymous inner class ever exists, we have essentially created a non-static inner singleton object that adds the three initial elements when it is created. This becomes more obvious if we separate the pair of braces, where one brace clearly constitutes the definition of the anonymous inner class and the other denotes the start of the instance initialization logic:

List<Integer> myInts = new ArrayList<>() {
    {
        add(1);
        add(2);
        add(3);
    }
};

While this trick can be useful, JDK 9 (JEP 269) has supplanted the utility of this trick with a set of static factory methods for List (as well as many of the other collection types). For example, we could have created the List above using these static factory methods, as illustrated in the following listing:

List<Integer> myInts = List.of(1, 2, 3);

This static factory technique is desirable for two main reasons: (1) No anonymous inner class is created and (2) the reduction in boilerplate code (noise) required to create the List. The caveat to creating a List in this manner is that the resulting List is immutable, and therefore cannot be modified once it has been created. In order to create a mutable List with the desired initial elements, we are stuck with either using the naive technique or double-brace initialization.
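
A quick sketch of that caveat (the variable name is illustrative): the list returned by the JDK 9 factory method rejects structural changes after creation:

List<Integer> myInts = List.of(1, 2, 3);

// The factory-created list is immutable, so this line throws
// an UnsupportedOperationException at run time
myInts.add(4);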

Note that the naive initialization, double-brace initialization, and the JDK 9 static factory methods are not just available for List.  They are also available for Set and Map objects, as illustrated in the following snippet:

// Naive initialization
Map<String, Integer> myMap = new HashMap<>();
myMap.put("Foo", 10);
myMap.put("Bar", 15);

// Double-brace initialization
Map<String, Integer> myMap = new HashMap<>() {{
    put("Foo", 10);
    put("Bar", 15);
}};

// Static factory initialization
Map<String, Integer> myMap = Map.of("Foo", 10, "Bar", 15);

It is important to consider the nature of double-brace initialization before deciding to use it. While it can make code more readable, it carries implicit side effects: each use creates an additional anonymous class, and the resulting object keeps a reference to its enclosing instance, which can prevent that instance from being garbage collected.
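
One of those side effects is easy to see directly: every double-brace initialization compiles to its own anonymous subclass, as this quick sketch shows (the enclosing class name in the printed output will vary):

List<Integer> myInts = new ArrayList<>() {{
    add(1);
}};

// Prints something like "class com.example.Main$1" rather than "class java.util.ArrayList"
System.out.println(myInts.getClass());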

4. Executable Comments

Comments are an essential part of almost every program and the main benefit of comments is that they are not executed. This is made even more evident when we comment out a line of code within our program: We want to retain the code in our application but we do not want it to be executed. For example, the following program results in 5 being printed to standard output:

public static void main(String args[]) {
    int value = 5;
    // value = 8;
    System.out.println(value);
}

While it is a fundamental assumption that comments are never executed, it is not completely true. For example, what does the following snippet print to standard output?

public static void main(String args[]) {
    int value = 5;
    // \u000dvalue = 8;
    System.out.println(value);
}

A good guess would be 5 again, but if we run the above code, we see 8 printed to standard output. The reason behind this seeming bug is the Unicode escape \u000d: this escape represents a carriage return, and the Java compiler treats source code as Unicode-formatted text, translating such escapes before anything else. The carriage return pushes the assignment value = 8 onto the line directly following the comment, ensuring that it is executed. This means that the above snippet is effectively equivalent to the following:

public static void main(String args[]) {
    int value = 5;
    //
    value = 8;
    System.out.println(value);
}


Although this appears to be a bug in Java, it is actually a conscious inclusion in the language. The original goal of Java was to create a platform independent language (hence the creation of the Java Virtual Machine, or JVM) and interoperability of the source code is a key aspect of this goal. By allowing Java source code to contain Unicode characters, we can include non-Latin characters in a universal manner. This ensures that code written in one region of the world (that may include non-Latin characters, such as in comments) can be executed in any other. For more information, see Section 3.3 of the Java Language Specification, or JLS.

We can take this to the extreme and even write an entire application in Unicode escapes. For example, what does the following program do (source code adapted from Java: Executing code in comments?!)?

\u0063\u006c\u0061\u0073\u0073\u0020\u0055\u0067\u006c\u0079
\u007b\u0070\u0075\u0062\u006c\u0069\u0063\u0020\u0073\u0074\u0061\u0074\u0069\u0063
\u0076\u006f\u0069\u0064\u0020\u006d\u0061\u0069\u006e\u0028
\u0053\u0074\u0072\u0069\u006e\u0067\u005b\u005d\u0020\u0061\u0072\u0067\u0073\u0029
\u007b\u0053\u0079\u0073\u0074\u0065\u006d\u002e\u006f\u0075\u0074\u002e
\u0070\u0072\u0069\u006e\u0074\u006c\u006e\u0028
\u0022\u0048\u0065\u006c\u006c\u006f\u0020\u0077\u0022\u002b
\u0022\u006f\u0072\u006c\u0064\u0022\u0029\u003b\u007d\u007d

If the above is placed in a file named Ugly.java and executed, it prints Hello world to standard output. If we convert these escaped Unicode characters into American Standard Code for Information Interchange (ASCII) characters, we obtain the following program:


class Ugly
{public static
void main(
String[] args)
{System.out.
println(
"Hello w"+
"orld");}}

Although it is important to know that Unicode characters can be included in Java source code, it is highly suggested that they be avoided unless required (for example, to include non-Latin characters in comments). If they are required, be sure not to include characters, such as carriage return, that change the expected behavior of the source code.

5. Enum Interface Implementation

One of the limitations of enumerations (enums) compared to classes in Java is that enums cannot extend another class or enum. For example, it is not possible to execute the following:

public class Speaker {
    public void speak() {
        // ...
    }
}

// Does not compile: an enum cannot extend a class
public enum Person extends Speaker {
    JOE("Joseph"),
    JIM("James");

    private final String name;

    private Person(String name) {
        this.name = name;
    }
}

We can, however, have our enum implement an interface and provide an implementation for its abstract methods as follows:

public interface Speaker {
    public void speak();
}

public enum Person implements Speaker {
    JOE("Joseph"),
    JIM("James");

    private final String name;

    private Person(String name) {
        this.name = name;
    }

    public void speak() {
        System.out.println("Hi, my name is " + name);
    }
}

We can now also use an instance of Person anywhere a Speaker object is required. What's more, we can also provide an implementation of the abstract methods of an interface on a per-constant basis (called constant-specific methods):

public interface Speaker {
    public void speak();
}

public enum Person implements Speaker {

    JOE("Joseph") {
        public void speak() { System.out.println("Hi, my name is Joseph"); }
    },
    JIM("James") {
        public void speak() { System.out.println("Hey, what's up?"); }
    };

    private final String name;

    private Person(String name) {
        this.name = name;
    }

    public void speak() {
        System.out.println("Hi, my name is " + name);
    }
}

Unlike some of the other secrets in this article, this technique should be encouraged where appropriate. For example, if an enum constant, such as JOE or JIM, can be used in place of an interface type, such as Speaker, the enum that defines the constant should implement the interface type. For more information, see Item 38 (pp. 176-9) of Effective Java, 3rd Edition.
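
For example, because each Person constant is-a Speaker, the constants can be passed to any code written against the interface; a brief usage sketch follows (the Greeter class is illustrative only):

public class Greeter {

    // Accepts any Speaker, including enum constants that implement the interface
    public static void greet(Speaker speaker) {
        speaker.speak();
    }

    public static void main(String... args) {
        greet(Person.JOE);   // prints "Hi, my name is Joseph"
        greet(Person.JIM);   // prints "Hey, what's up?"
    }
}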


In this article, we looked at five hidden secrets in Java, namely: (1) annotations can be implemented, (2) instance initialization can be used to configure an object upon instantiation, (3) double-brace initialization can be used to execute instructions when creating an anonymous inner class, (4) comments can sometimes be executed, and (5) enums can implement interfaces. While some of these features have their appropriate uses, some of them should be avoided (e.g., executable comments). When deciding to use these secrets, be sure to obey the following rule: Just because something can be done does not mean that it should be.

50 Common Java Errors and How to Avoid Them

There are many types of errors that could be encountered while developing Java software, but most are avoidable. We’ve rounded up 50 of the most common Java software errors, complete with code examples and tutorials to help you work around common coding problems.

For more tips and tricks for coding better Java programs, download our Comprehensive Java Developer’s Guide, which is jam-packed with everything you need to up your Java game – from tools to the best websites and blogs, YouTube channels, Twitter influencers, LinkedIn groups, podcasts, must-attend events, and more.

If you’re working with .NET, you should also check out our guide to the 50 most common .NET software errors and how to avoid them. But if your current challenges are Java-related, read on to learn about the most common issues and their workarounds.

Compiler Errors

Compiler error messages are created when the Java software code is run through the compiler. It is important to remember that a compiler may throw many error messages for one error. So fix the first error and recompile. That could solve many problems.

1. “… Expected”

This error occurs when something is missing from the code. Often this is created by a missing semicolon or closing parenthesis.

private static double volume(String solidom, double alturam, double areaBasem, double raiom) {
double vol;
 if (solidom.equalsIgnoreCase("esfera"){
 else {
 if (solidom.equalsIgnoreCase("cilindro") {
 else {
 return vol;

Often this error message does not pinpoint the exact location of the issue. To find it:

  • Make sure all opening parentheses have a corresponding closing parenthesis.
  • Look at the line before the Java code line indicated; the compiler often does not notice this Java software error until further along in the code.
  • Sometimes a character such as an opening parenthesis shouldn't be in the Java code in the first place, so the developer never added a closing parenthesis to balance it.

Check out an example of how a missed parenthesis can create an error (@StackOverflow).

2. “Unclosed String Literal”

The “unclosed string literal” error message is created when a string literal ends without quotation marks, and the message will appear on the same line as the error. (@DreamInCode) A literal is the source code representation of a value.

 public abstract class NFLPlayersReference {
 private static Runningback[] nflplayersreference;
 private static Quarterback[] players;
 private static WideReceiver[] nflplayers;
 public static void main(String args[]){
 Runningback r = new Runningback("Thomlinsion");
 Quarterback q = new Quarterback("Tom Brady");
 WideReceiver w = new WideReceiver("Steve Smith");
 NFLPlayersReference[] NFLPlayersReference;
 Run();// {
 NFLPlayersReference = new NFLPlayersReference [3];
 nflplayersreference[0] = r;
 players[1] = q;
 nflplayers[2] = w;
 for ( int i = 0; i < nflplayersreference.length; i++ ) {
 System.out.println("My name is " + " nflplayersreference[i].getName());
 System.out.println("NFL offensive threats have great running abilities!");
  private static void Run() {
  System.out.println("Not yet implemented");

Commonly, this happens when:

  • The string literal does not end with quote marks. This is easy to correct by closing the string literal with the needed quote mark.
  • The string literal extends beyond a line. Long string literals can be broken into multiple literals and concatenated with a plus sign (“+”).
  • Quote marks that are part of the string literal are not escaped with a backslash (“\”).

Read a discussion of the unclosed string literal Java software error message. (@Quora)

3. “Illegal Start of an Expression”

There are numerous reasons why an “illegal start of an expression” error occurs. It ends up being one of the less-helpful error messages. Some developers say it’s caused by bad code.

Usually, expressions are created to produce a new value or assign a value to a variable. The compiler expects to find an expression and cannot find it because the syntax does not match expectations. (@StackOverflow) It is in these statements that the error can be found.

  public void newShape(String shape) {
  switch (shape) {
  case "Line":
  Shape line = new Line(startX, startY, endX, endY);
  case "Oval":
  Shape oval = new Oval(startX, startY, endX, endY);
  case "Rectangle":
  Shape rectangle = new Rectangle(startX, startY, endX, endY);
  System.out.println("ERROR. Check logic.");

Browse discussions of how to troubleshoot the “illegal start of an expression” error. (@StackOverflow)

4. “Cannot Find Symbol”

This is a very common issue because all identifiers in Java need to be declared before they are used. When the code is being compiled, the compiler does not understand what the identifier means.


There are many reasons you might receive the “cannot find symbol” message:

  • The spelling of the identifier when declared may not be the same as when it is used in the code.
  • The variable was never declared.
  • The variable is not being used in the same scope it was declared.
  • The class was not imported.

Read a thorough discussion of the “cannot find symbol” error and examples of code that create this issue. (@StackOverflow)
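
As a minimal sketch of the most common cause, a misspelled identifier (the names below are invented for illustration):

public class SymbolExample {
    public static void main(String[] args) {
        int total = 0;
        totle = total + 5;   // error: cannot find symbol - "totle" was never declared
        System.out.println(total);
    }
}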

5. “Public Class XXX Should Be in File”

The “public class XXX should be in file” message occurs when the class XXX and the Java program filename do not match. The code will only compile when the class name and the file name match. (@coderanch):

package javaapplication3; 
  public class Robot { 
  int xlocation; 
  int ylocation; 
  String name; 
  static int ccount = 0; 
   public Robot(int xxlocation, int yylocation, String nname) { 
  xlocation = xxlocation; 
  ylocation = yylocation; 
  name = nname; 
  public class JavaApplication1 { 
  public static void main(String[] args) { 
  robot firstRobot = new Robot(34,51,"yossi"); 
  System.out.println("numebr of robots is now " + Robot.ccount); 

To fix this issue:

  • Name the class and file the same.
  • Make sure the case of both names is consistent.

See an example of the “Public class XXX should be in file” error. (@StackOverflow)

6. “Incompatible Types”

“Incompatible types” is an error in logic that occurs when an assignment statement tries to pair a variable with an expression of a different type. It often comes up when the code tries to place a text string into an integer, or vice versa. This is not a Java syntax error. (@StackOverflow)

test.java:78: error: incompatible types
return stringBuilder.toString();


required: int
found: String

1 error

There really isn’t an easy fix when the compiler gives an “incompatible types” message:

  • There are functions that can convert types.
  • The developer may need to change what the code is expected to do.

Check out an example of how trying to assign a string to an integer created the “incompatible types” error. (@StackOverflow)
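
A small sketch of the problem and one possible fix using a conversion function (the variable names are illustrative):

String count = "10";

// int value = count;                  // error: incompatible types: String cannot be converted to int
int value = Integer.parseInt(count);   // convert the text into an int instead
System.out.println(value + 5);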

7. “Invalid Method Declaration; Return Type Required”

This Java software error message means the return type of a method was not explicitly stated in the method signature.

public class Circle {
    private double radius;
    public CircleR(double r) {    // treated as a method: the name does not match the class name
        radius = r;
    }
    public diameter() {           // error: invalid method declaration; return type required
        double d = radius * 2;
        return d;
    }
}

There are a few ways to trigger the “invalid method declaration; return type required” error:

  • Forgetting to state the return type in the method signature.
  • If the method does not return a value, then “void” needs to be stated as the type in the method signature.
  • Constructor names do not need to state a type, but if there is an error in the constructor name, the compiler will treat the constructor as a method without a stated type.

Follow an example of how constructor naming triggered the “invalid method declaration; return type required” issue. (@StackOverflow)

8. “Method <X> in Class <Y> Cannot Be Applied to Given Types”

This Java software error message is one of the more helpful error messages. It explains how the method signature is calling the wrong parameters.

RandomNumbers.java:9: error: method generateNumbers in class RandomNumbers cannot be applied to given types;
 required: int[]
 reason: actual and formal argument lists differ in length

The method called is expecting certain arguments defined in the method’s declaration. Check the method declaration and call carefully to make sure they are compatible.

This discussion illustrates how a Java software error message identifies the incompatibility created by arguments in the method declaration and method call. (@StackOverflow)

9. “Missing Return Statement”

The “missing return statement” message occurs when a method does not have a return statement. Each method that returns a value (a non-void type) must have a statement that literally returns that value so it can be called outside the method.

public String[] OpenFile() throws IOException {
    Map<String, Double> map = new HashMap();
    FileReader fr = new FileReader("money.txt");
    BufferedReader br = new BufferedReader(fr);
    try {
        while (br.ready()) {
            String str = br.readLine();
            String[] list = str.split(" ");
        }
    } catch (IOException e) {
        System.err.println("Error - IOException!");
    }
    // no return statement: the compiler reports "missing return statement"
}

There are a couple reasons why a compiler throws the “missing return statement” message:

  • A return statement was simply omitted by mistake.
  • The method did not return any value but type void was not declared in the method signature.

Check out an example of how to fix the “missing return statement” Java software error. (@StackOverflow)
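
A stripped-down sketch of the fix, assuming the usual java.io and java.util imports: declare what the method returns and actually return it (the method below is illustrative, not the original poster's code):

public String[] openFile() throws IOException {
    List<String> lines = new ArrayList<>();
    try (BufferedReader br = new BufferedReader(new FileReader("money.txt"))) {
        while (br.ready()) {
            lines.add(br.readLine());
        }
    }
    return lines.toArray(new String[0]);   // the return statement the compiler was missing
}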

10. “Possible Loss of Precision”

“Possible loss of precision” occurs when more information is assigned to a variable than it can hold. If this happens, pieces of the information will be thrown out. If this is acceptable, then the code needs to explicitly cast the value to the narrower type.


A “possible loss of precision” error commonly occurs when:

  • Trying to assign a real number to a variable with an integer data type.
  • Trying to assign a double to a variable with an integer data type.

This explanation of Primitive Data Types in Java shows how the data is characterized. (@Oracle)

11. “Reached End of File While Parsing”

This error message usually occurs in Java when the program is missing the closing curly brace (“}”). Sometimes it can be quickly fixed by placing it at the end of the code.

public class mod_MyMod extends BaseMod
 public String Version()
  return "1.2_02";
 public void AddRecipes(CraftingManager recipes)
  recipes.addRecipe(new ItemStack(Item.diamond), new Object[] {
  "#", Character.valueOf('#'), Block.dirt

The above code results in the following error:

java:11: reached end of file while parsing }

Coding utilities and proper code indenting can make it easier to find these unbalanced braces.

This example shows how missing braces can create the “reached end of file while parsing” error message. (@StackOverflow)

12. “Unreachable Statement”

“Unreachable statement” occurs when a statement is written in a place that prevents it from being executed. Usually, this is after a break or return statement.

return;       // or break; inside a loop
  ... // unreachable statement
int i = 1;
if (i == 1) ...
else ... // dead code

Often simply moving the return statement will fix the error. Read the discussion of how to fix unreachable statement Java software error. (@StackOverflow)

13. “Variable <X> Might Not Have Been Initialized”

This occurs when a local variable declared within a method has not been initialized. It can occur when a variable without an initial value is part of an if statement.

int x;
if (condition) {
    x = 5;
}
System.out.println(x); // error: variable x might not have been initialized

Read this discussion of how to avoid triggering the “variable <X> might not have been initialized” error. (@reddit)

14. “Operator … Cannot be Applied to <X>”

This issue occurs when operators are applied to types for which they are not defined.

operator < cannot be applied to java.lang.Object,java.lang.Object

This often happens when the Java code tries to use a type string in a calculation. To fix it, the string needs to be converted to an integer or float.

Read this example of how non-numeric types were causing a Java software error warning that an operator cannot be applied to a type. (@StackOverflow)

15. “Inconvertible Types”

The “inconvertible types” error occurs when the Java code tries to perform an illegal conversion.

TypeInvocationConversionTest.java:12: inconvertible types
 found : java.util.ArrayList<java.lang.Class<? extends TypeInvocationConversionTest.Interface1>>
 required: java.util.ArrayList<java.lang.Class<?>>
  lessRestrictiveClassList = (ArrayList<Class<?>>) classList;

For example, booleans cannot be converted to an integer.

Read this discussion about finding ways to convert inconvertible types in Java software. (@StackOverflow)

16. “Missing Return Value”

You’ll get the “missing return value” message when the return statement includes an incorrect type. For example, the following code:

public class SavingsAcc2 {
    private double balance;
    private double interest;

    public SavingsAcc2() {
        balance = 0.0;
        interest = 6.17;
    }

    public SavingsAcc2(double initBalance, double interested) {
        balance = initBalance;
        interest = interested;
    }

    public SavingsAcc2 deposit(double amount) {
        balance = balance + amount;
        return;   // missing return value
    }

    public SavingsAcc2 withdraw(double amount) {
        balance = balance - amount;
        return;   // missing return value
    }

    public SavingsAcc2 addInterest(double interest) {
        balance = balance * (interest / 100) + balance;
        return;   // missing return value
    }

    public double getBalance() {
        return balance;
    }
}

Returns the following error:

SavingsAcc2.java:29: missing return value 
SavingsAcc2.java:35: missing return value 
SavingsAcc2.java:41: missing return value 
3 errors

Usually, there is a return statement that doesn’t return anything.

Read this discussion about how to avoid the “missing return value” Java software error message. (@coderanch)

17. “Cannot Return a Value From Method Whose Result Type Is Void”

This Java error occurs when a void method tries to return any value, such as in the following example:

public static void move() {
    System.out.println("What do you want to do?");
    Scanner scan = new Scanner(System.in);
    int userMove = scan.nextInt();
    return userMove;   // error: cannot return a value from a method whose result type is void
}

public static void usersMove(String playerName, int gesture) {
    int userMove = move();
    if (userMove == -1) {
        // ...
    }
}

Often this is fixed by changing the method signature to match the type of the value in the return statement. In this case, instances of void can be changed to int:

public static int move() {
    System.out.println("What do you want to do?");
    Scanner scan = new Scanner(System.in);
    int userMove = scan.nextInt();
    return userMove;
}

Read this discussion about how to fix the “cannot return a value from method whose result type is void” error. (@StackOverflow)

18. “Non-Static Variable … Cannot Be Referenced From a Static Context”

This error occurs when the code tries to access a non-static variable from a static context, such as a static method (@javinpaul):

public class StaticTest {
    private int count = 0;
    public static void main(String args[]) throws IOException {
        count++;   // compiler error: non-static variable count cannot be referenced from a static context
    }
}

To fix the “non-static variable … cannot be referenced from a static context” error, two things can be done:

  • The variable can be declared static in the signature.
  • The code can create an instance of a non-static object in the static method.

Read this tutorial that explains the difference between static and non-static variables. (@sitesbay)
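
A short sketch of the second fix, creating an instance inside the static method (the class name is invented):

public class StaticTestFixed {
    private int count = 0;

    public static void main(String[] args) {
        // Access the non-static variable through an instance of the class
        StaticTestFixed instance = new StaticTestFixed();
        instance.count++;
        System.out.println(instance.count);
    }
}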

19. “Non-Static Method … Cannot Be Referenced From a Static Context”

This issue occurs when the Java code tries to call a non-static method from a static context. For example, the following code:

class Sample {
    private int age;
    public void setAge(int a) {
        age = a;
    }
    public int getAge() {
        return age;
    }
    public static void main(String args[]) {
        System.out.println("Age is:" + getAge());   // error: non-static method referenced from a static context
    }
}

Would return this error:

Exception in thread "main" java.lang.Error: Unresolved compilation problem:
Cannot make a static reference to the non-static method getAge() from the type Sample

To call a non-static method from a static method, declare an instance of the class and call the method on that instance.

Read this explanation of the difference between non-static methods and static methods.
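
Applied to the example above, a sketch of the fix looks like this (only the main method changes; the argument value is arbitrary):

public static void main(String args[]) {
    // Create an instance and call the non-static methods on it
    Sample sample = new Sample();
    sample.setAge(25);
    System.out.println("Age is:" + sample.getAge());
}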

20. “(array) <X> Not Initialized”

You’ll get the “(array) <X> not initialized” message when an array has been declared but not initialized. Arrays are fixed in length so each array needs to be initialized with the desired length.

The following code is acceptable:

AClass[] array = {object1, object2};

As is:

AClass[] array = new AClass[2];
array[0] = object1;
array[1] = object2;

But not:

AClass[] array;
array = {object1, object2};   // not allowed: an array initializer can only be used in a declaration

Read this discussion of how to initialize arrays in Java software. (@StackOverflow)

21. “ArrayIndexOutOfBoundsException”

This is a runtime error message that occurs when the code attempts to access an array index that is outside the bounds of the array. The following code would trigger this exception:

String[] name = {"Tom", "Dick", "Harry"};   // example values
for (int i = 0; i <= name.length; i++) {    // "<=" goes one index past the end of the array
    System.out.print(name[i] + '\n');
}

Here’s another example (@DukeU):

int[] list = new int[5];
list[5] = 33; // illegal index, maximum index is 4

Array indexes start at zero and end at one less than the length of the array. Often it is fixed by using “<” instead of “<=” when defining the limits of the array index.

Check out this example of how an index triggered the “ArrayIndexOutOfBoundsException” Java software error message. (@StackOverflow)
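
The corrected loop from the first example, sketched with "<" so the index never goes past the end (array contents are placeholders):

String[] name = {"Tom", "Dick", "Harry"};
for (int i = 0; i < name.length; i++) {   // "<" keeps i between 0 and name.length - 1
    System.out.print(name[i] + '\n');
}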

22. “StringIndexOutOfBoundsException”

This is an issue that occurs when the code attempts to access a part of the string that is not within the bounds of the string. Usually, this happens when the code requests a substring or character at an index beyond the length of the string. Here's an example (@javacodegeeks):

public class StringCharAtExample {
    public static void main(String[] args) {
        String str = "Java Code Geeks!";
        System.out.println("Length: " + str.length());
        // The following statement throws an exception, because
        // the requested index is invalid.
        char ch = str.charAt(50);
    }
}

Like array indexes, string indexes start at zero. When indexing a string, the last character is at one less than the length of the string. The “StringIndexOutOfBoundsException” Java software error message usually means the index is trying to access characters that aren’t there.

Here’s an example that illustrates how the “StringIndexOutOfBoundsException” can occur and be fixed. (@StackOverflow)

23. “NullPointerException”

A “NullPointerException” will occur when the program tries to use an object reference that does not have a value assigned to it (@geeksforgeeks).

// A Java program to demonstrate that invoking a method
// on null causes NullPointerException
import java.io.*;

class GFG {
    public static void main(String[] args) {
        // Initializing String variable with null value
        String ptr = null;

        // Checking if ptr.equals null or works fine.
        try {
            // This line of code throws NullPointerException
            // because ptr is null
            if (ptr.equals("gfg"))
                System.out.print("Not Same");
        } catch (NullPointerException e) {
            System.out.print("NullPointerException Caught");
        }
    }
}

A Java program often raises this exception when:

  • A statement references an object with a null value.
  • Trying to access a class that is defined but isn’t assigned a reference.

Here’s discussion of when developers may encounter the “NullPointerException” and how to handle it. (@StackOverflow)

24. “NoClassDefFoundError”

The “NoClassDefFoundError” will occur when the interpreter cannot find the file containing a class with the main method. Here’s an example from DZone (@DZone):

If you compile this program:

class A {
    // some code
}

public class B {
    public static void main(String[] args) {
        A a = new A();
    }
}

Two .class files are generated: A.class and B.class. Removing the A.class file and running the B.class file, you’ll get the NoClassDefFoundError:

Exception in thread "main" java.lang.NoClassDefFoundError: A
 at MainClass.main(MainClass.java:10)
 Caused by: java.lang.ClassNotFoundException: A
 at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

This can happen if:

  • The file is not in the right directory.
  • The name of the class does not match the name of the file (without the file extension); the names are case sensitive.

Read this discussion of why “NoClassDefFoundError” occurs when running Java software. (@StackOverflow)

25. “NoSuchMethodFoundError”

This error message will occur when the Java software tries to call a method of a class and the method no longer has a definition (@myUND):

Error: Could not find or load main class wiki.java

Often the “NoSuchMethodFoundError” Java software error occurs when there is a typo in the declaration.

Read this tutorial to learn how to avoid the error message NoSuchMethodFoundError.” (@javacodegeeks)

26. “NoSuchProviderException”

“NoSuchProviderException” occurs when a security provider is requested that is not available (@alvinalexander):


When trying to find why “NoSuchProviderException” occurs, check:

  • The JRE configuration.
  • The Java home is set in the configuration.
  • Which Java environment is used.
  • The security provider entry.

Read this discussion of what causes “NoSuchProviderException” when Java software is run. (@StackOverflow)

27. AccessControlException

AccessControlException indicates that requested access to system resources such as a file system or network is denied, as in this example from JBossDeveloper (@jbossdeveloper):

ERROR Could not register mbeans java.security.
AccessControlException: WFSM000001: Permission check failed (permission "("javax.management.MBeanPermission" "org.apache.logging.log4j.core.jmx.LoggerContextAdmin#-
[org.apache.logging.log4j2:type=51634f]" "registerMBean")" in code source "(vfs:/C:/wildfly-10.0.0.Final/standalone/deployments/mySampleSecurityApp.war/WEB-INF/lib/log4j-core-2.5.jar )" of "null")

Read this discussion of a workaround used to get past an “AccessControlException” error. (@github)

28. “ArrayStoreException”

An “ArrayStoreException” occurs when the rules of casting elements in Java arrays are broken. Arrays are very careful about what can go into them. (@Roedyg) For instance, this example from JavaScan.com (@java_scan) illustrates the problem:

/* ............... START ............... */
public class JavaArrayStoreException {
    public static void main(String... args) {
        Object[] val = new Integer[4];
        val[0] = 5.8;   // storing a Double in an Integer[] throws ArrayStoreException
    }
}
/* ............... END ............... */

Results in the following output:

Exception in thread "main" java.lang.ArrayStoreException: java.lang.Double
at ExceptionHandling.JavaArrayStoreException.main(JavaArrayStoreException.java:7)

When an array is initialized, the sorts of objects allowed into the array need to be declared. Then each array element needs to be of the same type of object.

Read this discussion of how to solve for the “ArrayStoreException.” (@StackOverflow)

29. “Bad Magic Number”

This Java software error message means something may be wrong with the class definition files on the network. Here’s an example from The Server Side (@TSS_dotcom):

Java(TM) Plug-in: Version 1.3.1_01
Using JRE version 1.3.1_01 Java HotSpot(TM) Client VM
User home directory = C:\Documents and Settings\Ankur
Proxy Configuration: Manual Configuration
java.lang.ClassFormatError: SalesCalculatorAppletBeanInfo (Bad magic number)
at java.lang.ClassLoader.defineClass0(Native Method)
at java.lang.ClassLoader.defineClass(Unknown Source) 
at java.security.SecureClassLoader.defineClass(Unknown Source)
at sun.applet.AppletClassLoader.findClass(Unknown Source)
at sun.plugin.security.PluginClassLoader.access$201(Unknown Source)
at sun.plugin.security.PluginClassLoader$1.run(Unknown Source) 
at java.security.AccessController.doPrivileged(Native Method)
at sun.plugin.security.PluginClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.applet.AppletClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at java.beans.Introspector.instantiate(Unknown Source)
at java.beans.Introspector.findInformant(Unknown Source)
at java.beans.Introspector.(Unknown Source)
at java.beans.Introspector.getBeanInfo(Unknown Source)
at sun.beans.ole.OleBeanInfo.(Unknown Source)
at sun.beans.ole.StubInformation.getStub(Unknown Source)
at sun.plugin.ocx.TypeLibManager$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.plugin.ocx.TypeLibManager.getTypeLib(Unknown Source)
at sun.plugin.ocx.TypeLibManager.getTypeLib(Unknown Source)
at sun.plugin.ocx.ActiveXAppletViewer.statusNotification(Native Method)
at sun.plugin.ocx.ActiveXAppletViewer.notifyStatus(Unknown Source)
at sun.plugin.ocx.ActiveXAppletViewer.showAppletStatus(Unknown Source)
at sun.applet.AppletPanel.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

The “bad magic number” error message could happen when:

  • The first four bytes of the class file are not the hexadecimal number CAFEBABE.
  • The class file was uploaded in ASCII mode instead of binary mode.
  • The Java program was run before it was compiled.

Read this discussion of how to find the reason for a “bad magic number.” (@coderanch)

30. “Broken Pipe”

This error message indicates that a data stream from a file or network socket has stopped working or has been closed from the other end (@ExpertsExchange).

Exception in thread "main" java.net.SocketException: Broken pipe
      at java.net.SocketOutputStream.socketWrite0(Native Method)
      at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
      at java.net.SocketOutputStream.write(SocketOutputStream.java:115)
      at java.io.DataOutputStream.write

The causes of a broken pipe often include:

  • Running out of disk scratch space.
  • RAM may be clogged.
  • The datastream may be corrupt.
  • The process reading the pipe might have been closed.

Read this discussion of what is the Java error “broken pipe.” (@StackOverflow)

31. “Could Not Create Java Virtual Machine”

This Java error message usually occurs when the code tries to invoke Java with the wrong arguments (@ghacksnews):

Error: Could not create the Java Virtual Machine
Error: A fatal exception has occurred. Program will exit.

It is often caused by an invalid JVM argument in the launch command or by requesting more memory than can be allocated.

Read this discussion of how to fix the Java software error “Could not create Java Virtual Machine.” (@StackOverflow)

32. “class file contains wrong class”

The “class file contains wrong class” issue occurs when the Java code tries to find the class file in the wrong directory, resulting in an error message similar to the following:

MyTest.java:10: cannot access MyStruct 
bad class file: D:\Java\test\MyStruct.java 
file does not contain class MyStruct 
Please remove or make sure it appears in the correct subdirectory of the classpath. 
MyStruct ms = new MyStruct(); ^

To fix this error, these tips could help:

  • Make sure the name of the source file and the name of the class match — including case.
  • Check if the package statement is correct or missing.
  • Make sure the source file is in the right directory.

Read this discussion of how to fix a “class file contains wrong class” error. (@StackOverflow)

33. “ClassCastException”

The “ClassCastException” message indicates the Java code is trying to cast an object to the wrong class. In this example from Java Concept of the Day, running the following program:

package com;

class A {
    int i = 10;
}

class B extends A {
    int j = 20;
}

class C extends B {
    int k = 30;
}

public class ClassCastExceptionDemo {
    public static void main(String[] args) {
        A a = new B();   // B type is auto up casted to A type
        B b = (B) a;     // A type is explicitly down casted to B type.
        C c = (C) b;     // Here, you will get class cast exception
    }
}

Results in this error:

Exception in thread "main" java.lang.ClassCastException: com.B cannot be cast to com.C
at com.ClassCastExceptionDemo.main(ClassCastExceptionDemo.java:23)

The Java code will create a hierarchy of classes and subclasses. To avoid the “ClassCastException” error, make sure the new type belongs to the right class or one of its parent classes. If Generics are used, these errors can be caught when the code is compiled.

Read this tutorial on how to fix “ClassCastException” Java software errors. (@java_concept)

34. “ClassFormatError”

The “ClassFormatError” message indicates a linkage error and occurs when a class file cannot be read or interpreted as a class file.

Caused by: java.lang.ClassFormatError: Absent Code attribute in method that is
        not native or abstract in class file javax/persistence/GenerationType
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(Unknown Source)
at java.lang.ClassLoader.defineClass(Unknown Source)
at java.security.SecureClassLoader.defineClass(Unknown Source)
at java.net.URLClassLoader.defineClass(Unknown Source)
at java.net.URLClassLoader.access$000(Unknown Source)
at java.net.URLClassLoader$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)

There are several reasons why a “ClassFormatError” can occur:

  • The class file was uploaded in ASCII mode instead of binary mode.
  • The web server must send class files as binary, not ASCII.
  • There could be a classpath error that prevents the code from finding the class file.
  • If the class is loaded twice, the second time will cause the exception to be thrown.
  • An old version of Java runtime is being used.

Read this discussion about what causes the “ClassFormatError” in Java. (@StackOverflow)

35. “ClassNotFoundException”

“ClassNotFoundException” only occurs at run time — meaning a class that was there during compilation is missing at run time. This is a linkage error.


Much like the “NoClassDefFoundError,” this issue can occur if:

  • The file is not in the right directory.
  • The name of the class does not match the name of the file (without the file extension); the names are case sensitive.

Read this discussion of what causes “ClassNotFoundException” for more cases. (@StackOverflow)
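
A minimal sketch of one common trigger, loading a class by name at run time (the class name below deliberately does not exist):

public class ClassNotFoundDemo {
    public static void main(String[] args) {
        try {
            Class.forName("com.example.DoesNotExist");   // not on the classpath at run time
        } catch (ClassNotFoundException e) {
            System.out.println("Class could not be found: " + e.getMessage());
        }
    }
}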

36. “ExceptionInInitializerError”

This Java issue will occur when something goes wrong with a static initialization (@GitHub). When the Java code later uses the class, the “NoClassDefFoundError” error will occur.

  at org.eclipse.mat.hprof.HprofIndexBuilder.fill(HprofIndexBuilder.java:54)
  at org.eclipse.mat.parser.internal.SnapshotFactory.parse(SnapshotFactory.java:193)
  at org.eclipse.mat.parser.internal.SnapshotFactory.openSnapshot(SnapshotFactory.java:106)
  at com.squareup.leakcanary.HeapAnalyzer.openSnapshot(HeapAnalyzer.java:134)
  at com.squareup.leakcanary.HeapAnalyzer.checkForLeak(HeapAnalyzer.java:87)
  at com.squareup.leakcanary.internal.HeapAnalyzerService.onHandleIntent(HeapAnalyzerService.java:56)
  at android.app.IntentService$ServiceHandler.handleMessage(IntentService.java:65)
  at android.os.Handler.dispatchMessage(Handler.java:102)
  at android.os.Looper.loop(Looper.java:145)
  at android.os.HandlerThread.run(HandlerThread.java:61)
Caused by: java.lang.NullPointerException: in == null
  at java.util.Properties.load(Properties.java:246)
  at org.eclipse.mat.util.MessageUtil.(MessageUtil.java:28)
 at org.eclipse.mat.util.MessageUtil.(MessageUtil.java:13)
  ... 10 more

More information is needed to fix the error. Calling getCause() on the error returns the exception that caused it.

Read this discussion about how to track down the cause of the ExceptionInInitializerError. (@StackOverflow)

37. “IllegalBlockSizeException”

An “IllegalBlockSizeException” will occur during decryption when the length of the message is not a multiple of 8 bytes. Here's an example from ProgramCreek.com (@ProgramCreek):

protected byte[] engineWrap(Key key) throws IllegalBlockSizeException, InvalidKeyException {
    try {
        byte[] encoded = key.getEncoded();
        return engineDoFinal(encoded, 0, encoded.length);
    } catch (BadPaddingException e) {
        IllegalBlockSizeException newE = new IllegalBlockSizeException();
        throw newE;
    }
}

The “IllegalBlockSizeException” could be caused by:

  • Different encryption and decryption algorithm options used.
  • The message to be decrypted could be truncated or garbled in transmission.

Read this discussion about how to prevent the IllegalBlockSizeException Java software error message. (@StackOverflow)

38. “BadPaddingException”

A “BadPaddingException” will occur during decryption when the padding that was used to make the message a multiple of 8 bytes cannot be interpreted correctly. Here's an example from Stack Overflow (@StackOverflow):

javax.crypto.BadPaddingException: Given final block not properly padded
at com.sun.crypto.provider.SunJCE_f.b(DashoA13*..)
at com.sun.crypto.provider.SunJCE_f.b(DashoA13*..)
at com.sun.crypto.provider.AESCipher.engineDoFinal(DashoA13*..)
at javax.crypto.Cipher.doFinal(DashoA13*..)

Encrypted data is binary, so don't try to store it in a String; alternatively, the data may not have been padded properly during encryption.

Read this discussion about how to prevent the BadPaddingException. (@StackOverflow)

39. “IncompatibleClassChangeError”

An “IncompatibleClassChangeError” is a form of LinkageError that can occur when a base class changes after the compilation of a child class. This example is from How to Do in Java (@HowToDoInJava):

Exception in thread "main" java.lang.IncompatibleClassChangeError: Implementing class
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(Unknown Source)
at java.security.SecureClassLoader.defineClass(Unknown Source)
at java.net.URLClassLoader.defineClass(Unknown Source)
at java.net.URLClassLoader.access$000(Unknown Source)
at java.net.URLClassLoader$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClassInternal(Unknown Source)
at net.sf.cglib.core.DebuggingClassWriter.toByteArray(DebuggingClassWriter.java:73)
at net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:26)
at net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69)

When the “IncompatibleClassChangeError” occurs, it is possible that:

  • The static on the main method was forgotten.
  • A legal class was used illegally.
  • A class was changed and there are references to it from another class by its old signature. Try deleting all class files and recompiling everything.

Try these steps to resolve the “IncompatibleClassChangeError.” (@javacodegeeks)

40. “FileNotFoundException”

This Java software error message is thrown when a file with the specified pathname does not exist.

@Override
public ParcelFileDescriptor openFile(Uri uri, String mode) throws FileNotFoundException {
    if (uri.toString().startsWith(FILE_PROVIDER_PREFIX)) {
        int m = ParcelFileDescriptor.MODE_READ_ONLY;
        if (mode.equalsIgnoreCase("rw")) m = ParcelFileDescriptor.MODE_READ_WRITE;
        File f = new File(uri.getPath());
        ParcelFileDescriptor pfd = ParcelFileDescriptor.open(f, m);
        return pfd;
    } else {
        throw new FileNotFoundException("Unsupported uri: " + uri.toString());
    }
}

In addition to the file not existing at the specified pathname, this could mean the existing file is inaccessible.

Read this discussion about why the “FileNotFoundException” could be thrown. (@StackOverflow)

41. “EOFException”

An “EOFException” is thrown when an end of file or end of stream has been reached unexpectedly during input. Here’s an example from JavaBeat of an application that throws an EOFException:

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class ExceptionExample {
    public void testMethod1() {
        File file = new File("test.txt");
        DataInputStream dataInputStream = null;
        try {
            dataInputStream = new DataInputStream(new FileInputStream(file));
            while (true) {
                dataInputStream.readInt();   // throws EOFException once the end of the file is reached
            }
        } catch (EOFException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (dataInputStream != null) {
                    dataInputStream.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) {
        ExceptionExample instance1 = new ExceptionExample();
        instance1.testMethod1();
    }
}

Running the program above results in the following exception:

java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at logging.simple.ExceptionExample.testMethod1(ExceptionExample.java:16)
at logging.simple.ExceptionExample.main(ExceptionExample.java:36)

When there is no more data while the class DataInputStream is trying to read data in the stream, “EOFException” will be thrown. It can also occur in the ObjectInputStream and RandomAccessFile classes.

Read this discussion about when the “EOFException” can occur while running Java software. (@StackOverflow)

42. “UnsupportedEncodingException”

This Java software error message is thrown when character encoding is not supported (@Penn).

public UnsupportedEncodingException()

It is possible that the Java Virtual Machine being used doesn’t support a given character set.

Read this discussion of how to handle “UnsupportedEncodingException” while running Java software. (@StackOverflow)
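
A small sketch of how the exception can surface when decoding bytes with a charset name the JVM does not recognize (the charset name below is intentionally bogus):

import java.io.UnsupportedEncodingException;

public class EncodingDemo {
    public static void main(String[] args) {
        byte[] bytes = {72, 105};
        try {
            String text = new String(bytes, "NO-SUCH-ENCODING");   // unknown charset name
            System.out.println(text);
        } catch (UnsupportedEncodingException e) {
            System.out.println("Encoding not supported: " + e.getMessage());
        }
    }
}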

43. “SocketException”

A “SocketException” indicates there is an error creating or accessing a socket (@ProgramCreek).

public void init(String contextName, ContextFactory factory) {
    super.init(contextName, factory);

    String periodStr = getAttribute(PERIOD_PROPERTY);
    if (periodStr != null) {
        int period = 0;
        try {
            period = Integer.parseInt(periodStr);
        } catch (NumberFormatException nfe) {
        }
        if (period <= 0) {
            throw new MetricsException("Invalid period: " + periodStr);
        }
    }

    metricsServers = Util.parse(getAttribute(SERVERS_PROPERTY), DEFAULT_PORT);
    unitsTable = getAttributeTable(UNITS_PROPERTY);
    slopeTable = getAttributeTable(SLOPE_PROPERTY);
    tmaxTable = getAttributeTable(TMAX_PROPERTY);
    dmaxTable = getAttributeTable(DMAX_PROPERTY);

    try {
        datagramSocket = new DatagramSocket();
    } catch (SocketException se) {
        se.printStackTrace();
    }
}

This exception usually is thrown when the maximum connections are reached due to:

  • No more network ports available to the application.
  • The system doesn’t have enough memory to support new connections.

Read this discussion of how to resolve “SocketException” issues while running Java software. (@StackOverflow)

44. “SSLException”

This Java software error message occurs when there is failure in SSL-related operations. The following example is from Atlassian (@Atlassian):

com.sun.jersey.api.client.ClientHandlerException: javax.net.ssl.SSLException: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
  at com.sun.jersey.client.apache.ApacheHttpClientHandler.handle(ApacheHttpClientHandler.java:202)
  at com.sun.jersey.api.client.Client.handle(Client.java:365)
  at com.sun.jersey.api.client.WebResource.handle(WebResource.java:556)
  at com.sun.jersey.api.client.WebResource.get(WebResource.java:178)
  at com.atlassian.plugins.client.service.product.ProductServiceClientImpl.getProductVersionsAfterVersion(ProductServiceClientImpl.java:82)
  at com.atlassian.upm.pac.PacClientImpl.getProductUpgrades(PacClientImpl.java:111)
  at com.atlassian.upm.rest.resources.ProductUpgradesResource.get(ProductUpgradesResource.java:39)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
  at java.lang.reflect.Method.invoke(Unknown Source)
  at com.atlassian.plugins.rest.common.interceptor.impl.DispatchProviderHelper$ResponseOutInvoker$1.invoke(DispatchProviderHelper.java:206)
  at com.atlassian.plugins.rest.common.interceptor.impl.DispatchProviderHelper$1.intercept(DispatchProviderHelper.java:90)
  at com.atlassian.plugins.rest.common.interceptor.impl.DefaultMethodInvocation.invoke(DefaultMethodInvocation.java:61)
 at com.atlassian.plugins.rest.common.expand.interceptor.ExpandInterceptor.intercept(ExpandInterceptor.java:38)
 at com.atlassian.plugins.rest.common.interceptor.impl.DefaultMethodInvocation.invoke(DefaultMethodInvocation.java:61)
  at com.atlassian.plugins.rest.common.interceptor.impl.DispatchProviderHelper.invokeMethodWithInterceptors(DispatchProviderHelper.java:98)
  at com.atlassian.plugins.rest.common.interceptor.impl.DispatchProviderHelper.access$100(DispatchProviderHelper.java:28)
  at com.atlassian.plugins.rest.common.interceptor.impl.DispatchProviderHelper$ResponseOutInvoker._dispatch(DispatchProviderHelper.java:202)
 Caused by: javax.net.ssl.SSLException: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
 Caused by: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
 Caused by: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty

This can happen if:

  • Certificates on the server or client have expired.
  • Server port has been reset to another port.

Read this discussion of what can cause the “SSLException” error in Java software. (@StackOverflow)

45. “MissingResourceException”

A “MissingResourceException” occurs when a resource is missing. If the resource is in the correct classpath, this is usually because a properties file is not configured properly. Here’s an example (@TIBCO):

java.util.MissingResourceException: Can't find bundle for base name localemsgs_en_US, locale en_US
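For illustration, here is a minimal sketch that can produce a similar message, assuming no localemsgs properties file is available on the classpath:

import java.util.Locale;
import java.util.MissingResourceException;
import java.util.ResourceBundle;

public class BundleExample {
    public static void main(String[] args) {
        try {
            // Throws MissingResourceException if no localemsgs*.properties bundle is found
            ResourceBundle.getBundle("localemsgs", Locale.US);
        } catch (MissingResourceException e) {
            System.err.println("Missing bundle: " + e.getMessage());
        }
    }
}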

Read this discussion of how to fix “MissingResourceException” while running Java software.

46. “NoInitialContextException”

A “NoInitialContextException” occurs when the Java application wants to perform a naming operation but can’t create a connection (@TheASF).

[java] Caused by: javax.naming.NoInitialContextException: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial
 [java] at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:645)
 [java] at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:247)
 [java] at javax.naming.InitialContext.getURLOrDefaultInitCtx(InitialContext.java:284)
 [java] at javax.naming.InitialContext.lookup(InitialContext.java:351)
 [java] at org.apache.camel.impl.JndiRegistry.lookup(JndiRegistry.java:51)

This can be a complex problem to solve, but here are some possible causes of the “NoInitialContextException” Java error message:

  • The application may not have the proper credentials to make a connection.
  • The code may not identify the implementation of JNDI needed.
  • The InitialContext class may not be configured with the right properties.

Read this discussion of what “NoInitialContextException” means when running Java software. (@StackOverflow)

47. “NoSuchElementException”

A “NoSuchElementException” happens when an iteration (such as a “for” loop) tries to access the next element when there is none.

import java.util.Enumeration;
import java.util.Hashtable;

public class NoSuchElementExceptionDemo {
    public static void main(String[] args) {
        Hashtable sampleMap = new Hashtable();
        Enumeration enumeration = sampleMap.elements();
        enumeration.nextElement(); // throws java.util.NoSuchElementException because the enumeration is empty
    }
}

Exception in thread "main" java.util.NoSuchElementException: Hashtable Enumerator
  at java.util.Hashtable$EmptyEnumerator.nextElement(Hashtable.java:1084)
  at test.ExceptionTest.main(NoSuchElementExceptionDemo.java:23)

The “NoSuchElementException” can be thrown by these methods:

  • Enumeration::nextElement()
  • NamingEnumeration::next()
  • StringTokenizer::nextElement()
  • Iterator::next()

Read this tutorial of how to fix “NoSuchElementException” in Java software. (@javinpaul)
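The usual defense is to guard each nextElement()/next() call with the corresponding hasMoreElements()/hasNext() check. A minimal sketch of the safe pattern:

import java.util.Enumeration;
import java.util.Hashtable;

public class SafeEnumerationExample {
    public static void main(String[] args) {
        Hashtable<String, String> sampleMap = new Hashtable<>();
        Enumeration<String> enumeration = sampleMap.elements();
        // Guarding with hasMoreElements() avoids the NoSuchElementException on an empty enumeration
        while (enumeration.hasMoreElements()) {
            System.out.println(enumeration.nextElement());
        }
    }
}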

48. “NoSuchFieldError”

This Java software error message is thrown when an application tries to access a field in an object but the specified field no longer exists in that object (@sourceforge).

public NoSuchFieldError()

Usually, this error is caught by the compiler; it occurs at runtime only if the definition of a class has changed incompatibly between compilation and execution.

Read this discussion of how to find what causes the “NoSuchFieldError” when running Java software. (@StackOverflow)

49. “NumberFormatException”

This Java software error message occurs when the application tries to convert a string to a numeric type, but the string is not a valid representation of a number (@alvinalexander).

package com.devdaily.javasamples;

public class ConvertStringToNumber {
    public static void main(String[] args) {
        try {
            String s = "FOOBAR";
            int i = Integer.parseInt(s);
            // this line of code will never be reached
            System.out.println("int value = " + i);
        } catch (NumberFormatException nfe) {
            nfe.printStackTrace();
        }
    }
}

The “NumberFormatException” can be thrown when:

  • The number has leading or trailing spaces.
  • The sign is not at the start of the number.
  • The number contains commas.
  • The locale settings do not recognize it as a valid number.
  • The number is too large to fit in the target numeric type.

Read this discussion of how to avoid “NumberFormatException” when running Java software. (@StackOverflow).
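One common mitigation is a small helper that trims the input and falls back to a default value instead of propagating the exception. A minimal sketch (parseOrDefault is a hypothetical helper, not a standard library method):

public class SafeParseExample {
    // Returns a fallback value instead of propagating NumberFormatException;
    // trimming handles the common leading/trailing whitespace case
    static int parseOrDefault(String input, int fallback) {
        if (input == null) {
            return fallback;
        }
        try {
            return Integer.parseInt(input.trim());
        } catch (NumberFormatException nfe) {
            return fallback;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrDefault(" 42 ", -1));   // 42
        System.out.println(parseOrDefault("FOOBAR", -1)); // -1
    }
}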

50. “TimeoutException”

This Java software error message occurs when a blocking operation times out.

private void queueObject(ComplexDataObject obj) throws TimeoutException, InterruptedException {
    if (!queue.offer(obj, 10, TimeUnit.SECONDS)) {
        TimeoutException ex = new TimeoutException("Timed out waiting for parsed elements to be processed. Aborting.");
        throw ex;
    }
}

Read this discussion about how to handle “TimeoutException” when running Java software. (@StackOverflow).


And that wraps it up! If you’ve followed along the whole way, you should be ready to handle a variety of runtime and compiler errors and exceptions. Feel free to keep both of these articles saved or otherwise bookmarked for quick recall. And for the ultimate Java developer’s toolkit, don’t forget to download The Comprehensive Java Developer’s Guide.

Creating Annotations in Java

Annotations are a powerful part of Java, but most times we tend to be the users rather than the creators of annotations. For example, it is not difficult to find Java source code that includes the @Override annotation processed by the Java compiler, the @Autowired annotation used by the Spring framework, or the @Entity annotation used by the Hibernate framework; but rarely do we see custom annotations. While custom annotations are an often-overlooked aspect of the Java language, they can be a very useful asset in developing readable code and just as importantly, useful in understanding how many common frameworks, such as Spring or Hibernate, succinctly accomplish their goals.

In this article, we will cover the basics of annotations, including what annotations are, how they are useful in larger-than-academic examples, and how to process them. In order to demonstrate how annotations work in practice, we will create a JavaScript Object Notation (JSON) serializer that processes annotated objects and produces a JSON string representing each object. Along the way, we will cover many of the common stumbling blocks of annotations, including the quirks of the Java reflection framework and visibility concerns for annotation consumers. The interested reader can find the source code for the completed JSON serializer on GitHub.

What Are Annotations?

Annotations are decorators that are applied to Java constructs, such as classes, methods, or fields, that associate metadata with the construct. These decorators are benign and do not execute any code in-and-of-themselves, but can be used by runtime frameworks or the compiler to perform certain actions. Stated more formally, the Java Language Specification (JLS), Section 9.7, provides the following definition:

An annotation is a marker which associates information with a program construct, but has no effect at run time.

It is important to note the last clause in this definition: Annotations have no effect on a program at runtime. This is not to say that a framework may not change its behavior based on the presence of an annotation at runtime, but that the inclusion of an annotation does not itself change the runtime behavior of a program. While this may appear to be a nuanced distinction, it is a very important one that must be understood in order to grasp the usefulness of annotations.

For example, adding the @Autowired annotation to an instance field does not in-and-of-itself change the runtime behavior of a program: The compiler simply includes the annotation at runtime, but the annotation does not execute any code or inject any logic that alters the normal behavior of the program (the behavior expected when the annotation is omitted). Once we introduce the Spring framework at runtime, we are able to gain powerful Dependency Injection (DI) functionality when our program is parsed. By including the annotation, we have instructed the Spring framework to inject an appropriate dependency into our field. We will see shortly (when we create our JSON serializer) that the annotation itself does not accomplish this, but rather, the annotation acts as a marker, informing the Spring framework that we desire a dependency to be injected into the annotated field.

Retention and Target

Creating an annotation requires two pieces of information: (1) a retention policy and (2) a target. A retention policy specifies how long, in terms of the program lifecycle, the annotation should be retained. For example, annotations may be retained only at compile time or also at runtime, depending on the retention policy associated with the annotation. As of Java 9, there are three standard retention policies, as summarized below:

  • Source: Annotations are discarded by the compiler.
  • Class: Annotations are recorded in the class file generated by the compiler but are not required to be retained by the Java Virtual Machine (JVM) that processes the class file at runtime.
  • Runtime: Annotations are recorded in the class file by the compiler and retained at runtime by the JVM.

As we will see shortly, the runtime option for annotation retention is one of the most common, as it allows for Java programs to reflectively access the annotation and execute code based on the presence of an annotation, as well as access the data associated with an annotation. Note that an annotation has exactly one associated retention policy.

The target of an annotation specifies which Java constructs an annotation can be applied to. For example, some annotations may be valid for methods only, while others may be valid for both classes and fields. As of Java 9, there are eleven standard annotation targets, as summarized in the following table:

  • Annotation Type: Annotates another annotation.
  • Constructor: Annotates a constructor.
  • Field: Annotates a field, such as an instance variable of a class or an enum constant.
  • Local Variable: Annotates a local variable.
  • Method: Annotates a method of a class.
  • Module: Annotates a module (new in Java 9).
  • Package: Annotates a package.
  • Parameter: Annotates a parameter to a method or constructor.
  • Type: Annotates a type, such as a class, interface, annotation type, or enum declaration.
  • Type Parameter: Annotates a type parameter, such as those used as formal generic parameters.
  • Type Use: Annotates the use of a type, such as when an object of a type is created using the new keyword, when an object is cast to a specified type, when a class implements an interface, or when the type of a throwable object is declared using the throws keyword (for more information, see the Type Annotations and Pluggable Type Systems Oracle tutorial).

For more information on these targets, see Section 9.7.4 of the JLS. It is important to note that one or more targets may be associated with an annotation. For example, if the field and constructor targets are associated with an annotation, then the annotation may be used on either fields or constructors. If, on the other hand, an annotation only has an associated target of method, then applying the annotation to any construct other than a method results in an error during compilation.

Annotation Parameters

Annotations may also have associated parameters. These parameters may be a primitive (such as int or double), String, class, enum, annotation, or an array of any of the five preceding types (see Section 9.6.1 of the JLS). Associating parameters with an annotation allows for an annotation to provide contextual information or can parameterize a processor of an annotation. For example, in our JSON serializer implementation, we will allow for an optional annotation parameter that specifies the name of a field when it is serialized (or use the variable name of the field by default if no name is specified).

How Are Annotations Created?

For our JSON serializer, we will create a field annotation that allows a developer to mark a field to be included when serializing an object. For example, if we create a car class, we can annotate the fields of the car (such as make and model) with our annotation. When we serialize a car object, the resulting JSON will include make and model keys, where the values represent the value of the make and model fields, respectively. For the sake of simplicity, we will assume that this annotation will be used only for fields of type String, ensuring that the value of the field can be directly serialized as a string.

To create such a field annotation, we declare a new annotation using the @interface keyword:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface JsonField {
    public String value() default "";
}

The core of our declaration is the public @interface JsonField, which declares an annotation type with a public modifier, allowing our annotation to be used in any package (assuming the package is properly imported if in another module). The body of the annotation declares a single parameter, named value, with a type of String and a default value of an empty string.

Note that the variable name value has a special meaning: It defines a Single-Element Annotation (Section 9.7.3 of the JLS) and allows users of our annotation to supply a single parameter to the annotation without specifying the name of the parameter. For example, a user can annotate a field using @JsonField("someFieldName") and is not required to write @JsonField(value = "someFieldName"), although the latter may still be used. The default value of an empty string allows the value to be omitted, resulting in value holding an empty string if no value is explicitly specified. For example, if a user declares the above annotation using the form @JsonField, then the value parameter is set to an empty string.
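For illustration, the three usage forms described above look like this (the field names below are placeholders):

public class AnnotationExamples {
    // Single-element shorthand: equivalent to @JsonField(value = "someName")
    @JsonField("someName")
    private String first;

    // Explicit form: legal, but more verbose
    @JsonField(value = "someName")
    private String second;

    // No value supplied: value defaults to the empty string
    @JsonField
    private String third;
}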

The retention policy and target of the annotation declaration are specified using the @Retention and @Target annotations, respectively. The retention policy is specified using the java.lang.annotation.RetentionPolicy enum, which includes constants for each of the three standard retention policies. Likewise, the target is specified using the java.lang.annotation.ElementType enum, which includes constants for each of the eleven standard target types.

In summary, we created a public, single-element annotation named JsonField, which is retained by the JVM during runtime and may only be applied to fields. This annotation has a single parameter, value, of type String with a default value of an empty string. With our annotation created, we can now annotate fields to be serialized.

How Are Annotations Used?

Using an annotation requires only that the annotation is placed before an appropriate construct (any valid target for the annotation). For example, we can create a Car class using the following class declaration:

public class Car {

    @JsonField("manufacturer")
    private final String make;

    @JsonField
    private final String model;

    private final String year;

    public Car(String make, String model, String year) {
        this.make = make;
        this.model = model;
        this.year = year;
    }

    public String getMake() {
        return make;
    }

    public String getModel() {
        return model;
    }

    public String getYear() {
        return year;
    }

    @Override
    public String toString() {
        return year + " " + make + " " + model;
    }
}


This class exercises the two major uses of the @JsonField annotation: (1) with an explicit value and (2) with a default value. We could have also annotated a field using the form @JsonField(value = "someName"), but this style is overly verbose and does not aid in the readability of our code. Therefore, unless the inclusion of an annotation parameter name in a single-element annotation adds to the readability of code, it should be omitted. For annotations with more than one parameter, the name of each parameter is required to differentiate between parameters (unless only one argument is provided, in which case, the argument is mapped to the value parameter if no name is explicitly provided).

Given the above uses of the @JsonField annotation, we would expect that a Car object is serialized into a JSON string of the form {"manufacturer":"someMake", "model":"someModel"} (note that, as we will see later, we will disregard the order of the keys, manufacturer and model, in this JSON string). Before we proceed, it is important to note that adding the @JsonField annotations does not change the runtime behavior of the Car class. If we compile this class, the inclusion of @JsonField annotations does not enhance the behavior of the Car class any more than if we had omitted the annotations. These annotations are simply recorded, along with the value of the value parameter, in the class file for the Car class. Altering the runtime behavior of our system requires that we process these annotations.

How are Annotations Processed?

Processing annotations is accomplished through the Java Reflection Application Programming Interface (API). Sidelining the technical nature of the reflection API for a moment, the reflection API allows us to write code that will inspect the class, methods, fields, etc. of an object. For example, if we create a method that accepts a Car object, we can inspect the class of this object (namely, Car) and discover that this class has three fields: (1) make, (2) model, and (3) year. Furthermore, we can inspect these fields to discover if each is annotated with a specific annotation.

Using this capability, we can iterate through each field of the class associated with the object passed to our method and discover which of these fields are annotated with the @JsonField annotation. If the field is annotated with the @JsonField annotation, we record the name of the field and its value. Once all the fields have been processed, then we can create the JSON string using these field names and values.

Determining the name of the field requires more complex logic than determining the value. If the @JsonField includes a provided value for the value parameter (such as "manufacturer" in the previous @JsonField("manufacturer") use), we will use this provided field name. If the value of the value parameter is an empty string, we know that no field name was explicitly provided (since this is the default value for the value parameter), or else, an empty string was explicitly provided. In either case, we will use the variable name of the field as the field name (for example, model in the private final String model declaration).

Combining this logic into a JsonSerializer class, we can create the following class declaration:

public class JsonSerializer {

    public String serialize(Object object) throws JsonSerializeException {
        try {
            Class<?> objectClass = requireNonNull(object).getClass();
            Map<String, String> jsonElements = new HashMap<>();
            for (Field field : objectClass.getDeclaredFields()) {
                field.setAccessible(true);
                if (field.isAnnotationPresent(JsonField.class)) {
                    jsonElements.put(getSerializedKey(field), (String) field.get(object));
                }
            }
            return toJsonString(jsonElements);
        } catch (IllegalAccessException e) {
            throw new JsonSerializeException(e.getMessage());
        }
    }

    private String toJsonString(Map<String, String> jsonMap) {
        String elementsString = jsonMap.entrySet()
                .stream()
                .map(entry -> "\"" + entry.getKey() + "\":\"" + entry.getValue() + "\"")
                .collect(Collectors.joining(","));
        return "{" + elementsString + "}";
    }

    private static String getSerializedKey(Field field) {
        String annotationValue = field.getAnnotation(JsonField.class).value();
        if (annotationValue.isEmpty()) {
            return field.getName();
        } else {
            return annotationValue;
        }
    }
}

We also create an exception that will be used to denote if an error has occurred while processing the object supplied to our serialize method:

public class JsonSerializeException extends Exception {
    private static final long serialVersionUID = -8845242379503538623L;

    public JsonSerializeException(String message) {
        super(message);
    }
}

Although the JsonSerializer class appears complex, it consists of three main tasks: (1) finding all fields of the supplied class annotated with the @JsonField annotation, (2) recording the field name (or the explicitly provided field name) and value for all fields that include the @JsonField annotation, and (3) converting the recorded field name and value pairs into a JSON string.

The line requireNonNull(object).getClass() simply checks that the supplied object is not null (and throws a NullPointerException if it is) and obtains the Class object associated with the supplied object. We will use this Class object shortly to obtain the fields associated with the class. Next, we create a Map of Strings to Strings, which will be used to store the field name and value pairs.

With our data structures established, we next iterate through each field declared in the class of the supplied object. For each field, we configure the field to suppress Java language access checking when accessing the field. This is a very important step since the fields we annotated are private. In the standard case, we would be unable to access these fields, and attempting to obtain the value of the private field would result in an IllegalAccessException being thrown. In order to access these private fields, we must instruct the reflection API to suppress the standard Java access checking for this field using the setAccessible method. The setAccessible(boolean) documentation defines the meaning of the supplied boolean flag as follows:

A value of true indicates that the reflected object should suppress Java language access checking when it is used. A value of false indicates that the reflected object should enforce Java language access checks.

Note that with the introduction of modules in Java 9, using the setAccessible method requires that the package containing the class whose private fields will be accessed should be declared open in its module definition. For more information, see this explanation by Michał Szewczyk and Accessing Private State of Java 9 Modules by Gunnar Morling.
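As a minimal sketch (the module and package names below are assumptions chosen for illustration), such a module declaration could look like this:

// module-info.java
module com.example.app {
    // "opens" grants deep reflective access (e.g., setAccessible) to this package at run time
    opens com.example.app.model;
}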

After gaining access to the field, we check if the field is annotated with the @JsonField. If it is, we determine the name of the field (either through an explicit name provided in the @JsonField annotation or the default name, which equals the variable name of the field) and record the name and field value in our previously constructed map. Once all fields have been processed, we then convert the map of field names to field values (jsonElements) into a JSON string.

We accomplish this by converting the map into a stream of entries (key-value pairs for each entry in the map), mapping each entry to a string of the form "<fieldName>":"<fieldValue>", where <fieldName> is the key for the entry and <fieldValue> is the value for the entry. Once all entries have been processed, we combine all of these entry strings with a comma. This results in a string of the form "<fieldName1>":"<fieldValue1>","<fieldName2>":"<fieldValue2>",.... Once this combined string has been joined, we surround it with curly braces, creating a valid JSON string.

In order to test this serializer, we can execute the following code:

Car car = new Car("Ford", "F150", "2018");
JsonSerializer serializer = new JsonSerializer();
System.out.println(serializer.serialize(car));

This results in the following output:

{"manufacturer":"Ford","model":"F150"}
As expected, the make and model fields of the Car object have been serialized, using the name of the field (or the explicitly supplied name in the case of the make field) as the key and the value of the field as the value. Note that the order of the JSON elements may be reversed from the output seen above. This occurs because there is no definite ordering for the array of declared fields of a class, as stated in the getDeclaredFields documentation:

The elements in the returned array are not sorted and are not in any particular order.

Due to this limitation, the order of the elements in the JSON string may vary. To make the order of the elements deterministic, we would have to impose an ordering ourselves (such as by sorting the map of field names to field values). Since a JSON object is defined as an unordered set of name-value pairs, as per the JSON standard, imposing an ordering is unnecessary. Note, however, that a test case for the serialize method should pass for either {"model":"F150","manufacturer":"Ford"} or {"manufacturer":"Ford","model":"F150"}.
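If deterministic ordering were nonetheless desired, one simple option (not part of the serializer above) is to sort the keys, for example by using a TreeMap instead of a HashMap. A small standalone sketch:

import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class DeterministicJsonDemo {
    public static void main(String[] args) {
        // TreeMap keeps keys in alphabetical order, so the resulting JSON string is always the same
        Map<String, String> jsonElements = new TreeMap<>();
        jsonElements.put("model", "F150");
        jsonElements.put("manufacturer", "Ford");
        String json = "{" + jsonElements.entrySet().stream()
                .map(e -> "\"" + e.getKey() + "\":\"" + e.getValue() + "\"")
                .collect(Collectors.joining(",")) + "}";
        System.out.println(json); // {"manufacturer":"Ford","model":"F150"}
    }
}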


Java annotations are a very powerful feature in the Java language, but most often, we are the users of standard annotations (such as @Override) or common framework annotations (such as @Autowired), rather than their developers. While annotations should not be used in place of interfaces or other language constructs that properly accomplish a task in an object-oriented manner, they can greatly simplify repetitive logic. For example, rather than creating a toJsonString method within an interface and having all classes that can be serialized implement this interface, we can annotate each serializable field. This takes the repetitive logic of the serialization process (mapping field names to field values) and places it into a single serializer class. It also decouples the serialization logic from the domain logic, removing the clutter of manual serialization from the conciseness of the domain logic.

While custom annotations are not frequently used in most Java applications, knowledge of this feature is a requirement for any intermediate or advanced user of the Java language. Not only will knowledge of this feature enhance the toolbox of a developer, just as importantly, it will aid in the understanding of the common annotations in the most popular Java frameworks.

@RestController vs. @Controller : Spring Framework

Spring MVC Framework and REST

Spring’s annotation-based MVC framework simplifies the process of creating RESTful web services. The key difference between a traditional Spring MVC controller and the RESTful web service controller is the way the HTTP response body is created. While the traditional MVC controller relies on the View technology, the RESTful web service controller simply returns the object and the object data is written directly to the HTTP response as JSON/XML.  For a detailed description of creating RESTful web services using the Spring framework, click here.


Figure 1: Spring MVC traditional workflow

Spring MVC REST Workflow

The following steps describe a typical Spring MVC REST workflow:

  1. The client sends a request to a web service in URI form.
  2. The request is intercepted by the DispatcherServlet, which looks for handler mappings and their types.
    • The Handler Mappings section defined in the application context file tells DispatcherServlet which strategy to use to find controllers based on the incoming request.
    • Spring MVC supports three different types of mapping request URIs to controllers: annotation, name conventions, and explicit mappings.
  3. Requests are processed by the Controller and the response is returned to the DispatcherServlet which then dispatches to the view.

In Figure 1, notice that in the traditional workflow the ModelAndView object is forwarded from the controller to the client. Spring lets you return data directly from the controller, without looking for a view, using the @ResponseBody annotation on a method. Beginning with Version 4.0, this process is simplified even further with the introduction of the @RestController annotation. Each approach is explained below.

Using the @ResponseBody Annotation

When you use the @ResponseBody annotation on a method, Spring converts the return value and writes it to the HTTP response automatically. Each method in the Controller class must be annotated with @ResponseBody.


Figure 2: Spring 3.x MVC RESTful web services workflow

Behind the Scenes

Spring has a list of HttpMessageConverters registered in the background. The responsibility of the HttpMessageConverter is to convert the request body to a specific class and back to the response body again, depending on a predefined mime type. Whenever a request hits a method annotated with @ResponseBody, Spring loops through all registered HttpMessageConverters seeking the first one that fits the given mime type and class, and then uses it for the actual conversion.
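For illustration, in a Java-configuration setup (Spring 5 or later, which differs from the XML configuration used in the walkthrough below), a converter could be registered explicitly along these lines; normally Spring auto-detects the Jackson converter when the library is on the classpath:

import java.util.List;

import org.springframework.context.annotation.Configuration;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.http.converter.json.MappingJackson2HttpMessageConverter;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

    // Registers a JSON converter explicitly; Spring walks this list to find one
    // matching the requested mime type and the return type of the handler method
    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
        converters.add(new MappingJackson2HttpMessageConverter());
    }
}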

Code Example

Let’s walk through @ResponseBody with a simple example.

Project Creation and Setup

  1. Create a Dynamic Web Project with Maven support in your Eclipse or MyEclipse IDE.
  2. Configure Spring support for the project.
    • If you are using Eclipse IDE, you need to download all Spring dependencies and configure your pom.xml to contain those dependencies.
    • In MyEclipse, you only need to install the Spring facet and the rest of the configuration happens automatically.
  3. Create the following Java class named Employee. This class is our POJO.
package com.example.spring.model;

import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "Employee")
public class Employee {
    String name;
    String email;

    public Employee() {
    }

    public String getName() {
        return name;
    }
    public void setName(String name) {
        this.name = name;
    }
    public String getEmail() {
        return email;
    }
    public void setEmail(String email) {
        this.email = email;
    }
}

  4. Then, create the following @Controller class:
package com.example.spring.rest;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;
import com.example.spring.model.Employee;

@Controller
@RequestMapping("/employees")
public class EmployeeController {
    Employee employee = new Employee();

    @RequestMapping(value = "/{name}", method = RequestMethod.GET, produces = "application/json")
    public @ResponseBody Employee getEmployeeInJSON(@PathVariable String name) {
        return employee;
    }

    @RequestMapping(value = "/{name}.xml", method = RequestMethod.GET, produces = "application/xml")
    public @ResponseBody Employee getEmployeeInXML(@PathVariable String name) {
        return employee;
    }
}

Notice the @ResponseBody added to the return value of each of the @RequestMapping methods. After that, it's a two-step process:
  1. Add the <context:component-scan> and <mvc:annotation-driven /> tags to the Spring configuration file.
    • <context:component-scan> activates the annotations and scans the packages to find and register beans within the application context.
    • <mvc:annotation-driven/> adds support for reading and writing JSON/XML if the Jackson/JAXB libraries are on the classpath.
    • For JSON format, add the jackson-databind jar to the project classpath, and for XML, add the jaxb-api-osgi jar.
  2. Deploy and run the application on any server (e.g., Tomcat). If you are using MyEclipse, you can run the project on the embedded Tomcat server.

    JSON: Use the URL http://localhost:8080/SpringRestControllerExample/rest/employees/Bob and the JSON output displays.

    XML: Use the URL http://localhost:8080/SpringRestControllerExample/rest/employees/Bob.xml and the XML output displays.

Using the @RestController Annotation

Spring 4.0 introduced @RestController, a specialized version of the controller: a convenience annotation that does nothing more than combine @Controller and @ResponseBody. By annotating the controller class with the @RestController annotation, you no longer need to add @ResponseBody to all the request mapping methods. The @ResponseBody annotation is active by default. Click here to learn more.

To use @RestController in our example, all we need to do is modify the @Controller to @RestController and remove the @ResponseBody from each method. The resultant class should look like the following:

package com.example.spring.rest;

import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import com.example.spring.model.Employee;

@RestController
@RequestMapping("/employees")
public class EmployeeController {
    Employee employee = new Employee();

    @RequestMapping(value = "/{name}", method = RequestMethod.GET, produces = "application/json")
    public Employee getEmployeeInJSON(@PathVariable String name) {
        return employee;
    }

    @RequestMapping(value = "/{name}.xml", method = RequestMethod.GET, produces = "application/xml")
    public Employee getEmployeeInXML(@PathVariable String name) {
        return employee;
    }
}

Note that we no longer need to add @ResponseBody to the request mapping methods. After making the changes, running the application on the server again results in the same output as before.


As you can see, using @RestController is quite simple and is the preferred method for creating MVC RESTful web services starting from Spring v4.0. I would like to extend a big thank you to my co-author, Swapna Sagi, for all of her help in bringing you this information!

Java 8: A quick introduction to Parallelism and the Spliterator


With the release of Java 8, a number of new language features were introduced [1]. These included lambda functions, streams and completable futures. Colleagues of mine have already reviewed these features in previous articles on this blog, which I recommend reading as part of this topic [2][3]. In this article I will touch on an aspect of the Java 8 release that relates to the push towards exploiting parallelism, in the context of the existing Collections Framework – specifically the new Spliterator interface.

Due to the size and complexity of these topics, it will only be possible to provide a high-level introduction, but it is hoped that this will trigger interest in the reader, to go off and delve further into an area of growing importance and interest.


First things first – what is meant by the term ‘parallelism’? For the purpose of this article, at a high level, it is a term that refers to programming in a way that exploits multi-core CPU units. That is to say, taking a piece of work (a task) and breaking it out into separate units (sub-tasks) that can be processed in parallel, then aggregating the results of all the processed units to complete the original work [4]. This is sufficient for now, although the reader might want to investigate further the differences between parallelism, concurrency and sequential programming.

Figure 1

For this to be successful, the processing on a unit should not have a dependency on any other unit – this processing should be stateless and also should not depend on the result of the processing of any other unit. This is important when it comes to the data that is involved – as parallel computing lends itself to situations where vast amounts of data is to be handled.

This also raises the question of when to go for parallelism – there is an overhead in the added complexity of identifying and breaking out the units of work and then coordinating the parallel processing. There is also the issue of latency in reading the data – the savings from using parallel cores come from keeping each CPU core busy all the time, which requires reading data as and when it is needed, without delay. Before embarking on a parallel processing architecture, some cost-benefit analysis is required to be sure that this is the right approach.

With the introduction of Java 8 Oracle was aiming to make it more attractive and simpler for developers to use Java in parallel programming. The inclusion of features such as lambda functions and the Streams API was to be seen as part of this push [5]. How successful that becomes, time will tell.

This concept of parallelism is behind the existing fork/join framework found in Java, which will be covered in a future article, but for now we will confine our overview to the new Spliterator interface, found in the java.util library.

Collections Framework

The Collections Framework is also found in the java.util library, and provides a range of classes that act as containers to hold objects (technically, references to the objects) and allow useful behaviour such as adding new objects, searching the container for a particular object, and the sorting of such objects – the behaviour available is depending on the type of container used [6].

Within the library there are two different concepts of containers, captured in the core interfaces, that are defined within the framework [7]. One concept is that of a Collection, which is a sequence of individual elements; this is further refined by extending the Collection interface with interfaces for List, Set, Queue or Deque.

Listing 1 – Simple Iterator on an ArrayList

// Create a new Collection type; in this case an ArrayList
Collection<Person> people = new ArrayList<Person>();

// Add some people using Person objects
people.add(new Person("Jane", "Doe", "Ireland"));
people.add(new Person("Joe", "Doe", "England"));
people.add(new Person("John", "Doe", "Scotland"));
people.add(new Person("Julie", "Doe", "Wales"));
people.add(new Person("Jerry", "Doe", "France"));
people.add(new Person("Jim", "Doe", "Italy"));

// simple iterator example
Iterator<Person> peopleIterator = people.iterator();
while (peopleIterator.hasNext()) {
    Person person = peopleIterator.next();
    System.out.println("Hello " + person.getFirst_name() + " "
            + person.getLast_name() + " from " + person.getCountry());
}

The output is:

Hello Jane Doe from Ireland
Hello Joe Doe from England
Hello John Doe from Scotland
Hello Julie Doe from Wales
Hello Jerry Doe from France
Hello Jim Doe from Italy

Separately, there is the concept of a container, defined in the Map interface, holding a group of key-value object pairs, which allow the use of a key to find a value – in this way an object (value) can be found using another object (key) that has been mapped to it.

Both the interfaces of Collection and Map are found in the Collections Framework, in java.util, along with a class called Collections. Note that the Collections class consists exclusively of static methods, which include polymorphic algorithms, that operate on or return collections.

Parallelism and the Collections Framework

A problem with using the containers of the Collections Framework in some form of parallel programming is that these containers are not thread-safe. Wrappers are provided for adding automatic synchronization (thread safety) but the drawback is that this introduces thread contention, where two or more threads are trying to access the same resource simultaneously and therefore cause the runtime to either suspend their execution or execute them more slowly.

The Spliterator

A new interface added to java.util is the Spliterator, which, as the name implies, is a special kind of Iterator that can traverse a Collection [8]. For Java 8, the Collection interface has been updated to include a new spliterator() method that, when called, returns a Spliterator. This is not the case for the separate Map interface, although it is considered part of the Collections Framework as explained above.

The Spliterator can ‘split’ the Collection, partitioning off some of its elements as another Spliterator. This does allow parallel processing of different parts of a Collection, but note that the Spliterator itself does not provide the parallel processing behaviour. Instead, the Spliterator is there to support parallel traversal of the suitably partitioned parts of a Collection. This solves the problem of dividing the data, as held in a Collection such as an ArrayList, into suitably sized sub-units that can be processed in parallel.

The fork/join framework, found in the Java 7 libraries, can be used with the Spliterator and is designed for parallelizable work that can be broken into smaller pieces recursively, processed independently, and then have the results of the sub-units aggregated to produce a final result. Note, however, that using the Spliterator is not dependent on the fork/join framework; this framework is just one way of implementing parallel processing [9].
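To make the relationship concrete, here is a rough sketch (not from the original article) of a RecursiveTask that counts elements by repeatedly calling trySplit() and handing the new partitions to the fork/join framework; the threshold value is arbitrary:

import java.util.Spliterator;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class CountTask<T> extends RecursiveTask<Long> {

    private static final long THRESHOLD = 1_000;
    private final Spliterator<T> spliterator;

    public CountTask(Spliterator<T> spliterator) {
        this.spliterator = spliterator;
    }

    @Override
    protected Long compute() {
        Spliterator<T> split;
        // Keep splitting while the remaining work is large enough and trySplit() succeeds
        if (spliterator.estimateSize() > THRESHOLD && (split = spliterator.trySplit()) != null) {
            CountTask<T> other = new CountTask<>(split);
            other.fork();                    // process the new partition asynchronously
            return compute() + other.join(); // process our own partition and combine the results
        }
        long[] count = {0};
        spliterator.forEachRemaining(t -> count[0]++);
        return count[0];
    }

    // Convenience entry point (usage sketch)
    public static <T> long countInParallel(Spliterator<T> spliterator) {
        return ForkJoinPool.commonPool().invoke(new CountTask<>(spliterator));
    }
}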

Listing 2 – Simple Spliterator on an ArrayList

The following code gives us an ArrayList, called people, that is to hold objects of the Person type – a user-defined type:

Collection<Person> people = new ArrayList<Person>();

The ‘people’ collection then has a number of objects added, and we can now call on the spliterator() of the ArrayList:

Spliterator<Person> peopleSpliterator = people.spliterator();

This returns a Spliterator we are calling ‘peopleSpliterator’.

System.out.println(" " + peopleSpliterator.characteristics()); System.out.println(" " + peopleSpliterator.estimateSize());

The output from the above code is:

16464
6

The value 16464 represents the characteristics of the Spliterator. This is important, as it is these predefined characteristics that determine what ends up in each new partition and how it is structured. The int value returned from the characteristics() call is the result of OR’ing the values of the individual characteristics for an ArrayList, which are ORDERED, SIZED and SUBSIZED [10].

ORDERED indicates that the elements have a defined order, which is expected from a List, when traversing and partitioning them. SIZED means the Spliterator has been created from a source with a known size, while SUBSIZED indicates that this and any further Spliterator resulting from trySplit() will also be SIZED. The details of the characteristics for a particular Collection type can be found in the online API documentation. Where you wish to have a collection with non-standard characteristics, that means going down the route of implementing your own version of the Spliterator interface.
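Rather than decoding the bit mask by hand, the hasCharacteristics() method can be used to query individual characteristics. A small sketch, continuing from the listing above:

// Querying individual characteristics of the ArrayList spliterator
System.out.println(peopleSpliterator.hasCharacteristics(Spliterator.ORDERED));  // true
System.out.println(peopleSpliterator.hasCharacteristics(Spliterator.SIZED));    // true
System.out.println(peopleSpliterator.hasCharacteristics(Spliterator.SUBSIZED)); // true
System.out.println(peopleSpliterator.hasCharacteristics(Spliterator.SORTED));   // false
System.out.println(Spliterator.ORDERED + Spliterator.SIZED + Spliterator.SUBSIZED); // 16464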

Listing 3 – Split the Spliterator

Now that we have a Spliterator, let’s ‘split’ it:

Spliterator<Person> newPartition = peopleSpliterator.trySplit();
System.out.println(" " + newPartition.estimateSize());
System.out.println(" " + peopleSpliterator.estimateSize());

The output from this code is:

3
3

As you can see, we have a new partition with 3 elements, leaving the existing partition with 3. The important trySplit() method has partitioned the elements of the original Spliterator, in this case splitting it evenly, leaving the existing Spliterator with a reduced number of elements in its own partition. Clearly this is very different from the behaviour of the pre-existing Iterator.

If this Spliterator can be split (partitioned), the method will try to divide the elements in half for a balanced parallel computation. This will not always be the case, and where a Spliterator cannot be split, null is returned. The splitting, or partitioning, can continue recursively until a Spliterator returns null. If a different behaviour is required from the trySplit() method, then again a custom implementation of the Spliterator interface is required.

When used in a parallel programming solution, bear in mind that the individual Spliterator is not thread safe; instead, the parallel processing implementation must ensure that each individual Spliterator is handled by one thread at a time. A thread calling the trySplit() of a Spliterator may hand over the returned new Spliterator to a separate thread for processing. This follows the idea of decomposing a larger task into smaller sub-tasks that can be processed in parallel, independently of each other.

When working with a Spliterator, especially in a parallel programming environment, changes to the data structure that is undergoing processing can lead to arbitrary and unknown behaviour, and that is not good. Ideally the original data source should not be interfered with (no elements added, replaced or removed). Collection types are not immutable and cannot manage concurrent modifications. To help with this problem, for the ArrayList, the spliterator is ‘late-binding’ and ‘fail-fast’. Late binding means the elements are bound to the Spliterator on first split, first traversal or first query for estimated size, rather than at the time of the creation of the Spliterator. After this binding occurs, any interference with the elements is detected, throwing a ConcurrentModificationException. Here, it fails fast, rather than risking arbitrary and non-deterministic behaviour at a future time.

Of course, with the above trivial example of only 6 elements in the list, this is hardly a workload that would justify parallel programming. But, as mentioned before, the Spliterator itself does not provide the parallel programming behaviour; the reader might be interested in experimenting with a Collection type holding a vastly larger number of objects and then implementing a parallel processing solution that exploits the Spliterator. Such a solution may incorporate the fork/join framework and/or the Stream API, as sketched below.
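As a pointer for such experiments, here is a minimal sketch (building on the people collection from Listing 1) that hands the Spliterator to a parallel stream via StreamSupport; the actual parallel execution is then handled by the common fork/join pool:

import java.util.stream.StreamSupport;

// The second argument requests a parallel stream backed by the Spliterator
long irishCount = StreamSupport.stream(people.spliterator(), true)
        .filter(person -> "Ireland".equals(person.getCountry()))
        .count();
System.out.println(irishCount);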

Future articles will delve further into this interesting and increasingly important area.

Thorough Introduction to Apache Kafka


Kafka is a word that gets heard a lot nowadays… A lot of leading digital companies seem to use it as well. But what is it actually?

Kafka was originally developed at LinkedIn in 2011 and has improved a lot since then. Nowadays it is a whole platform, allowing you to redundantly store absurd amounts of data, have a message bus with huge throughput (millions/sec) and use real-time stream processing on the data that goes through it all at once.

This is all well and great, but stripped down to its core, Kafka is a distributed, horizontally scalable, fault-tolerant commit log.

Those were some fancy words, let’s go at them one by one and see what they mean. Afterwards, we will dive deep into how it works.


A distributed system is one which is split into multiple running machines, all of which work together in a cluster to appear as one single node to the end user. Kafka is distributed in the sense that it stores, receives and sends messages on different nodes (called brokers).

The benefits to this approach are high scalability and fault-tolerance.


Let’s define the term vertical scalability first. Say, for instance, you have a traditional database server which is starting to get overloaded. The way to get this solved is to simply increase the resources (CPU, RAM, SSD) on the server. This is called vertical scaling — where you add more resources to the machine. There are two big disadvantages to scaling upwards:

  1. There are limits defined by the hardware. You cannot scale upwards indefinitely.
  2. It usually requires downtime, something which big corporations cannot afford.

Horizontal scalability is solving the same problem by throwing more machines at it. Adding a new machine does not require downtime, nor are there any limits to the number of machines you can have in your cluster. The catch is that not all systems support horizontal scalability, as they are not designed to work in a cluster, and those that do are usually more complex to work with.

Horizontal scaling becomes much cheaper after a certain threshold


A problem that emerges in non-distributed systems is that they have a single point of failure (SPoF). If your single database server fails (as machines do) for whatever reason, you’re screwed.

Distributed systems are designed to accommodate failures in a configurable way. In a 5-node Kafka cluster, you can have it continue working even if 2 of the nodes are down. It is worth noting that fault-tolerance comes at a direct tradeoff with performance: the more fault-tolerant your system is, the less performant it is.

Commit Log

A commit log (also referred to as write-ahead log, transaction log) is a persistent ordered data structure which only supports appends. You cannot modify nor delete records from it. It is read from left to right and guarantees item ordering.

Sample illustration of a commit log, taken from here

– Are you telling me that Kafka is such a simple data structure?

In many ways, yes. This structure is at the heart of Kafka and is invaluable, as it provides ordering, which in turn provides deterministic processing. Both of which are non-trivial problems in distributed systems.

Kafka actually stores all of its messages to disk (more on that later) and having them ordered in the structure lets it take advantage of sequential disk reads.

  • Reads and writes are constant time O(1) (knowing the record ID), which, compared to other structures’ O(log N) operations on disk, is a huge advantage, as each disk seek is expensive.
  • Reads and writes do not affect one another. Writing does not lock reading and vice versa (as opposed to balanced trees).

These two points have huge performance benefits, since the data size is completely decoupled from performance. Kafka has the same performance whether you have 100KB or 100TB of data on your server.

How does it work?

Applications (producers) send messages (records) to a Kafka node (broker) and said messages are processed by other applications called consumers. Said messages get stored in a topic and consumers subscribe to the topic to receive new messages.

As topics can get quite big, they are split into smaller partitions for better performance and scalability (for example, if you were storing user login requests, you could partition them by the first character of the user’s username).
Kafka guarantees that all messages inside a partition are ordered in the sequence they came in. The way you identify a specific message is through its offset, which you can think of as a normal array index: a sequence number that is incremented for each new message in a partition.
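For illustration, a minimal producer sketch using the Java client (the broker address and topic name are assumptions); records sharing a key always land in the same partition:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class LoginProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Records with the same key (here, the username) are routed to the same partition
            producer.send(new ProducerRecord<>("user-logins", "bob", "login at 12:00"));
        }
    }
}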

Kafka follows the principle of a dumb broker and smart consumer. This means that Kafka does not keep track of which records have been read by the consumer and then delete them; instead, it stores them for a set amount of time (e.g., one day) or until some size threshold is met. Consumers themselves poll Kafka for new messages and specify which records they want to read. This allows them to increment/decrement the offset they are at as they wish, and thus to replay and reprocess events.

It is worth noting that consumers actually belong to consumer groups, which have one or more consumer processes inside. To avoid two processes in the same group reading the same message, each partition is tied to only one consumer process per group.
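And a matching minimal consumer sketch (again, broker address, topic and group name are assumptions; the poll(Duration) call assumes a recent client version), showing the poll loop and the per-record offset:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class LoginConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "login-processors");        // the consumer group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("user-logins"));
            while (true) {
                // The consumer polls the broker; offsets determine where it resumes reading
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.offset() + ": " + record.value());
                }
            }
        }
    }
}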

Representation of the data flow

Persistence to Disk

As I mentioned earlier, Kafka actually stores all of its records to disk and does not keep anything in RAM. You might be wondering how this is in the slightest way a sane choice. There are numerous optimizations behind this that make it feasible:

  1. Kafka has a protocol which groups messages together. This allows network requests to batch messages, reducing network overhead; the server, in turn, persists chunks of messages in one go, and consumers fetch large linear chunks at once.
  2. Linear reads/writes on a disk are fast. The concept that modern disks are slow is because of disk seek, something that is not an issue in big linear operations.
  3. Said linear operations are heavily optimized by the OS, via read-ahead (prefetching large block multiples) and write-behind (grouping small logical writes into big physical writes) techniques.
  4. Modern OSes cache the disk in free RAM. This is called pagecache.
  5. Since Kafka stores messages in a standardized binary format unmodified throughout the whole flow (producer->broker->consumer), it can make use of the zero-copy optimization. That is when the OS copies data from the pagecache directly to a socket, effectively bypassing the Kafka broker application entirely

All of these optimizations allow Kafka to deliver messages at near network speed.

Data Distribution & Replication

Let’s talk about how Kafka achieves fault-tolerance and how it distributes data between nodes.

Data Replication

Partition data is replicated across multiple brokers in order to preserve the data in case one broker dies.

At all times, one broker “owns” a partition and is the node through which applications write/read from the partition. This is called a partition leader. It replicates the data it receives to N other brokers, called followers. They store the data as well and are ready to be elected as leader in case the leader node dies.

This helps you configure the guarantee that any successfully published message will not be lost. Having the option to change the replication factor lets you trade performance for stronger durability guarantees, depending on the criticality of the data.
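For illustration, the replication factor is chosen when a topic is created. A minimal sketch using the Java AdminClient (broker address and topic name are assumptions):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, each replicated to 3 brokers (1 leader + 2 followers)
            NewTopic topic = new NewTopic("user-logins", 3, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}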

4 Kafka brokers with a replication factor of 3

In this way, if a leader ever fails, a follower can take its place.

You may be asking, though:

– How does a producer/consumer know who the leader of a partition is?

For a producer/consumer to write/read from a partition, they need to know its leader, right? This information needs to be available from somewhere.
Kafka stores such metadata in a service called Zookeeper.

What is Zookeeper?

Zookeeper is a distributed key-value store. It is highly-optimized for reads but writes are slower. It is most commonly used to store metadata and handle the mechanics of clustering (heartbeats, distributing updates/configurations, etc).

It allows clients of the service (the Kafka brokers) to subscribe and have changes sent to them once they happen. This is how brokers know when to switch partition leaders. Zookeeper is also extremely fault-tolerant and it ought to be, as Kafka heavily depends on it.

It is used for storing all sort of metadata, to mention some:

  • Consumer groups’ offsets per partition (although modern clients store offsets in a separate Kafka topic)
  • ACL (Access Control Lists) — used for limiting access/authorization
  • Producer & Consumer Quotas — maximum message/sec boundaries
  • Partition Leaders and their health

How does a producer/consumer know who the leader of a partition is?

Producers and consumers used to connect directly to Zookeeper to get this (and other) information. Kafka has been moving away from this coupling, and since versions 0.8 and 0.9 respectively, clients fetch metadata information from the Kafka brokers directly, which themselves talk to Zookeeper.

Metadata Flow


In Kafka, a stream processor is anything that takes continual streams of data from input topics, performs some processing on this input and produces a stream of data to output topics (or external services, databases, the trash bin, wherever really…)

It is possible to do simple processing directly with the producer/consumer APIs; however, for more complex transformations, like joining streams together, Kafka provides an integrated Streams API library.

This API is intended to be used within your own codebase; it does not run on a broker. It works similarly to the consumer API and helps you scale out the stream processing work over multiple applications (similar to consumer groups).

Stateless Processing

Stateless processing of a stream is deterministic processing that does not depend on anything external: for any given input you will always produce the same output, independent of anything else. An example would be a simple data transformation, such as appending to a string: "Hello" -> "Hello, World!".
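A minimal Kafka Streams sketch of exactly this kind of stateless transformation might look as follows (the topic names and broker address are made up for the example):

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class HelloWorldTransformer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "hello-world-transformer");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> greetings = builder.stream("greetings");

        // stateless: the output depends only on the current record ("Hello" -> "Hello, World!")
        greetings.mapValues(value -> value + ", World!")
                 .to("greetings-enriched");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }
}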


Stream-Table Duality

It is important to recognize that streams and tables are essentially the same. A stream can be interpreted as a table and a table can be interpreted as a stream.

Stream as a Table

If you look at how synchronous database replication is achieved, you’ll see that it is through the so-called streaming replication, where each change in a table is sent to a replica server. A Kafka stream can be interpreted in the same way — as a stream of updates for data, in which the aggregate is the final result of the table. Such streams get saved in a local RocksDB (by default) and are called a KTable.

Each record increments the aggregated count

Table as a Stream

A table can be looked at as a snapshot of the latest value for each key in a stream. In the same way stream records can produce a table, table updates can produce a changelog stream.

Each update produces a snapshot record in the stream

Stateful Processing

Some simple operations like map() or filter() are stateless and do not require you to keep any data regarding the processing. However, in real life, most operations you’ll do will be stateful (e.g. count()) and as such will require you to store the currently accumulated state.

The problem with maintaining state on stream processors is that the stream processors can fail! Where would you need to keep this state in order to be fault-tolerant?

A naive approach is to simply store all state in a remote database and join over the network to that store. The problem with this is that there is no locality of data and lots of network round-trips, both of which will significantly slow down your application. A more subtle but important problem is that your stream processing job’s uptime would be tightly coupled to the remote database and the job will not be self-contained (a change in the database from another team might break your processing).

So what’s a better approach?
Recall the duality of tables and streams. This allows us to convert streams into tables that are co-located with our processing. It also provides us with a mechanism for handling fault tolerance — by storing the streams in a Kafka broker.

A stream processor can keep its state in a local table (e.g. RocksDB), which will be updated from an input stream (after perhaps some arbitrary transformation). When the process fails, it can restore its data by replaying the stream.
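A minimal sketch of such stateful processing with the Streams API (the topic and store names are made up, and the default String serdes are assumed to be configured as in the earlier sketch): the counts live in a named local state store, RocksDB by default, which is backed by a changelog topic in Kafka and can therefore be rebuilt after a failure.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.state.KeyValueStore;

public class WordCountTopology {
    public static StreamsBuilder build() {
        StreamsBuilder builder = new StreamsBuilder();

        // stateful: each count depends on everything seen so far,
        // accumulated in a local, fault-tolerant state store
        KTable<String, Long> counts = builder
                .<String, String>stream("words")
                .groupByKey()
                .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("word-counts-store"));

        // the KTable's changelog can be written back out as a stream
        counts.toStream().to("word-counts", Produced.with(Serdes.String(), Serdes.Long()));
        return builder;
    }
}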

You could even have a remote database be the producer of the stream, effectively broadcasting a changelog with which you rebuild the table locally.

Stateful processing, joining a KStream with a KTable


KSQL

Normally, you’d be forced to write your stream processing in a JVM language, as that is where the only official Kafka Streams API client is.

Sample KSQL setup

Currently in a developer preview, KSQL is a new feature which allows you to write your simple streaming jobs in a familiar SQL-like language.

You set up a KSQL server and interactively query it through a CLI to manage the processing. It works with the same abstractions (KStream & KTable), provides the same benefits as the Streams API (scalability, fault-tolerance) and greatly simplifies work with streams.

This might not sound like a lot, but in practice it is far more useful for testing things out, and it even allows people outside of development (e.g. product owners) to play around with stream processing. I encourage you to take a look at the quick-start video and see how simple it is.

Streaming alternatives

Kafka Streams is a perfect mix of power and simplicity. It arguably has the best capabilities for stream jobs on the market and integrates with Kafka far more easily than other stream processing alternatives (Storm, Samza, Spark, Wallaroo).

The problem with most other stream processing frameworks is that they are complex to work with and deploy. A batch processing framework like Spark needs to:

  • Control a large number of jobs over a pool of machines and efficiently distribute them across the cluster.
  • To achieve this, dynamically package up your code and physically deploy it (along with configuration, libraries, etc.) to the nodes that will execute it.

Unfortunately tackling these problems makes the frameworks pretty invasive. They want to control many aspects of how code is deployed, configured, monitored, and packaged.

Kafka Streams lets you roll out your own deployment strategy when you need it, be it Kubernetes, Mesos, Nomad, Docker Swarm or others.

The underlying motivation of Kafka Streams is to enable all your applications to do stream processing without the operational complexity of running and maintaining yet another cluster. The only potential downside is that it is tightly coupled with Kafka, but in the modern world where most if not all real-time processing is powered by Kafka that may not be a big disadvantage.

When would you use Kafka?

As we already covered, Kafka allows you to have a huge amount of messages go through a centralized medium and store them without worrying about things like performance or data loss.

This means it is perfect for use as the heart of your system’s architecture, acting as a centralized medium that connects different applications. Kafka can be the centerpiece of an event-driven architecture and allows you to truly decouple applications from one another.


Kafka allows you to easily decouple communication between different (micro)services. With the Streams API, it is now easier than ever to write business logic which enriches Kafka topic data for service consumption. The possibilities are huge and I urge you to explore how companies are using Kafka.


Summary

Apache Kafka is a distributed streaming platform capable of handling trillions of events a day. Kafka provides low-latency, high-throughput, fault-tolerant publish and subscribe pipelines and is able to process streams of events.

We went over its basic semantics (producer, broker, consumer, topic), learned about some of its optimizations (pagecache), learned how it’s fault-tolerant by replicating data and were introduced to its powerful streaming abilities.

Kafka has seen large adoption at thousands of companies worldwide, including a third of the Fortune 500. With the continual improvement of Kafka and the recently released first major version 1.0 (1st November, 2017), there are predictions that this streaming platform will become as big and central a data platform as relational databases are.

I hope that this introduction helped familiarize you with Apache Kafka and its potential.

Further Reading Resources & Things I did not mention

The rabbit hole goes deeper than this article was able to cover. Here are some features I did not get the chance to mention but are nevertheless important to know:

Connector API — API helping you connect various services to Kafka as a source or sink (PostgreSQL, Redis, ElasticSearch)

Log Compaction — An optimization which reduces log size. Extremely useful in changelog streams

Exactly-once Message Semantics — Guarantee that messages are received exactly once. This is a big deal.


Confluent Blog — a wealth of information regarding Apache Kafka

Kafka Documentation — Great, extensive, high-quality documentation

Kafka Summit 2017 videos

Thank you for taking the time to read this.

7 Techniques for thread-safe classes

Almost every Java application uses threads. A web server like Tomcat processes each request in a separate worker thread, fat clients process long-running requests in dedicated worker threads, and even batch processes use the java.util.concurrent.ForkJoinPool to improve performance.

It is, therefore, necessary to write classes in a thread-safe way, which can be achieved by one of the following techniques:

No state

When multiple threads access the same instance or static variable, you must somehow coordinate access to this variable. The easiest way to do this is simply to avoid instance and static variables altogether. Methods in classes without instance or static variables use only local variables and method arguments. The following example shows such a method, which is part of the class java.lang.Math:

public static int subtractExact(int x, int y) {
    int r = x - y;
    // overflow iff x and y have different signs and the
    // sign of the result differs from the sign of x
    if (((x ^ y) & (x ^ r)) < 0) {
        throw new ArithmeticException("integer overflow");
    }
    return r;
}

No shared state

If you cannot avoid state, do not share it. The state should only be owned by a single thread. An example of this technique is the event processing thread of the SWT or Swing graphical user interface frameworks.

You can achieve thread-local instance variables by extending the Thread class and adding an instance variable. In the following example, the fields pool and workQueue are local to a single worker thread.

package java.util.concurrent;

public class ForkJoinWorkerThread extends Thread {
    final ForkJoinPool pool;                 // the pool this thread works in
    final ForkJoinPool.WorkQueue workQueue;  // work-stealing mechanics
    // ... (remainder of the JDK class omitted)
}

The other way to achieve thread-local variables is to use the class java.lang.ThreadLocal for the fields you want to make thread-local. Here is an example of an instance variable using java.lang.ThreadLocal:

public class CallbackState {

    public static final ThreadLocal<CallbackStatePerThread> callbackStatePerThread =
        new ThreadLocal<CallbackStatePerThread>() {
            @Override
            protected CallbackStatePerThread initialValue() {
                // factory method defined elsewhere in the class
                return getOrCreateCallbackStatePerThread();
            }
        };
}

You wrap the type of your instance variable inside the java.lang.ThreadLocal. You can provide an initial value for your java.lang.ThreadLocal through the method initialValue().

The following shows how to use the instance variable:

CallbackStatePerThread callbackStatePerThread = CallbackState.callbackStatePerThread.get();

Through calling the method get() you receive the object associated with the current thread.

Since application servers use a pool of many threads to process requests, java.lang.ThreadLocal leads to high memory consumption in this environment. It is therefore not recommended for classes executed by the request-processing threads of an application server.

Message passing

If you do not share state using the above techniques, the threads still need a way to communicate. One technique to do this is to pass messages between threads. You can implement message passing using a concurrent queue from the package java.util.concurrent, or, better yet, use a framework like Akka, a framework for actor-style concurrency. The following example shows how to send a message with Akka:

target.tell(message, getSelf());

and receive a message:

public Receive createReceive() {
    return receiveBuilder()
        .match(String.class, s -> System.out.println(s.toLowerCase()))
        .build();
}
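If you do not want to pull in a framework, a plain blocking queue from java.util.concurrent gives you the same message-passing style. A minimal sketch:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueMessagePassing {
    public static void main(String[] args) {
        BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();

        // the receiver owns its own state and communicates only via the queue;
        // it handles a single message here for brevity
        Thread receiver = new Thread(() -> {
            try {
                String message = mailbox.take(); // blocks until a message arrives
                System.out.println(message.toLowerCase());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        receiver.start();

        // the sender hands the message over instead of sharing state
        mailbox.offer("Hello");
    }
}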

Immutable state

To avoid the problem of the sending thread modifying a message while it is being read by another thread, messages should be immutable. The Akka framework therefore has the convention that all messages have to be immutable.

When you implement an immutable class you should declare its fields as final. This not only lets the compiler check that the fields are never reassigned, but also guarantees that they are correctly initialized even when the object is published incorrectly, that is, made visible to other threads without proper synchronization. Here is an example of a final instance variable:

public class ExampleFinalField {

    private final int finalField;

    public ExampleFinalField(int value) {
        this.finalField = value;
    }
}

final is a field modifier. It makes the field itself immutable, not the object the field references. The type of a final field should therefore be a primitive type, as in the example, or an immutable class.
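Combining this with the message-passing technique above, a complete immutable message class might look like this (a hypothetical example):

public final class PriceUpdate {

    private final String symbol;      // String is itself immutable
    private final int priceInCents;   // primitive value

    public PriceUpdate(String symbol, int priceInCents) {
        this.symbol = symbol;
        this.priceInCents = priceInCents;
    }

    public String getSymbol() {
        return symbol;
    }

    public int getPriceInCents() {
        return priceInCents;
    }
}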

Use the data structures from java.util.concurrent

Message passing uses concurrent queues for the communication between threads. Concurrent queues are one of the data structures provided in the package java.util.concurrent. This package provides classes for concurrent maps, queues, deques, sets and lists. These data structures are highly optimized and tested for thread safety.
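For example, a java.util.concurrent.ConcurrentHashMap lets several threads update shared counters without any explicit locking; a minimal sketch:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class RequestCounter {

    private final ConcurrentMap<String, Long> countsByPath = new ConcurrentHashMap<>();

    public void record(String path) {
        // merge() performs the read-modify-write as a single atomic operation
        countsByPath.merge(path, 1L, Long::sum);
    }

    public long count(String path) {
        return countsByPath.getOrDefault(path, 0L);
    }
}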

Synchronized blocks

If you cannot use one of the above techniques, use synchronized blocks. By putting code inside a synchronized block, you make sure that only one thread at a time can execute that section.
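A minimal sketch of a counter guarded by a synchronized block:

public class SynchronizedCounter {

    private final Object lock = new Object();
    private int value;

    public void increment() {
        synchronized (lock) {
            // only one thread at a time can execute this section
            value++;
        }
    }

    public int get() {
        synchronized (lock) {
            return value;
        }
    }
}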


Beware that when you use multiple nested synchronized blocks you risk deadlocks. A deadlock happens when two threads each try to acquire a lock held by the other thread.

Volatile fields

Normal, nonvolatile fields can be cached in registers or caches. By declaring a variable as volatile, you tell the JVM and the compiler to always return the latest written value. This applies not only to the variable itself but also to all values written by the thread that wrote the volatile field. The following shows an example of a volatile instance variable:

public class ExampleVolatileField {

    private volatile int volatileField;
}

You can use volatile fields if the writes do not depend on the current value, or if you can make sure that only one thread at a time updates the field.
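A typical example of both rules is a shutdown flag: only one thread writes it, and the new value does not depend on the old one.

public class Worker implements Runnable {

    // written by one thread, read by the worker thread
    private volatile boolean running = true;

    public void stop() {
        running = false; // the write does not depend on the current value
    }

    @Override
    public void run() {
        while (running) {
            // do work; the latest value of 'running' is always visible here
        }
    }
}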

volatile is a field modifier. It makes the field itself volatile, not the object it references. In the case of an array, you need to use java.util.concurrent.atomic.AtomicReferenceArray to access the array elements in a volatile way. See the race condition in org.springframework.util.ConcurrentReferenceHashMap as an example of this error.

Even more techniques

I excluded the following more advanced techniques from this list:

  • Atomic updates, a technique in which you call atomic instructions like compare-and-set provided by the CPU
  • java.util.concurrent.locks.ReentrantLock, a lock implementation which provides more flexibility than synchronized blocks
  • java.util.concurrent.locks.ReentrantReadWriteLock, a lock implementation in which reads do not block reads
  • java.util.concurrent.locks.StampedLock, a non-reentrant read-write lock with the possibility to optimistically read values


The best way to achieve thread safety is to avoid shared state. For the state you need to share, you can either use message passing together with immutable classes, or use the concurrent data structures together with synchronized blocks and volatile fields.

I would be glad to hear from you about the techniques you use to achieve thread-safe classes.