@RestController vs @Controller : Spring Framework

Spring MVC Framework and REST

Spring’s annotation-based MVC framework simplifies the process of creating RESTful web services. The key difference between a traditional Spring MVC controller and a RESTful web service controller is the way the HTTP response body is created. While the traditional MVC controller relies on View technology, the RESTful web service controller simply returns the object, and the object data is written directly to the HTTP response as JSON or XML.

Figure 1: Spring MVC traditional workflow

Spring MVC REST Workflow

The following steps describe a typical Spring MVC REST workflow:

  1. The client sends a request to a web service in URI form.
  2. The request is intercepted by the DispatcherServlet, which looks for handler mappings and their type.
    • The handler mappings section defined in the application context file tells the DispatcherServlet which strategy to use to find controllers for the incoming request.
    • Spring MVC supports three different ways of mapping request URIs to controllers: annotations, naming conventions, and explicit mappings.
  3. Requests are processed by the Controller and the response is returned to the DispatcherServlet which then dispatches to the view.

In Figure 1, notice that in the traditional workflow the ModelAndView object is forwarded from the controller to the client. Spring lets you return data directly from the controller, without looking for a view, using the @ResponseBody annotation on a method. Beginning with Version 4.0, this process is simplified even further with the introduction of the @RestController annotation. Each approach is explained below.

Using the @ResponseBody Annotation

When you use the @ResponseBody annotation on a method, Spring converts the return value and writes it to the HTTP response automatically. Each method in the controller class must be annotated with @ResponseBody.

Figure 2: Spring 3.x MVC RESTful web services workflow

Behind the Scenes

Spring has a list of HttpMessageConverters registered in the background. The responsibility of an HttpMessageConverter is to convert the request body to a specific class and, conversely, to convert an object back into the response body, depending on a predefined MIME type. Every time a request hits a @ResponseBody method, Spring loops through all registered HttpMessageConverters, seeking the first that fits the given MIME type and class, and then uses it for the actual conversion.
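The first-match-wins lookup described above can be sketched in plain Java. Everything below is a simplified, hypothetical model — the real interface is org.springframework.http.converter.HttpMessageConverter, with a richer signature — but it captures the selection loop:

```java
import java.util.List;

// Hypothetical, stripped-down stand-in for Spring's HttpMessageConverter:
interface MessageConverter {
    boolean canWrite(Class<?> type, String mimeType);
    String write(Object value);
}

public class ConverterLookup {
    static String writeResponse(Object body, String mimeType, List<MessageConverter> converters) {
        // Loop through the registered converters and use the first one that
        // supports both the return type and the requested MIME type:
        for (MessageConverter c : converters) {
            if (c.canWrite(body.getClass(), mimeType)) {
                return c.write(body);
            }
        }
        throw new IllegalStateException("No converter for " + mimeType);
    }

    public static void main(String[] args) {
        // A toy JSON converter registered in the list:
        MessageConverter json = new MessageConverter() {
            public boolean canWrite(Class<?> t, String m) { return m.equals("application/json"); }
            public String write(Object v) { return "{\"value\":\"" + v + "\"}"; }
        };
        System.out.println(writeResponse("Bob", "application/json", List.of(json)));
    }
}
```

The real converters (Jackson for JSON, JAXB for XML) are registered for you by `<mvc:annotation-driven />` when the libraries are on the classpath.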

Code Example

Let’s walk through @ResponseBody with a simple example.

Project Creation and Setup

  1. Create a Dynamic Web Project with Maven support in your Eclipse or MyEclipse IDE.
  2. Configure Spring support for the project.
    • If you are using the Eclipse IDE, you need to download all Spring dependencies and configure your pom.xml to contain those dependencies.
    • In MyEclipse, you only need to install the Spring facet and the rest of the configuration happens automatically.
  3. Create the following Java class named Employee. This class is our POJO.
package com.example.spring.model;

import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "Employee")
public class Employee {

    String name;
    String email;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public Employee() {
    }
}

Then, create the following @Controller class:

package com.example.spring.rest;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

import com.example.spring.model.Employee;

@Controller
@RequestMapping("employees")
public class EmployeeController {

    Employee employee = new Employee();

    @RequestMapping(value = "/{name}", method = RequestMethod.GET, produces = "application/json")
    public @ResponseBody Employee getEmployeeInJSON(@PathVariable String name) {
        employee.setName(name);
        employee.setEmail("employee1@genuitec.com");
        return employee;
    }

    @RequestMapping(value = "/{name}.xml", method = RequestMethod.GET, produces = "application/xml")
    public @ResponseBody Employee getEmployeeInXML(@PathVariable String name) {
        employee.setName(name);
        employee.setEmail("employee1@genuitec.com");
        return employee;
    }
}
Notice that @ResponseBody has been added to the return type of each @RequestMapping method. After that, it’s a two-step process:
  1. Add the <context:component-scan> and <mvc:annotation-driven /> tags to the Spring configuration file.
    • <context:component-scan> activates the annotations and scans the packages to find and register beans within the application context.
    • <mvc:annotation-driven/> adds support for reading and writing JSON/XML if the Jackson/JAXB libraries are on the classpath.
    • For JSON format, include the jackson-databind jar and for XML include the jaxb-api-osgi jar to the project classpath.
  2. Deploy and run the application on any server (e.g., Tomcat). If you are using MyEclipse, you can run the project on the embedded Tomcat server.
    • JSON — Use the URL http://localhost:8080/SpringRestControllerExample/rest/employees/Bob and the JSON output displays.
    • XML — Use the URL http://localhost:8080/SpringRestControllerExample/rest/employees/Bob.xml and the XML output displays.
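As a rough sketch, the Spring configuration file from step 1 might look like the following (the base-package value is an assumption — adjust it to your own project layout):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:mvc="http://www.springframework.org/schema/mvc"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
           http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd">

    <!-- Scan for annotated beans such as @Controller/@RestController -->
    <context:component-scan base-package="com.example.spring" />

    <!-- Enable JSON/XML message conversion when Jackson/JAXB are on the classpath -->
    <mvc:annotation-driven />

</beans>
```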

Using the @RestController Annotation

Spring 4.0 introduced @RestController, a specialized version of the controller that is a convenience annotation doing nothing more than combining @Controller and @ResponseBody. By annotating the controller class with @RestController, you no longer need to add @ResponseBody to each request mapping method; it is active by default.
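Conceptually, @RestController is itself meta-annotated with @Controller and @ResponseBody. The sketch below imitates that structure with stand-in annotations (the real ones live in the org.springframework packages) to show how the combination is discoverable via reflection, which is how Spring detects the merged behavior:

```java
import java.lang.annotation.*;

// Stand-ins for Spring's annotations, declared so they can be used as
// meta-annotations (i.e. placed on another annotation type):
@Target({ElementType.TYPE, ElementType.ANNOTATION_TYPE})
@Retention(RetentionPolicy.RUNTIME)
@interface Controller {}

@Target({ElementType.TYPE, ElementType.ANNOTATION_TYPE})
@Retention(RetentionPolicy.RUNTIME)
@interface ResponseBody {}

// Simplified model of how @RestController is declared:
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Controller    // every @RestController is also a @Controller...
@ResponseBody  // ...and behaves as if @ResponseBody were present
@interface RestController {}

public class RestControllerSketch {
    public static void main(String[] args) {
        // The meta-annotations are visible via reflection:
        System.out.println(RestController.class.isAnnotationPresent(Controller.class));
        System.out.println(RestController.class.isAnnotationPresent(ResponseBody.class));
    }
}
```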

To use @RestController in our example, all we need to do is modify the @Controller to @RestController and remove the @ResponseBody from each method. The resultant class should look like the following:

package com.example.spring.rest;

import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import com.example.spring.model.Employee;

@RestController
@RequestMapping("employees")
public class EmployeeController {

    Employee employee = new Employee();

    @RequestMapping(value = "/{name}", method = RequestMethod.GET, produces = "application/json")
    public Employee getEmployeeInJSON(@PathVariable String name) {
        employee.setName(name);
        employee.setEmail("employee1@genuitec.com");
        return employee;
    }

    @RequestMapping(value = "/{name}.xml", method = RequestMethod.GET, produces = "application/xml")
    public Employee getEmployeeInXML(@PathVariable String name) {
        employee.setName(name);
        employee.setEmail("employee1@genuitec.com");
        return employee;
    }
}

Note that we no longer need to add @ResponseBody to the request mapping methods. After making the changes, running the application on the server again produces the same output as before.

Conclusion

As you can see, using @RestController is quite simple and is the preferred method for creating MVC RESTful web services starting from Spring v4.0. I would like to extend a big thank you to my co-author, Swapna Sagi, for all of her help in bringing you this information!

Java 8 :: Streams – Sequential vs Parallel streams

Parallel streams divide the provided task into subtasks and run them in different threads, utilizing multiple cores of the computer. Sequential streams, on the other hand, work just like a for-loop, using a single core.

The tasks provided to the streams are typically iterative operations performed on the elements of a collection or array, or on elements from other dynamic sources. Parallel execution of a stream runs multiple iterations simultaneously on the different available cores.

In parallel execution, if the number of tasks exceeds the number of available cores at a given time, the remaining tasks are queued, waiting for the currently running tasks to finish.

It is also important to know that iterations are only performed at a terminal operation, because streams are designed to be lazy.
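This laziness is easy to observe: an intermediate operation such as map() is only recorded, and does not run until a terminal operation is invoked. A small self-contained demonstration:

```java
import java.util.stream.Stream;

public class LazyStreams {
    static int calls = 0;  // counts how many times the map() lambda has run

    public static void main(String[] args) {
        // Intermediate operations are only recorded, not executed:
        Stream<Integer> pipeline = Stream.of(1, 2, 3)
                .map(n -> { calls++; return n * 2; });
        System.out.println("after map: " + calls);   // 0 - nothing has run yet

        // The terminal operation triggers the whole pipeline:
        long count = pipeline.filter(n -> n > 2).count();
        System.out.println("after count: " + calls); // 3 - map ran once per element
        System.out.println("count: " + count);       // 2 - the values 4 and 6
    }
}
```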

Example

Let’s test sequential and parallel behavior with an example.

import java.time.LocalTime;
import java.util.Arrays;
import java.util.stream.Stream;

public class SequentialParallelComparison {

    public static void main (String[] args) {
        String[] strings = {"1", "2", "3", "4", "5", "6", "7", "8", "9", "10"};

        System.out.println("-------\nRunning sequential\n-------");
        run(Arrays.stream(strings).sequential());
        System.out.println("-------\nRunning parallel\n-------");
        run(Arrays.stream(strings).parallel());
    }

    public static void run (Stream<String> stream) {

        stream.forEach(s -> {
            System.out.println(LocalTime.now() + " - value: " + s +
                                " - thread: " + Thread.currentThread().getName());
            try {
                Thread.sleep(200);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });
    }
}

In the above example, we print various information in the forEach() terminal operation: the time, the collection element’s value, and the thread name. Other than parallel() and sequential(), we are not using any other intermediate operations, but that doesn’t matter as long as we use the same intermediate operations for both. We also make each iteration sleep for 200 ms so that we can clearly compare the time taken by the sequential and parallel invocations.

Output:

This is the output on a machine with 8 logical processors (4 cores).

-------
Running sequential
-------
02:29:02.817 - value: 1 - thread: main
02:29:03.022 - value: 2 - thread: main
02:29:03.223 - value: 3 - thread: main
02:29:03.424 - value: 4 - thread: main
02:29:03.624 - value: 5 - thread: main
02:29:03.824 - value: 6 - thread: main
02:29:04.025 - value: 7 - thread: main
02:29:04.225 - value: 8 - thread: main
02:29:04.426 - value: 9 - thread: main
02:29:04.626 - value: 10 - thread: main
-------
Running parallel
-------
02:29:04.830 - value: 7 - thread: main
02:29:04.830 - value: 3 - thread: ForkJoinPool.commonPool-worker-1
02:29:04.830 - value: 8 - thread: ForkJoinPool.commonPool-worker-4
02:29:04.830 - value: 2 - thread: ForkJoinPool.commonPool-worker-3
02:29:04.830 - value: 9 - thread: ForkJoinPool.commonPool-worker-2
02:29:04.830 - value: 5 - thread: ForkJoinPool.commonPool-worker-5
02:29:04.830 - value: 1 - thread: ForkJoinPool.commonPool-worker-6
02:29:04.831 - value: 10 - thread: ForkJoinPool.commonPool-worker-7
02:29:05.030 - value: 4 - thread: ForkJoinPool.commonPool-worker-3
02:29:05.030 - value: 6 - thread: ForkJoinPool.commonPool-worker-2

This clearly shows that in the sequential stream each iteration waits for the currently running one to finish, whereas in the parallel stream eight threads are spawned simultaneously and the remaining two wait for the others. Also notice the thread names: parallel streams use the Fork/Join framework to run work on multiple threads, obtaining their worker threads from the shared pool returned by the static ForkJoinPool.commonPool() method.
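A quick way to confirm this on your own machine is to inspect the common pool directly; its parallelism level is typically availableProcessors() - 1, because the calling thread itself also participates in the work:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.stream.IntStream;

public class CommonPoolDemo {
    public static void main(String[] args) {
        // Target parallelism of the shared pool used by all parallel streams:
        System.out.println("parallelism: " + ForkJoinPool.commonPool().getParallelism());

        // A parallel stream borrows workers from that same shared pool:
        long sum = IntStream.rangeClosed(1, 1_000).parallel().sum();
        System.out.println("sum: " + sum); // 500500
    }
}
```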

Difference Between Stored Procedure And User Defined Function In SQL Server

This article describes the differences between Stored Procedures and User Defined Functions in SQL Server.

Stored Procedure

A Stored Procedure is nothing more than prepared SQL code that you save so you can reuse it over and over again. If you think about a query that you write repeatedly, instead of having to write that query each time, you can save it as a Stored Procedure and then just call the Stored Procedure to execute the SQL code you saved.

In addition to running the same SQL code over and over again you also have the ability to pass parameters to the Stored Procedure, so depending on what the need is, the Stored Procedure can act accordingly based on the parameter values that were passed.

Stored Procedures can also improve performance. Many tasks are implemented as a series of SQL statements. Conditional logic applied to the results of the first SQL statements determine which subsequent SQL statements are executed. If these SQL statements and conditional logic are written into a Stored Procedure, they become part of a single execution plan on the server. The results do not need to be returned to the client to have the conditional logic applied; all of the work is done on the server.
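As an illustration, here is a hypothetical T-SQL sketch (the Employees table and its columns are invented for the example) showing a procedure that takes an input parameter, returns a result set, and sets an output parameter — all executed server-side:

```sql
CREATE PROCEDURE dbo.GetEmployeesByDept
    @DeptId INT,
    @EmployeeCount INT OUTPUT   -- procedures support output parameters
AS
BEGIN
    -- Multiple statements and conditional logic run on the server
    -- as part of a single execution plan:
    SELECT Name, Email
    FROM dbo.Employees
    WHERE DeptId = @DeptId;

    SET @EmployeeCount = @@ROWCOUNT;
END
GO

-- Reuse it by name, passing parameter values:
DECLARE @n INT;
EXEC dbo.GetEmployeesByDept @DeptId = 10, @EmployeeCount = @n OUTPUT;
```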

Benefits of Stored Procedures

  • Precompiled execution

    SQL Server compiles each Stored Procedure once and then reutilizes the execution plan. This results in tremendous performance boosts when Stored Procedures are called repeatedly.
  • Reduced client/server traffic

    If network bandwidth is a concern in your environment then you’ll be happy to learn that Stored Procedures can reduce long SQL queries to a single line that is transmitted over the wire.

  • Efficient reuse of code and programming abstraction

    Stored Procedures can be used by multiple users and client programs. If you utilize them in a planned manner then you’ll find the development cycle requires less time.

  • Enhanced security controls

    You can grant users permission to execute a Stored Procedure independently of underlying table permissions.

User Defined Functions

Like functions in programming languages, SQL Server User Defined Functions are routines that accept parameters, perform an action such as a complex calculation, and return the result of that action as a value. The return value can be either a single scalar value or a result set.

Functions in programming languages are subroutines used to encapsulate frequently performed logic. Any code that must perform the logic incorporated in a function can call the function rather than having to repeat all of the function logic.

SQL Server supports two types of functions:

  • Built-in functions

    Operate as defined in the Transact-SQL Reference and cannot be modified. The functions can be referenced only in Transact-SQL statements using the syntax defined in the Transact-SQL Reference.

  • User Defined Functions

    Allow you to define your own Transact-SQL functions using the CREATE FUNCTION statement. User Defined Functions use zero or more input parameters, and return a single value. Some User Defined Functions return a single, scalar data value, such as an int, char, or decimal value.
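For comparison with the procedure above, here is a hypothetical scalar user-defined function (table and column names are invented for the example); it must declare a return type and can then be used inline in a query:

```sql
CREATE FUNCTION dbo.FullName (@First NVARCHAR(50), @Last NVARCHAR(50))
RETURNS NVARCHAR(101)
AS
BEGIN
    -- A scalar UDF must return a single value:
    RETURN @First + N' ' + @Last;
END
GO

-- Unlike a procedure, a function can be called inline in a SELECT:
SELECT dbo.FullName(FirstName, LastName) AS EmployeeName
FROM dbo.Employees;
```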

Benefits of User Defined Functions

  • They allow modular programming

    You can create the function once, store it in the database, and call it any number of times in your program. User Defined Functions can be modified independently of the program source code.

  • They allow faster execution

    Similar to Stored Procedures, Transact-SQL User Defined Functions reduce the compilation cost of Transact-SQL code by caching the plans and reusing them for repeated executions. This means the user-defined function does not need to be reparsed and reoptimized with each use resulting in much faster execution times. CLR functions offer significant performance advantage over Transact-SQL functions for computational tasks, string manipulation, and business logic. Transact-SQL functions are better suited for data-access intensive logic.

  • They can reduce network traffic

An operation that filters data based on some complex constraint that cannot be expressed in a single scalar expression can be written as a function. The function can then be invoked in the WHERE clause to reduce the number of rows sent to the client.

Differences between Stored Procedure and User Defined Function in SQL Server

  1. A function must return a value. A Stored Procedure may or may not return values.
  2. Functions allow only SELECT statements; they do not allow DML statements. Stored Procedures can contain SELECT statements as well as DML statements such as INSERT, UPDATE, and DELETE.
  3. Functions allow only input parameters; they do not support output parameters. Stored Procedures can have both input and output parameters.
  4. Functions do not allow try-catch blocks. In Stored Procedures, we can use try-catch blocks for exception handling.
  5. Transactions are not allowed within functions. Transactions can be used within Stored Procedures.
  6. Functions can use only table variables; they do not allow temporary tables. Stored Procedures can use both table variables and temporary tables.
  7. Stored Procedures can’t be called from a function, but Stored Procedures can call functions.
  8. Functions can be called from a SELECT statement. Procedures can’t be called from SELECT/WHERE/HAVING clauses; an EXECUTE/EXEC statement is used to call a Stored Procedure.
  9. A UDF can be used in a JOIN clause as a result set. Procedures can’t be used in a JOIN clause.

Volatile boolean vs AtomicBoolean

I use volatile fields when a field is updated ONLY by its owner thread and the value is only read by other threads. You can think of it as a publish/subscribe scenario where there are many observers but only one publisher. However, if those observers must perform some logic based on the value of the field and then push back a new value, I go with Atomic* variables, locks, or synchronized blocks, whatever suits me best. Many concurrent scenarios boil down to getting the value, comparing it with another one, and updating if necessary, hence the compareAndSet and getAndSet methods present in the Atomic* classes.
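A minimal sketch of that get-compare-update idiom (class and method names are illustrative): AtomicBoolean.compareAndSet guarantees that a check-then-act runs exactly once across threads, something a plain volatile boolean cannot express atomically.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class OneTimeInit {
    private static final AtomicBoolean initialized = new AtomicBoolean(false);
    static int initCount = 0;

    static void initOnce() {
        // compareAndSet is an atomic read-compare-update: only the first
        // caller to flip false -> true runs the initialization. With a
        // volatile boolean, two threads could both pass the check.
        if (initialized.compareAndSet(false, true)) {
            initCount++;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(OneTimeInit::initOnce);
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println("initCount: " + initCount); // always 1
    }
}
```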

Check the Javadocs of the java.util.concurrent.atomic package for a list of the Atomic* classes and an excellent explanation of how they work (they are lock-free, so they have an advantage over locks or synchronized blocks).

Design Patterns

What are design patterns?

In this design patterns tutorial, we will explain all types of design patterns in Java with examples. A design pattern is a common solution to a frequently recurring problem in software development. A design pattern does not exist as a complete program that can be transformed into object or machine code; rather, it is a template that identifies problems in a system and provides appropriate solutions. Design patterns are not present in normal procedural programming and are mostly adopted by developers in Object-Oriented environments. They describe interaction at the Object-Oriented level, involving classes and objects, and serve as an efficient programming approach when Object-Oriented systems are being developed, helping produce robust and error-free software.

Spring 5 Design Pattern Book

You can purchase my Spring 5 book, titled “Spring 5 Design Pattern“. This book is available on Amazon and the Packt publisher website. Learn various design patterns and best practices in Spring 5 and use them to solve common design problems. You can use the author discount code “AUTHDIS40“ to purchase this book.

Need for Design Patterns

With the emerging needs of technology and the growth of the IT industry, typical software development practice, which required the completion of the entire software before testing, has also evolved. To avoid reverting to the development stage after completion, a practice of testing during the development phase was introduced. It can be used to identify error conditions and problems in the code that may not otherwise be apparent. The resulting modules are already tested and are less error-prone.

Designing a template that can be reused across multiple codebases saves time and is easy to understand for developers with prior experience working with it. The templates are code- and problem-independent and do not need to be re-specified by coders to deal with each new problem.

Types of Design Patterns

Design patterns are classified into four main categories, and the individual design patterns in these categories make up a total of 23 design patterns. The four main categories are:

  1. Creational Pattern
  2. Structural Pattern
  3. Behavioral Pattern
  4. J2EE Pattern

Creational Patterns

Creational patterns are mostly concerned with the manner of creating class instances. They are further characterized as class-creation and object-creation patterns. Object creation, or instantiation, is done implicitly using design patterns rather than directly. Thus, for a given use case, there is flexibility in how objects are created.

  • Abstract Factory
    In this pattern, a factory of related objects is created by an interface without specification of the class name. The factory passes the objects by following the Factory Pattern.
  • Builder
    This pattern is used for a stage by stage creation of a complex object by combining simple objects. The final object creation depends on the stages of the creative process but is independent of other objects.
  • Factory Method
    This pattern is employed mostly during development in Java. It provides implicit object instantiation through common interfaces.
  • Object Pool
    Object pooling is used to reduce object-creation cost when that cost is high for a certain process, and thus improves performance. It employs object caching: objects are simply retrieved from the cache pool instead of having to be created. The number of objects in the pool can be restricted to keep it from growing continually.
  • Prototype
    In Prototype patterns, object duplication is performed while performance is monitored. A prototype interface pattern is present to produce a copy of an object. It is used to restrict memory/database operations by keeping modification to a minimum using object copies.
  • Singleton
    This pattern involves the presence of only one class and restricts object creation to a single object. The presence of a single object removes the need for repeated instantiation in order to access it.
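A minimal Java sketch of the Singleton pattern (the class name is illustrative): the private constructor prevents outside instantiation, and a static accessor always hands back the same instance.

```java
public class Registry {
    // Eagerly created single instance, shared by all callers:
    private static final Registry INSTANCE = new Registry();

    // Private constructor prevents any other instantiation:
    private Registry() {}

    public static Registry getInstance() {
        return INSTANCE;
    }

    public static void main(String[] args) {
        // Both calls yield the very same object:
        System.out.println(Registry.getInstance() == Registry.getInstance()); // true
    }
}
```

In multithreaded code, eager initialization like this (or an enum singleton) avoids the pitfalls of lazy double-checked locking.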

Structural Patterns

Structural Patterns deal with the composition of classes and objects. Inheritance is employed for interface composition and methods for addition of new functionalities are introduced by object composition. A better understanding of the entity relationship is established. Abilities of independent interfaces are combined in structural patterns.

  • Adapter
    To link two interfaces that are not compatible and utilize their functionalities, Adapter pattern is used. It is used to provide a new interface covering for any existing class.
  • Bridge
    In Bridge Pattern, there is a structural alteration in the main and interface implementer classes without having any effect on each other. These two classes are made independent of each other and are only connected by using the bridge which is an interface.
  • Composite
    The Composite pattern is used to group objects together as one object. The objects are composed in a tree structure, representing both individual tree nodes and the hierarchy. Objects belonging to the same group are modified using this pattern.
  • Decorator
    Decorator pattern restricts the alteration of object structure while a new functionality is added to it. The initial class remains unaltered while a decorator class wraps around it and provides extra capabilities.
  • Façade
    Façade provides clients with access to the system but conceals the working of the system and its complexities. The pattern creates one class consisting of user functions and delegates provide calling facilities to the classes belonging to the systems.
  • Flyweight
    Flyweight pattern is used to reduce memory usage and improve performance by cutting on object creation. The pattern looks for similar objects that already exist to reuse it rather than creating new ones that are similar.
  • Private Class Data
    Some class attributes may be exposed unnecessarily and are thus prone to corruption. To prevent that, attributes may be allowed to be set only once, during construction, after which they become private, protecting the data.
  • Proxy
    It is used to create objects that may represent functions of other classes or objects and the interface is used to access these functionalities.
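To make the structural idea concrete, here is a minimal, hypothetical Adapter sketch (all names are invented for illustration): a legacy class with an incompatible method is wrapped so clients can use it through the interface they expect.

```java
// Existing class whose interface the client cannot use directly:
class LegacyPrinter {
    String printUpperCase(String text) { return text.toUpperCase(); }
}

// Target interface the client expects:
interface Printer {
    String print(String text);
}

// The adapter wraps the legacy class and translates calls:
class PrinterAdapter implements Printer {
    private final LegacyPrinter legacy = new LegacyPrinter();

    public String print(String text) {
        return legacy.printUpperCase(text);
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        Printer p = new PrinterAdapter(); // client only sees Printer
        System.out.println(p.print("hello")); // HELLO
    }
}
```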

Behavioral patterns

Behavioral patterns deal with the communication between class objects. They are used to recognize already-existing communication patterns and may be able to manipulate them.

  • Chain of responsibility
    A chain of objects is created to deal with the request so that no request goes back unfulfilled.
  • Command
    The Command pattern deals with a request by hiding it inside an object as a command, which is sent to an invoker object that then passes it to an appropriate object able to fulfill the request.
  • Interpreter
    Interpreter pattern is used for language or expression evaluation by creating an interface that tells the context for interpretation.
  • Iterator
    The Iterator pattern is used to provide sequential access to a number of elements inside a collection object without exposing its internal representation.
  • Mediator
    Mediator pattern provides easy communication through its mediator class that allows communication for several classes.
  • Memento
    Memento pattern involves the working of three classes Memento, CareTaker, and Originator. Memento holds the restorable state of the object. Originator’s job is the creation and storing of state and CareTaker’s job is the restoration of memento states.
  • Null Object
    Null Object is used instead of specifying a Null value and is used to represent a particular operation that does nothing when created. It is basically a check for Null value without the presence of the value.
  • Observer
    A One-to-Many relationship calls for the need of Observer pattern to check the relative dependencies of objects.
  • State
    In State pattern, the behavior of a class varies with its state and is thus represented by the context object.
  • Strategy
    Strategy pattern deals with the change in class behavior at runtime. The objects consist of strategies and the context object judges the behavior at runtime of each strategy.
  • Template method
    It is used with components having similarity where a template of the code may be implemented to test both the components. The code can be changed with minor alterations.
  • Visitor
    A Visitor performs a set of operations on an element class and changes its behavior of execution. Thus the variance in the behavior of element class is dependent on the change in visitor class.
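As a concrete illustration of the behavioral category, here is a minimal Observer sketch (all names are illustrative): one subject pushes state changes to its registered observers, the one-to-many relationship described above.

```java
import java.util.ArrayList;
import java.util.List;

// Each observer is notified whenever the subject's state changes:
interface Observer {
    void update(int newValue);
}

class Subject {
    private final List<Observer> observers = new ArrayList<>();

    void addObserver(Observer o) { observers.add(o); }

    void setValue(int v) {
        // Push the change to every registered observer:
        for (Observer o : observers) o.update(v);
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        Subject subject = new Subject();
        subject.addObserver(v -> System.out.println("observer A saw " + v));
        subject.addObserver(v -> System.out.println("observer B saw " + v));
        subject.setValue(42); // both observers are notified
    }
}
```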

J2EE Patterns

J2EE stands for Java 2 Enterprise Edition, currently known as Java Enterprise Edition (Java EE). It consists of many APIs that provide software developers with the capabilities to write server-side code. The J2EE patterns, as offered by the Sun Java Center, deal with the tiers of an enterprise application and are specifically concerned with the following layers.

  • Presentation Layer
  • Business Layer
  • Integration Layer

Core J2EE Pattern Catalog

Presentation Tier

  • Intercepting Filter
    It is used to intercept and manipulate requests and responses both before and after the request is processed.
  • Context Object
    A Context Object keeps protocol-specific system information from being used outside of its context.
  • Front Controller
    A centralized access point allows for non-duplication of the control logic needed to handle a request. The Front Controller acts as the initial point for handling such requests.
  • Application Controller
    It provides support for action reuse and view-management code. The code is made more readable, maintainable, and modular. Request handling is also improved and made more extensible.
  • View Helper
    It is used to provide a different view, hiding the logic present in the code. The logic and the view are completely independent, which helps both developers and designers.
  • Composite View
    Small sub-views can be created using the Composite View. These sub-views can be integrated to create a singular view.
  • Dispatcher View
    To support a small amount of multitasking, the Dispatcher View is used. It handles requests and generates responses while business processing is taking place.
  • Service to Worker
    It is used to handle requests and process the business transaction, after which control is transferred to the View.

Business Tier

  • Business Delegate
    The Business Delegate pattern is one of the Java EE design patterns. It is used to decouple, or reduce the coupling between, the presentation tier and business services.
  • Service Locator
    The Service Locator design pattern is an important part of software development. Looking up a service is one of its core features, and a robust abstraction layer performs this function. The pattern uses a central registry called the Service Locator.
  • Session Facade
    The Session Facade pattern’s core application is the development of enterprise apps. You can also call it a logical extension of the GoF designs. The pattern encapsulates the interactions happening between low-level components, such as entity EJBs.
  • Business Object
    Object-oriented programming makes use of the Business Object, which represents parts of a business. A business object can represent things like an event, a person, a business process, a place, or a concept, and can exist in forms such as a product, an invoice, or the details of a particular part of a transaction.
  • Composite Entity
    It is one of the Java EE software design patterns. The Composite Entity pattern models, manages, and represents a set of interrelated persistent objects rather than representing them as separate fine-grained entity beans. Composite entity beans can represent a graph of objects.
  • Transfer Object
    It is one of the Java EE design patterns. We need a Transfer Object when we need to pass data across various attributes in a packet to the server. Value Object is another name for the Transfer Object. A transfer object is just a POJO class with getter and setter methods.

Integration Tier

  • Data Access Object
    A Data Access Object is an object responsible for providing an abstract interface for communication with a specific form of database.
  • Service Activator
    The Service Activator design pattern is one of the Java EE patterns. It is also a Spring Integration component, responsible for triggering or activating a service object or bean managed by Spring. A service activator searches through the message channel looking for messages.
  • Web Service Broker
    The Web Service Broker uses web protocols and XML. We can use this pattern to expose and broker services. Consider a circumstance where multiple organizations line up to request information from a number of service providers.

Happy Design Patterns Learning with us!!!

5 Hidden Secrets in Java

As programming languages grow, it is inevitable that hidden features begin to appear and constructs that were never intended by the founders begin to creep into common usage. Some of these features rear their heads as idioms and become accepted parlance in the language, while others become anti-patterns and are relegated to the dark corners of the language community. In this article, we will take a look at five Java secrets that are often overlooked by the large population of Java developers (some for good reason). With each description, we will look at the use cases and rationale that brought each feature into existence, along with some examples of when it may be appropriate to use these features.

The reader should note that these features are not truly hidden in the language, but they are often unused in daily programming. While some may be very useful at appropriate times, others are almost always a poor idea and are shown in this article to pique the interest of the reader (and possibly give him or her a good laugh). The reader should use his or her judgment when deciding when to use the features described in this article: Just because it can be done does not mean it should.

1. Annotation Implementation

Since Java Development Kit (JDK) 5, annotations have been an integral part of many Java applications and frameworks. In the vast majority of cases, annotations are applied to language constructs, such as classes, fields, and methods, but there is another case in which annotations can be applied: as implementable interfaces. For example, suppose we have the following annotation definition:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Test {
    String name();
}

Normally, we would apply this annotation to a method, as in the following:

public class MyTestFixure {

    @Test
    public void givenFooWhenBarThenBaz() {
        // ...
    }
}

We can then process this annotation, as described in Creating Annotations in Java. If we also wanted to create an interface that allows for tests to be created as objects, we would have to create a new interface, naming it something other than Test:

public interface TestInstance {
    public String getName();
}

Then we could instantiate a TestInstance object:

public class FooTestInstance {

public String getName() {

return “Foo”;

}

}

TestInstance myTest = new FooTestInstance();

While our annotation and interface are nearly identical, with very noticeable duplication, there does not appear to be a way to merge these two constructs. Fortunately, looks are deceiving and there is a technique for merging these two constructs: Implement the annotation:

public class FooTest implements Test {

    @Override
    public String name() {
        return "Foo";
    }

    @Override
    public Class<? extends Annotation> annotationType() {
        return Test.class;
    }
}

Note that we must implement the annotationType method and return the type of the annotation as well, since this is implicitly part of the Annotation interface. Although in nearly every case implementing an annotation is not a sound design decision (the Java compiler will show a warning when a class implements an annotation type), it can be useful in a select few circumstances, such as within annotation-driven frameworks.
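The technique can be made concrete with a small self-contained sketch (the describe method and the class names are illustrative, not from any real framework; the annotation type is nested in the class only for compactness). A framework method that accepts the annotation type works equally well with a hand-implemented instance:

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class AnnotationImplDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Test {
        String name();
    }

    // A class can implement the annotation type directly
    static class FooTest implements Test {
        @Override
        public String name() {
            return "Foo";
        }

        // Implicitly required by the Annotation interface
        @Override
        public Class<? extends Annotation> annotationType() {
            return Test.class;
        }
    }

    // A hypothetical framework method that only knows the annotation type
    static String describe(Test test) {
        return "Running test: " + test.name();
    }

    public static void main(String[] args) {
        System.out.println(describe(new FooTest())); // Running test: Foo
    }
}
```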

2. Instance Initialization

In Java, as with most object-oriented programming languages, objects are exclusively instantiated using a constructor (with some critical exceptions, such as Java object deserialization). Even when we create static factory methods to create objects, we are simply wrapping a call to the constructor of an object to instantiate it. For example:

public class Foo {

    private final String name;

    private Foo(String name) {
        this.name = name;
    }

    public static Foo withName(String name) {
        return new Foo(name);
    }
}

Foo foo = Foo.withName("Bar");

Therefore, when we wish to initialize an object, we consolidate the initialization logic into the constructor of the object. For example, we set the name field of the Foo class within its parameterized constructor. While it may appear to be a sound assumption that all of the initialization logic is found in the constructor or set of constructors for a class, this is not the case in Java. Instead, we can also use instance initialization to execute code when an object is created:

public class Foo {

    {
        System.out.println("Foo:instance 1");
    }

    public Foo() {
        System.out.println("Foo:constructor");
    }
}

Instance initializers are specified by adding initialization logic within a set of braces within the definition of a class. When the object is instantiated, its instance initializers are called first, followed by its constructors. Note that more than one instance initializer may be specified, in which case, each is called in the order it appears within the class definition. Apart from instance initializers, we can also create static initializers, which are executed when the class is loaded into memory. To create a static initializer, we simply prefix an initializer with the keyword static:

public class Foo {

    {
        System.out.println("Foo:instance 1");
    }

    static {
        System.out.println("Foo:static 1");
    }

    public Foo() {
        System.out.println("Foo:constructor");
    }
}

When all three initialization techniques (constructors, instance initializers, and static initializers) are present in a class, static initializers are always executed first (when the class is loaded into memory) in the order they are declared, followed by instance initializers in the order they are declared, and lastly by constructors. When a superclass is introduced, the order of execution changes slightly:

1. Static initializers of superclass, in order of their declaration

2. Static initializers of subclass, in order of their declaration

3. Instance initializers of superclass, in order of their declaration

4. Constructor of superclass

5. Instance initializers of subclass, in order of their declaration

6. Constructor of subclass

For example, we can create the following application:

public abstract class Bar {

    private String name;

    static {
        System.out.println("Bar:static 1");
    }

    {
        System.out.println("Bar:instance 1");
    }

    static {
        System.out.println("Bar:static 2");
    }

    public Bar() {
        System.out.println("Bar:constructor");
    }

    {
        System.out.println("Bar:instance 2");
    }

    public Bar(String name) {
        this.name = name;
        System.out.println("Bar:name-constructor");
    }
}

public class Foo extends Bar {

    static {
        System.out.println("Foo:static 1");
    }

    {
        System.out.println("Foo:instance 1");
    }

    static {
        System.out.println("Foo:static 2");
    }

    public Foo() {
        System.out.println("Foo:constructor");
    }

    public Foo(String name) {
        super(name);
        System.out.println("Foo:name-constructor");
    }

    {
        System.out.println("Foo:instance 2");
    }

    public static void main(String... args) {
        new Foo();
        System.out.println();
        new Foo("Baz");
    }
}

If we execute this code, we receive the following output:

Bar:static 1
Bar:static 2
Foo:static 1
Foo:static 2
Bar:instance 1
Bar:instance 2
Bar:constructor
Foo:instance 1
Foo:instance 2
Foo:constructor

Bar:instance 1
Bar:instance 2
Bar:name-constructor
Foo:instance 1
Foo:instance 2
Foo:name-constructor

Note that the static initializers were only executed once, even though two Foo objects were created. While instance and static initializers can be useful, initialization logic should generally be placed in constructors, and methods (or static factory methods) should be used when complex logic is required to initialize the state of an object.
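One case where an instance initializer can be reasonable is sharing setup across every constructor of a class. A minimal sketch (the AuditedAccount class and its field names are hypothetical examples, not from the article above):

```java
import java.util.ArrayList;
import java.util.List;

public class AuditedAccount {

    private final List<String> log = new ArrayList<>();
    private final String owner;

    // Instance initializer: runs before every constructor body,
    // so both constructors share this setup without duplicating it
    {
        log.add("account created");
    }

    public AuditedAccount() {
        this.owner = "unknown";
    }

    public AuditedAccount(String owner) {
        this.owner = owner;
    }

    public List<String> getLog() {
        return log;
    }

    public String getOwner() {
        return owner;
    }

    public static void main(String[] args) {
        System.out.println(new AuditedAccount("Ada").getLog()); // [account created]
    }
}
```

The same effect could also be achieved by chaining constructors with this(...), which is often the clearer choice.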

3. Double-Brace Initialization

Many programming languages include some syntactic mechanism to quickly and concisely create a list or map (or dictionary) without using verbose boilerplate code. For example, C++ includes brace initialization which allows developers to quickly create a list of enumerated values, or even initialize entire objects if the constructor for the object supports this functionality. Unfortunately, prior to JDK 9, no such feature was included (we will touch on this inclusion shortly). In order to naively create a list of objects, we would do the following:

List<Integer> myInts = new ArrayList<>();
myInts.add(1);
myInts.add(2);
myInts.add(3);

While this accomplishes our goal of creating a new list initialized with three values, it is overly verbose, requiring the developer to repeat the name of the list variable for each addition. In order to shorten this code, we can use double-brace initialization to add the same three elements:

List<Integer> myInts = new ArrayList<Integer>() {{
    add(1);
    add(2);
    add(3);
}};

Double-brace initialization–which earns its name from the set of two open and closed curly braces–is actually a composite of multiple syntactic elements. First, we create an anonymous inner class that extends the ArrayList class. Since ArrayList has no abstract methods, we can create an empty body for the anonymous implementation:

List<Integer> myInts = new ArrayList<Integer>() {};

Using this code, we essentially create an anonymous subclass of ArrayList that is exactly the same as the original ArrayList. One of the major differences is that our inner class has an implicit reference to the containing class (in the form of a captured this variable) since we are creating a non-static inner class. This allows us to write some interesting–if not convoluted–logic, such as adding the captured this variable to the anonymous, double-brace initialized inner class:

public class Foo {

    public List<Foo> getListWithMeIncluded() {
        return new ArrayList<Foo>() {{
            add(Foo.this);
        }};
    }

    public static void main(String... args) {
        Foo foo = new Foo();
        List<Foo> fooList = foo.getListWithMeIncluded();
        System.out.println(foo.equals(fooList.get(0)));
    }
}

If this inner class were statically defined, we would not have access to Foo.this. For example, the following code, which statically creates the named FooArrayList inner class, does not have access to the Foo.this reference and is therefore not compilable:

public class Foo {

    public List<Foo> getListWithMeIncluded() {
        return new FooArrayList();
    }

    private static class FooArrayList extends ArrayList<Foo> {{
        add(Foo.this); // compile-time error: no enclosing instance in scope
    }}
}

Resuming the construction of our double-brace initialized ArrayList, once we have created the non-static inner class, we then use instance initialization, as we saw above, to execute the addition of the three initial elements when the anonymous inner class is instantiated. Since the anonymous inner class is immediately instantiated and only one object of it ever exists, we have essentially created a non-static inner singleton object that adds the three initial elements when it is created. This can be made more obvious if we separate the pair of braces, where one brace clearly constitutes the definition of the anonymous inner class and the other denotes the start of the instance initialization logic:

List<Integer> myInts = new ArrayList<Integer>() {
    {
        add(1);
        add(2);
        add(3);
    }
};

While this trick can be useful, JDK 9 (JEP 269) has supplanted the utility of this trick with a set of static factory methods for List (as well as many of the other collection types). For example, we could have created the List above using these static factory methods, as illustrated in the following listing:

List<Integer> myInts = List.of(1, 2, 3);

This static factory technique is desirable for two main reasons: (1) No anonymous inner class is created and (2) the reduction in boilerplate code (noise) required to create the List. The caveat to creating a List in this manner is that the resulting List is immutable, and therefore cannot be modified once it has been created. In order to create a mutable List with the desired initial elements, we are stuck with either using the naive technique or double-brace initialization.
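One further option worth noting for readers who want a mutable list with initial contents: pass the immutable result of List.of to the ArrayList copy constructor, which yields an ordinary mutable ArrayList without an anonymous subclass:

```java
import java.util.ArrayList;
import java.util.List;

public class MutableListDemo {
    public static void main(String[] args) {
        // Copy the immutable list into a plain, mutable ArrayList
        List<Integer> myInts = new ArrayList<>(List.of(1, 2, 3));

        // Mutation is allowed; calling add on List.of(1, 2, 3) directly
        // would throw UnsupportedOperationException
        myInts.add(4);
        System.out.println(myInts); // [1, 2, 3, 4]
    }
}
```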

Note that the naive initialization, double-brace initialization, and the JDK 9 static factory methods are not just available for List.  They are also available for Set and Map objects, as illustrated in the following snippet:

// Naive initialization
Map<String, Integer> myMap = new HashMap<>();
myMap.put("Foo", 10);
myMap.put("Bar", 15);

// Double-brace initialization
Map<String, Integer> myMap = new HashMap<String, Integer>() {{
    put("Foo", 10);
    put("Bar", 15);
}};

// Static factory initialization
Map<String, Integer> myMap = Map.of("Foo", 10, "Bar", 15);

It is important to consider the nature of double-brace initialization before deciding to use it. While it can improve the readability of code, it carries with it some implicit side effects, such as creating an extra anonymous class and retaining a reference to the enclosing instance.
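These side effects can be observed directly. In the sketch below, the double-brace list compares equal to a plain list by contents, yet its runtime class is an anonymous subclass of ArrayList rather than ArrayList itself:

```java
import java.util.ArrayList;
import java.util.List;

public class DoubleBraceSideEffects {
    public static void main(String[] args) {
        List<Integer> plain = new ArrayList<>(List.of(1, 2, 3));
        List<Integer> doubleBrace = new ArrayList<Integer>() {{
            add(1);
            add(2);
            add(3);
        }};

        // The contents compare equal...
        System.out.println(plain.equals(doubleBrace)); // true

        // ...but the double-brace list is an anonymous subclass of ArrayList
        System.out.println(doubleBrace.getClass() == ArrayList.class); // false
        System.out.println(doubleBrace.getClass().isAnonymousClass()); // true
    }
}
```

Besides the extra class on disk, this subclassing can surprise serialization frameworks that inspect the runtime class of a collection.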

4. Executable Comments

Comments are an essential part of almost every program and the main benefit of comments is that they are not executed. This is made even more evident when we comment out a line of code within our program: We want to retain the code in our application but we do not want it to be executed. For example, the following program results in 5 being printed to standard output:

public static void main(String[] args) {
    int value = 5;
    // value = 8;
    System.out.println(value);
}

While it is a fundamental assumption that comments are never executed, it is not completely true. For example, what does the following snippet print to standard output?

public static void main(String[] args) {
    int value = 5;
    // \u000dvalue = 8;
    System.out.println(value);
}

A good guess would be 5 again, but if we run the above code, we see 8 printed to standard output. The reason behind this seeming bug is the Unicode escape \u000d: it represents a carriage return, and Java source code is consumed by the compiler as Unicode-formatted text. The escape therefore pushes the assignment value = 8 onto the line directly following the comment, ensuring that it is executed. This means that the above snippet is effectively equivalent to the following:

public static void main(String[] args) {
    int value = 5;
    //
    value = 8;
    System.out.println(value);
}

Although this appears to be a bug in Java, it is actually a conscious inclusion in the language. The original goal of Java was to create a platform independent language (hence the creation of the Java Virtual Machine, or JVM) and interoperability of the source code is a key aspect of this goal. By allowing Java source code to contain Unicode characters, we can include non-Latin characters in a universal manner. This ensures that code written in one region of the world (that may include non-Latin characters, such as in comments) can be executed in any other. For more information, see Section 3.3 of the Java Language Specification, or JLS.
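Because Unicode escapes are translated before the source is tokenized, they can appear anywhere in a program, even inside identifiers. A small illustrative sketch (the class and method names are hypothetical):

```java
public class UnicodeEscapeDemo {

    static int unicodeVariable() {
        // \u0061 is the Unicode escape for the letter 'a'; since escapes are
        // translated before tokenizing, this declares a variable named 'a'
        int \u0061 = 42;
        return a; // reads the same variable under its decoded name
    }

    public static void main(String[] args) {
        System.out.println(unicodeVariable()); // prints 42
    }
}
```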

We can take this to the extreme and even write an entire application in Unicode. For example, what does the following program do (source code obtained from Java: Executing code in comments?!)?

\u0070\u0075\u0062\u006c\u0069\u0063\u0020\u0020\u0020\u0020
\u0063\u006c\u0061\u0073\u0073\u0020\u0055\u0067\u006c\u0079
\u007b\u0070\u0075\u0062\u006c\u0069\u0063\u0020\u0020\u0020
\u0020\u0020\u0020\u0020\u0073\u0074\u0061\u0074\u0069\u0063
\u0076\u006f\u0069\u0064\u0020\u006d\u0061\u0069\u006e\u0028
\u0053\u0074\u0072\u0069\u006e\u0067\u005b\u005d\u0020\u0020
\u0020\u0020\u0020\u0020\u0061\u0072\u0067\u0073\u0029\u007b
\u0053\u0079\u0073\u0074\u0065\u006d\u002e\u006f\u0075\u0074
\u002e\u0070\u0072\u0069\u006e\u0074\u006c\u006e\u0028\u0020
\u0022\u0048\u0065\u006c\u006c\u006f\u0020\u0077\u0022\u002b
\u0022\u006f\u0072\u006c\u0064\u0022\u0029\u003b\u007d\u007d

If the above is placed in a file named Ugly.java and executed, it prints Hello world to standard output. If we convert these escaped Unicode characters into American Standard Code for Information Interchange (ASCII) characters, we obtain the following program:

public
class Ugly
{public
static
void main(
String[]
args){
System.out
.println(
"Hello w"+
"orld");}}

Although it is important to know that Unicode characters can be included in Java source code, it is highly suggested that they be avoided unless required (for example, to include non-Latin characters in comments). If they are required, be sure not to include characters, such as carriage return, that change the expected behavior of the source code.

5. Enum Interface Implementation

One of the limitations of enumerations (enums) compared to classes in Java is that enums cannot extend another class or enum. For example, it is not possible to execute the following:

public class Speaker {
    public void speak() {
        System.out.println("Hi");
    }
}

public enum Person extends Speaker { // compile-time error: enums cannot extend a class
    JOE("Joseph"),
    JIM("James");

    private final String name;

    private Person(String name) {
        this.name = name;
    }
}

Person.JOE.speak();

We can, however, have our enum implement an interface and provide an implementation for its abstract methods as follows:

public interface Speaker {
    public void speak();
}

public enum Person implements Speaker {
    JOE("Joseph"),
    JIM("James");

    private final String name;

    private Person(String name) {
        this.name = name;
    }

    @Override
    public void speak() {
        System.out.println("Hi");
    }
}

Person.JOE.speak();

We can now also use an instance of Person anywhere a Speaker object is required. What's more, we can also provide an implementation of the abstract methods of an interface on a per-constant basis (using what are called constant-specific methods):

public interface Speaker {
    public void speak();
}

public enum Person implements Speaker {
    JOE("Joseph") {
        public void speak() { System.out.println("Hi, my name is Joseph"); }
    },
    JIM("James") {
        public void speak() { System.out.println("Hey, what's up?"); }
    };

    private final String name;

    private Person(String name) {
        this.name = name;
    }

    @Override
    public void speak() {
        System.out.println("Hi");
    }
}

Person.JOE.speak();

Unlike some of the other secrets in this article, this technique should be encouraged where appropriate. For example, if an enum constant, such as JOE or JIM, can be used in place of an interface type, such as Speaker, the enum that defines the constant should implement the interface type. For more information, see Item 38 (pp. 176-9) of Effective Java, 3rd Edition.
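To make the recommendation concrete, the following sketch (the class and method names are illustrative) passes an enum constant anywhere the interface type is expected:

```java
public class EnumSpeakerDemo {

    interface Speaker {
        String speak();
    }

    enum Person implements Speaker {
        JOE("Joseph"),
        JIM("James");

        private final String name;

        Person(String name) {
            this.name = name;
        }

        @Override
        public String speak() {
            return "Hi, I am " + name;
        }
    }

    // Accepts any Speaker; a Person constant works transparently
    static String greet(Speaker speaker) {
        return speaker.speak();
    }

    public static void main(String[] args) {
        System.out.println(greet(Person.JOE)); // Hi, I am Joseph
    }
}
```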

Conclusion

In this article, we looked at five hidden secrets in Java, namely: (1) annotations can be implemented, (2) instance initialization can be used to configure an object upon instantiation, (3) double-brace initialization can be used to execute instructions when creating an anonymous inner class, (4) comments can sometimes be executed, and (5) enums can implement interfaces. While some of these features have their appropriate uses, some of them should be avoided (e.g., creating executable comments). When deciding to use these secrets, be sure to obey the following rule: Just because something can be done does not mean that it should.