REST vs WebSocket Comparison and Benchmarks

One of the common questions asked during my #JavaEE7 presentations around the world is: how do WebSockets compare with REST?

First of all, REST is an architectural style, so what people really mean is RESTful HTTP; an architecture cannot be compared with a technology. But the terms are used so loosely that they are commonly substituted for each other.

Let's start with a one-line definition of WebSocket:

Bi-directional and full-duplex communication channel over a single TCP connection.

WebSocket solves a few issues with REST, or HTTP in general:

  • Bi-directional: HTTP is a uni-directional protocol where a request is always initiated by the client, the server processes it and returns a response, and the client consumes it. WebSocket is a bi-directional protocol with no pre-defined message patterns such as request/response. Either the client or the server can send a message to the other party.
  • Full-duplex: HTTP allows the request message to go from client to server, and then the server sends a response message to the client. At a given time, either the client is talking to the server or the server is talking to the client. WebSocket allows the client and server to talk independently of each other.
  • Single TCP connection: Typically, a new TCP connection is initiated for an HTTP request and terminated after the response is received. A new TCP connection needs to be established for each HTTP request/response. With WebSocket, the HTTP connection is upgraded using the standard HTTP Upgrade mechanism, and the client and server communicate over that same TCP connection for the lifecycle of the WebSocket connection.
  • Lean protocol: HTTP is a chatty protocol. Here is the set of HTTP headers sent in the request message by the Advanced REST Client Chrome extension.

    And the response headers received from WildFly 8:

    These are 663 characters exchanged for a trivial “Hello World” echo. The source code for this simple application is here. For WebSocket, after the initial HTTP handshake, the data is minimally framed with 2 bytes.
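To make the overhead concrete, here is a back-of-the-envelope estimate. The 663-character header figure is the measurement above; the 2-byte frame overhead and the handshake size are simplifying assumptions (real WebSocket frames can carry a larger header for big or masked payloads):

```java
public class OverheadEstimate {

    // Approximate bytes exchanged for N REST echo calls of a given payload size:
    // every call pays the full header overhead, and the payload travels twice (request + response).
    static long restBytes(int messages, int payload) {
        final int headerOverhead = 663; // measured request + response headers, per call
        return (long) messages * (headerOverhead + 2L * payload);
    }

    // Approximate bytes for the same echo over WebSocket:
    // one header-sized handshake, then 2 bytes of framing per message, each way.
    static long webSocketBytes(int messages, int payload) {
        final int handshake = 663;     // one-time HTTP Upgrade, assumed roughly header-sized
        final int frameOverhead = 2;   // minimal framing per message
        return handshake + (long) messages * 2L * (frameOverhead + payload);
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 1000, 100_000}) {
            System.out.printf("N=%d: REST=%d bytes, WebSocket=%d bytes%n",
                    n, restBytes(n, 16), webSocketBytes(n, 16));
        }
    }
}
```

Even this crude model shows why the gap widens with the number of messages: REST pays the header tax on every exchange, WebSocket pays it once.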

Let's take a look at a micro-benchmark that shows the overhead added by REST over a WebSocket echo endpoint. The payload is just a simple text array populated with ‘x’. The source code for the benchmark is available here.

The first graph shows the time (in milliseconds) taken to process N messages for a constant payload size.


Here is the raw data that feeds this graph:


This graph and the table show that the REST overhead increases with the number of messages: that many TCP connections need to be initiated and terminated, and that many sets of HTTP headers need to be sent and received. The last column shows the multiplication factor for the amount of time needed to fulfill the requests over REST.

The second graph shows the time taken to process a fixed number of messages by varying the payload size.


Here is the raw data that feeds this graph:


This graph shows that the incremental cost of processing the request/response for a REST endpoint is minimal and most of the time is spent in connection initiation/termination and honoring HTTP semantics.

These benchmarks were generated on WildFly 8 and the source code for the benchmark is available here.

Together, the graphs show that WebSocket is a more efficient protocol than RESTful HTTP. But does that mean it will replace RESTful HTTP?

The answer to that, at least in the short term, is NO!

  • WebSocket is a low-level protocol; think of it as a socket on the web. Everything, including a simple request/response design pattern, how to create/update/delete resources, status codes, etc., needs to be built on top of it. All of these are well defined for HTTP.
  • WebSocket is a stateful protocol, whereas HTTP is a stateless protocol. WebSocket connections are known to scale vertically on a single server, whereas HTTP can scale horizontally. There are some proprietary solutions for WebSocket horizontal scaling, but they are not standards-based.
  • HTTP comes with a lot of other goodies such as caching, routing, multiplexing, gzip compression, and a lot more. All of these would need to be defined on top of WebSocket.
  • How will Search Engine Optimization (SEO) work with WebSocket? It works very well for HTTP URLs.
  • Proxies, DNS servers, and firewalls are not yet fully aware of WebSocket traffic. They may allow port 80 but might restrict traffic by snooping on it first.
  • Security with WebSocket is an all-or-nothing approach.

This blog does not provide a conclusion because it's meant to trigger thoughts!

And if you want a complete introduction to JSR 356 WebSocket API in Java EE 7, then watch a recently concluded webinar at vJUG:

So, what do you think?

Let’s talk about connection pools.

I claim that:

Default settings of most popular connection pools are poor!

For you, it means:

Go review your connection pool settings.

You might have a problem if you rely on default settings: memory leaks and an unresponsive application (even when the load is not high at all).

Below I will show some of the most important settings and my recommendations for how they should really be configured.

What is a connection pool?

A plain web application that needs to read or write data from a database does it like this:

  1. Open a connection to DB        // takes N ms
  2. read/write data
  3. close the connection

(by the way, in good old CGI applications this was the only possible approach)

This approach is perfectly fine in many cases, and you probably don't need anything more. But it has some disadvantages for high-performance systems:

  • Step 1 can take some time. Probably tens or hundreds of milliseconds (it depends, of course).
  • It’s easy to forget Step 3 (close the connection) which causes a connection leak (causing memory leaks and other problems).

A new hero

That’s why another approach was born: the application may open a bunch of connections in advance and hold them open all the time. This bunch of open connections is called a connection pool. Any operation then looks like this:

  1. Take a DB connection from pool        // blazingly fast in most cases
  2. read/write data
  3. return the connection to the pool

Seems cool. But new power always means new problems.
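The checkout/return cycle above can be sketched with a toy pool built on a JDK BlockingQueue. This is a simplification for illustration only — real pools like c3p0 or HikariCP add connection validation, pool growth, and leak detection on top of this idea:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class ToyPool {
    private final BlockingQueue<String> connections;

    public ToyPool(int size) {
        connections = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            connections.offer("connection-" + i); // stand-ins for pre-opened DB connections
        }
    }

    // Step 1: take a connection from the pool; returns null if the checkout times out
    public String take(long timeoutMs) {
        try {
            return connections.poll(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    // Step 3: return the connection to the pool
    public void release(String connection) {
        connections.offer(connection);
    }
}
```

Note how the checkout timeout is baked into `take()`: when the pool is exhausted, the caller fails fast instead of blocking forever — exactly the behavior the settings below are about.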

… and new problems

When using a connection pool, we need to answer (at least) the following questions:

  • How many connections should we keep open?
  • How long should they be kept?
  • What if they turn out to be broken?
  • What if the application needs more connections than the pool currently has?
  • What if somebody forgets to return a connection to the pool?

To answer these questions, connection pools have a lot of settings. And their default values are mostly bad. Intrigued? Let me show.

Basic settings

I will consider the two most popular connection pools in the Java world: c3p0 and HikariCP.

The basic parameters, of course, are:

  • min size (minimum number of connections that should be open at any moment)
  • initial size (how many connections application opens at start)
  • max size (maximum number of connections in pool)

By the way, these are the only settings that have reasonable defaults. Here they are:

                 c3p0    HikariCP
  min size       3       10
  initial size   3       10
  max size       15      10

Let’s continue with more problematic settings.

Critical settings

checkoutTimeout

How long the application can wait to get a connection from the pool.

  • c3p0 setting: checkoutTimeout
  • HikariCP setting: connectionTimeout

Default values:

                    c3p0      HikariCP   I recommend
  checkout timeout  endless   30 s       1 ms

Both default values are just a disaster.

As I mentioned, in most cases getting a connection from the pool is blazingly fast. Except when the pool has no more open connections: then the pool needs to acquire a new connection (which usually takes less than a second). But if maxSize is reached, the pool cannot open a new connection and just waits until somebody returns one to the pool. And if the application has a connection leak (a bug that prevents connections from being returned), the pool will never get the connection back!

What then happens? 

In case of c3p0, we end up with all threads frozen in the following state:

"qtp1905485420-495 13e09-3211" #495 prio=5 os_prio=0 tid=0x00007f20e078d800 nid=0x10d7 in Object.wait() [0x00007f204bc79000]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable()
        - locked <0x00000000c3295ef8> (a com.mchange.v2.resourcepool.BasicResourcePool)
        at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource()
        at org.hibernate.jpa.internal.QueryImpl.getResultList()
        at domain.funds.FundsRepository.get()

It may seem that the HikariCP default of 30 seconds is a bit better. It isn't: it doesn't really help in high-performance applications. During those 30 seconds, a lot of new requests may come in, and all of them just freeze. The application will soon get an OutOfMemoryError. Any waiting just postpones the death of the application for a few seconds.

That’s why I recommend setting checkoutTimeout to the minimal possible value: 1 ms. Unfortunately, we cannot set it to 0, because 0 means endless waiting. The sooner we fail, the more chance we give the working threads to complete their job, and we can clearly inform the user that the application is currently overloaded and they should try again later.

testConnectionOnCheckout

Sometimes connections in the pool die. The database can close them on its own initiative, or a system administrator can simply unplug a network cable. That’s why the pool should monitor connection liveness.

The simplest setting for that is “testConnectionOnCheckout” in c3p0 (I haven’t found a similar setting in HikariCP; it seems to be always enabled).

Default values:

                            c3p0    HikariCP               I recommend
  testConnectionOnCheckout  false   true (always enabled)  true

Definitely, it should be enabled by default!

Otherwise, you will end up with lots of exceptions like this in the log:

org.hibernate.TransactionException: Unable to rollback against JDBC Connection
    at o.h.r.j.i.AbstractLogicalConnectionImplementor.rollback()
    at o.h.r.t.b.j.i.JdbcResourceLocalTransactionCoordinatorImpl$TransactionDriverControlImpl.rollback(

P.S. If you want even better performance, you may consider testing connections in the background instead of on checkout:

  • testConnectionOnCheckout=false
  • testConnectionOnCheckin=true
  • idleConnectionTestPeriod=10

preferredTestQuery

But how exactly should the pool test connections?

The problem is that it depends on the database.

By default, both pools test connections by executing

  • “connection.isValid()” (in case of JDBC4), or
  • “connection.getMetaData().getTables()” (in case of JDBC3)

This may be slow because “getTables()” retrieves meta-information about all tables every time. A recommended value is something like

  • “SELECT 1” (in case of MySql), or
  • “SELECT 1 FROM DUAL” (in case of Oracle) etc.

By executing this simple and fast query, the pool can check if a connection is still alive.

maxIdleTime

How long an unused connection may stay in the pool.

  • c3p0 setting: maxIdleTime
  • HikariCP setting: idleTimeout

Default values:

                 c3p0      HikariCP     I recommend
  max idle time  endless   10 minutes   1..10 minutes

It’s probably not a big deal, but every open connection

  • holds some resources inside database
  • prevents other systems from getting connections to the same database (every database has some limit of maximum possible number of connections)

That’s why it’s a good idea to close unused (idle) connections. I recommend setting this value to a finite period; several minutes is probably reasonable.

minPoolSize

How many connections the pool should always keep open (even if unused).

  • c3p0 setting: minPoolSize
  • HikariCP setting: minimumIdle

Default values:

                 c3p0   HikariCP        I recommend
  min pool size  3      max pool size   0…N

For the same reason, it’s probably a good idea to close unused connections. I would set this value to 0 or 1 in most cases. If some user unexpectedly decides to log in to your application at midnight, they will just wait a few more milliseconds. Not a big deal.

maxConnectionAge

How long a connection may live in the pool (no matter whether it’s idle or in use).

  • c3p0 setting: maxConnectionAge
  • HikariCP setting: maxLifetime

Default values:

                      c3p0      HikariCP     I recommend
  max connection age  endless   30 minutes   say, 30 minutes

Just in case, it’s probably a good idea to close connections from time to time; it may help to avoid some memory leaks.

A quote from HikariCP documentation:

“We strongly recommend setting this value, and it should be several seconds shorter than any database or infrastructure imposed connection time limit.”

unreturnedConnectionTimeout

One typical problem is a connection leak: some buggy code took a connection from the pool and didn’t return it. How can this be detected?

Fortunately, we have a good setting for this case:

  • c3p0 setting: unreturnedConnectionTimeout
  • HikariCP setting: leakDetectionThreshold

Default values:

                                 c3p0       HikariCP   I recommend
  unreturned connection timeout  disabled   disabled   5 minutes?

If any buggy code takes a connection and doesn’t return it within 5 minutes, the pool will forcibly reclaim the connection and log warnings like this:

[C3P0PooledConnectionPoolManager Logging the stack trace by which the overdue resource was checked-out.
java.lang.Exception: DEBUG STACK TRACE: Overdue resource check-out stack trace.
    at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource()
    at org.hibernate.loader.Loader.prepareQueryStatement(
    at domain.application.ApplicationReportSender.sendWeeklyReport(

It will help you find the guilty code.
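Pulling the recommendations above together, a HikariCP configuration could look roughly like this. The property names are real HikariCP settings; the values are this article's suggestions, not universal defaults, and one caveat applies: HikariCP refuses connectionTimeout values below 250 ms, so 250 is the closest it gets to the "fail fast" 1 ms idea:

```properties
connectionTimeout=250          # HikariCP's minimum; fail fast instead of queueing frozen threads
minimumIdle=1                  # don't keep a big warm pool "just in case"
maximumPoolSize=10
idleTimeout=300000             # close connections idle for 5 minutes
maxLifetime=1800000            # recycle every connection after 30 minutes
leakDetectionThreshold=300000  # warn if a connection is held for 5 minutes
```

The equivalent c3p0 settings (checkoutTimeout, minPoolSize, maxIdleTime, maxConnectionAge, unreturnedConnectionTimeout) can be tuned with the same intent.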


I gave an overview of some connection pool settings; there are more of them. I gave some advice that seems reasonable from my experience. But your application may have a different load, and your users may behave differently, so my advice may not fit you.

No problem. Don’t trust me. But please, don’t trust the defaults either.

Go check your pool settings!

Using the IdentityHashMap in Java

In this article, we’ll discuss IdentityHashMap from the java.util package.

What Will We Learn?

  1. IdentityHashMap Class Overview
  2. IdentityHashMap Class Constructors
  3. IdentityHashMap Class Methods
  4. IdentityHashMap Class Example

Learn Java Collections Framework in-depth at the Java Collections Framework Tutorial.

1. IdentityHashMap Class Overview

This class is not a general-purpose Map implementation! While it implements the Map interface, it intentionally violates Map’s general contract, which mandates the use of the equals method when comparing objects. This class is designed for use only in the rare cases where reference-equality semantics are required. IdentityHashMap is a hash-table-based implementation of the Map interface. A normal HashMap compares keys using the equals method, but IdentityHashMap compares its keys using the == operator. Note that this implementation is not synchronized. If multiple threads access an identity hash map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally. For example:

   Map m = Collections.synchronizedMap(new IdentityHashMap(...));
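The difference in key comparison is easy to see side by side with two keys that are equals() but not the same object (a small sketch for illustration):

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class IdentityDemo {

    // Puts two equal-but-distinct String keys into the given map and returns its size.
    static int sizeIn(Map<String, Integer> map) {
        String k1 = new String("key");
        String k2 = new String("key"); // k1.equals(k2) is true, but k1 != k2
        map.put(k1, 1);
        map.put(k2, 2);
        return map.size();
    }

    public static void main(String[] args) {
        System.out.println(sizeIn(new HashMap<>()));         // 1: second put overwrites (equals)
        System.out.println(sizeIn(new IdentityHashMap<>())); // 2: keys kept separately (==)
    }
}
```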

2. IdentityHashMap Class Constructors

  • IdentityHashMap() — Constructs a new, empty identity hash map with a default expected maximum size (21).
  • IdentityHashMap(int expectedMaxSize) — Constructs a new, empty map with the specified expected maximum size.
  • IdentityHashMap(Map<? extends K, ? extends V> m) — Constructs a new identity hash map containing the key-value mappings in the specified map.

3. IdentityHashMap Class Methods

  • void clear() — Removes all of the mappings from this map.
  • Object clone() — Returns a shallow copy of this identity hash map: the keys and values themselves are not cloned.
  • boolean containsKey(Object key) — Tests whether the specified object reference is a key in this identity hash map.
  • boolean containsValue(Object value) — Tests whether the specified object reference is a value in this identity hash map.
  • Set<Map.Entry<K,V>> entrySet() — Returns a Set view of the mappings contained in this map.
  • boolean equals(Object o) — Compares the specified object with this map for equality.
  • void forEach(BiConsumer<? super K, ? super V> action) — Performs the given action for each entry in this map until all entries have been processed or the action throws an exception.
  • V get(Object key) — Returns the value to which the specified key is mapped, or null if this map contains no mapping for the key.
  • int hashCode() — Returns the hash code value for this map.
  • boolean isEmpty() — Returns true if this identity hash map contains no key-value mappings.
  • Set<K> keySet() — Returns an identity-based set view of the keys contained in this map.
  • V put(K key, V value) — Associates the specified value with the specified key in this identity hash map.
  • void putAll(Map<? extends K, ? extends V> m) — Copies all of the mappings from the specified map to this map.
  • V remove(Object key) — Removes the mapping for this key from this map if present.
  • void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) — Replaces each entry’s value with the result of invoking the given function on that entry until all entries have been processed or the function throws an exception.
  • int size() — Returns the number of key-value mappings in this identity hash map.
  • Collection<V> values() — Returns a Collection view of the values contained in this map.

4. IdentityHashMap Class Example

As mentioned, IdentityHashMap is a hash-table-based implementation of the Map interface. A normal HashMap compares keys using the .equals method, but IdentityHashMap compares its keys using the == operator. Hence, "a" and new String("a") are considered two different keys. The default expected maximum size of an IdentityHashMap is 21, while the default initial capacity of a normal HashMap is 16.

import java.util.IdentityHashMap;

public class IdentityHashMapExample {
    public static void main(final String[] args) {
        final IdentityHashMap<String, String> identityHashMap = new IdentityHashMap<String, String>();
        identityHashMap.put("a", "Java");
        identityHashMap.put(new String("a"), "J2EE");
        identityHashMap.put("b", "J2SE");
        identityHashMap.put(new String("b"), "Spring");
        identityHashMap.put("c", "Hibernate");
        for (final String str : identityHashMap.keySet()) {
            System.out.println("Key : " + str + " and Value : " + identityHashMap.get(str));
        }
        System.out.println("Size of map is : " + identityHashMap.size());
        System.out.println("Here 'a' and new String('a') are considered as separate keys");
    }
}

Output (iteration order may vary, since it depends on identity hash codes):

Key : a and Value : Java
Key : b and Value : J2SE
Key : c and Value : Hibernate
Key : b and Value : Spring
Key : a and Value : J2EE
Size of map is : 5
Here 'a' and new String('a') are considered as separate keys

Further Learning

Collections Framework – EnumMap
Collections Framework – IdentityHashMap

Collections Framework – CopyOnWriteArraySet
Collections Framework – EnumSet

Collections Framework – CopyOnWriteArrayList

Java Integer Cache – Why Integer.valueOf(127) == Integer.valueOf(127) Is True

In an interview, one of my friends was asked: if we have two Integer objects, Integer a = 127; Integer b = 127;, why does a == b evaluate to true when they are holding two separate objects? In this article, I will try to answer this question and explain the answer.

Short Answer

The short answer to this question is: direct assignment of an int literal to an Integer reference is an example of auto-boxing, where the literal-to-object conversion code is handled by the compiler. During the compilation phase, the compiler converts Integer a = 127; to Integer a = Integer.valueOf(127);.

The Integer class maintains an internal IntegerCache, which by default covers the range -128 to 127, and the Integer.valueOf() method returns objects from that cache for values in this range. So a == b returns true because a and b both point to the same object.

Long Answer

In order to understand the short answer, let's first understand Java's types. All types in Java fall into two categories:

  1. Primitive Types: There are 8 primitive types (byte, short, int, long, float, double, char and boolean) in Java, which hold their values directly in the form of binary bits.
    For example, with int a = 5; int b = 5;, a and b directly hold the binary value of 5, and if we compare a and b using a == b, we are actually comparing 5 == 5, which returns true.
  2. Reference Types: All other types — classes, interfaces, enums, arrays, etc. — are reference types; they hold the address of an object instead of the object itself.
    For example, with Integer a = new Integer(5); Integer b = new Integer(5);, a and b do not hold the binary value of 5; they hold the memory addresses of two separate objects that both contain the value 5. So if we compare a and b using a == b, we are actually comparing those two separate memory addresses and get false; to test actual equality of a and b, we need to use a.equals(b). (Reference types are further divided into 4 categories: strong, soft, weak and phantom references.)

And we know that Java provides wrapper classes for all primitive types and supports auto-boxing and auto-unboxing.

// Example of auto-boxing, here c is a reference type
Integer c = 128; // Compiler converts this line to Integer c = Integer.valueOf(128);

// Example of auto-unboxing, here e is a primitive type
int e = c; // Compiler converts this line to int e = c.intValue();

Now if we create two Integer objects a and b and try to compare them using the equality operator ==, we will get false because the two references hold different objects:

Integer a = 128; // Compiler converts this line to Integer a = Integer.valueOf(128);
Integer b = 128; // Compiler converts this line to Integer b = Integer.valueOf(128);
System.out.println(a == b); // Output -- false

But if we assign the value 127 to both a and b and compare them using the equality operator ==, we get true. Why?

Integer a = 127; // Compiler converts this line to Integer a = Integer.valueOf(127);
Integer b = 127; // Compiler converts this line to Integer b = Integer.valueOf(127);
System.out.println(a == b); // Output -- true

As we can see in the code, we are assigning separate objects to a and b, yet a == b can return true only if both a and b point to the same object.

So how is the comparison returning true? What's actually happening here? Are a and b pointing to the same object?

Well till now we know that the code Integer a = 127; is an example of auto-boxing and compiler automatically converts this line to Integer a = Integer.valueOf(127);.

So it is the Integer.valueOf() method which is returning these integer objects which means this method must be doing something under the hood.

And if we take a look at the source code of the Integer.valueOf() method, we can clearly see that if the passed int value i lies between IntegerCache.low and IntegerCache.high, the method returns an Integer object from the IntegerCache. The default values of IntegerCache.low and IntegerCache.high are -128 and 127, respectively.

In other words, instead of creating and returning new Integer objects, the Integer.valueOf() method returns an Integer object from the internal IntegerCache if the passed int value lies between -128 and 127.

/**
 * Returns an {@code Integer} instance representing the specified
 * {@code int} value.  If a new {@code Integer} instance is not
 * required, this method should generally be used in preference to
 * the constructor {@link #Integer(int)}, as this method is likely
 * to yield significantly better space and time performance by
 * caching frequently requested values.
 * This method will always cache values in the range -128 to 127,
 * inclusive, and may cache other values outside of this range.
 *
 * @param  i an {@code int} value.
 * @return an {@code Integer} instance representing {@code i}.
 * @since  1.5
 */
public static Integer valueOf(int i) {
    if (i >= IntegerCache.low && i <= IntegerCache.high)
        return IntegerCache.cache[i + (-IntegerCache.low)];
    return new Integer(i);
}

Java caches Integer objects in the -128 to 127 range because this range of integers is used a lot in day-to-day programming, which indirectly saves some memory.

Internally, the Integer class maintains a static nested IntegerCache class which acts as the cache and holds Integer objects for values from -128 to 127, which is why we always get the same object when we ask for 127.


The cache is initialized on first use, when the class gets loaded into memory, because of a static block. The upper bound of the cache can be controlled by the -XX:AutoBoxCacheMax JVM option.

This caching behavior is not applicable to Integer objects only. Similar to Integer.IntegerCache, there are also ByteCache, ShortCache, LongCache, and CharacterCache for Byte, Short, Long, and Character, respectively.

Short and Long cache the fixed range -128 to 127 (inclusive), Byte caches its entire value range, and for Character the cached range is 0 to 127 (inclusive). The range can be modified via a JVM argument only for Integer, not for the others.
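The cache boundaries are easy to verify directly. Note that this relies on default JVM settings; a custom -XX:AutoBoxCacheMax would change the Integer results:

```java
public class CacheDemo {
    public static void main(String[] args) {
        System.out.println(Integer.valueOf(127) == Integer.valueOf(127));  // true: inside the cache
        System.out.println(Integer.valueOf(128) == Integer.valueOf(128));  // false: outside the cache
        System.out.println(Long.valueOf(-128L) == Long.valueOf(-128L));    // true: Long caches -128..127
        System.out.println(Character.valueOf('a') == Character.valueOf('a')); // true: 0..127 cached
    }
}
```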

You can find the complete source code for this article on this Github Repository and please feel free to provide your valuable feedback.

15 Spring Core Annotation Examples

As we know, Spring DI and Spring IoC are core concepts of the Spring Framework. Let’s explore some Spring core annotations from the org.springframework.beans.factory.annotation and org.springframework.context.annotation packages.

We often call these “Spring core annotations,” and we’ll review them in this article.

Here’s a list of all known Spring core annotations.



@Autowired

We can use the @Autowired annotation to mark a dependency that Spring is going to resolve and inject. We can use this annotation with constructor, setter, or field injection.

Constructor Injection:

public class CustomerController {
    private final CustomerService customerService;

    @Autowired
    public CustomerController(CustomerService customerService) {
        this.customerService = customerService;
    }
}

Setter Injection:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CustomerController {
    private CustomerService customerService;

    @Autowired
    public void setCustomerService(CustomerService customerService) {
        this.customerService = customerService;
    }
}

Field Injection:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CustomerController {
    @Autowired
    private CustomerService customerService;
}

For more details, visit our articles about @Autowired and Guide to Dependency Injection in Spring.


@Bean

  • @Bean is a method-level annotation and a direct analog of the XML <bean/> element. The annotation supports some of the attributes offered by <bean/>, such as init-method, destroy-method, autowiring, and name.
  • You can use the @Bean annotation in a @Configuration-annotated or @Component-annotated class.

The following is a simple example of a @Bean method declaration:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.companyname.projectname.customer.CustomerService;
import com.companyname.projectname.order.OrderService;

@Configuration
public class Application {

    @Bean
    public CustomerService customerService() {
        return new CustomerService();
    }

    @Bean
    public OrderService orderService() {
        return new OrderService();
    }
}

The preceding configuration is equivalent to the following Spring XML:

<beans>
    <bean id="customerService" class="com.companyname.projectname.CustomerService"/>
    <bean id="orderService" class="com.companyname.projectname.OrderService"/>
</beans>

Read more about the @Bean annotation in this article: Spring @Bean Annotation with Example.


@Qualifier

This annotation helps fine-tune annotation-based autowiring. There may be scenarios where we create more than one bean of the same type and want to wire only one of them into a property. This can be controlled using the @Qualifier annotation along with the @Autowired annotation.

Example: Consider the EmailService and SMSService classes, which implement the single MessageService interface.

Create the MessageService interface for multiple message service implementations.

public interface MessageService {
    public void sendMsg(String message);
}

Next, create the implementations: EmailService and SMSService.

public class EmailService implements MessageService {
    public void sendMsg(String message) { /* send an e-mail */ }
}

public class SMSService implements MessageService {
    public void sendMsg(String message) { /* send an SMS */ }
}

It’s time to see the usage of the @Qualifier annotation.

public interface MessageProcessor {
    public void processMsg(String message);
}

public class MessageProcessorImpl implements MessageProcessor {
    private MessageService messageService;
    // setter based DI
    @Autowired
    @Qualifier("emailService")
    public void setMessageService(MessageService messageService) {
        this.messageService = messageService;
    }
    // constructor based DI
    @Autowired
    public MessageProcessorImpl(@Qualifier("emailService") MessageService messageService) {
        this.messageService = messageService;
    }
    public void processMsg(String message) {
        messageService.sendMsg(message);
    }
}

Read more about this annotation in this article:  Spring @Qualifier Annotation Example.


@Required

The @Required annotation is a method-level annotation applied to the setter method of a bean.

This annotation simply indicates that the setter method must be configured to be dependency-injected with a value at configuration time.

For example, @Required on setter methods marks dependencies that we want to populate through XML:

@Required
void setColor(String color) {
    this.color = color;
}

<bean class="com.javaguides.spring.Car">
   <property name="color" value="green" />
</bean>

Otherwise, a BeanInitializationException will be thrown.


@Value

The Spring @Value annotation is used to assign default values to variables and method arguments. We can read Spring environment variables as well as system variables using the @Value annotation.

The Spring @Value annotation also supports SpEL. Let’s look at some of the examples of using the @Value annotation.

Examples: We can assign a default value to a class property using the @Value annotation.

@Value("Default DBConfiguration")
private String defaultName;

The @Value annotation argument can only be a string, but Spring tries to convert it to the specified type. The following code works fine and assigns the boolean and integer values to the variables.

@Value("true")
private boolean defaultBoolean;

@Value("10")
private int defaultInt;

This demonstrates Spring @Value with a Spring environment property (the property key here is illustrative):

@Value("${app.name:DefaultApp}")
private String defaultAppName;

Next, read system variables using the @Value annotation.

@Value("${java.home}")
private String javaHome;

@Value("${HOME}")
private String homeDir;

Spring @Value – SpEL:

@Value("#{systemProperties['java.home']}")
private String javaHome;


@DependsOn

The @DependsOn annotation can force the Spring IoC container to initialize one or more beans before the bean annotated with @DependsOn.

The @DependsOn annotation may be used on any class directly or indirectly annotated with @Component or on methods annotated with @Bean.

Example: Let’s create the FirstBean and SecondBean classes. In this example, SecondBean is initialized before FirstBean.

public class FirstBean {
    private SecondBean secondBean;
}

public class SecondBean {
    public SecondBean() {
        System.out.println("SecondBean Initialized via Constructor");
    }
}

Declare the above beans in a Java-based configuration class.

@Configuration
public class AppConfig {
    @Bean
    @DependsOn(value = {"secondBean"})
    public FirstBean firstBean() {
        return new FirstBean();
    }
    @Bean
    public SecondBean secondBean() {
        return new SecondBean();
    }
}

Read more about @DependsOn annotation on Spring – @DependsOn Annotation Example.


By default, the Spring IoC container creates and initializes all singleton beans at the time of application startup. We can prevent this pre-initialization of a singleton bean by using the @Lazy annotation.

The @Lazy annotation may be used on any class directly or indirectly annotated with @Component, or on methods annotated with @Bean.

Example: Consider the two beans below, FirstBean and SecondBean. In this example, we will lazily initialize FirstBean using the @Lazy annotation.

public class FirstBean {

    public void test() {
        System.out.println("Method of FirstBean Class");
    }
}

public class SecondBean {

    public void test() {
        System.out.println("Method of SecondBean Class");
    }
}

Declare the above beans in a Java-based configuration class:

@Configuration
public class AppConfig {

    @Lazy(value = true)
    @Bean
    public FirstBean firstBean() {
        return new FirstBean();
    }

    @Bean
    public SecondBean secondBean() {
        return new SecondBean();
    }
}

As we can see, the bean secondBean is initialized eagerly by the Spring container at startup, while the bean firstBean is initialized only when it is first requested.
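
The eager-versus-lazy behavior can be pictured in plain Java, with a Supplier standing in for the container’s deferred creation; the names mirror the example above, but this is a sketch, not Spring internals:

```java
import java.util.function.Supplier;

// Plain-Java sketch of eager vs. lazy bean creation: the eager bean is
// built at "startup", the lazy one only when first requested.
class LazyDemo {

    static final StringBuilder log = new StringBuilder();

    static class FirstBean  { FirstBean()  { log.append("first;"); } }
    static class SecondBean { SecondBean() { log.append("second;"); } }

    public static void main(String[] args) {
        new SecondBean();                               // eager: created at startup
        Supplier<FirstBean> firstBean = FirstBean::new; // lazy: only a factory so far
        System.out.println(log);                        // second;
        firstBean.get();                                // first request triggers creation
        System.out.println(log);                        // second;first;
    }
}
```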

Read more about the @Lazy annotation with a complete example on Spring – @Lazy Annotation Example.


A method annotated with @Lookup tells Spring to return an instance of the method’s return type when we invoke it.
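
What Spring effectively generates for a @Lookup method can be sketched in plain Java: a subclass whose override fetches a fresh (typically prototype-scoped) instance on every call. The class and method names below are illustrative:

```java
// Plain-Java sketch of the subclass Spring generates for a @Lookup
// method; names are illustrative, not a real Spring API.
class LookupSketch {

    static class Notification {
        final int id;
        Notification(int id) { this.id = id; }
    }

    abstract static class StarterService {
        // In real Spring code this method would carry @Lookup.
        abstract Notification getNotification();
    }

    static int counter = 0;

    // Hand-rolled stand-in for the container-generated subclass.
    static StarterService containerManaged() {
        return new StarterService() {
            @Override
            Notification getNotification() {
                return new Notification(++counter); // fresh instance per call
            }
        };
    }

    public static void main(String[] args) {
        StarterService service = containerManaged();
        System.out.println(service.getNotification().id); // 1
        System.out.println(service.getNotification().id); // 2
    }
}
```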

Detailed information about this annotation can be found on Spring @LookUp Annotation.


We use the @Primary annotation to give higher preference to a particular bean when there are multiple beans of the same type:

@Component
@Primary
class Car implements Vehicle {}

@Component
class Bike implements Vehicle {}

@Component
class Driver {
    @Autowired
    Vehicle vehicle; // Car is injected, because it is marked @Primary
}

@Component
class Biker {
    @Autowired
    @Qualifier("bike")
    Vehicle vehicle; // Bike is injected, because @Qualifier overrides @Primary
}
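
The tie-breaking rule can be sketched in plain Java: among several candidates of the same type, the one flagged as primary wins. This is a simplification of Spring’s actual resolution:

```java
import java.util.Arrays;
import java.util.List;

// Plain-Java sketch of primary-based tie-breaking between candidates
// of the same type; a simplification of Spring's resolution rules.
class PrimaryDemo {

    static class Candidate {
        final String name;
        final boolean primary;
        Candidate(String name, boolean primary) {
            this.name = name;
            this.primary = primary;
        }
    }

    static Candidate resolve(List<Candidate> candidates) {
        return candidates.stream()
                .filter(c -> c.primary)
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("ambiguous dependency"));
    }

    public static void main(String[] args) {
        List<Candidate> vehicles = Arrays.asList(
                new Candidate("car", true),   // marked primary
                new Candidate("bike", false));
        System.out.println(resolve(vehicles).name); // car
    }
}
```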

Read more about this annotation on Spring – @Primary Annotation Example.


We use the @Scope annotation to define the scope of a @Component class or a @Bean definition. The scope can be singleton, prototype, request, session, globalSession, or a custom scope.

For example:

@Service
@Scope(value = ConfigurableBeanFactory.SCOPE_SINGLETON)
public class TwitterMessageService implements MessageService {
}

@Service
@Scope(value = ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class TwitterMessageService implements MessageService {
}
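
The difference between the two most common scopes can be pictured with a minimal registry sketch; this is not Spring’s actual BeanFactory:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Minimal registry sketch: singleton scope caches one instance per
// name, prototype scope builds a new instance per request.
class ScopeDemo {

    static class MiniFactory {
        private final Map<String, Object> singletons = new HashMap<>();

        Object get(String name, String scope, Supplier<Object> creator) {
            if ("singleton".equals(scope)) {
                return singletons.computeIfAbsent(name, n -> creator.get());
            }
            return creator.get(); // prototype: fresh instance every time
        }
    }

    public static void main(String[] args) {
        MiniFactory factory = new MiniFactory();

        Object a = factory.get("svc", "singleton", Object::new);
        Object b = factory.get("svc", "singleton", Object::new);
        System.out.println(a == b); // true: same cached instance

        Object c = factory.get("proto", "prototype", Object::new);
        Object d = factory.get("proto", "prototype", Object::new);
        System.out.println(c == d); // false: new instance per request
    }
}
```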

Read more about the @Scope annotation on Spring @Scope annotation with Singleton Scope Example and Spring @Scope annotation with Prototype Scope Example.


If we want Spring to use a @Component class or a @Bean method only when a specific profile is active, we can mark it with @Profile. We can configure the name of the profile with the value argument of the annotation:

@Component
@Profile("dev") // the profile name is illustrative
class Bike implements Vehicle {}

You can read more about profiles in Spring Profiles.


The @Import annotation indicates one or more @Configuration classes to import.

For example, in Java-based configuration, Spring provides the @Import annotation, which allows loading @Bean definitions from another configuration class:

@Configuration
public class ConfigA {

    @Bean
    public A a() {
        return new A();
    }
}

@Configuration
@Import(ConfigA.class)
public class ConfigB {

    @Bean
    public B b() {
        return new B();
    }
}

Now, rather than needing to specify both the ConfigA class and ConfigB class when instantiating the context, only ConfigB needs to be supplied explicitly.

Read more about the @Import annotation on Spring @Import Annotation.


Spring provides the @ImportResource annotation to load beans from an applicationContext.xml file into the ApplicationContext. For example, consider that we have an applicationContext.xml Spring bean configuration file on the classpath:

@Configuration
@ImportResource("classpath:applicationContext.xml")
public class XmlConfiguration {
}

Read more about this annotation with a complete example on Spring @ImportResource Annotation.


The @PropertySource annotation provides a convenient and declarative mechanism for adding a PropertySource to Spring’s Environment, to be used in conjunction with @Configuration classes.

For example, we read the database configuration from the file config.properties and set those property values on the DataSourceConfig class using the Environment.

import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.core.env.Environment;

@Configuration
@PropertySource("classpath:config.properties")
public class PropertySourceDemo implements InitializingBean {

    @Autowired
    Environment env;

    @Override
    public void afterPropertiesSet() throws Exception {
        setDatabaseConfig();
    }

    private void setDatabaseConfig() {
        DataSourceConfig config = new DataSourceConfig();
        // e.g. config.setDriver(env.getProperty("jdbc.driver"));
        // (the property keys and DataSourceConfig setters are illustrative)
    }
}
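
At its core, @PropertySource exposes the keys of a .properties file through an Environment-style lookup. The java.util.Properties sketch below mirrors that behavior; the keys and values are illustrative:

```java
import java.io.StringReader;
import java.util.Properties;

// Plain-Java sketch of the lookup @PropertySource ultimately provides;
// keys and values are illustrative.
class PropertyDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Stand-in for loading classpath:config.properties
        props.load(new StringReader("jdbc.url=jdbc:h2:mem:test\njdbc.user=sa"));

        System.out.println(props.getProperty("jdbc.url"));            // jdbc:h2:mem:test
        System.out.println(props.getProperty("jdbc.pass", "secret")); // falls back to the default
    }
}
```

Environment.getProperty behaves similarly, additionally merging system properties and environment variables into the search.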

Read more about this annotation on Spring @PropertySource Annotation with Example.


We can use the @PropertySources annotation to specify multiple @PropertySource configurations:

@Configuration
@PropertySources({
    @PropertySource("classpath:config.properties"),
    @PropertySource("classpath:db.properties") // file names are illustrative
})
public class AppConfig {
}

Read more about this annotation on Spring @PropertySources Annotation.

Hope you enjoyed this post on the best Spring annotations for your project! Happy coding!