Thread Livelock

A livelock is a situation where two or more threads keep repeating the same interaction in response to each other. The intended logic is typically to give the other thread the opportunity to proceed in favor of ‘this’ thread.

A real-world example of livelock occurs when two people meet in a narrow corridor, and each tries to be polite by moving aside to let the other pass, but they end up swaying from side to side without making any progress because they both repeatedly move the same way at the same time.
From Oracle reference docs:

 A thread often acts in response to the action of another thread. If the other thread’s action is also a response to the action of another thread, then livelock may result. As with deadlock, livelocked threads are unable to make further progress. However, the threads are not blocked – they are simply too busy responding to each other to resume work.

For example, consider a situation where two threads want to access a shared common resource via a Worker object, but when each sees that the other Worker (invoked on another thread) is also ‘active’, it attempts to hand over the resource to the other worker and waits for it to finish. If we initially make both workers active, they will suffer from livelock.

The Common Resource Class

public class CommonResource {
    private Worker owner;

    public CommonResource (Worker d) {
        owner = d;
    }

    public Worker getOwner () {
        return owner;
    }

    public synchronized void setOwner (Worker d) {
        owner = d;
    }
}

The Worker Class

public class Worker {
    private String name;
    private boolean active;

    public Worker (String name, boolean active) {
        this.name = name;
        this.active = active;
    }

    public String getName () {
        return name;
    }

    public boolean isActive () {
        return active;
    }

    public synchronized void work (CommonResource commonResource, Worker otherWorker) {
        while (active) {
            // wait for the resource to become available.
            if (commonResource.getOwner() != this) {
                try {
                    wait(10);
                } catch (InterruptedException e) {
                   //ignore
                }
                continue;
            }

            // If the other worker is also active, let it do its work first
            if (otherWorker.isActive()) {
                System.out.println(getName() +
                            " : handing over the resource to the worker: " +
                                                       otherWorker.getName());
                commonResource.setOwner(otherWorker);
                continue;
            }

            //now use the commonResource
            System.out.println(getName() + ": working on the common resource");
            active = false;
            commonResource.setOwner(otherWorker);
        }
    }
}

The main class

public class Livelock {

    public static void main (String[] args) {
        final Worker worker1 = new Worker("Worker 1 ", true);
        final Worker worker2 = new Worker("Worker 2", true);

        final CommonResource s = new CommonResource(worker1);

        new Thread(() -> {
            worker1.work(s, worker2);
        }).start();

        new Thread(() -> {
            worker2.work(s, worker1);
        }).start();
    }
}

Output:

The following output repeats endlessly:

Worker 1  : handing over the resource to the worker: Worker 2
Worker 2 : handing over the resource to the worker: Worker 1
Worker 1  : handing over the resource to the worker: Worker 2
Worker 2 : handing over the resource to the worker: Worker 1
Worker 1  : handing over the resource to the worker: Worker 2
Worker 2 : handing over the resource to the worker: Worker 1
    ........

Avoiding Livelock

In the above example, we can fix the issue by processing the common resource sequentially rather than from different threads simultaneously.

Just like deadlock, there’s no general guideline for avoiding livelock, but we have to be careful in scenarios where we change the state of common objects that are also being used by other threads; in the above scenario, that is the Worker object.
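One common mitigation, sketched below as our own variation on the example (it is not part of the original code), is to break the symmetry with randomness so the two workers stop mirroring each other:

public synchronized void work (CommonResource commonResource, Worker otherWorker) {
    java.util.Random random = new java.util.Random();
    while (active) {
        // wait for the resource to become available.
        if (commonResource.getOwner() != this) {
            try {
                wait(10);
            } catch (InterruptedException e) {
                //ignore
            }
            continue;
        }

        // Yield only about half the time, so one worker eventually proceeds.
        if (otherWorker.isActive() && random.nextBoolean()) {
            commonResource.setOwner(otherWorker);
            continue;
        }

        //now use the commonResource
        System.out.println(getName() + ": working on the common resource");
        active = false;
        commonResource.setOwner(otherWorker);
    }
}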

Filter Vs Interceptor

Filter: 

A filter dynamically intercepts requests and responses to transform or use the information contained in them. Filters typically do not themselves create responses; instead, they provide universal functions that can be “attached” to any type of servlet or JSP page.

Filters can perform many different types of functions, for example:

  •   Authentication: blocking requests based on user identity.
  •   Logging and auditing: tracking users of a web application.
  •   Image conversion: scaling maps, and so on.
  •   Data compression: making downloads smaller.
  •   Localization: targeting the request and response to a particular locale.

Request Filters can:

  •   Perform security checks
  •   Reformat request headers or bodies
  •   Audit or log requests

Response Filters can:

  •   Compress the response stream
  •   Append or alter the response stream
  •   Create a different response altogether

Examples of filters identified for this design include the following (a minimal filter sketch appears after the list):

  •   Authentication filters
  •   Logging and auditing filters
  •   Image conversion filters
  •   Data compression filters
  •   Encryption filters
  •   Tokenizing filters
  •   Filters that trigger resource access events
  •   XSL/T filters
  •   MIME-type chain filters
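Below is a minimal sketch of a filter (assuming the javax.servlet API; the class name and timing logic are illustrative, not part of the original list):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class TimingFilter implements Filter {

    public void init (FilterConfig config) throws ServletException {
        // no initialization needed for this example
    }

    // Logs how long each request takes by timing the rest of the chain.
    public void doFilter (ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        long start = System.currentTimeMillis();
        chain.doFilter(request, response); // invoke the next filter or the target servlet
        System.out.println("Request took " + (System.currentTimeMillis() - start) + " ms");
    }

    public void destroy () {
        // no cleanup needed for this example
    }
}

Such a filter would then be mapped to a URL pattern in web.xml or with the @WebFilter annotation.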

Interceptors

Interceptors are used in conjunction with Java EE managed classes to allow developers to invoke interceptor methods in conjunction with method invocations or lifecycle events on an associated target class. Common uses of interceptors are logging, auditing, or profiling.

Interceptors can be defined within a target class as an interceptor method, or in an associated class called an interceptor class. Interceptor classes contain methods that are invoked in conjunction with the methods or lifecycle events of the target class.
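As a minimal sketch of such an interceptor class in the Java EE style (the class name and log messages are illustrative):

import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;

@Interceptor
public class LoggingInterceptor {

    // Invoked around every intercepted business method of the target class.
    @AroundInvoke
    public Object logInvocation (InvocationContext context) throws Exception {
        System.out.println("Entering " + context.getMethod().getName());
        try {
            return context.proceed(); // proceed to the target method
        } finally {
            System.out.println("Exiting " + context.getMethod().getName());
        }
    }
}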

Struts 2 likewise ships with many ready-made interceptors, for example:

1)  Cookie Interceptor

2)  Checkbox Interceptor

3)  FileUpload Interceptor

Difference:

A Servlet Filter is used in the web layer only, you can’t use it outside of a web context. Interceptors can be used anywhere. That’s the main difference.

For authentication of web pages, you would use a servlet filter. For security in your business layer or for logging/bug tracing (i.e. independent of the web layer), you would use an interceptor.

Apart from the fact that both interceptors and filters are based on the intercepting-filter pattern, there are a few differences when it comes to Struts 2.

Filters:

  •   Based on the Servlet Specification
  •   Execute when the request matches a URL pattern
  •   Method calls are not configurable

Interceptors:

  •   Based on Struts 2
  •   Execute for every request that qualifies for the front controller (a servlet filter), and can be configured to execute additional interceptors for a particular action execution
  •   Methods in interceptors can be configured to execute or not by means of excludeMethods or includeMethods

How JSON Web Token (JWT) Secures Your API


You’ve probably heard that JSON Web Token (JWT) is the current state-of-the-art technology for securing APIs.

Like most security topics, it’s important to understand how it works (at least, somewhat) if you’re planning to use it. The problem is that most explanations of JWT are technical and headache inducing.

Let’s see if I can explain how JWT can secure your API without crossing your eyes!

API Authentication

Certain API resources need restricted access. We don’t want one user to be able to change the password of another user, for example.

That’s why we protect certain resources by making users supply their ID and password before allowing access; in other words, we authenticate them.

The difficulty in securing an HTTP API is that requests are stateless — the API has no way of knowing whether any two requests were from the same user or not.

So why don’t we require users to provide their ID and password on every call to the API? Only because that would be a terrible user experience.

JSON Web Token

What we need is a way to allow a user to supply their credentials just once, but then be identified in another way by the server in subsequent requests.

Several systems have been designed for doing this, and the current state-of-the-art standard is JSON Web Token.

There’s a great article on the topic, which makes a good analogy about how JSON web tokens work.

Instead of an API, imagine you’re checking into a hotel. The “token” is the plastic hotel security card that you get that allows you to access your room, and the hotel facilities, but not anyone else’s room.

When you check out of the hotel, you give the card back. This is analogous to logging out.

Structure of the Token

Normally, a JSON web token is sent via the header of HTTP requests. Here’s what one looks like:

Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIn0.dozjgNryP4J3jVmNHl0w5N_XgL0n3I9PlFUP0THsR8U

In fact, the token is the part after “Authorization: Bearer,” which is just the HTTP header info.

Before you conclude that it’s incomprehensible gibberish, there are a few things you can easily notice.

Firstly, the token consists of three different strings, separated by periods. These three strings are base64 encoded and correspond to the header, the payload, and the signature.

// Header
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9
// Payload
eyJzdWIiOiIxMjM0NTY3ODkwIn0
// Signature
dozjgNryP4J3jVmNHl0w5N_XgL0n3I9PlFUP0THsR8U

Note: base 64 is a way of transforming strings to ensure they don’t get screwed up during transport across the web. It is not a kind of encryption and anyone can easily decode it to see the original data.

We can decode these strings to get a better understanding of the structure of a JWT.

The following is the decoded header from the token. The header is meta information about the token. It doesn’t tell us much to help build our basic understanding, so we won’t get into any detail about it.

{
  "alg": "HS256",
  "typ": "JWT"
}

Payload

The payload is of much more interest. The payload can include any data you like, but for API access authentication you might just include a user ID. In the token above, it is carried in the standard “sub” (subject) claim:

{
  "sub": "1234567890"
}

It’s important to note that the payload is not secure. Anyone can decode the token and see exactly what’s in the payload. For that reason, we usually include an ID rather than sensitive identifying information like the user’s email.

Even though this payload is all that’s needed to identify a user on an API, it doesn’t provide a means of authentication. Someone could easily find your user ID and forge a token if that’s all that was included.

So this brings us to the signature, which is the key piece for authenticating the token.

Hashing Algorithms

Before we explain how the signature works, we need to define what a hashing algorithm is.

To begin with, it’s a function for transforming a string into a new string called a hash. For example, say we wanted to hash the string “Hello, world.” Here’s the output we’d get using the SHA256 hashing algorithm:

4ae7c3b6ac0beff671efa8cf57386151c06e58ca53a78d83f36107316cec125f

The most important property of the hash is that you can’t use the hashing algorithm to identify the original string by looking at the hash. In other words, we can’t take the above hash and directly figure out that the original string was “Hello, world.” The hash is complicated enough that guessing the original string would be infeasible.

There are many different types of hashing algorithms, but SHA256 is commonly used with JWT.
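To make this concrete, here is a small Java sketch using the standard MessageDigest API to produce the same kind of hash:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class HashDemo {

    public static void main (String[] args) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest("Hello, world.".getBytes(StandardCharsets.UTF_8));

        // Print the hash as hex; there is no way to run this backwards.
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b));
        }
        System.out.println(hex);
    }
}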

JWT Signature

So, coming back to the JWT structure, let’s now look at the third piece of the token, the signature. This actually needs to be calculated:

HMACSHA256(
  base64UrlEncode(header) + "." + base64UrlEncode(payload),
  "secret string"
);

Here’s an explanation of what’s going on here:

Firstly, HMACSHA256 is the name of a hashing function and takes two arguments: the string to hash, and the “secret” (defined below).

Secondly, the string we hash is the base 64 encoded header, plus the base 64 encoded payload.

Thirdly, the secret is an arbitrary piece of data that only the server knows.
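Putting these three points together, here is a sketch in Java of how such a signature could be computed (the secret string is just a placeholder):

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JwtSignatureDemo {

    public static void main (String[] args) throws Exception {
        Base64.Encoder encoder = Base64.getUrlEncoder().withoutPadding();
        String header  = encoder.encodeToString("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = encoder.encodeToString("{\"sub\":\"1234567890\"}".getBytes(StandardCharsets.UTF_8));

        // HMAC-SHA256 over "header.payload", keyed with the server-side secret.
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec("secret string".getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] signature = mac.doFinal((header + "." + payload).getBytes(StandardCharsets.UTF_8));

        System.out.println(header + "." + payload + "." + encoder.encodeToString(signature));
    }
}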

Q. Why include the header and payload in the signature hash?

This ensures the signature is unique to this particular token.

Q. What’s the secret?

To answer this, let’s think about how you would forge a token.

We said before that you can’t determine a hash’s input from looking at the output. However, since we know that the signature includes the header and payload, as those are public information, if you know the hashing algorithm (hint: it’s usually specified in the header), you could generate the same hash.

But the secret, which only the server knows, is not public information. Including it in the hash prevents someone from generating their own hash to forge the token. And since the hash obscures the information used to create it, no one can figure out the secret from the hash, either.

Mixing private data into the hash this way is what makes HMACSHA256 a message authentication code (the “MAC” in its name), and it makes cracking the token almost impossible.

Authentication Process

So now, you have a good idea of how a token is created. How do you use it to authenticate your API?

Login

A token is generated when a user logs in and is stored in the database with the user model.

loginController.js:

if (passwordCorrect) {
  user.token = generateToken(user.id);
  user.save();
}

The token then gets attached as the authorization header in the response to the login request.

loginController.js:

if (passwordCorrect) {
  user.token = generateToken(user.id);
  user.save();
  res.header("authorization", `Bearer ${user.token}`).send();
}

Authenticating Requests

Now that the client has the token, it can attach it to any future request to identify the user.

When the server receives a request with an authorization token attached, the following happens:

  1. It decodes the token and extracts the ID from the payload.
  2. It looks up the user in the database with this ID.
  3. It compares the request token with the one that’s stored with the user’s model. If they match, the user is authenticated.

authMiddleware.js:

const token = req.header.token;
const payload = decodeToken(token);
const user = User.findById(payload.id);
if (user.token === token) {
  // Authorized
} else {
  // Unauthorized
}

Logging Out

If the user logs out, simply delete the token attached to the user model, and now the token will no longer work. A user will need to log in again to generate a new token.

logoutController.js:

user.token = null;
user.save();

Wrapping Up

So, that’s a very basic explanation of how you can secure an API using JSON Web Tokens. I hope your head doesn’t hurt too much.

There’s a lot more to this topic, though, so it’s worth exploring further reading on JWT if you want to go deeper.

Kafka Technical Overview

Objective

In this article series, we will learn Kafka basics, Kafka delivery semantics and the configuration to achieve different semantics, Spark-Kafka integration, and optimization. In Part 1 of this series, we’ll look at Kafka basics.

Problem Statement

The following are some of the problem statements:

  • Many source and target systems to integrate. Generally, integration of many systems involves complexities like dealing with many protocols, messaging formats, etc.
  • High-volume message streams to handle.

Integration of multiple source and target systems

Use Cases

Some of the use cases include:

  • Stream processing
  • Tracking user activity, log aggregation, etc.
  • Decoupling systems

Integration of multiple source and target systems using Kafka

What Is Kafka?

Kafka is a horizontally scalable, fault-tolerant, and fast messaging system. It follows a pub-sub model in which various producers write messages and various consumers read them. It decouples source and target systems. Some of the key features are:

  • Scale to 100s of nodes.
  • Can handle millions of messages per second.
  • Real-time processing (~10ms).

Kafka producer consumer integration

Key Terminologies

Topic, Partitions, and Offsets

A topic is a specific stream of data. It is very similar to a table in a NoSQL database. Like tables in a NoSQL database, a topic is split into partitions that enable it to be distributed across various nodes. Like primary keys in tables, topics have offsets per partition. You can uniquely identify a message using its topic, partition, and offset.

DB Table and Kafka Topic analogy

Partitions

Partitions enable topics to be distributed across the cluster. Partitions are a unit of parallelism for horizontal scalability. One topic can have more than one partition scaling across nodes.

Kafka topic distribution across brokers

Messages are assigned to partitions based on their partition key; if there is no partition key, the partition is assigned randomly. It’s important to use a well-chosen key to avoid hotspots.

Kafka partitions & offsets in a topic

Each message in a partition is assigned an incremental id called an offset. Offsets are unique per partition and messages are ordered only within a partition. Messages written to partitions are immutable.
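As a hedged illustration of how keys map messages to partitions (the topic name, key, and broker address below are assumptions), a minimal producer using the Kafka Java client might look like this:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerDemo {

    public static void main (String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Records with the same key always land in the same partition,
            // so per-key ordering is preserved.
            producer.send(new ProducerRecord<>("user-activity", "user-42", "clicked checkout"));
        }
    }
}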

Kafka Architecture

The diagram below shows the architecture of Kafka.

Kafka Architecture

ZooKeeper

ZooKeeper is a centralized service for managing distributed systems. It offers a hierarchical key-value store, configuration, synchronization, and name-registry services to the distributed systems it manages. ZooKeeper acts as the ensemble layer (it ties things together) and ensures high availability of the Kafka cluster. Kafka nodes are also called brokers. It’s important to understand that Kafka cannot work without ZooKeeper.

From the ZooKeeper nodes, one node is elected as the leader and the rest of the nodes follow the leader. In the case of a ZooKeeper node failure, one of the followers is elected as the new leader. More than one node is strongly recommended for high availability, and more than seven is not recommended.

ZooKeeper stores metadata and the current state of the Kafka cluster. For example, details like the topic name, the number of partitions, replication, leader details of partitions, and consumer group details are stored in ZooKeeper. You can think of ZooKeeper as a project manager who manages resources in the project and remembers the state of the project.

Zookeeper leader and follower in a Kafka cluster

Key things to remember:

  • Manages list of brokers.
  • Elects broker leaders when a broker goes down.
  • Sends notifications on a new broker, new topic, deleted topic, lost brokers, etc.
  • From Kafka 0.10 on, consumer offsets are not stored in ZooKeeper; only the metadata of the cluster is stored in ZooKeeper.
  • The ZooKeeper leader handles all writes, and the follower nodes handle only reads.

Broker

A broker is a single Kafka node that is managed by ZooKeeper. A set of brokers forms a Kafka cluster. Topics that are created in Kafka are distributed across brokers based on the partitioning, replication, and other factors. When a broker node fails, the cluster automatically rebalances based on the state stored in ZooKeeper, and if a leader partition is lost, one of the follower partitions is elected as the new leader.

Broker and topic in a Kafka cluster

You can think of a broker as a team leader who takes care of the assigned tasks; if a team leader isn’t available, the manager assigns those tasks to other team members.

Replication

Partition replication in a Kafka cluster

Replication makes a copy of a partition available on another broker. Replication enables Kafka to be fault tolerant. When a partition of a topic is available on multiple brokers, the partition on one broker is elected as the leader and the rest of the replicas are followers.

Partition replication by followers in a Kafka cluster

Replication keeps Kafka fault tolerant even when a broker is down. For example, Topic B partition 0 is stored on both broker 0 and broker 1. Both producers and consumers are served only by the leader. In case of a broker failure, the partition replica on another broker is elected as the leader, and it starts serving the producers and consumer groups. Replica partitions that are in sync with the leader are flagged as ISR (in-sync replica).

Broker failure and partition leader election in a Kafka cluster

IT Team and Kafka Cluster Analogy

The diagram below depicts an analogy of an IT team and Kafka cluster.

IT Team and Kafka cluster analogy

Summary

Below is the summary of core components in Kafka.

Kafka component relationship

  • ZooKeeper manages Kafka brokers and their metadata.
  • Brokers are horizontally scalable Kafka nodes that contain topics and their replicas.
  • Topics are message streams with one or more partitions.
  • Partitions contain messages with unique offsets per partition.
  • Replication enables Kafka to be fault tolerant using follower partitions.

Refer to the Kafka quickstart for Kafka setup.

Deep, Shallow and Lazy Copy with Java Examples

In object-oriented programming, object copying is creating a copy of an existing object; the resulting object is called an object copy, or simply a copy, of the original object. There are several ways to copy an object, most commonly by a copy constructor or by cloning.

We can define cloning as creating a copy of an object. Shallow, deep, and lazy copy are related to the cloning process; they are the ways of creating the copied object.

Shallow Copy

  • Whenever we use the default implementation of the clone() method, we get a shallow copy of the object: it creates a new instance, copies all the fields of the object into the new instance, and returns it as an Object; we need to explicitly cast it back to our original type.
  • The clone() method of the Object class supports shallow copy. If the object contains non-primitive (reference type) fields as well as primitive ones, the cloned object refers to the same objects the original object refers to, because only the object references get copied and not the referred objects themselves.
  • That’s why this is called shallow copy, or shallow cloning, in Java. If there are only primitive-type fields or immutable objects, there is no difference between a shallow and a deep copy in Java.
//code illustrating shallow copy
public class Ex {
 
    private int[] data;
 
    // makes a shallow copy of values
    public Ex(int[] values) {
        data = values;
    }
 
    public void showData() {
        System.out.println( Arrays.toString(data) );
    }
}

The above code shows shallow copying: data simply refers to the same array that was passed in as values (vals in the example below).

This can lead to unpleasant side effects if the elements of values are changed via some other reference.

public class UsesEx{
 
    public static void main(String[] args) {
        int[] vals = {3, 7, 9};
        Ex e = new Ex(vals);
        e.showData(); // prints out [3, 7, 9]
        vals[0] = 13;
        e.showData(); // prints out [13, 7, 9]
 
        // Very confusing, because we didn't
        // intentionally change anything about 
        // the object e refers to.
    }
}
Output 1 : [3, 7, 9]
Output 2 : [13, 7, 9]

Deep Copy

  • Whenever we need our own copy rather than the default implementation, we call it a deep copy; whenever we need a deep copy of the object, we implement it according to our needs.
  • So, for a deep copy, we need to ensure that all the member classes also implement the Cloneable interface and override the clone() method of the Object class (a clone()-based sketch follows the array example below).

A deep copy means actually creating a new array and copying over the values.

// Code explaining deep copy
public class Ex {
     
    private int[] data;
 
    // altered to make a deep copy of values
    public Ex(int[] values) {
        data = new int[values.length];
        for (int i = 0; i < data.length; i++) {
            data[i] = values[i];
        }
    }
 
    public void showData() {
        System.out.println(Arrays.toString(data));
    }
}

public class UsesEx{
 
    public static void main(String[] args) {
        int[] vals = {3, 7, 9};
        Ex e = new Ex(vals);
        e.showData(); // prints out [3, 7, 9]
        vals[0] = 13;
        e.showData(); // prints out [3, 7, 9]
 
       // changes in array values will not be 
       // shown in data values. 
    }
}
Output 1 : [3, 7, 9]
Output 2 : [3, 7, 9]

Changes to the array vals will not result in changes to the array data.
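As mentioned in the deep copy bullets above, a deep copy can also be implemented by overriding clone() and cloning the member objects. Here is a minimal sketch (the class names are illustrative):

class Address implements Cloneable {
    String city;

    Address (String city) {
        this.city = city;
    }

    @Override
    protected Address clone () throws CloneNotSupportedException {
        return (Address) super.clone();
    }
}

class Person implements Cloneable {
    String name;
    Address address;

    Person (String name, Address address) {
        this.name = name;
        this.address = address;
    }

    @Override
    protected Person clone () throws CloneNotSupportedException {
        Person copy = (Person) super.clone(); // shallow copy of all fields
        copy.address = address.clone();       // deep-copy the reference field
        return copy;
    }
}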

When to Use What

There is no hard and fast rule for selecting between shallow copy and deep copy, but normally we should keep the following in mind: if an object has only primitive fields, we should obviously go for shallow copy; if the object has references to other objects, then based on the requirement either a shallow or a deep copy should be done. If the referenced objects are never updated, there is no point in initiating a deep copy.

Lazy Copy
A lazy copy can be defined as a combination of both shallow copy and deep copy. The mechanism follows a simple approach – at the initial state, shallow copy approach is used. A counter is also used to keep a track on how many objects share the data. When the program wants to modify the original object, it checks whether the object is shared or not. If the object is shared, then the deep copy mechanism is initiated.
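Here is a rough copy-on-write sketch of that idea (the class and field names are our own, not from the article):

class LazyCopyArray {
    private int[] data;
    private int[] shareCount; // shared cell counting how many copies point at data

    LazyCopyArray (int[] values) {
        data = values;
        shareCount = new int[] {1};
    }

    // "Copy" constructor: shallow copy, just share the array and bump the counter.
    LazyCopyArray (LazyCopyArray other) {
        data = other.data;
        shareCount = other.shareCount;
        shareCount[0]++;
    }

    void set (int index, int value) {
        if (shareCount[0] > 1) {       // still shared: deep-copy before writing
            shareCount[0]--;
            data = data.clone();
            shareCount = new int[] {1};
        }
        data[index] = value;
    }
}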

Summary
In a shallow copy, the fields of primitive data types are copied, while for reference fields only the references are copied, not the referred objects. A deep copy involves copying the primitive data types as well as the referred objects. There is no hard and fast rule as to when to do a shallow copy and when to do a deep copy. A lazy copy is a combination of both of these approaches.


Injecting Prototype Beans into a Singleton Instance in Spring

1. Overview

In this quick article, we’re going to show different approaches of injecting prototype beans into a singleton instance. We’ll discuss the use cases and the advantages/disadvantages of each scenario.

By default, Spring beans are singletons. The problem arises when we try to wire beans of different scopes. For example, a prototype bean into a singleton. This is known as the scoped bean injection problem.

To learn more about bean scopes, this write-up is a good place to start.

2. Prototype Bean Injection Problem

In order to describe the problem, let’s configure the following beans:

@Configuration
public class AppConfig {
    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public PrototypeBean prototypeBean() {
        return new PrototypeBean();
    }
    @Bean
    public SingletonBean singletonBean() {
        return new SingletonBean();
    }
}

Notice that the first bean has a prototype scope, the other one is a singleton.

Now, let’s inject the prototype-scoped bean into the singleton, and then expose it via the getPrototypeBean() method:

public class SingletonBean {
    // ..
    @Autowired
    private PrototypeBean prototypeBean;
    public SingletonBean() {
        logger.info("Singleton instance created");
    }
    public PrototypeBean getPrototypeBean() {
        logger.info(String.valueOf(LocalTime.now()));
        return prototypeBean;
    }
}

Then, let’s load up the ApplicationContext and get the singleton bean twice:

public static void main(String[] args) throws InterruptedException {
    AnnotationConfigApplicationContext context
      = new AnnotationConfigApplicationContext(AppConfig.class);
    
    SingletonBean firstSingleton = context.getBean(SingletonBean.class);
    PrototypeBean firstPrototype = firstSingleton.getPrototypeBean();
    
    // get singleton bean instance one more time
    SingletonBean secondSingleton = context.getBean(SingletonBean.class);
    PrototypeBean secondPrototype = secondSingleton.getPrototypeBean();
    isTrue(firstPrototype.equals(secondPrototype), "The same instance should be returned");
}

Here’s the output from the console:

Singleton Bean created
Prototype Bean created
11:06:57.894
// should create another prototype bean instance here
11:06:58.895

Both beans were initialized only once, at the startup of the application context.

3. Injecting ApplicationContext

We can also inject the ApplicationContext directly into a bean.

To achieve this, either use the @Autowired annotation or implement the ApplicationContextAware interface:

public class SingletonAppContextBean implements ApplicationContextAware {
    private ApplicationContext applicationContext;
    public PrototypeBean getPrototypeBean() {
        return applicationContext.getBean(PrototypeBean.class);
    }
    @Override
    public void setApplicationContext(ApplicationContext applicationContext)
      throws BeansException {
        this.applicationContext = applicationContext;
    }
}

Every time the getPrototypeBean() method is called, a new instance of PrototypeBean will be returned from the ApplicationContext.

However, this approach has serious disadvantages. It contradicts the principle of inversion of control, as we request the dependencies from the container directly.

Also, we fetch the prototype bean from the applicationContext within the SingletonAppContextBean class. This means coupling the code to the Spring Framework.

4. Method Injection

Another way to solve the problem is method injection with the @Lookup annotation:

@Component
public class SingletonLookupBean {
    @Lookup
    public PrototypeBean getPrototypeBean() {
        return null;
    }
}

Spring will override the getPrototypeBean() method annotated with @Lookup and register the bean in the application context. Whenever we call the getPrototypeBean() method, it returns a new PrototypeBean instance.

It will use CGLIB to generate the bytecode responsible for fetching the PrototypeBean from the application context.

5. javax.inject API

The setup, along with the required dependencies, is described in this Spring wiring article.

Here’s the singleton bean:

public class SingletonProviderBean {
    @Autowired
    private Provider<PrototypeBean> myPrototypeBeanProvider;
    public PrototypeBean getPrototypeInstance() {
        return myPrototypeBeanProvider.get();
    }
}

We use the Provider interface to inject the prototype bean. For each getPrototypeInstance() method call, the myPrototypeBeanProvider.get() method returns a new instance of PrototypeBean.

6. Scoped Proxy

By default, Spring holds a reference to the real object to perform the injection. Here, we create a proxy object to wire the real object with the dependent one.

Each time the method on the proxy object is called, the proxy decides itself whether to create a new instance of the real object or reuse the existing one.

To set this up, we modify the AppConfig class to add the new @Scope annotation:

@Scope(
  value = ConfigurableBeanFactory.SCOPE_PROTOTYPE,
  proxyMode = ScopedProxyMode.TARGET_CLASS)

By default, Spring uses the CGLIB library to directly subclass the objects. To avoid CGLIB usage, we can configure the proxy mode with ScopedProxyMode.INTERFACES to use a JDK dynamic proxy instead.
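Combined with the earlier configuration, the prototype bean definition might then look like this sketch:

@Configuration
public class AppConfig {
    @Bean
    @Scope(
      value = ConfigurableBeanFactory.SCOPE_PROTOTYPE,
      proxyMode = ScopedProxyMode.TARGET_CLASS)
    public PrototypeBean prototypeBean() {
        return new PrototypeBean();
    }

    @Bean
    public SingletonBean singletonBean() {
        return new SingletonBean();
    }
}

With the proxy in place, every method call on the injected prototypeBean field is delegated to a fresh PrototypeBean instance.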

7. ObjectFactory Interface

Spring provides the ObjectFactory<T> interface to produce objects of the given type on demand:

public class SingletonObjectFactoryBean {
    @Autowired
    private ObjectFactory<PrototypeBean> prototypeBeanObjectFactory;
    public PrototypeBean getPrototypeInstance() {
        return prototypeBeanObjectFactory.getObject();
    }
}

Let’s have a look at the getPrototypeInstance() method; getObject() returns a brand new instance of PrototypeBean for each request. Here, we have more control over the initialization of the prototype.

Also, ObjectFactory is part of the framework; this means avoiding additional setup in order to use this option.

8. Create a Bean at Runtime Using java.util.Function

Another option is to create the prototype bean instances at runtime, which also allows us to add parameters to the instances.

To see an example of this, let’s add a name field to our PrototypeBean class:

public class PrototypeBean {
    private String name;
    
    public PrototypeBean(String name) {
        this.name = name;
        logger.info("Prototype instance " + name + " created");
    }
    //...  
}

Next, we’ll inject a bean factory into our singleton bean by making use of the java.util.Function interface:

public class SingletonFunctionBean {
    
    @Autowired
    private Function<String, PrototypeBean> beanFactory;
    
    public PrototypeBean getPrototypeInstance(String name) {
        PrototypeBean bean = beanFactory.apply(name);
        return bean;
    }
}

Finally, we have to define the factory bean, prototype and singleton beans in our configuration:

@Configuration
public class AppConfig {
    @Bean
    public Function<String, PrototypeBean> beanFactory() {
        return name -> prototypeBeanWithParam(name);
    }
    @Bean
    @Scope(value = "prototype")
    public PrototypeBean prototypeBeanWithParam(String name) {
       return new PrototypeBean(name);
    }
    
    @Bean
    public SingletonFunctionBean singletonFunctionBean() {
        return new SingletonFunctionBean();
    }
    //...
}
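Assuming a context loaded from this AppConfig, as in the earlier main method, usage might look like the following sketch:

AnnotationConfigApplicationContext context
  = new AnnotationConfigApplicationContext(AppConfig.class);

SingletonFunctionBean bean = context.getBean(SingletonFunctionBean.class);
PrototypeBean first = bean.getPrototypeInstance("first");   // logs "Prototype instance first created"
PrototypeBean second = bean.getPrototypeInstance("second"); // a brand-new, separately named instance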

9. Testing

Let’s now write a simple JUnit test to exercise the case with ObjectFactory interface:

@Test
public void givenPrototypeInjection_WhenObjectFactory_ThenNewInstanceReturn() {
    AbstractApplicationContext context
     = new AnnotationConfigApplicationContext(AppConfig.class);
    SingletonObjectFactoryBean firstContext
     = context.getBean(SingletonObjectFactoryBean.class);
    SingletonObjectFactoryBean secondContext
     = context.getBean(SingletonObjectFactoryBean.class);
    PrototypeBean firstInstance = firstContext.getPrototypeInstance();
    PrototypeBean secondInstance = secondContext.getPrototypeInstance();
    assertTrue("New instance expected", firstInstance != secondInstance);
}

After successfully launching the test, we can see that each time the getPrototypeInstance() method is called, a new prototype bean instance is created.

10. Conclusion

In this short tutorial, we learned several ways to inject the prototype bean into the singleton instance.

As always, the complete code for this tutorial can be found in the GitHub project.