Variance in Java

The other day I came across this post describing the pros and cons of using Go after 8 months. I mostly agree after working full-time with Go for a comparable duration.

Despite that preamble, this is a post about variance in Java, where my goal is to refresh my understanding of what variance is and some of the nuances of its implementation in Java.

ProTip: You’ll need to know this for your OCJP certificate exam.

I will write down my thoughts on using Go in a later post.

What Is Variance?

The Wikipedia article on variance says:

Variance refers to how subtyping between more complex types relates to subtyping between their components.

“More complex types” here refers to higher level structures like containers and functions. So, variance is about the assignment compatibility between containers and functions composed of parameters that are connected via a Type Hierarchy. It allows the safe integration of parametric and subtype polymorphism [1]. For example, can I assign the result of a function that returns a list of cats to a variable of type “list of animals”? Can I pass in a list of Audi cars to a method that accepts a list of cars? Can I insert a wolf in this list of animals?

In Java, variance is defined at the use-site [2].

Four Kinds of Variance

Paraphrasing the Wiki article, a type constructor is:

  • Covariant if it accepts subtypes but not supertypes
  • Contravariant if it accepts supertypes but not subtypes
  • Bivariant if it accepts both supertypes and subtypes
  • Invariant if it accepts neither supertypes nor subtypes

(Obviously, the declared type parameter is accepted in all cases.)

Invariance in Java

The use-site must have no open bounds on the type parameter.

If A is a supertype of B, then GenericType<A> is not a supertype of GenericType<B>, and vice versa.

This means these two types have no relation to each other and neither can be exchanged for the other under any circumstance.

Invariant Containers

In Java, invariant generic types are likely the first examples of generics you’ll encounter, and they are the most intuitive: all methods of the type parameter are accessible and usable as one would expect.

They cannot be exchanged:

// Type hierarchy: Person :> Joe :> JoeJr
List<Person> p = new ArrayList<Joe>(); // COMPILE ERROR (a bit counterintuitive, but remember List<Person> is invariant)
List<Joe> j = new ArrayList<Person>(); // COMPILE ERROR

You can add objects to them:

// Type hierarchy: Person :> Joe :> JoeJr
List<Person> p = new ArrayList<>();
p.add(new Person()); // ok
p.add(new Joe()); // ok
p.add(new JoeJr()); // ok

You can read objects from them:

// Type hierarchy: Person :> Joe :> JoeJr
List<Joe> joes = new ArrayList<>();
Joe j = joes.get(0); // ok
Person p = joes.get(0); // ok

Covariance in Java

The use-site must have an open lower bound on the type parameter.

If B is a subtype of A, then GenericType<B> is a subtype of GenericType<? extends A>.

Arrays in Java Have Always Been Covariant

Before generics were introduced in Java 1.5, arrays were the only generic containers available. They have always been covariant, e.g. Integer[] is a subtype of Object[]. The danger has always been that if you pass your Integer[] to a method that accepts Object[], that method can attempt to put anything in there; the mistake only surfaces at runtime as an ArrayStoreException. It’s a risk you take — no matter how small — when using third-party code.
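
Here is a minimal sketch of that pitfall:

// arrays are covariant, so this assignment compiles
Integer[] ints = {1, 2, 3};
Object[] objects = ints;
// ...but the array remembers its element type at runtime
objects[0] = "oops"; // compiles, yet throws ArrayStoreException when run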

Covariant Containers

Java allows subtyping (covariant) generic types but it places restrictions on what can “flow into and out of” these generic types in accordance with the Principle of Least Astonishment [3]. In other words, methods with return values of the type parameter are accessible, while methods with input arguments of the type parameter are inaccessible.

You can exchange the supertype for the subtype:

// Type hierarchy: Person :> Joe :> JoeJr
List<? extends Joe> joes = new ArrayList<Joe>(); // ok
List<? extends Joe> joes = new ArrayList<JoeJr>(); // ok
List<? extends Joe> joes = new ArrayList<Person>(); // COMPILE ERROR

Reading from them is intuitive:

// Type hierarchy: Person :> Joe :> JoeJr
List<? extends Joe> joes = new ArrayList<>();
Joe j = joes.get(0); // ok
Person p = joes.get(0); // ok
JoeJr jr = joes.get(0); // compile error (you don't know what subtype of Joe is in the list)

Writing to them is prohibited (counterintuitive) to guard against the pitfalls with arrays described above. In the example code below, the caller/owner of a List<Joe> would be astonished if someone else’s method with covariant arg List<? extends Person> added a Jill.

// Type hierarchy: Person :> Joe :> JoeJr
List<? extends Joe> joes = new ArrayList<>();
joes.add(new Joe());  // compile error (you don't know what subtype of Joe is in the list)
joes.add(new JoeJr()); // compile error (ditto)
joes.add(new Person()); // compile error (intuitive)
joes.add(new Object()); // compile error (intuitive)

Contravariance in Java

The use-site must have an open upper bound on the type parameter.

If A is a supertype of B, then GenericType<A> is a subtype of GenericType<? super B>.

Contravariant Containers

Contravariant containers behave counterintuitively: contrary to covariant containers, methods with return values of the type parameter are inaccessible, while methods with input arguments of the type parameter are accessible:

You can exchange the subtype for the supertype:

// Type hierarchy: Person :> Joe :> JoeJr
List<? super Joe> joes = new ArrayList<Joe>();  // ok
List<? super Joe> joes = new ArrayList<Person>(); // ok
List<? super Joe> joes = new ArrayList<JoeJr>(); // COMPILE ERROR

But you cannot capture a specific type when reading from them:

// Type hierarchy: Person :> Joe :> JoeJr
List<? super Joe> joes = new ArrayList<>();
Joe j = joes.get(0); // compile error (could be Object or Person)
Person p = joes.get(0); // compile error (ditto)
Object o = joes.get(0); // allowed because everything IS-A Object in Java

You can add subtypes of the “lower bound”:

// Type hierarchy: Person :> Joe :> JoeJr
List<? super Joe> joes = new ArrayList<>();
joes.add(new JoeJr()); // allowed

But you cannot add supertypes:

// Type hierarchy: Person :> Joe :> JoeJr
List<? super Joe> joes = new ArrayList<>();
joes.add(new Person()); // compile error (again, could be a list of Object or Person or Joe)
joes.add(new Object()); // compile error (ditto)

Bivariance in Java

The use-site must declare an unbounded wildcard on the type parameter.

A generic type with an unbounded wildcard is a supertype of all bounded variations of the same generic type. For example, GenericType<?> is a supertype of GenericType<String>. Since the unbounded type is the root of the hierarchy of its parameterized types, only methods inherited from java.lang.Object are accessible on its elements.

For reading purposes, think of GenericType<?> as GenericType<Object>; just keep in mind that the two are not interchangeable (a List<String> is a List<?>, but it is not a List<Object>).
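
A short sketch, reusing the hypothetical Person hierarchy from the earlier examples:

// Type hierarchy: Person :> Joe :> JoeJr
List<?> anything = new ArrayList<Joe>(); // ok: List<?> accepts any parameterization
Object o = anything.get(0);  // ok: elements are only known to be Objects
anything.add(new Joe());     // COMPILE ERROR: the actual element type is unknown
anything.add(null);          // ok: null is the only value you can insert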

Variance of Structures With N-Type Parameters

What about more complex types such as Functions? The same principles apply; you just have more type parameters to consider:

// Type hierarchy: Person :> Joe :> JoeJr
// Invariance
Function<Person, Joe> personToJoe = null;
Function<Joe, JoeJr> joeToJoeJr = null;
personToJoe = joeToJoeJr; // COMPILE ERROR (personToJoe is invariant)
// Covariance
Function<? extends Person, ? extends Joe> personToJoe = null; // covariant
Function<Joe, JoeJr> joeToJoeJr = null;
personToJoe = joeToJoeJr;  // ok
// Contravariance
Function<? super Joe, ? super JoeJr> joeToJoeJr = null; // contravariant
Function<? super Person, ? super Joe> personToJoe = null;
joeToJoeJr = personToJoe; // ok

Variance and Inheritance

Java allows overriding methods with covariant return types and exception types:

interface Person {
  Person get();
  void fail() throws Exception;
}
interface Joe extends Person {
  JoeJr get();
  void fail() throws IOException;
}
class JoeImpl implements Joe {
  public JoeJr get() { return null; } // overridden (covariant return type)
  public void fail() throws IOException {} // overridden
}

But attempting to override methods with covariant arguments results in merely an overload:

interface Person {
  void add(Person p);
}
interface Joe extends Person {
  void add(Joe j);
}
class JoeImpl implements Joe {
  public void add(Person p) {}  // overloaded
  public void add(Joe j) {} // overloaded
 }

Final Thoughts

Variance introduces additional complexity to Java. While the typing rules around variance are easy to understand, the rules regarding accessibility of methods of the type parameter are counterintuitive. Understanding them isn’t just “obvious” — it requires pausing to think through the logical consequences.

However, my daily experience has been that the nuances generally stay out of the way:

  • I cannot recall an instance where I had to declare a contravariant argument, and I rarely encounter them (although they do exist).
  • Covariant arguments seem slightly more common (example [4]), but they’re easier to reason about (fortunately).

Covariance is variance’s strongest virtue, considering that subtyping is a fundamental technique of object-oriented programming (case in point: see note [4]).

Conclusion: variance provides moderate net benefits in my daily programming, particularly when compatibility with subtypes is required (which is a regular occurrence in OOP).

Spring Framework Basics: What Is Inversion of Control?

Developers starting with the Spring Framework often get confused with the terminology, specifically dependencies, dependency injection, and Inversion of Control. In this article, we introduce you to the concept of Inversion of Control.

What You Will Learn

  • What is Inversion of Control?
  • What are some examples of Inversion of Control?
  • How does the Spring Framework implement Inversion of Control?
  • Why is Inversion of Control important and what are its advantages?

What Is Inversion of Control?

Approach-1

Have a look at the following implementation of ComplexAlgorithmImpl:

public class ComplexAlgorithmImpl {
  BubbleSortAlgorithm bubbleSortAlgorithm = new BubbleSortAlgorithm();
  //...
}

One of the numerous things that ComplexAlgorithmImpl does is sorting. It creates an instance of BubbleSortAlgorithm directly within its code.

Approach-2

Now, look at this implementation for a change:

public interface SortAlgorithm {
  public int[] sort(int[] numbers);
}

@Component
public class ComplexAlgorithmImpl {
  @Autowired
  private SortAlgorithm sortAlgorithm;
  //...
}

ComplexAlgorithmImpl here makes use of the SortAlgorithm interface. It also provides a constructor or a setter method where you can set the SortAlgorithm instance into it. The user tells ComplexAlgorithmImpl which sort algorithm to use.
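
For illustration, a constructor-based variant might look like the following sketch (QuickSortAlgorithm is a hypothetical implementation of SortAlgorithm):

@Component
public class ComplexAlgorithmImpl {

  private final SortAlgorithm sortAlgorithm;

  // the user (or Spring) decides which SortAlgorithm implementation to use
  public ComplexAlgorithmImpl(SortAlgorithm sortAlgorithm) {
    this.sortAlgorithm = sortAlgorithm;
  }
  //...
}

// plain usage, without any framework:
ComplexAlgorithmImpl algorithm = new ComplexAlgorithmImpl(new QuickSortAlgorithm());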

Comparing Approach-1 and Approach-2

Approach-1

  • ComplexAlgorithmImpl can only use BubbleSortAlgorithm; it is tightly coupled.
  • If we need to change ComplexAlgorithmImpl to use quicksort, the relevant code needs to be changed entirely.
  • The control over the BubbleSortAlgorithm dependency is with the ComplexAlgorithmImpl class.

Approach-2

  • ComplexAlgorithmImpl is open to using any implementation of SortAlgorithm; it is loosely coupled.
  • We only need to change the parameter we pass to the constructor or setter of ComplexAlgorithmImpl.
  • The control over the SortAlgorithm dependency is with the user of ComplexAlgorithmImpl.

Inversion Of Control At Play!

In Approach-1, ComplexAlgorithmImpl is tied to a specific sort algorithm.

In Approach-2, it says: give me any sort algorithm and I will work with it.

This is Inversion of Control.

Instead of creating its own dependencies, a class declares its dependencies. The control now shifts from the class to the user of the class to provide the dependency.

Why Is Inversion of Control Important?

Once you write code with Inversion of Control, you can use frameworks like Spring to complete dependency injection and wire up beans and dependencies.
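
For example, annotating an implementation as a Spring bean is enough for Spring to discover it and inject it into the @Autowired field of ComplexAlgorithmImpl shown earlier. A minimal sketch, with the sorting logic omitted:

@Component
public class BubbleSortAlgorithm implements SortAlgorithm {

  public int[] sort(int[] numbers) {
    // bubble sort logic goes here
    return numbers;
  }
}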

Advantages of Inversion Of Control

  • Inversion of Control makes your code loosely coupled
  • Inversion of Control also makes it easy for the programmer to write effective unit tests

Summary

In this article, we talked about Inversion of Control. Instead of a class creating an instance of its own dependency, it leaves it to the user of the class to pass it in and makes code loosely coupled.

Hope you learned something! Let us know what you think in the comments below.

Paging with Spring Boot

As users of a web application, we expect pages to load quickly and show only the information that’s relevant to us. For pages that show a list of items, this means displaying only a portion of the items, not all of them at once.

Once the first page has loaded quickly, the UI can provide options like filters, sorting and pagination that help the user to quickly find the items he or she is looking for.

In this tutorial, we’ll examine Spring Data’s paging support and create examples of how to use and configure it, along with some information about how it works under the covers.

This article is accompanied by working example code on github.

Paging vs. Pagination

The terms “paging” and “pagination” are often used as synonyms. They don’t exactly mean the same, however. After consulting various web dictionaries, I’ve cobbled together the following definitions, which I’ll use in this text:

Paging is the act of loading one page of items after another from a database, in order to preserve resources. This is what most of this article is about.

Pagination is the UI element that provides a sequence of page numbers to let the user choose which page to load next.

Initializing the Example Project

We’re using Spring Boot to bootstrap a project in this tutorial. You can create a similar project by using Spring Initializr and choosing the following dependencies:

  • Web
  • JPA
  • H2
  • Lombok

I additionally replaced JUnit 4 with JUnit 5, so that the resulting dependencies look like this (Gradle notation):

dependencies {
  implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
  implementation 'org.springframework.boot:spring-boot-starter-web'
  compileOnly 'org.projectlombok:lombok'
  annotationProcessor 'org.projectlombok:lombok'
  runtimeOnly 'com.h2database:h2'
  testImplementation('org.junit.jupiter:junit-jupiter:5.4.0')
  testImplementation('org.springframework.boot:spring-boot-starter-test'){
    exclude group: 'junit', module: 'junit'
  }
}

Spring Data’s Pageable

No matter if we want to do conventional pagination, infinite scrolling or simple “previous” and “next” links, the implementation in the backend is the same.

If the client only wants to display a “slice” of a list of items, it needs to provide some input parameters that describe this slice. In Spring Data, these parameters are bundled within the Pageable interface. It provides the following methods, among others (comments are mine):

public interface Pageable {
    
  // number of the current page  
  int getPageNumber();
  
  // size of the pages
  int getPageSize();
  
  // sorting parameters
  Sort getSort();
    
  // ... more methods
}

Whenever we want to load only a slice of a full list of items, we can use a Pageable instance as an input parameter, as it provides the number of the page to load as well as the size of the pages. Through the Sort class, it also allows us to define fields to sort by and the direction in which they should be sorted (ascending or descending).

The most common way to create a Pageable instance is to use the PageRequest implementation:

Pageable pageable = PageRequest.of(0, 5, Sort.by(
    Order.asc("name"),
    Order.desc("id")));

This will create a request for the first page with 5 items ordered first by name (ascending) and second by id (descending). Note that the page index is zero-based by default!

Confusion with java.awt.print.Pageable?

When working with Pageable, you’ll notice that your IDE will sometimes propose to import java.awt.print.Pageable instead of Spring’s Pageable class. Since we most probably don’t need any classes from the java.awt package, we can tell our IDE to ignore it altogether.

In IntelliJ, go to “General -> Editor -> Auto Import” in the settings and add java.awt.* to the list labelled “Exclude from import and completion”.

In Eclipse, go to “Java -> Appearance -> Type Filters” in the preferences and add java.awt.* to the package list.

Spring Data’s Page and Slice

While Pageable bundles the input parameters of a paging request, the Page and Slice interfaces provide metadata for a page of items that is returned to the client (comments are mine):

public interface Page<T> extends Slice<T>{
  
  // total number of pages
  int getTotalPages();
  
  // total number of items
  long getTotalElements();
  
  // ... more methods
  
}
public interface Slice<T> {
  
  // current page number
  int getNumber();
    
  // page size
  int getSize();
    
  // number of items on the current page
  int getNumberOfElements();
    
  // list of items on this page
  List<T> getContent();
  
  // ... more methods
  
}

With the data provided by the Page interface, the client has all the information it needs to provide a pagination functionality.

We can use the Slice interface instead, if we don’t need the total number of items or pages, for instance if we only want to provide “previous page” and “next page” buttons and have no need for “first page” and “last page” buttons.
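
For example, with a repository method that returns a Slice (like the findByMovieCustom method shown later in this article), the client-facing logic might look like this sketch (the movie name is illustrative):

Slice<MovieCharacter> slice = characterRepository.findByMovieCustom("Star Wars", pageable);
List<MovieCharacter> content = slice.getContent(); // the items of the current page
boolean hasNext = slice.hasNext();         // enough to decide whether to render a "next page" button
boolean hasPrevious = slice.hasPrevious(); // ... and a "previous page" button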

The most common implementation of the Page interface is provided by the PageImpl class:

Pageable pageable = ...;
List<MovieCharacter> listOfCharacters = ...;
long totalCharacters = 100;
Page<MovieCharacter> page = 
    new PageImpl<>(listOfCharacters, pageable, totalCharacters);

Paging in a Web Controller

If we want to return a Page (or Slice) of items in a web controller, it needs to accept a Pageable parameter that defines the paging parameters, pass it on to the database, and then return a Page object to the client.

Activating Spring Data Web Support

Paging has to be supported by the underlying persistence layer in order to deliver paged answers to any queries. This is why the Pageable and Page classes originate from the Spring Data module, and not, as one might suspect, from the Spring Web module.

In a Spring Boot application with auto-configuration enabled (which is the default), we don’t have to do anything since it will load the SpringDataWebAutoConfiguration by default, which includes the @EnableSpringDataWebSupport annotation that loads the necessary beans.

In a plain Spring application without Spring Boot, we have to use @EnableSpringDataWebSupport on a @Configuration class ourselves:

@Configuration
@EnableSpringDataWebSupport
class PaginationConfiguration {
}

If we’re using Pageable or Sort arguments in web controller methods without having activated Spring Data Web support, we’ll get exceptions like these:

java.lang.NoSuchMethodException: org.springframework.data.domain.Pageable.<init>()
java.lang.NoSuchMethodException: org.springframework.data.domain.Sort.<init>()

These exceptions mean that Spring tries to create a Pageable or Sort instance and fails because they don’t have a default constructor.

This is fixed by the Spring Data Web support, since it adds the PageableHandlerMethodArgumentResolver and SortHandlerMethodArgumentResolver beans to the application context, which are responsible for finding web controller method arguments of types Pageable and Sort and populating them with the values of the page, size, and sort query parameters.

Accepting a Pageable Parameter

With the Spring Data Web support enabled, we can simply use a Pageable as an input parameter to a web controller method and return a Page object to the client:

@RestController
@RequiredArgsConstructor
class PagedController {

  private final MovieCharacterRepository characterRepository;

  @GetMapping(path = "/characters/page")
  Page<MovieCharacter> loadCharactersPage(Pageable pageable) {
    return characterRepository.findAllPage(pageable);
  }
  
}

An integration test shows that the query parameters page, size, and sort are now evaluated and “injected” into the Pageable argument of our web controller method:

@WebMvcTest(controllers = PagedController.class)
class PagedControllerTest {

  @MockBean
  private MovieCharacterRepository characterRepository;

  @Autowired
  private MockMvc mockMvc;

  @Test
  void evaluatesPageableParameter() throws Exception {

    mockMvc.perform(get("/characters/page")
        .param("page", "5")
        .param("size", "10")
        .param("sort", "id,desc")   // <-- no space after comma!
        .param("sort", "name,asc")) // <-- no space after comma!
        .andExpect(status().isOk());

    ArgumentCaptor<Pageable> pageableCaptor = 
        ArgumentCaptor.forClass(Pageable.class);
    verify(characterRepository).findAllPage(pageableCaptor.capture());
    PageRequest pageable = (PageRequest) pageableCaptor.getValue();

    assertThat(pageable).hasPageNumber(5);
    assertThat(pageable).hasPageSize(10);
    assertThat(pageable).hasSort("name", Sort.Direction.ASC);
    assertThat(pageable).hasSort("id", Sort.Direction.DESC);
  }
}

The test captures the Pageable parameter passed into the repository method and verifies that it has the properties defined by the query parameters.

Note that I used a custom AssertJ assertion to create readable assertions on the Pageable instance.

Also note that in order to sort by multiple fields, we must provide the sort query parameter multiple times. Each may consist of simply a field name, assuming ascending order, or a field name with an order, separated by a comma without spaces. If there is a space between the field name and the order, the order will not be evaluated.

Accepting a Sort Parameter

Similarly, we can use a standalone Sort argument in a web controller method:

@RestController
@RequiredArgsConstructor
class PagedController {

  private final MovieCharacterRepository characterRepository;

  @GetMapping(path = "/characters/sorted")
  List<MovieCharacter> loadCharactersSorted(Sort sort) {
    return characterRepository.findAllSorted(sort);
  }
}

Naturally, a Sort object is populated only with the value of the sort query parameter, as this test shows:

@WebMvcTest(controllers = PagedController.class)
class PagedControllerTest {

  @MockBean
  private MovieCharacterRepository characterRepository;

  @Autowired
  private MockMvc mockMvc;

  @Test
  void evaluatesSortParameter() throws Exception {

    mockMvc.perform(get("/characters/sorted")
        .param("sort", "id,desc")   // <-- no space after comma!!!
        .param("sort", "name,asc")) // <-- no space after comma!!!
        .andExpect(status().isOk());

    ArgumentCaptor<Sort> sortCaptor = ArgumentCaptor.forClass(Sort.class);
    verify(characterRepository).findAllSorted(sortCaptor.capture());
    Sort sort = sortCaptor.getValue();

    assertThat(sort).hasSort("name", Sort.Direction.ASC);
    assertThat(sort).hasSort("id", Sort.Direction.DESC);
  }
}

Customizing Global Paging Defaults

If we don’t provide the page, size, or sort query parameters when calling a controller method with a Pageable argument, it will be populated with default values.

Spring Boot uses the @ConfigurationProperties feature to bind the following properties to a bean of type SpringDataWebProperties:

spring.data.web.pageable.size-parameter=size
spring.data.web.pageable.page-parameter=page
spring.data.web.pageable.default-page-size=20
spring.data.web.pageable.one-indexed-parameters=false
spring.data.web.pageable.max-page-size=2000
spring.data.web.pageable.prefix=
spring.data.web.pageable.qualifier-delimiter=_

The values above are the default values. Some of these properties are not self-explanatory, so here’s what they do:

  • with size-parameter we can change the name of the size query parameter
  • with page-parameter we can change the name of the page query parameter
  • with default-page-size we can define the default of the size parameter if no value is given
  • with one-indexed-parameters we can choose if the page parameter starts with 0 or with 1
  • with max-page-size we can choose the maximum value allowed for the size query parameter (values larger than this will be reduced)
  • with prefix we can define a prefix for the page and size query parameter names (not for the sort parameter!)

The qualifier-delimiter property is a very special case. We can use the @Qualifier annotation on a Pageable method argument to provide a local prefix for the paging query parameters:

@RestController
class PagedController {

  @GetMapping(path = "/characters/qualifier")
  Page<MovieCharacter> loadCharactersPageWithQualifier(
      @Qualifier("my") Pageable pageable) {
    ...
  }

}

This has a similar effect to the prefix property from above, but it also applies to the sort parameter. The qualifier-delimiter is used to delimit the prefix from the parameter name. In the example above, only the query parameters my_page, my_size, and my_sort are evaluated.

spring.data.web.* Properties are not evaluated?

If changes to the configuration properties above have no effect, the SpringDataWebProperties bean is probably not loaded into the application context.

One reason for this could be that you have used @EnableSpringDataWebSupport to activate the pagination support. This will override SpringDataWebAutoConfiguration, in which the SpringDataWebProperties bean is created. Use @EnableSpringDataWebSupport only in a plain Spring application.

Customizing Local Paging Defaults

Sometimes we might want to define default paging parameters for a single controller method only. For this case, we can use the @PageableDefault and @SortDefault annotations:

@RestController
class PagedController {

  @GetMapping(path = "/characters/page")
  Page<MovieCharacter> loadCharactersPage(
      @PageableDefault(page = 0, size = 20)
      @SortDefault.SortDefaults({
          @SortDefault(sort = "name", direction = Sort.Direction.DESC),
          @SortDefault(sort = "id", direction = Sort.Direction.ASC)
      }) Pageable pageable) {
    ...
  }
  
}

If no query parameters are given, the Pageable object will now be populated with the default values defined in the annotations.

Note that the @PageableDefault annotation also has a sort property, but if we want to define multiple fields to sort by in different directions, we have to use @SortDefault.

Paging in a Spring Data Repository

Since the pagination features described in this article come from Spring Data, it’s no surprise that Spring Data has complete support for pagination. This support is, however, explained very quickly, since we only have to add the right parameters and return values to our repository interfaces.

Passing Paging Parameters

We can simply pass a Pageable or Sort instance into any Spring Data repository method:

interface MovieCharacterRepository 
        extends CrudRepository<MovieCharacter, Long> {

  List<MovieCharacter> findByMovie(String movieName, Pageable pageable);
  
  @Query("select c from MovieCharacter c where c.movie = :movie")
  List<MovieCharacter> findByMovieCustom(
      @Param("movie") String movieName, Pageable pageable);
  
  @Query("select c from MovieCharacter c where c.movie = :movie")
  List<MovieCharacter> findByMovieSorted(
      @Param("movie") String movieName, Sort sort);

}

Even though Spring Data provides a PagingAndSortingRepository, we don’t have to use it to get paging support. It merely provides two convenience findAll methods, one with a Sort and one with a Pageable parameter.
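
For reference, these are the two methods it declares (signatures from Spring Data Commons):

interface PagingAndSortingRepository<T, ID> extends CrudRepository<T, ID> {

  // returns all entities sorted by the given options
  Iterable<T> findAll(Sort sort);

  // returns a page of entities meeting the paging restriction in the Pageable
  Page<T> findAll(Pageable pageable);

}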

Returning Page Metadata

If we want to return page information to the client instead of a simple list, we simply let our repository methods return a Slice or a Page:

interface MovieCharacterRepository 
        extends CrudRepository<MovieCharacter, Long> {

  Page<MovieCharacter> findByMovie(String movieName, Pageable pageable);

  @Query("select c from MovieCharacter c where c.movie = :movie")
  Slice<MovieCharacter> findByMovieCustom(
      @Param("movie") String movieName, Pageable pageable);

}

Every method returning a Slice or Page must have exactly one Pageable parameter, otherwise Spring Data will complain with an exception on startup.

Conclusion

The Spring Data Web support makes paging easy in plain Spring applications as well as in Spring Boot applications. It’s a matter of activating it and then using the right input and output parameters in controller and repository methods.

With Spring Boot’s configuration properties, we have fine-grained control over the defaults and parameter names.

There are some potential catches though, some of which I have described in the text above, so you don’t have to trip over them.

If you’re missing anything about paging with Spring in this tutorial, let me know in the comments.

You can find the example code used in this article on github.

Functional Programming Paradigm

Introduction
Functional programming is a programming paradigm in which we try to bind everything in the style of pure mathematical functions. It is a declarative style of programming. Its main focus is on “what to solve,” in contrast to an imperative style where the main focus is “how to solve.” It uses expressions instead of statements. An expression is evaluated to produce a value, whereas a statement is executed to assign variables. These functions have some special properties, discussed below.

Functional Programming is based on Lambda Calculus:
Lambda calculus is a formal system developed by Alonzo Church to study computation with functions. It can be called the smallest programming language in the world. It gives a definition of what is computable: anything that can be computed by lambda calculus is computable. It is equivalent to a Turing machine in its ability to compute. It provides a theoretical framework for describing functions and their evaluation, and it forms the basis of almost all current functional programming languages.
Fact: Alan Turing, who created the Turing machine that laid the foundation of the imperative programming style, was a student of Alonzo Church.

Programming Languages that support functional programming: Haskell, JavaScript, Scala, Erlang, Lisp, ML, Clojure, OCaml, Common Lisp, Racket.

Concepts of functional programming:

  • Pure functions
  • Recursion
  • Referential transparency
  • Functions are First-Class and can be Higher-Order
  • Variables are Immutable

Pure functions: These functions have two main properties. First, they always produce the same output for the same arguments, irrespective of anything else.
Secondly, they have no side effects, i.e. they do not modify any arguments or global variables or perform any I/O.
The latter property is called immutability. A pure function’s only result is the value it returns. Pure functions are deterministic.
Programs written using functional programming are easy to debug because pure functions have no side effects or hidden I/O. Pure functions also make it easier to write parallel/concurrent applications. When the code is written in this style, a smart compiler can do many things – it can parallelize the instructions, wait to evaluate results until they are needed, and memoize the results, since the results never change as long as the input doesn’t change.
Example of a pure function:

sum(x, y)           // sum is a function taking x and y as arguments
    return x + y    // sum is returning sum of x and y without changing them

Recursion: There are no “for” or “while” loops in functional languages. Iteration in functional languages is implemented through recursion. Recursive functions repeatedly call themselves until they reach the base case.
Example of a recursive function:

fib(n)
    if (n <= 1)
        return 1;
    else
        return fib(n - 1) + fib(n - 2);

Referential transparency: In functional programs, variables, once defined, do not change their value throughout the program. Functional programs do not have assignment statements. If we have to store some value, we define a new variable instead. This eliminates any chance of side effects, because any variable can be replaced with its actual value at any point of execution. The state of any variable is constant at any instant.
Example:

x = x + 1 // this changes the value assigned to the variable x.
          // So the expression is not referentially transparent. 

Functions are First-Class and can be Higher-Order: First-class functions are treated as first-class variables. First-class variables can be passed to functions as parameters, returned from functions, or stored in data structures. Higher-order functions are functions that take other functions as arguments, and they can also return functions.
Example:

show_output(f)            // function show_output is declared taking argument f 
                          // which is another function
    f();                  // calling passed function

print_gfg()             // declaring another function 
    print("hello gfg");

show_output(print_gfg)  // passing function in another function

Variables are Immutable: In functional programming, we can’t modify a variable after it’s been initialized. We can create new variables – but we can’t modify existing variables, and this really helps to maintain state throughout the runtime of a program. Once we create a variable and set its value, we can have full confidence knowing that the value of that variable will never change.
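
The examples above are pseudocode, so here is a short Java sketch of the same ideas (a pure function, a higher-order function, and an immutable variable); all names are illustrative:

import java.util.function.Function;

public class FunctionalExamples {

  // pure function: same output for the same arguments, no side effects
  static int sum(int x, int y) {
    return x + y;
  }

  // higher-order function: takes another function as an argument
  static int applyTwice(Function<Integer, Integer> f, int value) {
    return f.apply(f.apply(value));
  }

  public static void main(String[] args) {
    final int base = 40; // immutable: reassigning base would not compile
    System.out.println(sum(base, 2));               // 42
    System.out.println(applyTwice(x -> x + 1, 40)); // 42
  }
}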

Advantages and Disadvantages of Functional programming

Advantages:

  1. Pure functions are easier to understand because they don’t change any state and depend only on the input given to them. Whatever output they produce is the return value they give. Their function signature gives all the information about them, i.e. their return type and their arguments.
  2. The ability of functional programming languages to treat functions as values and pass them to functions as parameters makes the code more readable and easily understandable.
  3. Testing and debugging is easier. Since pure functions take only arguments and produce output, they don’t cause any changes, take hidden input, or produce hidden output. They use immutable values, so it becomes easier to track down problems in programs written using pure functions.
  4. It is used to implement concurrency/parallelism because pure functions don’t change variables or any other data outside of them.
  5. It adopts lazy evaluation, which avoids repeated evaluation because the value is evaluated and stored only when it is needed.

Disadvantages:

  1. Sometimes writing pure functions can reduce the readability of code.
  2. Writing programs in a recursive style instead of using loops can be a bit intimidating.
  3. Writing pure functions is easy, but combining them with the rest of the application and with I/O operations is the difficult part.
  4. Immutable values and recursion can lead to a decrease in performance.

Applications:

  • It is used in mathematical computations.
  • It is needed where concurrency or parallelism is required.

Fact: WhatsApp needs only 50 engineers for its 900M users because Erlang is used to implement its concurrency needs. Facebook uses Haskell in its anti-spam system.

Java Stream API Was Broken Before JDK 10

Of course, not all of it, but history showed that Stream API featured a few interesting bugs/deficiencies that can affect anyone still residing on JDK 8 and JDK 9.

Stream#flatMap

Unfortunately, it turns out that Stream#flatMap was not as lazy as advertised, which allowed for several crazy situations to exist.

For example, let’s take this one:

Stream.of(1)
  .flatMap(i -> Stream.generate(() -> 42))
  .findAny()
  .ifPresent(System.out::println);

In JDK 8 and JDK 9, the above code snippet spins forever waiting for the evaluation of the inner infinite Stream.

One would expect O(1) time complexity from a trivial operation of taking a single element from an infinite sequence — and this is how it works as long as we don’t process an infinite Stream inside  Stream#flatMap:

Stream.generate(() -> 42)
  .findAny()
  .ifPresent(System.out::println);
// completes "immediately" and prints 42

What’s more, it gets worse if we insert some additional processing after a short-circuited Stream#flatMap call:

Stream.of(1)
  .flatMap(i -> Stream.generate(() -> 42))
  .map(i -> process(i))
  .findAny()
  .ifPresent(System.out::println);
private static <T> T process(T input) {
    System.out.println("Processing...");
    return input;
}

Now, not only are we stuck in an infinite evaluation loop, but we’re also processing all of the items coming through:

Processing...
Processing...
Processing...
Processing...
Processing...
Processing...
Processing...
Processing...
Processing...
Processing...
Processing...
Processing...
...

Imagine the consequences if the process() method contained some blocking operations and unwanted side-effects like email send outs or logging.

Explanation

The internal implementation of Stream#flatMap is to blame, especially the following code:

@Override
public void accept(P_OUT u) {
    try (Stream<? extends R> result = mapper.apply(u)) {
        // We can do better that this too; optimize for depth=0 case and just grab spliterator and forEach it
        if (result != null)
            result.sequential().forEach(downstream);
    }
}

As you can see, the inner Stream is consumed eagerly using Stream#forEach (not even mentioning the lack of curly braces around the conditional statement. Ugh!).

The problem remained unaddressed in JDK 9, but luckily, the solution was shipped with JDK 10:

@Override
public void accept(P_OUT u) {
    try (Stream<? extends R> result = mapper.apply(u)) {
        if (result != null) {
            if (!cancellationRequestedCalled) {
                result.sequential().forEach(downstream);
            }
            else {
                var s = result.sequential().spliterator();
                do { } while (!downstream.cancellationRequested() && s.tryAdvance(downstream));
            }
        }
    }
}

Stream#takeWhile/dropWhile

This one is directly connected to the above one and  Stream#flatMap’s unwanted eager evaluation.

Let’s say we have a list of lists:

List<List<String>> list = List.of(
  List.of("1", "2"),
  List.of("3", "4", "5", "6", "7"));

And, we want to flatten them into a single one:

list.stream()
  .flatMap(Collection::stream)
  .forEach(System.out::println);

// 1
// 2
// 3
// 4
// 5
// 6
// 7

It works just as expected.

Now, let’s take the flattened Stream and simply keep taking elements until we encounter “4”:

Stream.of("1", "2", "3", "4", "5", "6", "7")
  .takeWhile(i -> !i.equals("4"))
  .forEach(System.out::println);
// 1
// 2
// 3

Again, it works just as we expected.

Let’s now try to combine these two; what could go wrong?

List<List<String>> list = List.of(
  List.of("1", "2"),
  List.of("3", "4", "5", "6", "7"));
list.stream()
  .flatMap(Collection::stream)
  .takeWhile(i -> !i.equals("4"))
  .forEach(System.out::println);

// 1
// 2
// 3
// 5
// 6
// 7

That’s an unexpected turn of events and can be fully attributed to the original issue with Stream#flatMap.

Some time ago, I ran a short poll on Twitter; most of you were quite surprised by the result.

Parallel Streams on Custom ForkJoinPool Instances

There’s one commonly-known hack (that you should not be using, since it relies on internal implementation details of Stream API) that makes it possible to hijack parallel Stream tasks and run them on a custom fork-join pool by running them from within your own FJP instance:

ForkJoinPool customPool = new ForkJoinPool(42);
customPool.submit(() -> list.parallelStream() /*...*/);

If you thought that you managed to trick everyone already, you were partially right.

It turns out that even though tasks were running on a custom pool instance, they were still coupled to the shared pool – the size of the computation would remain in proportion to the common pool and not the custom pool – JDK-8190974.

So, even if you were using these when you shouldn’t have, the fix for that arrived in JDK 10. Additionally, if you really need to use Stream API to run parallel computations, you could use parallel-collectors instead.

An Introduction to Docker and Containerization

What is Docker?

Docker is both a brand and a technology. It was developed under the Open Container Initiative by Docker (the company, formerly known as dotCloud) when it virtually went bankrupt. Docker (the product) not only helped it raise funds, but also paved the way for its strong revival in the game. On a Linux platform, it allows an end user to run multiple containers, each of which can hold a single application. In precise technical terms, when you run an application on an operating system, it runs in its “user space,” and every OS comes with a single instance of this user space. In Docker, every container has a separate user space to offer. What this means is that containers enable us to have multiple instances of user spaces on a single operating system. Therefore, in the simplest terms, a container is just an isolated version of a user space. That’s it!

How Is It Different From VMs?

Docker is different from a VM in the following ways:

  1. It’s very lightweight in comparison to a VM in terms of size and resource consumption. This is because a container is just the bare bones of an operating system. It contains only the most basic packages required for an OS to run.
  2. It takes less time to spin up in contrast to a virtual machine (it depends on the application that you’ll be hosting on a container, but usually, it’s just a matter of seconds).
  3. Unlike a VM, a container can run only one process, and when that process stops for some reason, the container expires as well. You can apparently modify this behavior to have it run multiple processes, but with that, you lose the essential concept of “loose coupling” your components. Better stick to VMs then.

Core Constructs of Docker

A Docker-based environment consists mainly of the following things:

Docker Engine

This is the main component responsible for running workloads in the form of a container. You have three options to choose from: the Community edition, the Enterprise edition, and Experimental, the last of which shouldn’t be used in Production.

Docker Client

It comes equipped with the Docker Engine package in the form of the docker binary, and by default connects to the locally installed Docker Engine. You interact with the Docker Engine using this client only.

Docker Image

A Docker image is to a container what an ISO image is to a VM. A Docker image consists of multiple layers stacked on top of one another and presented via union mounts. Here, the first layer (zero-indexed) is the base image, the second is the application layer (like Tomcat or NGINX), and the third contains any sort of updates. When you start a container using an image, an additional writable layer gets added to it, whereas the rest of the layers are read-only.

Docker Repository and Registry

A repository is the place where Docker images, by default, go when you push one. A repository is contained within a registry; the two are different things, so be aware. One well-known public Docker registry is Docker Hub.

Docker Container

A container, as explained above, is an isolated version of user space, started using a Docker image. An important thing to note here is that unlike Linux systems, where the first PID is assigned to init or systemd, in a container this PID is assigned to the command or service that it is supposed to run. When that process is dead, the container exits.

How Does Docker Work?

On a Linux-based OS, a container leverages existing kernel features.

Namespaces

Don’t confuse it with user space, as it’s different. Essentially, these namespaces are Network, PID, IPC, User, Mount, and UTS. They allow a Docker container to have its own view of Network, PID, hostname, users and groups, etc.

CGroups

CGroups, short for Control Groups, are what allow containers to have a reserved/dedicated amount of resources assigned to them in the form of CPU and memory.

Apart from these two (namespace and CGroups), Docker also makes use of storage drivers like AUFS, DeviceMapper, Overlay, BTRFS, and VFS. I won’t explain the difference between them and their features to keep this article as simple as possible. Just keep in mind that the default storage driver for Docker on an RHEL-type OS (like CentOS) is DeviceMapper, while on a Debian-based OS like Ubuntu, it’s AUFS.

What Do We Need to Run a Docker Environment?

At the very basic level you need the following two Docker components:

1.) A Docker Engine

2.) A Docker image (as appropriate)

How Do We Get or Create a Docker Image?

If you don’t have very specific requirements, you can just find and pull an image directly from Docker Hub that fulfills your needs, using the Docker command line. For example, if you just need to run NGINX with default settings, you need not compile your own image; just pull one from Docker Hub. Remember, the higher the star count an image has, the more reliable it is. If you have specific requirements and need a custom image that’s not already available, then you can:

  • Pull a base image, run a container from it, do all the modifications as needed, and commit it as an image.
  • Create a Dockerfile and compile an image from scratch using it. Again, I don’t want to make this article too complex, so I will just give an overview of what a Dockerfile is.

What Is a Dockerfile?

A Dockerfile (case sensitive) is a plain text file where you write your instructions to create an image. These instructions are read one at a time, from left to right, top to bottom. These instructions include terms like FROM, MAINTAINER, RUN, CMD, and ENV. You can read more about the Dockerfile format here, and while you try to gain more familiarity with it, just keep two things in mind:

  • The more RUN instructions you add (these are mainly meant to provision an image), the more layers get added to an image. Recall that an image is comprised of layers.
  • There can be only one CMD instruction per Dockerfile. If you add multiple, only the last one takes effect (see the sketch below).
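
To give a rough idea, a minimal Dockerfile might look like the sketch below (the base image and commands are illustrative, not a recommendation):

# base image (first layer)
FROM ubuntu:18.04
MAINTAINER someone@example.com
# each RUN instruction provisions the image and adds another layer
RUN apt-get update && apt-get install -y nginx
# only one CMD takes effect; this process gets PID 1 in the container
CMD ["nginx", "-g", "daemon off;"]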

Alright, let’s take a look at how Docker commands look:

Docker Command Examples

  • To pull an image from the default registry (Docker Hub) use: docker pull [NAME_OF_THE_IMAGE]
  • To run a container using the downloaded image run: docker run -d [NAME_OF_THE_IMAGE] [COMMAND]. By the way, this command will automatically pull an image if the specified one doesn’t already exist on the local Docker host.
  • The -d parameter detaches you from the container and returns you to the host’s shell. Without a command, the container will exit if one is not already specified in the image (hope you still remember that too).
  • To search for an image on Docker Hub use: docker search [ANY_STRING]
  • To view all the locally available images run: docker images
  • To view just the running containers use: docker ps
  • To view all the containers run: docker ps -a
  • To remove a container run: docker rm [CONTAINER_ID/NAME]
  • To remove an image: docker rmi [IMAGE_NAME]

I could go on and on and on but would prefer to stop here. For a full list of commands, go to the Docker command documentation.

How Docker Is Used in the Real World

You cannot merely run Docker as-is to handle your workloads, especially production ones. You need a scheduling and orchestration solution in place for a containerized environment. Some of the most popular container orchestration solutions include:

  • Kubernetes from Google
  • EC2 Container Service from AWS (A managed service)
  • Mesos Marathon from Apache
  • Docker Swarm from Docker (mainly a host-based clustering solution rather than a container orchestrator)

Which one you should use depends mainly upon your business and workload needs, and familiarity.

I’d prefer Kubernetes, as it has been used by Google for over a decade and has probably gotten a bit more adept at working in a containerized environment, and because I’m more familiar with it. However, a business/organization runs as per its needs, and one should be ready to understand and respect that fact and work accordingly.

Hystrix: Explained

A typical distributed system consists of many services collaborating together.

These services are prone to failure or delayed responses. If a service fails, it may impact other services, affecting performance and possibly making other parts of the application inaccessible, or in the worst case bringing down the whole application.

Of course, there are solutions available that help make applications resilient and fault tolerant – one such framework is Hystrix.

The Hystrix framework library helps to control the interaction between services by providing fault tolerance and latency tolerance. It improves the overall resilience of the system by isolating the failing services and stopping the cascading effect of failures.

In this series of posts we will begin by looking at how Hystrix comes to the rescue when a service or system fails and what Hystrix can accomplish in these circumstances.

2. Simple Example

The way Hystrix provides fault and latency tolerance is to isolate and wrap calls to remote services.

In this simple example we wrap a call in the run() method of the HystrixCommand:

class CommandHelloWorld extends HystrixCommand<String> {
    private String name;
    CommandHelloWorld(String name) {
        super(HystrixCommandGroupKey.Factory.asKey("ExampleGroup"));
        this.name = name;
    }
    @Override
    protected String run() {
        return "Hello " + name + "!";
    }
}

and we execute the call as follows:

@Test
public void givenInputBobAndDefaultSettings_whenCommandExecuted_thenReturnHelloBob(){
    assertThat(new CommandHelloWorld("Bob").execute(), equalTo("Hello Bob!"));
}

3. Maven Setup

To use Hystrix in a Maven project, we need the hystrix-core and rxjava-core dependencies from Netflix in the project pom.xml:

<dependency>
    <groupId>com.netflix.hystrix</groupId>
    <artifactId>hystrix-core</artifactId>
    <version>1.5.4</version>
</dependency>

The latest version can always be found here.

<dependency>
    <groupId>com.netflix.rxjava</groupId>
    <artifactId>rxjava-core</artifactId>
    <version>0.20.7</version>
</dependency>

The latest version of this library can always be found here.

4. Setting up Remote Service

Let’s start by simulating a real world example.

In the example below, the class RemoteServiceTestSimulator represents a service on a remote server. It has a method which responds with a message after the given period of time. We can imagine that this wait is a simulation of a time consuming process at the remote system resulting in a delayed response to the calling service:

class RemoteServiceTestSimulator {
    private long wait;
    RemoteServiceTestSimulator(long wait) throws InterruptedException {
        this.wait = wait;
    }
    String execute() throws InterruptedException {
        Thread.sleep(wait);
        return "Success";
    }
}

And here is our sample client that calls the RemoteServiceTestSimulator.

The call to the service is isolated and wrapped in the run() method of a HystrixCommand. It’s this wrapping that provides the resilience we touched upon above:

class RemoteServiceTestCommand extends HystrixCommand<String> {
    private RemoteServiceTestSimulator remoteService;
    RemoteServiceTestCommand(Setter config, RemoteServiceTestSimulator remoteService) {
        super(config);
        this.remoteService = remoteService;
    }
    @Override
    protected String run() throws Exception {
        return remoteService.execute();
    }
}

The call is executed by calling the execute() method on an instance of the RemoteServiceTestCommand object.

The following test demonstrates how this is done:

@Test
public void givenSvcTimeoutOf100AndDefaultSettings_whenRemoteSvcExecuted_thenReturnSuccess()
  throws InterruptedException {
    HystrixCommand.Setter config = HystrixCommand
      .Setter
      .withGroupKey(HystrixCommandGroupKey.Factory.asKey("RemoteServiceGroup2"));
    
    assertThat(new RemoteServiceTestCommand(config, new RemoteServiceTestSimulator(100)).execute(),
      equalTo("Success"));
}

So far we have seen how to wrap remote service calls in the HystrixCommand object. In the section below let’s look at how to deal with a situation when the remote service starts to deteriorate.

5. Working with Remote Service and Defensive Programming

5.1. Defensive Programming with Timeout

It is general programming practice to set timeouts for calls to remote services.

Let’s begin by looking at how to set timeout on HystrixCommand and how it helps by short circuiting:

@Test
public void givenSvcTimeoutOf5000AndExecTimeoutOf10000_whenRemoteSvcExecuted_thenReturnSuccess()
  throws InterruptedException {
    HystrixCommand.Setter config = HystrixCommand
      .Setter
      .withGroupKey(HystrixCommandGroupKey.Factory.asKey("RemoteServiceGroupTest4"));
    HystrixCommandProperties.Setter commandProperties = HystrixCommandProperties.Setter();
    commandProperties.withExecutionTimeoutInMilliseconds(10_000);
    config.andCommandPropertiesDefaults(commandProperties);
    assertThat(new RemoteServiceTestCommand(config, new RemoteServiceTestSimulator(500)).execute(),
      equalTo("Success"));
}

In the above test, we are delaying the service’s response by setting its simulated wait time to 500 ms. We are also setting the execution timeout on HystrixCommand to 10,000 ms, thus allowing sufficient time for the remote service to respond.

Now let’s see what happens when the execution timeout is less than the service’s response time:

@Test(expected = HystrixRuntimeException.class)
public void givenSvcTimeoutOf15000AndExecTimeoutOf5000_whenRemoteSvcExecuted_thenExpectHre()
  throws InterruptedException {
    HystrixCommand.Setter config = HystrixCommand
      .Setter
      .withGroupKey(HystrixCommandGroupKey.Factory.asKey("RemoteServiceGroupTest5"));
    HystrixCommandProperties.Setter commandProperties = HystrixCommandProperties.Setter();
    commandProperties.withExecutionTimeoutInMilliseconds(5_000);
    config.andCommandPropertiesDefaults(commandProperties);
    new RemoteServiceTestCommand(config, new RemoteServiceTestSimulator(15_000)).execute();
}

Notice how we’ve lowered the bar and set the execution timeout to 5,000 ms.

We are expecting the service to respond within 5,000 ms, whereas we have set the service to respond after 15,000 ms. If you notice when you execute the test, the test will exit after 5,000 ms instead of waiting for 15,000 ms and will throw a HystrixRuntimeException.

This demonstrates how Hystrix does not wait longer than the configured timeout for a response. This helps make the system protected by Hystrix more responsive.

In the sections below, we will look into setting the thread pool size, which prevents threads from being exhausted, and we will discuss its benefit.

5.2. Defensive Programming with Limited Thread Pool

Setting timeouts for service calls does not solve all the issues associated with remote services.

When a remote service starts to respond slowly, a typical application will continue to call that remote service.

The application doesn’t know if the remote service is healthy or not and new threads are spawned every time a request comes in. This will cause threads on an already struggling server to be used.

We don’t want this to happen as we need these threads for other remote calls or processes running on our server and we also want to avoid CPU utilization spiking up.

Let’s see how to set the thread pool size in HystrixCommand:

@Test
public void givenSvcTimeoutOf500AndExecTimeoutOf10000AndThreadPool_whenRemoteSvcExecuted_thenReturnSuccess()
  throws InterruptedException {
    HystrixCommand.Setter config = HystrixCommand
      .Setter
      .withGroupKey(HystrixCommandGroupKey.Factory.asKey("RemoteServiceGroupThreadPool"));
    HystrixCommandProperties.Setter commandProperties = HystrixCommandProperties.Setter();
    commandProperties.withExecutionTimeoutInMilliseconds(10_000);
    config.andCommandPropertiesDefaults(commandProperties);
    config.andThreadPoolPropertiesDefaults(HystrixThreadPoolProperties.Setter()
      .withMaxQueueSize(10)
      .withCoreSize(3)
      .withQueueSizeRejectionThreshold(10));
    assertThat(new RemoteServiceTestCommand(config, new RemoteServiceTestSimulator(500)).execute(),
      equalTo("Success"));
}

In the above test, we are setting the maximum queue size, the core pool size, and the queue rejection threshold. Hystrix will start rejecting requests once the core threads are busy and the task queue has reached a size of 10.

The core size is the number of threads that always stay alive in the thread pool.

5.3. Defensive Programming with Short Circuit Breaker Pattern

However, there is still an improvement that we can make to remote service calls.

Let’s consider the case that the remote service has started failing.

We don’t want to keep firing off requests at it and waste resources. We would ideally want to stop making requests for a certain amount of time in order to give the service time to recover before then resuming requests. This is what is called the Short Circuit Breaker pattern.

Let’s see how Hystrix implements this pattern:

@Test
public void givenCircuitBreakerSetup_whenRemoteSvcCmdExecuted_thenReturnSuccess()
  throws InterruptedException {
    HystrixCommand.Setter config = HystrixCommand
      .Setter
      .withGroupKey(HystrixCommandGroupKey.Factory.asKey("RemoteServiceGroupCircuitBreaker"));
    HystrixCommandProperties.Setter properties = HystrixCommandProperties.Setter();
    properties.withExecutionTimeoutInMilliseconds(1000);
    properties.withCircuitBreakerSleepWindowInMilliseconds(4000);
    properties.withExecutionIsolationStrategy
     (HystrixCommandProperties.ExecutionIsolationStrategy.THREAD);
    properties.withCircuitBreakerEnabled(true);
    properties.withCircuitBreakerRequestVolumeThreshold(1);
    config.andCommandPropertiesDefaults(properties);
    config.andThreadPoolPropertiesDefaults(HystrixThreadPoolProperties.Setter()
      .withMaxQueueSize(1)
      .withCoreSize(1)
      .withQueueSizeRejectionThreshold(1));
    assertThat(this.invokeRemoteService(config, 10_000), equalTo(null));
    assertThat(this.invokeRemoteService(config, 10_000), equalTo(null));
    assertThat(this.invokeRemoteService(config, 10_000), equalTo(null));
    Thread.sleep(5000);
    assertThat(new RemoteServiceTestCommand(config, new RemoteServiceTestSimulator(500)).execute(),
      equalTo("Success"));
    assertThat(new RemoteServiceTestCommand(config, new RemoteServiceTestSimulator(500)).execute(),
      equalTo("Success"));
    assertThat(new RemoteServiceTestCommand(config, new RemoteServiceTestSimulator(500)).execute(),
      equalTo("Success"));
}
public String invokeRemoteService(HystrixCommand.Setter config, int timeout)
  throws InterruptedException {
    String response = null;
    try {
        response = new RemoteServiceTestCommand(config,
          new RemoteServiceTestSimulator(timeout)).execute();
    } catch (HystrixRuntimeException ex) {
        System.out.println("ex = " + ex);
    }
    return response;
}

In the above test we have set different circuit breaker properties. The most important ones are:

  • The CircuitBreakerSleepWindow, which is set to 4,000 ms. This configures the circuit breaker sleep window and defines the time interval after which requests to the remote service will be resumed
  • The CircuitBreakerRequestVolumeThreshold, which is set to 1 and defines the minimum number of requests needed before the failure rate will be considered

With the above settings in place, our HystrixCommand will now trip open after two failed requests. The third request will not even hit the remote service, even though we have set the service delay to 500 ms: Hystrix will short-circuit, and our method will return null as the response.

We subsequently add a Thread.sleep(5000) in order to cross the limit of the sleep window that we have set. This causes Hystrix to close the circuit, and the subsequent requests flow through successfully.

6. Conclusion

In summary, Hystrix is designed to:

  1. Provide protection and control over failures and latency from services typically accessed over the network
  2. Stop cascading of failures resulting from some of the services being down
  3. Fail fast and rapidly recover
  4. Degrade gracefully where possible
  5. Provide real-time monitoring and alerting of the command center on failures

In the next post we will see how to combine the benefits of Hystrix with the Spring framework.

The full project code and all examples can be found over on the github project.