Inversion of Control (Explained Non-Technically)

I will use how businesses evolve to provide an analogy for Inversion of Control.

Businesses don’t set out on day one to be a Fortune 500 company. Typically, they start with you in your garage (maybe with a friend). Over time, your business grows and you hire people, assign clearer functional responsibilities, and start to scale up. Businesses have to do all this while also changing quickly to stay competitive.

Within software, we have moved from Waterfall to Agile. Waterfall can be considered your “set out to build a Fortune 500 company on day one” approach. Agile, on the other hand, says to build only things of value and evolve your system over time. Agile also focuses on reacting to change more quickly. Therefore, Agile is a lot closer to how businesses grow, evolve, and stay competitive.

Unfortunately, our software architectures have stayed within a Waterfall, top-down approach. Architects will produce technology stack diagrams that indicate how the architectural layers of the system work. These layers always impose bureaucratic control from the top layer down to the bottom layer. This is very similar to large companies with many layers of management. So our software architectures force us to design the Fortune 500 company before developers even get to write the first line of code.

Inversion of Control is like empowering employees in a business. Rather than the manager dictating exactly how the employees will work, the manager trusts the employees to undertake the goals of the business. In other words, the employees are in control of how the work gets done. They will decide what help to get (Continuation Injection). They will decide what business resources they require (Dependency Injection). They may decide to escalate issues with the work (Continuation Injection). They may even decide another employee may be better suited to do the work (Thread Injection).

By empowering the employee, we have inverted the control. The employee is now in control of how the work gets done. This is similar to Inversion of Control in software, wherein the developer is in control of how they write code. They are not restricted by bureaucratic, top-down architecture controls from their managers. This allows the developer to evolve the business’s software quickly so it may grow and stay competitive.
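To ground the analogy in code, here is a minimal sketch of Dependency Injection in Java. The `Printer` and `ReportService` names are hypothetical, not from any framework: the service declares what business resources it needs, and whoever wires the application together supplies them.

```java
// The service declares the resource it needs via its constructor,
// instead of constructing a specific implementation itself.
interface Printer {
    void print(String text);

class ConsolePrinter implements Printer {
    public void print(String text) {

class ReportService {
    private final Printer printer;

    // Dependency Injection: control over *which* printer is used is
    // inverted -- it belongs to the caller, not to ReportService.
    ReportService(Printer printer) {
        this.printer = printer;

    String run() {
        String report = "quarterly report";
        return report;
```

Swapping in a different `Printer` (a file printer, a test double) requires no change to `ReportService` at all, e.g. `new ReportService(new ConsolePrinter()).run();`.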

Measure Time Spent Coding

One of the most important metrics for the effectiveness of a software development team is one that I often find managers pay little attention to. The metric is how much time developers actually spend coding.

If our goal as software developers is to produce features in software then our process has to support us by allowing us time to create those features. But oftentimes I find that writing software is just a part of the software developer’s duties in many companies. Developers are also responsible for attending meetings, writing test plans, interpreting requirements, and dozens of other duties. Some developers that I know spend so much time in meetings that they actually can’t get their work done and have to code during their lunch hour because the rest of their day is packed with other responsibilities.

It only makes logical sense that if you’re not given a lot of time to write software then you won’t write much software. The other thing that I find, which is almost universally true, is that most software developers would prefer to be writing software than doing just about anything else. Writing code is what we do. It’s how we create value. And we know it. Forcing a developer to go to work but not write code is like keeping a racehorse in the stables all day. We want to get out and flex our muscles.

This is one of the main reasons that I am an advocate for Extreme Programming practices. Virtually all the Extreme Programming practices resolve to some form of writing software. One of the central practices of XP is test-driven development, which is not a testing activity, but really a developer activity. It’s writing software in the form of tests.
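As a sketch of what “writing software in the form of tests” looks like, here is a hypothetical test-first example in plain Java (no test framework; `PriceCalculator` and its behavior are illustrative, not from the book): the test is written first and drives the design.

```java
// Written first: this test documents how the API is meant to be called.
class PriceCalculatorTest {
    static void testTenPercentOff() {
        long result = PriceCalculator.discountedPrice(1000, 10);
        if (result != 900) {
            throw new AssertionError("expected 900 cents, got " + result);

    public static void main(String[] args) {

// Written second: just enough production code to make the test pass.
class PriceCalculator {
    // Applies a percentage discount to a price expressed in cents.
    static long discountedPrice(long priceInCents, int discountPercent) {
        return priceInCents - (priceInCents * discountPercent) / 100;
```

The test doubles as internal documentation: it shows exactly how the code is intended to be used.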

Doing TDD greatly reduces the amount of non-coding work that developers have to do. Unit tests call software the way it is intended to be used, so there is much less need to write internal programmer documentation for the code we’re developing. And that’s great, because who likes writing internal programmer documentation? Very few of us.

We also find that when we have unit tests supporting us as we’re developing software, we make far fewer mistakes so we’re spending less time debugging code and more time writing code. We developers like that as well.

When software developers recognize that doing test-first development has them doing more development and fewer activities that aren’t coding, they tend to get really on board with doing TDD on projects.

So, how much time should developers spend writing software every day? In some ways, this is an individual decision but I think it’s very unrealistic to assume that an average software developer will write code for eight hours every day five days a week. That is an unrealistic expectation for managers to place on developers or for developers to place on themselves. Writers don’t write starting at 8 AM and ending at 5 PM every day with a break for lunch in between. Some days the ideas flow and others they don’t.

If I get four good hours of focused work done a day, then I’m really happy. Of course, I’m an old guy and when I was younger I could put in eight hours or 10 hours or even 12 hour days on occasion. I don’t think many of us could do this consistently without burning out, but like many young people, I could burn the midnight oil for several days in a row before I needed to take breaks.

Today, I believe that if we want to have a true industry around developing software then it has to be sustainable. Overtime and long hours can’t be the norm or we will burn out. We have to treat our profession like a real profession. Additionally, study after study has proven that when you take a developer and you make them work long, hard hours, the number of bugs that they produce increases exponentially and that ultimately slows the project down even more.

I generally believe that it makes more sense for us to measure our software development process rather than measuring individuals because very often it’s our process that has the most room for improvement. When I’m looking to measure the efficiency of a software development process, the very first place I start is by asking how much quality time the developers spend actually writing code. I find that by increasing the quality of the time or the quantity of the time, we can get a lot of low-hanging fruit for improving the efficiency of a software development process.

Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, called Seven Strategies for Measuring Software Development.

What Cloud Native Apps Really Means

Talk to anyone in the DevOps community and the words “cloud-native” pop up again and again. If you’re new to the world of continuous integration and continuous delivery, you may have heard the phrase “building cloud-native apps.” But seriously, what does it mean? Is it just another buzzword for what we are already doing?

At its most basic, a cloud-native application is built from a few key elements. DevOps teams take a microservices design approach, and they orchestrate those services in containers. To deliver these applications and their multiple microservices, DevOps teams leverage automation and continuous delivery. The applications are built on the cloud, developed for the cloud, and deployed to the cloud.

The result? Cloud-native apps are composed of fault-tolerant blocks that enable agility and can be delivered and iterated on quickly.

You can also see how the Cloud Native Computing Foundation defines cloud-native apps:

  • Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds;
  • Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach;
  • These techniques enable loosely-coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.

What’s Required

So, why is all of this so important? As a developer, you need to understand the basics of cloud-native applications if you want to build them. To guide you, there are a few must-haves:

  • Resiliency. Cloud-native apps need to guarantee services, no matter what. Developers need to architect the apps for failures. Assume the worst.
  • Reusable services. Architect services so that they can be reused across applications and by other services. This will also require you to have interface contracts and discovery mechanisms.
  • Scalability. Horizontal scalability is critical in order to function effectively. Ensure that the underlying compute, storage, database and networking resources, as well as the application logic, support scaling to multiple instances of services to meet demand.
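The resiliency point above ("assume the worst") can be sketched as a small retry helper in Java. The names and the policy (a fixed number of attempts, no backoff) are my own illustration, not a prescribed pattern:

```java
import java.util.function.Supplier;

// Hypothetical resiliency helper: assume a downstream call can fail,
// and retry it a bounded number of times instead of trusting the
// first attempt to succeed.
class Retry {
    // maxAttempts is assumed to be >= 1.
    static <T> T withRetries(Supplier<T> call, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e; // remember the failure and try again
        throw last; // every attempt failed; surface the last error
```

Production systems typically add backoff, jitter, and circuit breakers (libraries such as Resilience4j cover this); the sketch only shows the core idea of designing for failure.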

Cloud-native apps exploit the world of cloud technology, so they’re designed specifically to work in a cloud environment. The inherent benefits of scalability and resiliency make the move towards building cloud-native apps an attractive proposition for developers.

In the world of CI/CD, building cloud-native applications can be an intense process that requires a variety of tools. Above all, it requires a DevOps approach in which developer and operations needs are aligned and complementary. Either Dev and Ops collaborate, or the pipeline, infrastructure, and tools support full-stack development, in which developers can code, build, test and deploy to production without cross-functional dependencies.

The technology also requires microservices. Individual microservices tend to be smaller and more manageable than traditional applications, although communicating across the services, as well as managing their life cycles, can be complex and challenging.

Now that you understand what the buzz is about and you want to begin developing cloud-native apps, how do you get started? If you are an individual or part of a small team, the most obvious place to start is with Jenkins X. A Jenkins sub-project developed and introduced by a number of CloudBees team members, including James Strachan, Jenkins X is a way for developers to get up and running building cloud-native apps in the Kubernetes environment. Built on GitOps, Jenkins, and best-of-breed tools from the Kubernetes ecosystem, Jenkins X lets you start tackling cloud-native development in minutes, versus the weeks or months it can take to figure out how to port your applications to the cloud yourself or with Jenkins alone.

If you are part of a larger team and need to support cloud-native app development in an enterprise environment, CloudBees Core for Kubernetes CD is available; it extends Jenkins X with a graphical interface, enhanced access control and additional enterprise-level features.

Microservices Anti-Patterns

Microservices is a silver bullet, magic pill, instant fix, and can’t-go-wrong solution to all of software’s problems. In fact, as soon as you implement even the basics of microservices all of your dreams come true; you will triple productivity, reach your ideal weight, land your dream job, win the lottery 10 times, and be able to fly, clearly.

While this sounds like a lot of hyperbole wrapped up in some BS, if you have been listening to anything around microservices recently you will most likely have heard something not too far from this exaggerated sentiment — especially if it is coming from sales folks.

As a result of this, you or someone you know will likely have been charged by management to implement a solution in microservices or refactor an existing application to take advantage of microservices to ensure that you get all the magic. With so much overinflation of the truth out there, chances are you may have also implemented a microservices antipattern. These antipatterns are actually more common in the wild than fully functional microservices architectures.


In this post, we will cover the most common antipatterns that I have witnessed in the wild:

  • Break the Piggy Bank
  • Everything Micro (Except for the Data)
  • We are Agile! a.k.a. The Frankenstein

Each one of these results from a common misconception. We will do our best to define these patterns and their symptoms. After each, we will also show a way out of the mess so that you can recover and begin to move towards a better implementation. Let’s get started!

Break the Piggy Bank

This anti-pattern is one of the most common when refactoring an existing application to a microservices architecture. When applications start out as monolithic applications and grow over time, they eventually get so large that even the basic parts of the SDLC become excruciating tasks:

  • Deployments can take several hours, if not days, and are often very high risk.
  • Maintenance becomes part engineering rigor, part archaeology.
  • Performance starts to be measured in “how many days since a sev 1 outage” signs.
  • Regression testing requires teams, automation, dedicated data centers, and an entirely new software organization.

When an application is in this state, it is easy to think about it like a pig. The monolith has become untenable, and now is a prime candidate for microservices.

Microservices addresses deployment, maintenance, performance, and testing by breaking down the large code base into smaller, decoupled services that communicate through “dumb pipes” (most commonly the HTTP protocol).

When a team is first introduced to this, especially when dealing with a pig, the first inclination is to smash the whole thing up into little pieces, like a piggy bank with a hammer. Because, of course, 1000 services will be way better than one bloated one, right? Not so fast. The breaking up of a monolith into its parts is not as simple as just a smash.

The problem with this method is that without intentional division of services by domain, unit of work, or potential for change, you end up creating several mini-monoliths. Oftentimes, services are decomposed too granularly and are not really separate, so they end up having to be deployed, scaled, and maintained together.

The other issue that this exacerbates is the complexity of the overall application. With code, complexity is neither created nor destroyed when moving from a monolith to microservices; it simply moves from in-process service calls to HTTP(S) calls, which add a whole new layer. Orchestration, distributed transactions, service discovery, and recovery are just a few of the new concerns that application developers need to consider, yay.

A Way Forward

If you find yourself in this situation, all is not lost. There are ways out but they will require some work. The first thing to do that will help all of the issues caused by a “piggy bank smash” is to do some analysis on your application and try to recompose service boundaries based on domains.

This will usually result in the combining of several services that are not really separate and more clearly defining your entities and value objects. We will not dive into DDD in this post, but it is very helpful for refactoring applications that have been broken into too many pieces. For more information check out the following link:

Once you have your reasonably sized and composed services, implementing automation will ensure that you can resolve deployment and build issues for the long haul. If you have not already implemented a CI/CD pipeline then now is the time. With multiple options out there from Jenkins to TFS, you can find a CI/CD platform that will help enable your automation efforts. From there, things like Ansible, SaltStack, Chef/Puppet, and others can help you automate infrastructure as well. The idea with all of this is to take the human element out of repetitive tasks in as many areas of the stack as possible. This way, if something goes wrong with your newly composed services you can quickly recover, or in some cases completely rebuild your application and dependencies with an automated process. This becomes a manageable task if you have a handful of services, and can be burdensome if you have 1000 services.

Even if you do not fully automate your entire solution and do not follow DDD principles to the letter, recomposing your application will result in a much more effective solution that will be maintainable and usually ease deployment woes. In this antipattern, the key to getting out of it is to ensure that you are not creating services on arbitrary boundaries, but instead dividing by units of work/domains so that they can truly be deployed, scaled, and maintained separately from the rest of the application.

Everything Micro (Except for the Data)

This is another extremely common antipattern, especially in enterprise organizations. Most often this design arises out of some form of a cap-ex spend on a data center, RDBMS purchase, or existing data team that is uncomfortable with data in the cloud and/or micro-data in general.

The key sign of this antipattern is that just about everything in the application space is decomposed reasonably, there is a mature CI/CD process in place, maybe even some distributed design patterns are in place. However, there is one giant data store behind all of the microservices. It may be HA and fully replicated, but it is still a monolithic data structure.

These are most common with Microsoft’s SQL Server, Oracle, and DB2 data stores, mainly because their licensing models do not lend themselves easily to a database-per-service implementation in the wild.

The challenges with this approach are more subtle than with other antipatterns, because oftentimes it may not have a noticeable negative impact on the application until later in its life cycle. Keeping track of data/schema changes is a common challenge with this setup, because any potential change to a production database may require a full database outage, or may cause locks and blocks if the system is still live when changes are executed.

These are usually solvable issues, but they require detailed governance and approval processes to alleviate. SLAs are also at the most risk with this antipattern because of the aforementioned potential for outages while changes are executed. With large data stores, access control can become extremely complex, as applications are usually restricted in what schemas/functions they can affect as well as what activities they can perform, so as not to affect other applications on the same data store.

With multiple applications accessing the same data store, resource contention eventually becomes the main issue. As applications grow and expand, so does the data footprint. Data archival, cleanup, and tuning become crucial tasks to keep everything in balance. Also, the size of a data store is often associated with cost — so very large instances can eventually become cost prohibitive to operate.

With a monolithic data store, vertical scale and some sort of clustering become the only feasible way forward to address performance issues. Nothing is more fun than trying to figure out how many flash I/O cards you can jam into a database server before your application crashes and locks up the database entirely!

A Way Forward

Unfortunately, the way out of this predicament usually involves changing some hearts and minds in your organization. In order to move away from a monolithic data store, you have to start small. If your application is composed with domain bounded context already, it will be easier to take a piece of that and put it into another, single-purpose data store. If you have not implemented bounded context, it can be a great way to help define those boundaries in your application.

For more information, please see the Keyhole blog post on implementing bounded context:

Implementing A Bounded Context

Depending on your platform of choice, there are often several data stores to choose from: SQL-based ones, document-based ones, or even simple blob storage can do the trick. The key is to use a data store that matches best with the function your application is trying to perform.

If you have a specific domain in your application that is focused on user profile data, for example, a normalized SQL structure might not be the best fit. A NoSQL data store that is flatter and object-based may be a better fit, as the shapes can be flexible and not require a data model update as their contents change and evolve. Pulling out that specific data and migrating it from your monolithic database to the NoSQL one can be accomplished in a number of ways. Before this can be accomplished, the organization needs to be convinced that the new data store will be effective and supported.

A couple of implementation approaches can address these concerns. For example, if the application is young enough, a simple lift-and-shift should suffice. For larger data sets, a gateway of some sort in front of the data calls can be employed to leverage a lambda pattern and use normal traffic in the system to fill up the new data store over time. Either way, starting with a defined domain or section of data will allow you to gradually get your application data out of the monolith.

We Are Agile! a.k.a. The Frankenstein

This last antipattern we will look at occurs when teams begin the shift from Waterfall software development to Agile software development. At the beginning of these process shifts, teams usually end up implementing some version of agile-fall. Often this is hallmarked by the saying “We are agile, so we do not have to plan things anymore!” This is in reaction to the heavy design documents, project plans, and Gantt timelines that were standard in the Waterfall methodology.

The misconception that ‘agile’ means the team no longer has to plan things out and can be nimble and adapt to customer needs (read: whims and shiny things) results in functionality being decomposed in a vacuum. What does the customer/client/product owner want? Well, that is what we will build this Sprint!

With enough cycles through this process, you end up with several disparate pieces of functionality, often implemented in the same or similar ways as something previously built that all have to be bolted together to share data and create some semblance of a cohesive application. This ultimately creates a Frankenstein software monster that is just a bunch of parts sewn together and gets worse over time.

This antipattern becomes self-perpetuating because as time goes on it becomes more complex to deploy and bolt on new things, especially if they have to interact with existing parts. This can result in technical debt that seems to balloon and often rears its head in the form of some undesirable but hidden behavior like lost transactions, orphaned instances, and unexplained slowdowns.

Eventually, Frankenstein will start to fall apart at the seams and more effort will be expended just trying to keep things together than on actually developing new functionality.

A Way Forward

To beat Frankenstein in this instance, you need to take some cycles to define what your application does and map that to the concrete implementations in your code. You will most likely find some duplication or code paths that are never touched. Eliminating this cruft is a good first step.

Then, to move forward, focus on the contracts for the interfaces in your system. Every time a service calls another service or a data resource, define that contract in something like Swagger to help bring light to what the system is actually doing.
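As an illustration of contract-first thinking, a contract for a hypothetical address lookup between two services might be captured in an OpenAPI (Swagger) document like this; every path, name, and field here is made up for the example:

```yaml
openapi: 3.0.0
  title: Address Service   # the service being called
  version: "1.0"
      summary: Fetch a single address by id
        - name: id
          in: path
          required: true
            type: string
          description: The address for the given id
```

Writing contracts like this down, even after the fact, brings to light what the system is actually doing and where duplication exists.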

Once these two activities have been completed, the next step involves working with your Project Manager or Agile Coach to help redefine how features are implemented in the application. Shooting from the hip should no longer be an option. Initially, a small design spike at the beginning of the feature will help ensure that the functionality is not being duplicated, is using good coding practices/design patterns, and is leveraging existing code where appropriate. This activity is usually performed by a senior developer or a technical architect, even if the implementation is completed by another team member. This will help ensure that all code is being purposefully added to the system in a manner that extends functionality without incurring additional technical debt.

The final part to move away from the monster is to begin to add in time or Sprints to address the technical debt that has been built up. This can be done as a percentage of each sprint, or in lump sprints if there are requirements lulls. Either way, incorporating some garbage collection and clean up will begin to put the toothpaste back in the tube and help the team and application move forward.


In this post, we talked about the microservices antipatterns that I have witnessed working with clients of all sizes. The ones we talked about here were:

  • Break the Piggy Bank
  • Everything Micro (Except for the Data)
  • We are Agile! a.k.a. The Frankenstein

After each, we also tried to give some hope and show a path forward to help correct the mistakes of each.

Immutable Data Structures in Java

As part of some of the coding interviews I’ve been conducting recently, the topic of immutability sometimes comes up. I’m not overly dogmatic about it myself, but whenever there’s no need for mutable state, I try to get rid of it, which is often most visible in data structures. However, there seems to be a bit of a misunderstanding of the concept of immutability: developers often believe that having a final reference, or val in Kotlin or Scala, is enough to make an object immutable. This blog post dives a bit deeper into immutable references and immutable data structures.

Benefits of Immutable Data Structures

Immutable data structures have significant benefits, such as:

  • No invalid state
  • Thread safety
  • Easier-to-understand code
  • Easier-to-test code
  • Can be used for value types

No Invalid State

When an object is immutable, it’s hard to have the object in an invalid state. The object can only be instantiated through its constructor, which will enforce the validity of objects. This way, the required parameters for a valid state can be enforced. An example:

Address address = new Address();
// address is in an invalid state, since the country hasn’t been set.

Address validAddress = new Address("Sydney", "Australia");
// validAddress is valid and has no setters, so the object is always in a valid state.

Thread Safety

Since the object cannot be changed, it can be shared between threads without having race conditions or data mutation issues.

Easier-to-Understand Code

Similar to the code example of the invalid state, it’s generally easier to use a constructor than initialization methods. This is because the constructor enforces the required arguments, while setter or initializer methods are not enforced at compile time.

Easier-to-Test Code

Since objects are more predictable, it’s not necessary to test all permutations of the initializer methods, i.e. when calling the constructor of a class, the object is either valid or invalid. Other parts of the code that are using these classes become more predictable, having fewer chances of NullPointerExceptions. Sometimes, when passing objects around, there are methods that could potentially mutate the state of the object. For example:

public boolean isOverseas(Address address) {
    if (!address.getCountry().equals("Australia")) {
        address.setOverseas(true); // address has now been mutated!
        return true;
    } else {
        return false;
    }
}
The above code, in general, is bad practice. It returns a boolean as well as potentially changing the state of the object. This makes the code harder to understand and to test. A better solution would be to remove the setter from the Address class and return a boolean by testing for the country name. An even better way would be to move this logic to the Address class itself (address.isOverseas()). When state really needs to be set, make a copy of the original object instead of mutating the input.

Can Be Used for Value Types

Imagine a money amount, say 10 dollars. 10 dollars will always be 10 dollars. In code, this could look like public Money(final BigInteger amount, final Currency currency). As you can see, it’s not possible to change the value of 10 dollars to anything other than that, and thus, the above can be used safely for value types.
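A fuller sketch of such a value type, following the constructor signature quoted above (the class body beyond that signature is my own illustration):

```java
import java.math.BigInteger;
import java.util.Currency;

// A value type: all fields are final and there are no setters,
// so "10 dollars" can never be changed into anything else.
final class Money {
    private final BigInteger amount;
    private final Currency currency;

    public Money(final BigInteger amount, final Currency currency) {
        this.amount = amount;
        this.currency = currency;

    public BigInteger getAmount() {
        return amount;

    public Currency getCurrency() {
        return currency;
```

Two `Money` objects can now be shared, cached, or passed between threads freely, because nothing can alter them after construction.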

Final References Don’t Make Objects Immutable

As mentioned before, one of the issues I regularly encounter is that a large portion of developers don’t fully understand the difference between final references and immutable objects. The common understanding seems to be that the moment a variable becomes final, the data structure becomes immutable. Unfortunately, it’s not that simple, and I’d like to clear this misunderstanding up once and for all:

A final reference does not make your objects immutable!

In other words, the following code does not make your objects immutable:

final Person person = new Person("John");

Why not? Well, while person is a final reference and cannot be reassigned, the Person class might have a setter method or other mutator methods, making an action like this possible:

person.setName("Jane"); // perfectly legal, despite the final reference
This is quite an easy thing to do, regardless of the final modifier. Alternatively, the Person class might expose a mutable list of addresses. Accessing this list allows you to add an address to it and, therefore, mutate the person object like so:

person.getAddresses().add(new Address("Sydney"));

Again, our final reference didn’t stop us from mutating the person object.

OK, now that we’ve got that out of the way, let’s dive a little bit into how we can make a class immutable. There are a couple of things that we need to keep in mind while designing our classes:

  • Don’t expose internal state in a mutable way
  • Don’t change the state internally
  • Make sure subclasses don’t override the above behaviour

With the following guidelines in place, let’s design a better version of our Person class.

public final class Person {             // final class, can’t be overridden by subclasses
    private final String name;          // final for safe publication in multithreaded applications
    private final List<Address> addresses;

    public Person(String name, List<Address> addresses) { = name;
        // Makes a copy of the list to protect from outside mutations (Java 10+).
        // Otherwise, use Collections.unmodifiableList(new ArrayList<>(addresses));
        this.addresses = List.copyOf(addresses);
    }

    public String getName() {
        return name;        // String is immutable, okay to expose
    }

    public List<Address> getAddresses() {
        return addresses;   // the copied list cannot be modified
    }
}

public final class Address {            // final class, can’t be overridden by subclasses
    private final String city;          // only fields of immutable types
    private final String country;

    public Address(String city, String country) { = city; = country;
    }

    public String getCity() {
        return city;
    }

    public String getCountry() {
        return country;
    }
}
Now, these classes can be used like this:

import java.util.List;
final Person person = new Person("John", List.of(new Address("Sydney", "Australia")));

The above code is now immutable: the design of the Person and Address classes prevents mutation, and the final reference also makes it impossible to reassign the person variable to anything else.

Update: As some people mentioned, the above code was still mutable because I didn’t originally make a copy of the list of addresses in the constructor. Without that copy, it’s still possible to do the following:

final List<Address> addresses = new ArrayList<>();
addresses.add(new Address("Sydney", "Australia"));
final Person person = new Person("John", addresses);

However, since a copy is now made in the constructor, the above code no longer affects the address list held inside the Person class, making the code safe. Thanks to everyone for spotting it!
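The copy in the constructor is what closes this loophole. Here is a minimal, self-contained sketch (independent of the Person class) showing that List.copyOf takes a snapshot of the source list:

```java
import java.util.ArrayList;
import java.util.List;

public class CopyOfDemo {
    public static void main(String[] args) {
        List<String> source = new ArrayList<>();
        source.add("Sydney");

        List<String> snapshot = List.copyOf(source); // immutable copy (Java 10+)

        source.add("Madrid");                 // later mutation of the source list...
        System.out.println(snapshot.size());  // ...does not affect the copy: prints 1
        // snapshot.add("Tokyo");             // would throw UnsupportedOperationException
    }
}
```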

I hope the above helps in understanding the differences between final and immutability. If you have any comments or feedback, please let me know in the comments below.

Redirecting HTTP Requests With Zuul in Spring Boot

Zuul is part of the Spring Cloud Netflix package. It allows us to redirect REST requests and to apply various types of filters to them.

In almost any project where we use microservices, it is desirable for all communication between those microservices to go through a common place, so that inputs and outputs are recorded and we can implement security or redirect requests depending on various parameters.

With Zuul, this is very easy to implement since it is perfectly integrated with Spring Boot.

As always, you can find the sources on which this article is based on my GitHub page. So, let’s get to it.

Creating the Project

If you have installed Eclipse with the plugin for Spring Boot (which I recommend), creating the project should be as easy as adding a new Spring Boot project type that includes the Zuul starter. To do some tests, we will also include the web starter, as seen in the image below:

We also have the option to create a Maven project from https://start.spring.io and then import it into our preferred IDE.


Let’s assume that the program is listening on http://localhost:8080/ and that we want all requests to the URL http://localhost:8080/google to be redirected to https://www.google.com.

To do this we create the application.yml file in the resources directory, as seen in the image below:

This file will include the following lines:

zuul:
  routes:
    google:
      path: /google/**
      url: https://www.google.com/

They specify that everything requested with the path /google/ and anything after it (**) will be redirected to https://www.google.com. For example, if a request is made to http://localhost:8080/google/search?q=profesor_p, it will be redirected to https://www.google.com/search?q=profesor_p. In other words, whatever we add after /google/ is included in the redirection, thanks to the two asterisks at the end of the path.
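To build some intuition for the trailing **, here is a toy check in plain Java. The regexes below merely stand in for the matching behaviour; Zuul actually uses Ant-style path patterns, not regexes:

```java
import java.util.regex.Pattern;

public class PathPatternDemo {
    public static void main(String[] args) {
        // /google/** roughly corresponds to "/google" plus any sub-path.
        Pattern withWildcard = Pattern.compile("^/google(/.*)?$");
        // Without the wildcard, only the exact path would match.
        Pattern exactOnly = Pattern.compile("^/google/?$");

        System.out.println(withWildcard.matcher("/google/search").matches()); // true
        System.out.println(exactOnly.matcher("/google/search").matches());    // false
    }
}
```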

For the program to work, it is only necessary to add the @EnableZuulProxy annotation to the start class, in this case ZuulSpringTestApplication:

@EnableZuulProxy
@SpringBootApplication
public class ZuulSpringTestApplication {
    public static void main(String[] args) {
        SpringApplication.run(ZuulSpringTestApplication.class, args);
    }
}

In order to demonstrate the various features of Zuul, a REST service listening at http://localhost:8080/api is implemented in the TestController class of this project. This class simply returns, in the body, the data of the received request.

@RestController
public class TestController {
    final static String SALTOLINEA = "\n";
    Logger log = LoggerFactory.getLogger(TestController.class);

    @RequestMapping(path = "/api")
    public String test(HttpServletRequest request) {
        StringBuffer strLog = new StringBuffer();
        strLog.append("................ RECIBIDA PETICION EN /api ......  " + SALTOLINEA);
        strLog.append("Metodo: " + request.getMethod() + SALTOLINEA);
        strLog.append("URL: " + request.getRequestURL() + SALTOLINEA);
        strLog.append("Host Remoto: " + request.getRemoteHost() + SALTOLINEA);
        strLog.append("----- MAP ----" + SALTOLINEA);
        request.getParameterMap().forEach((key, value) -> {
            for (int n = 0; n < value.length; n++) {
                strLog.append("Clave:" + key + " Valor: " + value[n] + SALTOLINEA);
            }
        });
        strLog.append(SALTOLINEA + "----- Headers ----" + SALTOLINEA);
        Enumeration<String> nameHeaders = request.getHeaderNames();
        while (nameHeaders.hasMoreElements()) {
            String name = nameHeaders.nextElement();
            Enumeration<String> valueHeaders = request.getHeaders(name);
            while (valueHeaders.hasMoreElements()) {
                String value = valueHeaders.nextElement();
                strLog.append("Clave:" + name + " Valor: " + value + SALTOLINEA);
            }
        }
        try {
            strLog.append(SALTOLINEA + "----- BODY ----" + SALTOLINEA);
            BufferedReader reader = request.getReader();
            if (reader != null) {
                char[] linea = new char[100];
                int nCaracteres;
                while ((nCaracteres = reader.read(linea, 0, 100)) > 0) {
                    strLog.append(linea, 0, nCaracteres); // append only the characters actually read
                }
            }
        } catch (Throwable e) {
            log.error("Error leyendo el body", e);
        }
        log.info(strLog.toString());
        return SALTOLINEA + "---------- Prueba de ZUUL ------------" + SALTOLINEA + strLog.toString();
    }
}

Filtering: Writing Logs

In this part, we will see how to create a filter so that a record of the requests made is left.

To do this, we will create the class PreFilter.java, which should extend ZuulFilter:

public class PreFilter extends ZuulFilter {
    Logger log = LoggerFactory.getLogger(PreFilter.class);

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        StringBuffer strLog = new StringBuffer();
        strLog.append("\n------ NUEVA PETICION ------\n");
        strLog.append(String.format("Server: %s Metodo: %s Path: %s \n",
                ctx.getRequest().getServerName(), ctx.getRequest().getMethod(),
                ctx.getRequest().getRequestURI()));
        Enumeration<String> enume = ctx.getRequest().getHeaderNames();
        String header;
        while (enume.hasMoreElements()) {
            header = enume.nextElement();
            strLog.append(String.format("Headers: %s = %s \n", header, ctx.getRequest().getHeader(header)));
        }
        log.info(strLog.toString());
        return null;
    }

    @Override
    public boolean shouldFilter() {
        return true;
    }

    @Override
    public int filterOrder() {
        return FilterConstants.SEND_RESPONSE_FILTER_ORDER;
    }

    @Override
    public String filterType() {
        return "pre";
    }
}

In this class, we override the functions seen in the source. Let’s explain each of these functions:

  • public Object run() – Runs for each request received. Here we can inspect the contents of the request and modify it if necessary.
  • public boolean shouldFilter() – If it returns true, the run function will be executed.
  • public int filterOrder() – Returns the position at which this filter runs, since there are usually different filters for each task. We must take into account that certain redirections or changes to the request have to be done in a certain order, following the logic Zuul uses when processing requests.
  • public String filterType() – Specifies when the filter is executed. If it returns “pre”, it runs before the redirect is made and, therefore, before the target server (Google, in our example) has been called. If it returns “post”, it runs after the server has responded. The FilterConstants class defines the types that can be returned: PRE_TYPE, POST_TYPE, ERROR_TYPE, or ROUTE_TYPE.
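The interplay of these methods can be sketched with a toy dispatcher in plain Java. This is not Zuul’s actual engine, and the filter names are illustrative; it only shows how filterOrder decides position and shouldFilter gates run():

```java
import java.util.Comparator;
import java.util.List;

public class FilterChainDemo {
    // Toy model of Zuul's filter contract.
    interface ToyFilter {
        boolean shouldFilter();
        int filterOrder();
        String run();
    }

    public static void main(String[] args) {
        List<ToyFilter> filters = List.of(
            new ToyFilter() { public boolean shouldFilter() { return true; }
                              public int filterOrder() { return 10; }
                              public String run() { return "log-filter"; } },
            new ToyFilter() { public boolean shouldFilter() { return false; } // skipped
                              public int filterOrder() { return 1; }
                              public String run() { return "auth-filter"; } },
            new ToyFilter() { public boolean shouldFilter() { return true; }
                              public int filterOrder() { return 5; }
                              public String run() { return "route-filter"; } });

        // Sort by order, skip filters whose shouldFilter() is false, then run the rest.
        filters.stream()
               .sorted(Comparator.comparingInt(ToyFilter::filterOrder))
               .filter(ToyFilter::shouldFilter)
               .forEach(f -> System.out.println(f.run()));
        // prints route-filter then log-filter (auth-filter is skipped)
    }
}
```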

In the example class, we see that, before making a request to the server, some request data is recorded, leaving a log.

Finally, for Spring Boot to use this filter, we add the following bean-producing function to our configuration class:

@Bean
public PreFilter preFilter() {
    return new PreFilter();
}

Zuul looks for beans that inherit from the ZuulFilter class and uses them.

In this example, the PostFilter class implements another filter, but it only runs after the request to the server has been made. As I mentioned, this is achieved by returning “post” in the filterType() function.

For Zuul to use this class, we will create another bean with a function like this:

@Bean
public PostFilter postFilter() {
    return new PostFilter();
}

Remember that there is also a filter type for treating errors (“error”) and one that runs when the redirection itself is performed (“route”), but this article only looks into the pre and post filter types.

I’d like to clarify that, although this article does not deal with it, Zuul can redirect not only to a static URL but also to services provided by the Eureka server. It also integrates with Hystrix for fault tolerance, so that if a server cannot be reached, you can specify what action to take.

Filtering and Implementing Security

Let us add a new redirection to the application.yml file.

This redirection will take any type of request from http://localhost:8080/privado to the page where this article is hosted.

The line sensitiveHeaders will be explained later.

In the PreRewriteFilter class, I have implemented another pre filter for dealing with this redirection. How? Easy: put this code in the shouldFilter() function.

@Override
public boolean shouldFilter() {
    return RequestContext.getCurrentContext().getRequest()
            .getRequestURI().startsWith("/privado"); // restored: only act on /privado requests
}

Now, in the run function, we include the following code:

@Override
public Object run() {
    RequestContext ctx = RequestContext.getCurrentContext();
    StringBuffer strLog = new StringBuffer();
    strLog.append("\n------ FILTRANDO ACCESO A PRIVADO - PREREWRITE FILTER  ------\n");
    try {
        String url = UriComponentsBuilder.fromHttpUrl("http://localhost:8080/").path("/api").build().toUriString();
        String usuario = ctx.getRequest().getHeader("usuario") == null ? "" : ctx.getRequest().getHeader("usuario");
        String password = ctx.getRequest().getHeader("clave") == null ? "" : ctx.getRequest().getHeader("clave");
        if (!usuario.equals("")) {
            if (!usuario.equals("profesorp") || !password.equals("profe")) {
                String msgError = "Usuario y/o contraseña invalidos";
                strLog.append("\n" + msgError + "\n");
                // restored from the surrounding text: reject with 403 and cancel the Zuul flow
                ctx.setResponseStatusCode(HttpStatus.FORBIDDEN.value());
                ctx.setResponseBody(msgError);
                ctx.setSendZuulResponse(false);
                log.info(strLog.toString());
                return null;
            }
            ctx.setRouteHost(new URL(url));
        }
    } catch (IOException e) {
        log.error("Error en PreRewriteFilter", e);
    }
    log.info(strLog.toString());
    return null;
}

This filter inspects the headers of the request. If the usuario header doesn’t exist, it does nothing and the request is redirected according to the route defined in application.yml. If a usuario header is found with the value profesorp and the clave header has the value profe, the request is redirected to http://localhost:8080/api. Otherwise, an HTTP forbidden code is returned with the string “Usuario y/o contraseña invalidos” (“Invalid username and/or password”) in the body of the HTTP response, and the flow of the request is canceled by calling ctx.setSendZuulResponse(false).

Because the sensitiveHeaders line in the application.yml file I mentioned above includes the usuario and clave headers, they are not passed along into the flow of the request.
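For reference, a hedged sketch of what such a route might look like in application.yml. The target url here is a placeholder; only the path and the sensitiveHeaders header names are taken from the filter code above:

```yaml
zuul:
  routes:
    privado:
      path: /privado/**
      url: https://example.com/          # placeholder target; the article redirects to its own host
      sensitiveHeaders: usuario,clave    # these headers are stripped before the request is forwarded
```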

It is very important that this filter runs after the PRE_DECORATION filter because, otherwise, the call to ctx.setRouteHost() will have no effect. Therefore, the filterOrder function has this code:

@Override
public int filterOrder() {
    return FilterConstants.PRE_DECORATION_FILTER_ORDER + 1;
}

So, a call passing the correct user and password will be redirected to http://localhost:8080/api:

> curl -s -H "usuario: profesorp" -H "clave: profe" localhost:8080/privado
---------- Prueba de ZUUL ------------
................ RECIBIDA PETICION EN /api ......
Metodo: GET
URL: http://localhost:8080/api
Host Remoto:
----- MAP ----
----- Headers ----
Clave:user-agent Valor: curl/7.63.0
Clave:accept Valor: */*
Clave:x-forwarded-host Valor: localhost:8080
Clave:x-forwarded-proto Valor: http
Clave:x-forwarded-prefix Valor: /privado
Clave:x-forwarded-port Valor: 8080
Clave:x-forwarded-for Valor: 0:0:0:0:0:0:0:1
Clave:accept-encoding Valor: gzip
Clave:host Valor: localhost:8080
Clave:connection Valor: Keep-Alive
----- BODY ----

If you put in the wrong password, the output will look like this:

 > curl -s -H "usuario: profesorp" -H "clave: ERROR" localhost:8080/privado
Usuario y/o contraseña invalidos

Filtering: Dynamic Filter

Finally, we will include two new redirections in the application.yml file:

    local:
      path: /local/**
      url: http://localhost:8080/api
    url:
      path: /url/**
      url: http://localhost:8080/api

The first route says that when we go to the URL http://localhost:8080/local/XXXX, we will be redirected to http://localhost:8080/api/XXXX. Note that the label local is arbitrary; we could have called it json:, as it doesn’t have to coincide with the path that we want to redirect.

The second route says that when we go to the URL http://localhost:8080/url/XXXX, we will be redirected to http://localhost:8080/api/XXXX.

The RouteURLFilter class will be responsible for handling redirections for the /url path. Remember that for Zuul to use a filter, we must create a corresponding bean:

@Bean
public RouteURLFilter routerFilter() {
    return new RouteURLFilter();
}

In the shouldFilter function of RouteURLFilter, we have this code so that it only acts on requests to /url:

@Override
public boolean shouldFilter() {
    RequestContext ctx = RequestContext.getCurrentContext();
    if (ctx.getRequest().getRequestURI() == null
            || !ctx.getRequest().getRequestURI().startsWith("/url"))
        return false;
    return ctx.getRouteHost() != null && ctx.sendZuulResponse();
}

In the run function, we have the code that performs the magic. Once the target URL and path have been captured, as I explain below, the setRouteHost() function of RequestContext is used to redirect our request appropriately.

@Override
public Object run() {
    try {
        RequestContext ctx = RequestContext.getCurrentContext();
        URIRequest uriRequest;
        try {
            uriRequest = getURIRedirection(ctx);
        } catch (ParseException k) {
            return null;
        }
        UriComponentsBuilder uriComponent = UriComponentsBuilder.fromHttpUrl(uriRequest.getUrl());
        if (uriRequest.getPath() == null)
            uriRequest.setPath("/"); // restored: default to the root path if none was supplied
        String uri = uriComponent.path(uriRequest.getPath()).build().toUriString();
        ctx.setRouteHost(new URL(uri));
    } catch (IOException k) {
        log.error("Invalid redirection URL", k);
    }
    return null;
}

It looks for the hostDestino and pathDestino headers to build the new URL to which the request must be redirected.

For example, suppose we have a request like this:

> curl --header "hostDestino: http://localhost:8080" --header "pathDestino: api" \
  'http://localhost:8080/url?nombre=profesorp'
The call will be redirected to http://localhost:8080/api?nombre=profesorp and displays the following output:

--------- Prueba de ZUUL ------------
................ RECIBIDA PETICION EN /api ......
Metodo: GET
URL: http://localhost:8080/api
Host Remoto:
----- MAP ----
Clave:nombre Valor: profesorp
----- Headers ----
Clave:user-agent Valor: curl/7.60.0
Clave:accept Valor: */*
Clave:hostdestino Valor: http://localhost:8080
Clave:pathdestino Valor: api
Clave:x-forwarded-host Valor: localhost:8080
Clave:x-forwarded-proto Valor: http
Clave:x-forwarded-prefix Valor: /url
Clave:x-forwarded-port Valor: 8080
Clave:x-forwarded-for Valor: 0:0:0:0:0:0:0:1
Clave:accept-encoding Valor: gzip
Clave:host Valor: localhost:8080
Clave:connection Valor: Keep-Alive
---- BODY ----

The URL to redirect to can also be received in the request body. The JSON object received must follow the format defined by the GatewayRequest class, which, in turn, contains a URIRequest object.

public class GatewayRequest {
    URIRequest uri;
    String body;
}

public class URIRequest {
    String url;
    String path;
    byte[] body = null;
}

This is an example of putting the URL redirect destination in the body:

> curl -X POST \
  'http://localhost:8080/url?nombre=profesorp' \
  -H 'Content-Type: application/json' \
  -d '{
    "body": "The body", "uri": { "url":"http://localhost:8080", "path": "api" }
  }'

---------- Prueba de ZUUL ------------
................ RECIBIDA PETICION EN /api ......
Metodo: POST
URL: http://localhost:8080/api
Host Remoto:
----- MAP ----
Clave:nombre Valor: profesorp
----- Headers ----
Clave:user-agent Valor: curl/7.60.0
Clave:accept Valor: */*
Clave:content-type Valor: application/json
Clave:x-forwarded-host Valor: localhost:8080
Clave:x-forwarded-proto Valor: http
Clave:x-forwarded-prefix Valor: /url
Clave:x-forwarded-port Valor: 8080
Clave:x-forwarded-for Valor: 0:0:0:0:0:0:0:1
Clave:accept-encoding Valor: gzip
Clave:content-length Valor: 91
Clave:host Valor: localhost:8080
Clave:connection Valor: Keep-Alive
----- BODY ----
The body

Since the body is handled by the filter, only what is sent in the body parameter of the JSON request is forwarded to the server.

As shown, Zuul has a lot of power and is an excellent tool for redirections. In this article, I’ve only scratched the surface of this fantastic tool’s features, but I hope it has allowed you to see the possibilities.