Top 10 Spring Boot Interview Questions

In this article, we will discuss the top 10 Spring Boot interview questions. These questions are a bit tricky and are trending heavily in today's job market.

1) What does the @SpringBootApplication annotation do internally?

As per the Spring Boot documentation, the @SpringBootApplication annotation is equivalent to using @Configuration, @EnableAutoConfiguration, and @ComponentScan with their default attributes. Spring Boot enables the developer to use a single annotation instead of three. But, as we know, Spring provides these as loosely coupled annotations, so we can still use each one individually as our project's needs dictate.
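
To make that concrete, here is a minimal sketch of what the single annotation stands in for (the class name is taken from the next example):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

// Equivalent to annotating the class with @SpringBootApplication
@Configuration
@EnableAutoConfiguration
@ComponentScan
public class FooAppConfiguration {
    public static void main(String[] args) {
        SpringApplication.run(FooAppConfiguration.class, args);
    }
}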

2) How to exclude any package without using the basePackages filter?

There are different ways to filter a package, but Spring Boot provides a trick for achieving this without touching the component scan: the exclude attribute of the @SpringBootApplication annotation. See the following code snippet:

@SpringBootApplication(exclude = {Employee.class})
public class FooAppConfiguration {}
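
Note that the exclude attribute is really aimed at auto-configuration classes. To filter ordinary components out of the scan, a type-level component-scan filter is a common alternative; a minimal sketch, assuming the hypothetical Employee component from the example above (its import is omitted):

import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.FilterType;

// Excludes the Employee component from scanning without touching basePackages
@SpringBootApplication
@ComponentScan(excludeFilters = @ComponentScan.Filter(
        type = FilterType.ASSIGNABLE_TYPE, classes = Employee.class))
public class FooAppConfiguration {}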

3) How to disable a specific auto-configuration class?

If you find that specific auto-configuration classes you do not want are being applied, you can use the exclude attribute of @EnableAutoConfiguration.

//By using “exclude”
@EnableAutoConfiguration(exclude={DataSourceAutoConfiguration.class})

On the other hand, if the class is not on the classpath, you can use the excludeName attribute of the annotation and specify the fully qualified name instead.

//By using “excludeName”
@EnableAutoConfiguration(excludeName={"com.example.Foo"})

Also, Spring Boot lets you control the list of auto-configuration classes to exclude by using the spring.autoconfigure.exclude property, which you can add to application.properties. Multiple classes can be excluded as a comma-separated list (see the examples below).

//By using property file
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
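
For example, to exclude several auto-configuration classes at once (the second class is included purely for illustration):

//By using property file, excluding multiple classes
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration,org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration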

4) What is Spring Actuator? What are its advantages?

This is one of the most common interview questions in Spring Boot. As per the Spring doc:

“An actuator is a manufacturing term that refers to a mechanical device for moving or controlling something. Actuators can generate a large amount of motion from a small change.”

As we know, Spring Boot provides lots of auto-configuration features that help developers quickly build production-ready components. But think about debugging: if something goes wrong, we always need to analyze the logs and dig through the data flow of our application to check what's going on. Spring Boot Actuator provides easy access to exactly these kinds of features. It exposes many details, i.e. which beans are created, the controller mappings, CPU usage, etc. Health and metrics can then be gathered and audited for your application automatically.

It provides a very easy way to access a few production-ready REST endpoints and fetch all kinds of information over the web. By using these endpoints you can do many things; see the endpoint docs here. There is no need to worry about security: if Spring Security is present, these endpoints are secured by default using Spring Security's content-negotiation strategy. Otherwise, we can configure custom security with the help of a RequestMatcher.
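
Note that, in Spring Boot 2.x, most Actuator endpoints must be opted in before they are reachable over HTTP (only a couple, such as health, are exposed by default). A minimal application.properties example, with an illustrative selection of endpoints:

//By using property file
management.endpoints.web.exposure.include=health,info,beans,mappings,metrics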

5) How to enable/disable the Actuator?

Enabling/disabling the Actuator is easy: the simplest way to enable its features is to add the spring-boot-starter-actuator dependency (Maven/Gradle), i.e. the starter. If you don't want the Actuator to be enabled, simply don't add the dependency.

Maven dependency:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
</dependencies>

6) What is the Spring Initializr?

This may not be a difficult question, but the interviewer is always checking the candidate's knowledge of the subject. You can't always expect only the questions you have prepared for. That said, this is a very common question, asked almost all of the time.

The Spring Initializr is a web application that generates a Spring Boot project with everything you need to start quickly. As always, we need a good skeleton for the project; the Initializr helps you create the project structure properly. You can learn more about the Initializr here.

7) What is a shutdown in the actuator?

Shutdown is an endpoint that allows the application to be gracefully shut down. This feature is not enabled by default. You can enable it by setting management.endpoint.shutdown.enabled=true in your application.properties file, as shown below. But be careful when using it.
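
In addition to enabling the endpoint, it must also be exposed over HTTP before it can be invoked (assuming a Spring Boot 2.x configuration):

//By using property file
management.endpoint.shutdown.enabled=true
management.endpoints.web.exposure.include=shutdown

The application can then be stopped with an HTTP POST to /actuator/shutdown.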

8) Is this possible to change the port of Embedded Tomcat server in Spring boot?

Yes, it's possible to change the port. You can use the application.properties file to change it by setting the server.port property (e.g. server.port=8081). Make sure you have application.properties on your project classpath; Spring Boot will take care of the rest. If you set server.port=0, it will automatically assign a random available port.

9) Can we override or replace the Embedded Tomcat server in Spring Boot?

Yes, we can replace the embedded Tomcat with another server by using the starter dependencies: add spring-boot-starter-jetty or spring-boot-starter-undertow to your project as needed, and exclude the default spring-boot-starter-tomcat dependency at the same time, as shown below.
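
For example, a typical Maven setup for swapping Tomcat for Jetty looks roughly like this:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
        <exclusions>
            <!-- Remove the default embedded Tomcat -->
            <exclusion>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-tomcat</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <!-- Bring in Jetty instead -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-jetty</artifactId>
    </dependency>
</dependencies>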

10) Can we disable the default web server in the Spring Boot application?

A major strong point of Spring is the flexibility to build your application loosely coupled, and Spring Boot can disable the web server with a quick configuration. Yes, we can use application.properties to configure the web application type, i.e. spring.main.web-application-type=none.

All the best!

For more: https://www.bipinwebacademy.com/2019/04/spring-boot-interview-questions.html

RESTful API Design Principle: Deciding Levels of Granularity

Granularity is an essential principle of REST API design. As we understand it, business functions divided into many small actions are fine-grained, while business functions divided into large operations are coarse-grained.

However, discussions about what level of granularity APIs should have tend to vary; we will get different suggestions and may even end up in debates. Regardless, it's best to decide based on the business functions and their use cases, as granularity decisions will undoubtedly vary on a case-by-case basis.

This article discusses a few points on how API designers would need to choose their RESTful service granularity levels.

Coarse-Grained and Fine-Grained APIs

In some cases, calls across the network may be expensive, so to minimize them, coarse-grained APIs may be the best fit: each request from the client forces a lot of work at the server side, whereas with fine-grained APIs, many calls are required to do the same amount of work from the client side.

Example: Consider a service that returns customer orders in a single call. In the fine-grained case, it returns only the customer IDs, and for each customer ID the client needs to make an additional request to get the details, so n+1 calls have to be made by the client. These round trips may be expensive in terms of performance and response times over the network.
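
As a sketch of this contrast, consider two hypothetical Spring MVC endpoint shapes (names and data are illustrative only):

import java.util.List;
import java.util.Map;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CustomerOrderController {

    // Coarse-grained: one request returns each customer together with their
    // orders, so the client needs a single round trip
    @GetMapping("/customers-with-orders")
    public Map<String, List<String>> customersWithOrders() {
        return Map.of("customer-1", List.of("order-10", "order-11"));
    }

    // Fine-grained: the client first fetches customer IDs...
    @GetMapping("/customers")
    public List<String> customerIds() {
        return List.of("customer-1");
    }

    // ...and then makes one extra request per customer (the n+1 problem)
    @GetMapping("/customers/{id}/orders")
    public List<String> ordersFor(@PathVariable String id) {
        return List.of("order-10", "order-11");
    }
}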

In a few other cases, APIs should be designed at the lowest practical level of granularity, because combining them is possible and allowed in ways that suit customer needs.

Example: An electronic form submission may need to collect addresses as well as, say, tax information. In this case, there are two functions: one is the collection of the applicant's whereabouts, and the other is the collection of tax details. Each task needs to be addressed by a distinct API and requires a separate service, because an address change is logically a different event, unrelated to tax-time reporting; i.e., why should one have to submit the tax information (again) for an address change?

Levels of Granularity

The level of granularity should satisfy the specific needs of the business functions or use cases. While the goal is to minimize calls across the network for better performance, we must understand the set of operations that API consumers require; that understanding gives a better idea of the “correctly-grained” APIs in our designs.

At times, it may be appropriate that the API design supports both coarse-grained as well as fine-grained to give the flexibility for the API developers to choose the right APIs for their use cases.

Guidelines

The following points may serve as some basic guidelines for the readers to decide their APIs granularity levels in their API modeling.

  • In general, consider that the services may be coarse-grained, and APIs are fine-grained.
  • Maintain a balance between the amount of response data and the number of resources required to provide that data. It will help decide the granularity.
  • The types of performed operations on the data should also be considered as part of the design when defining the granularity.
  • Read requests are normally coarse-grained: returning all the information required to render the page in one call can hurt less than two separate API calls in some cases.
  • On the other hand, write requests must be fine-grained. Find out the everyday operations clients need, and provide a specific API for that use case.
  • At times, you should use medium-grained APIs, i.e., neither fine-grained nor coarse-grained. An example is nesting resources no more than two levels deep, as in the sketch below.
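
For illustration, a medium-grained layout where nesting stops at two levels might look like this (identifiers are hypothetical):

GET /customers/{customerId}                   // the customer itself
GET /customers/{customerId}/orders            // orders, nested one level deep
GET /customers/{customerId}/orders/{orderId}  // a single order, two levels deep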

While the above guidelines may understandably lead to more API deployment units, which can cause annoyances down the line, there are patterns, especially the API Gateway, that bring better orchestration to those numerous APIs. Orchestrating the APIs with optimized endpoints, request collapsing, and much more helps address the granularity challenges.

Sidecar Design Pattern in Your Microservices Ecosystem

The sidecar design pattern is gaining popularity and wider adoption within the community. Building a microservice architecture that is highly scalable, resilient, secure, and observable is challenging. The evolution of service mesh architecture has been a game changer. It shifts the complexity associated with microservice architecture to a separate infrastructure layer and provides a lot of functionalities like load balancing, service discovery, traffic management, circuit breaking, telemetry, fault injection, and more.

Read my last post to understand the concepts behind service mesh, why it is needed for your cloud native applications, and the reasons for its popularity: The Rise of Service Mesh Architecture.

What Is a Sidecar Pattern?

Segregating some of an application's functionality into a separate process can be viewed as the Sidecar pattern. The sidecar design pattern allows you to add a number of capabilities to your application without additional configuration code for third-party components.

Just as a sidecar is attached to a motorcycle, in software architecture a sidecar is attached to a parent application and extends/enhances its functionality. A sidecar is loosely coupled with the main application.

Let me explain this with an example. Imagine that you have six microservices talking with each other in order to determine the cost of a package.

Each microservice needs to have functionalities like observability, monitoring, logging, configuration, circuit breakers, and more. All these functionalities are implemented inside each of these microservices using some industry standard third-party libraries.

But, is this not redundant? Does it not increase the overall complexity of your application? What happens if your applications are written in different languages — how do you incorporate the third-party libraries which are generally specific to a language like .NET, Java, Python, etc.?

Benefits of Using a Sidecar Pattern:

  • Reduces the complexity in the microservice code by abstracting the common infrastructure-related functionalities to a different layer.
  • Reduces code duplication in a microservice architecture since you do not need to write configuration code inside each microservice.
  • Provides loose coupling between application code and the underlying platform.

How Does the Sidecar Pattern Work?

The service mesh layer can live in a sidecar container that runs alongside your application. Multiple copies of the same sidecar are attached alongside each of your applications.

All the incoming and outgoing network traffic from an individual service flows through the sidecar proxy. As a result, the sidecar manages the traffic flow between microservices, gathers telemetry data, and enforces policies. In a sense, the service is not aware of the network and knows only about the attached sidecar proxy. This is really the essence of how the sidecar pattern works — it abstracts away the network dependency to the sidecar.
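
As a rough illustration, and not actual Istio injection, a Kubernetes pod that runs an application container next to an Envoy sidecar might be declared like this (names, images, and ports are placeholders):

# A pod with an application container and a proxy sidecar (illustrative only)
apiVersion: v1
kind: Pod
metadata:
  name: pricing-service
spec:
  containers:
    - name: app
      image: example/pricing-service:1.0   # hypothetical application image
      ports:
        - containerPort: 8080
    - name: envoy-sidecar
      image: envoyproxy/envoy:v1.27.0      # the proxy running alongside the app
      ports:
        - containerPort: 15001             # illustrative proxy port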

Sidecar Design Pattern

Inside a service mesh, we have the concept of a data plane and a control plane.

  • The data plane’s responsibility is to handle the communication between the services inside the mesh and take care of the functionalities like service discovery, load balancing, traffic management, health checks, etc.
  • The control plane’s responsibility is to manage and configure the sidecar proxies to enforce policies and collect telemetry.

In the Kubernetes and Istio world, you can inject the sidecars inside a pod. Istio uses the sidecar model with Envoy as the proxy.

Envoy, from Lyft, is the most popular open-source proxy designed for cloud native applications. Envoy runs alongside every service and provides the necessary features in a platform-agnostic manner. All traffic to your service flows through the Envoy proxy.
https://www.envoyproxy.io

The shift from monolith to microservices has enabled organizations to deploy applications independently and at scale, and in a container and Kubernetes world, the sidecar design pattern is a natural fit. Sidecars abstract complexity away from the application and handle functionalities like service discovery, traffic management, load balancing, circuit breaking, etc.

You can learn more about the sidecar pattern here.

Authentication and Authorization in Microservices

Microservices architecture has been gaining a lot of ground as the preferred architecture for implementing solutions, as it provides benefits like scalability, logical and physical separation, small teams managing a part of the functionality, flexibility in technology, etc. But since microservices are distributed, the complexity of managing them increases.

One of the key challenges is how to implement authentication and authorization in microservices so that we can manage security and access control.

In this post, we will try to explore a few approaches that are available and see how we should implement them.

There are three approaches that we can follow:

Local Authentication and Authorization (Microservices are responsible for Authentication and Authorization)

      • Pros
        • Different authentication mechanisms can be implemented for each microservice.
        • Authorization can be more fine-grained.
      • Cons
        • The code gets bulkier.
        • The probability of each service actually needing a different authentication mechanism is very low, so code gets duplicated.
        • The developer needs to know the permission matrix and should understand what each permission will do.
        • The probability of making mistakes is quite high.

Global Authentication and Authorization (an all-or-nothing approach: if the authorization for a service is there, then it is accessible to all; otherwise, to none)

      • Pros
        • Authentication and authorization are handled in one place, so there's no repetition of code.
        • A future change to the authentication mechanism is easy, as there’s only one place to change the code.
        • Microservices’ code is very light and focuses on business logic only.
      • Cons
        • Microservices have no control over what the user can access or not, and finer level permissions cannot be granted.
        • Failure is centralized and will cause everything to stop working.

Global Authentication and Authorization as a part of Microservices

    • Pros
      • Fine-grained object permissions are possible, as each microservice can decide what the user will or will not see.
      • Global authentication becomes easier to manage because the load on it is lighter.
      • Since authorization is controlled by the respective microservice, there's no network latency, and it will be faster.
      • No centralized failure for authorization.
    • Cons
      • Slightly more code for developers to write, as they have to focus on permission control.
      • Needs some effort to understand what you can do with each permission.

In my opinion, the third option is the best one, as most applications have a common authentication mechanism, so global authentication makes perfect sense. Microservices can be accessed from multiple applications and clients, and they might have different data needs, so global authorization becomes a limiting factor on what each application can see. With local authorization, microservices can make sure that the client application is only authorized to see what it needs to see.

My organization implemented the same approach in one of the projects we were working on recently. We built an authentication service that was mainly responsible for integrating with the LDAP system to verify the user, and for then contacting the RBAC (Role-Based Access Control) service to populate the permission matrix based on the role the user plays in the context of the application. For example, the same user can be a normal user in one application and an admin in another, so we need to understand the context the user is coming from; RBAC is where we decode that context and populate the relevant set of permissions. The permission matrix was then sent to the microservice as part of the claims in the JWT token. Each microservice simply applies those permissions and returns what is required to be returned. Please see the diagram below to see how we orchestrate the flow.

Architecture Flow for Authentication and Authorization
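
As a minimal sketch of the final step, assuming the resource server has already mapped the JWT permission claims to Spring Security authorities and method security is enabled (the invoice:read permission name is made up for illustration):

import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class InvoiceController {

    // Only callers whose token carries the invoice:read permission reach this code
    @PreAuthorize("hasAuthority('invoice:read')")
    @GetMapping("/invoices")
    public String invoices() {
        return "[]"; // placeholder payload
    }
}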

Conclusion

The above solution, where authentication is global and microservices control the authorizations of their content based on the permissions that are passed to it, is one of the possible solutions for handling authentication and authorization modules in microservices development. Also, we can enhance this solution by building a sidecar in a service mesh-type architecture, where we offload the authorization to the sidecar.

Top Three Strategies for Moving From a Monolith to Microservices

One of the primary challenges facing enterprises is moving from a monolith to microservices. The larger the enterprise, the bigger its monolithic applications become, and the harder it gets to refactor them into a microservices architecture.

Everyone seems to agree on the benefits of microservices. We covered this topic at some length in this post. However, not many seem to agree on how to undertake the migration journey. Too often, the decision-making process turns into a chicken-and-egg problem. The bigger the monolith, the bigger the stakeholders’ and management’s footprint becomes. Too much management often leads to decision paralysis and, ultimately, the journey ends up as a mess.

However, many organizations have successfully managed to make this transition. Often, it is a mix of good leadership and a well-defined strategy that determines success or failure.

Good leadership is often not in the hands of an architect or developer undertaking this journey. However, a strategy is. So, let’s look at some strategies that can help in this journey:

Implement New Functionalities as Services

I know it is hard. But if you’ve decided to transition from a monolith to microservices, you have to follow this strategy. Think of the monolithic system as a hole in the ground. To get out of a hole, you don’t dig more. In other words, you don’t add to your problems.

Often, organizations miss this part completely. They think about a grand migration strategy that will take years. However, business requirements come fast. Due to the lack of budget or time, teams end up implementing those requirements into the monolithic application. The grand migration strategy never starts for whatever reason. And, each addition to the monolith makes the goal-post move further ahead.

In order to get around this, stop increasing the size of the monolith. Don't implement new features in the monolithic code base. Every new feature or functionality should be implemented as a service. This, in turn, reduces the growth rate of the monolithic application. In other words, new features implemented as services build momentum for the migration. It also helps demonstrate the value of the approach and ensures continuous investment.

Separate Presentation Layer From the Backend

This is an extremely powerful strategy to migrate from a monolith to microservices. A typical enterprise application usually has three layers:

  • Presentation logic that consists of modules implementing the web UI. This tier of the system is responsible for handling HTTP requests and generating HTML pages. In any respectable application, this tier has a substantial amount of code.
  • Business logic that consists of modules handling the business rules. Often, this can be quite complex in an enterprise application.
  • Data access logic that consists of modules handling the persistence of data. In other words, it deals with databases.

Usually, there is a clean separation between presentation logic and business logic. The business tier exposes a set of coarse-grained APIs. Basically, these APIs form an API Layer. In other words, this layer acts as a natural border based on which you can split the monolith. This approach is also known as horizontal slicing.

If done successfully, you’ll end up with smaller applications. One application will handle the presentation. The other application will handle the business and data access logic. There are various data management patterns for microservices that can be explored.

There are advantages to this approach:

  • You can develop, deploy, and scale both applications independently. In other words, UI developers can rapidly introduce changes to the interface. They don’t have to worry about the impact on the backend.
  • You will also have a set of well-defined APIs. Also, these APIs will be remotely accessible for use in other microservices.

Extract Business Functionalities Into Services

This is the pinnacle of moving to a microservices architecture. The previous two strategies will take you only so far. Even if you implement them successfully, you'll still have a very monolithic code base. Basically, they can act only as a springboard to the real deal.

If you want to make a significant move towards microservices, you need to break apart the monolithic code base. The best way to do this is by breaking up the monolith based on business functionality. Each business functionality is handled by one microservice. Each of these microservices should be independently deployable and scalable. Communication between these services should be implemented using remote API calls or through message brokers.

By using this strategy, over time, the number of business functions implemented as services will grow and your monolith will gradually shrink. This approach is also known as vertical slicing. Basically, you are dividing your domain into vertical slices or functionalities.

Conclusion

Moving from a monolith to microservices is not easy. It requires significant investment and management buy-in. On top of all that, it requires incredible discipline and skill from the team actually developing it. However, the advantages are many.

Often, it is a good idea to start small on this journey. Rather than waiting for a big one-shot move, try to take incremental steps, learn from mistakes, iterate, and try again. Also, don’t try to go for a perfect design from the get-go. Instead, be willing to iterate and improve.

Lastly, remember that microservices is not a destination but a journey. A journey of continuous improvement.

Let me know your thoughts and experiences about this in the comments section below.

How Are Your Microservices Talking?

In this piece, which originally appeared here, we’ll look at the challenges of refactoring SOAs to MSAs, in light of different communication types between microservices, and see how pub-sub message transmission — as a managed Apache Kafka Service — can mitigate or even eliminate these challenges.

If you’ve developed or updated any kind of cloud-based application in the last few years, chances are you’ve done so using a Microservices Architecture (MSA), rather than the slightly more dated Service-Oriented Architecture (SOA). So, what’s the difference?

As Jeff Myerson wrote:

“If Uber were built with an SOA, their services might be:

GetPaymentsAndDriverInformationAndMappingDataAPI
AuthenticateUsersAndDriversAPI

“Whereas, if Uber were built instead with microservices, their APIs might be more like:

SubmitPaymentsService
GetDriverInfoService
GetMappingDataService
AuthenticateUserService
AuthenticateDriverService

“More APIs, smaller sets of responsibilities.”

With smaller sets of responsibilities for each service, it’s easier to isolate functionality. But what are microservices and how do MSAs compare to SOAs and monolithic applications?

What Are Microservices?

Simply put, microservices are a software development method where applications are structured as loosely coupled services. The services themselves are minimal, atomic units which, together, comprise the entire functionality of the app. Whereas in an SOA, a single component service may combine one or several functions, a microservice within an MSA does one thing — only one thing — and does it well.

Microservices can be thought of as minimal units of functionality, can be deployed independently, are reusable, and communicate with each other via various network protocols like HTTP (more on that in a moment).

Today, most cloud-based applications that expose a REST API are built on microservices (or may actually be one themselves). These architectures are called Microservice Architectures, or MSAs.

On the continuum from single-unit, monolithic applications to coarse-grained service-oriented architectures, MSAs offer the finest granularity, with a number of specialized, atomic services supporting the application.

Microservices vs. SOA vs. monolithic. Source: Samarpit Tuli’s Quora answer, “What’s the difference between SOAs and Microservices?”

Some Challenges

From this, one starts to get a sense of how asynchronous communication at scale could serve as a benefit in the context of apps that pull and combine data from several APIs. Still, while most organizations are considering implementing their applications as MSAs — or already have — the task, especially when refactoring from SOAs or monoliths, is not exactly straightforward.

For example, in one sample, 37% of respondents building web apps reported that monitoring was a significant issue. Why?

Some clues can be seen in some of the challenges cited by those refactoring legacy apps to MSAs — overcoming tight coupling was cited by 28% of respondents, whereas finding where to break up monolithic components was cited by almost as many.

These types of responses suggest a few different, but actually related, conclusions:

  1. Monitoring services built on MSAs is more complicated (as opposed to SOAs or Monolithic apps) because of multiple points of failure (which exist potentially everywhere a service integrates with another).
  2. Tight coupling suggests components and protocols are inflexibly integrated point-to-point in monolithic apps or SOAs (making them difficult to maintain and build functionality around).
  3. Breaking up monolithic apps or large SOA components into atomic, independent, reusable microservices is challenging for exactly those first two reasons.

Also, what sort of problems can one expect when your application scales? We’ll look at these and suggest a solution below. But there’s one question that underlies all of the above concerns: Once we do manage to break up our apps into atomic services, what’s the best way for these services to communicate with each other?

Some Microservices Communication Patterns

In her article “Introduction to Microservices Messaging Protocols,” Sarah Roman provides an excellent breakdown of the taxonomy of communication patterns used by and between microservices:

Synchronous

Synchronous communication is when the sender of the event waits for processing and some kind of reply, and only then proceeds to other tasks. This is typically implemented as REST calls, where the sender submits an HTTP request, and then the service processes this and returns an HTTP response. Synchronous communication suggests tight coupling between services.

Asynchronous

Asynchronous communication means that a service doesn't need to wait on another to conclude its current task. A sender doesn't necessarily wait for a response, but either polls for results later or registers a callback function or action. This is typically done over message buses like Apache Kafka and/or RabbitMQ. Asynchronous communication actually encourages loose coupling between component services, because there can be no time dependencies between sending events and a receiver acting on them.
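
To make the contrast concrete, here is a minimal sketch of both styles using the JDK's HttpClient; the endpoint URL is hypothetical, and a message bus provides the same non-blocking decoupling at the infrastructure level:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CommunicationStyles {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/drivers/42")).build();

        // Synchronous: send() blocks until the response arrives
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("sync: " + response.body());

        // Asynchronous: sendAsync() returns immediately; a callback handles the
        // response whenever it arrives, and the caller can carry on with other work
        client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
              .thenAccept(resp -> System.out.println("async: " + resp.body()))
              .join(); // join only so this demo doesn't exit before the reply
    }
}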

Single Receiver

In this case, each request has one sender and one receiver. If there are multiple requests, they should be staggered, because a single receiver cannot receive and process them all at once. Again, this suggests tight coupling between sender and receiver.

Multiple Receivers

As the category indicates, there are multiple receivers processing multiple requests.

We believe that, while each of these methods (in combination) have their purpose within an MSA, the most loosely coupled arrangement of all is when microservices within a distributed application communicate with each other asynchronously, and via multiple receivers. This option implies that there are no strict dependencies between the sender, time of send, protocol, and receiver.

Pub-Sub

The pub-sub communication method is an elaboration on this latter method. The sender merely sends events, whenever there are events to be sent, and each receiver chooses, asynchronously, which events to receive.

Apache Kafka may be one of the more recent evolutions of pub-sub. Apache Kafka works by passing messages via a publish-subscribe model, where software components called producers publish (append) events in time-order to distributed logs called topics (conceptually a category-named data feed to which records are appended).

Consumers are configured to subscribe to these topics separately and read from them by offset (the record number in the topic). This latter idea — the notion that consumers simply decide what they will consume — removes the complexity of having to configure complicated routing rules into the producer or other components of the system at the beginning of the pipe.
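
A minimal producer/consumer sketch with the plain kafka-clients API shows both halves of this model; the broker address, topic, group, and payload are all illustrative:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PubSubSketch {
    public static void main(String[] args) {
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        // The producer appends an event to the "orders" topic and moves on
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("orders", "order-42", "{\"status\":\"CREATED\"}"));
        }

        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "billing-service"); // each consumer group reads independently
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        // The consumer decides what to read by subscribing to the topic
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("orders"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(r -> System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
        }
    }
}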

Diagram: Apache Kafka producers writing to, and consumers reading from, a topic

We argue that, when asynchronous communication to multiple receivers is required, Apache Kafka is a promising way to go, as it solves the problem of tight-coupling between components and communication, is monitorable, and facilitates breaking up larger components into atomic, granular, independent, reusable services.

Why Apache Kafka?

Routing Rules Configured by Consumer

When the routing rules are configured by the consumer (a feature of pub-sub and Apache Kafka generally), then, as mentioned, there is no need to build additional complexity into the data pipe itself. This makes it possible to decouple components from the message bus (and each other) and develop and test them independently, without worrying about dependencies.

Built-in Support for Asynchronous Messaging

All of the above makes it reasonably simple to decouple components and focus on a specific part of the application. Asynchronous messaging, when used correctly, removes yet another point of complexity by letting your services be ready for events without being synced to them.

High Throughput/Low Latency

It’s easier to have peace of mind about breaking up larger, SOA-type services into smaller, more atomic ones when you don’t have to worry about communication latency issues. Aiven managed Kafka services have been benchmarked and feature the highest throughput and lowest latency of any hosted service in the industry.

Why Managed Apache Kafka?

Apache Kafka was built to leverage the best of what came before while improving on it even more.

However, Apache Kafka can be challenging to set up. There are many options to choose from, and these vary widely depending on whether you are using an open-source, proprietary, free, or paid version (depending on the vendor). What are your future requirements?

If you were choosing a bundled solution, then your choice of version and installation type, for example, may come back to haunt you in the future, depending on the functionality and performance you later decide you need.

These challenges alone may serve as a compelling argument for a managed version. With the deployment, hardware outlay costs and effort and configuration out of your hands, you can focus entirely on the development for which you originally intended your Kafka deployment.

What’s more, managed is monitorable. Are you tracking throughput? You need not worry about where the integration points are in your app to instrument custom logging and monitoring; simply monitor each of your atomic services’ throughput via your provider’s Kafka backend and metrics infrastructure.

Auto-Scaling

What sort of problems can you expect when your application scales? Bottlenecks? Race conditions? A refactoring mess to accommodate for them?

A managed Kafka solution can scale automatically for you when the size of your data stream grows. As such, you needn’t worry when it’s time to refactor your services atomically, and you needn’t force your teams to maintain blob-style, clustered services with complicated dependencies just for the sake of avoiding latency between them.

Centralized, No-Fuss Management

If you’re managing your own cluster, you can expect to be tied down with installs, updates, managing version dependencies, and related issues. A managed solution handles all of that for you, so you can focus on your core business.

High Availability

Apache Kafka is already known for its high availability, so you never have to worry about your services being unable to communicate because a single node supporting your middleware is down.

Kafka’s ability to handle massive amounts of data and scale automatically lets you scale your data processing capabilities as your data load grows. And a managed solution has redundancy built right in.

Wrapping Up

We looked at some common challenges of refactoring from SOAs to MSAs: monitorability, tight coupling of components, and the challenges of breaking up larger monolithic or SOA components into microservices.

We considered different types of communication and looked at how a pub-sub mechanism serves asynchronous message transmission between multiple services. Finally, we examined how a managed Apache Kafka solution may be an excellent candidate for such a use case.

Questions to Ask Before Choosing Microservices

One of the companies I worked with had over 300 microservices, all set up with CI/CD pipelines, Consul service discovery, and decoupled system integration using messaging and queues. I'm now working with a mid-size company where I've introduced a few microservices, but I've since started to wonder: when should I, and when shouldn't I, go down this path? I don't have a definitive answer here, so I'm just going to throw out a few questions for us to discuss.

  • Is my business suitable for more monolithic or distributed systems? Think product portfolio and revenue streams: The more complex your current business model is, the more you are probably leaning towards distributed systems.
  • Will business logic get updated later on? Do I need a centralized logic engine or data mapping engine (e.g. Dell BOOMI)?
  • If distributed systems are the way to go, do I use microservices or modularized applications?
  • How frequently is the data updated? How frequently is the data retrieved? Do I need the data to be eventually consistent or always consistent? This will affect more than just your application structure — it will affect how you implement caching as well.
  • Is service discovery going to be a problem? Will it be a messy mesh of endpoints to manage and maintain?
  • Are my devs comfortable with the structure we are heading to?
  • Is there an integration expert/architect that can lead the project and follow through?
  • What if one of the service endpoints fails? How do I recover? How do I monitor all of these effectively?
  • How do I manage security settings and authentication across all the services?

And once you’ve answered all these questions, you probably also want to make a calculation on the cost side — this should include both infrastructure and resource costs (your dev team, your DevOps team, training, and ongoing maintenance).

Let me know what you think!