
Decoupling in Cloud Era: Building Cloud native microservices with Spring Cloud Azure

Yawei Wang edited this page Sep 19, 2018 · 54 revisions

Introduction

For the past decade, Spring has been famous for its dependency injection feature, which helps Java developers build loosely coupled systems. Put simply, users only need to focus on the abstraction provided by an interface; a concrete implementation instance will be ready for use. As the cloud has become increasingly popular in recent years, how to exploit the auto-scaling features provided by the cloud environment while staying loosely coupled from any specific cloud vendor has become another interesting challenge. That's where cloud native comes into play. Let's move forward and see what cloud native is.

Cloud native and microservice

What exactly is "Cloud Native"

Even though I have seen this term hundreds of times, it's still not an easy question. Cloud native is a lot more than just signing up with a cloud provider and using it to run your existing applications. It affects the design, implementation, deployment, and operation of your application. Let's look at two popular definitions first:

Pivotal, the software company that offers the popular Spring framework and a cloud platform, describes cloud native as:

Cloud native is an approach to building and running applications that fully exploit the advantages of the cloud computing model.

The Cloud Native Computing Foundation, an organization that aims to create and drive the adoption of the cloud-native programming paradigm, defines cloud-native as:

Cloud native computing uses an open source software stack to be:

  1. Containerized. Each part (applications, processes, etc) is packaged in its own container. This facilitates reproducibility, transparency, and resource isolation.
  2. Dynamically orchestrated. Containers are actively scheduled and managed to optimize resource utilization.
  3. Microservices-oriented. Applications are segmented into microservices. This significantly increases the overall agility and maintainability of applications.

Let's try to summarize both definitions:

An approach that builds software applications as microservices and runs them on a containerized and dynamically orchestrated platform to utilize the advantages of the cloud computing model.

This is much clearer, except for "utilize the advantages of the cloud computing model". Let's try to explain that part in another way:

A cloud native application is specifically designed for a cloud computing environment as opposed to simply being migrated to the cloud.

What exactly is "Microservice"

Let's check Martin Fowler's definition of microservice first:

The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.

But I love another simple one:

Microservices are small, focused and autonomous services that work together.

Small and focused is an application of the Single Responsibility Principle: one service just needs to do one thing well. Autonomous means fault tolerance and independent evolution: each service is a separate deployable entity that evolves independently of the others.

Microservices are intrinsically linked to cloud-computing platforms, but the concept itself is nothing new. The approach has been around for many years, but now, through the popularity of cloud solutions, it has evolved to a higher level. It is easy to point out the reasons for this popularity: the cloud offers you scalability, reliability, and low maintenance costs compared with on-premises solutions inside the organization. This has led to the rise of cloud-native approaches intended to give you the benefit of everything the cloud offers, like elastic scaling, immutable deployments, and disposable instances.

The key benefits are as follows:

  1. Resilience. One component's failure shouldn't take down the whole system. This is usually achieved by defining clear service boundaries.
  2. Scaling. You don't need to scale everything together when only one part is constrained in performance.
  3. Ease of deployment. Making a change to a single service can speed up the release cycle and smooth the troubleshooting process.
  4. Composability. Since each service focuses on one thing, it's easy to reuse them, like a Unix pipeline.
  5. Optimizing for replaceability. A small individual service is easier to replace with a better implementation or technology.

Spring Cloud

In order to make microservice architectures easy to implement, the industry has identified some common patterns, such as centralized configuration management, service discovery, asynchronous message-driven communication, and distributed tracing. Spring Cloud puts these microservice architecture patterns into practice and helps us follow cloud native best practices. Beyond this, Spring Cloud's unique value shows in several aspects:

  1. Define a common abstraction for frequently used patterns. This is another beautiful application of Spring's decoupling philosophy: each pattern is not tightly coupled to a concrete implementation. Take Config Server as an example: you have the freedom to change the backend storage of your configuration without affecting other services. Discovery and Stream follow the same principle.
  2. Modular components. Many people's first impression is that Spring Cloud is a fully packaged solution. Actually, it's not an all-or-nothing solution. You can pick just one module and use it in one microservice, while the other services are free to use any other framework. It's like Lego bricks: you pick only the pieces you like, and the only thing you need to ensure is that each piece is compatible with the others.

Now let's look at how Spring Cloud modules fit these microservice patterns.

Centralized config management via Spring Cloud Config

To meet the twelve-factor requirement of storing config in the environment, and to fit a microservice architecture, we need to put all services' configuration in one centralized place. Besides this, the following features are also needed:

  1. Support multiple environments such as dev, test, and prod, so we can build one package for all environments.
  2. Transparent config fetching. The centralized config should be fetched automatically, without any user coding.
  3. Automatic property refresh when a property changes. Services should be notified of such changes and reload the new values.
  4. Maintain change history and easily revert to an older version. This is a really useful feature for reverting mistaken changes in the production environment.

Spring Cloud Config supports all these features with simple annotations. All you need to do is annotate the config service with @EnableConfigServer and include the client starter in the other services. For more details, please refer to the Spring Cloud Config doc.
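A minimal sketch of the server side, assuming a Git-backed config repository (class and repository names are illustrative):

```java
// Config server: a plain Spring Boot app with one extra annotation.
@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}
```

On the client side, nothing needs to be coded; with the starter on the classpath, a bootstrap.yml like the following is enough (URI and service name are illustrative):

```yaml
spring:
  application:
    name: my-service            # maps to my-service-{profile}.yml in the config repo
  cloud:
    config:
      uri: http://localhost:8888   # where the config server runs
```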

Service Discovery via Spring Cloud Discovery

Service discovery is a key component of most distributed systems and service-oriented architectures. The problem seems simple at first: how do clients determine the IP and port for a service that exists on multiple hosts? Things get more complicated as you deploy more services, and more so in a cloud environment.

There are two sides to the problem of locating services: Service Registration and Service Discovery.

  • Service Registration - The process of a service registering its location in a central registry. It usually registers its host and port, and sometimes authentication credentials, protocols, version numbers, and/or environment details.
  • Service Discovery - The process of a client application querying the central registry to learn of the location of services.

Several aspects should be considered when choosing a service discovery solution:

  • Monitoring - What happens when a registered service fails? Sometimes it is unregistered immediately, after a timeout, or by another process. Services are usually required to implement a heartbeating mechanism to ensure liveness and clients typically need to be able to handle failed services reliably.
  • Load Balancing - If multiple services are registered, how do all the clients balance the load across the services? If there is a master, can it be determined by a client correctly?
  • Integration Style - Does the registry only provide a few language bindings, for example, only Java? Does integrating require embedding registration and discovery code into your application or is a sidekick process an option?
  • Availability Concerns - Can you lose a node and still function? Can it be upgraded without incurring an outage? The registry will grow to be a central part of your architecture and could be a single point of failure.

Spring Cloud provides a common abstraction for both registration and discovery, which means you just need to use @EnableDiscoveryClient to make it work. Examples of discovery implementations include Spring Cloud Discovery Eureka and Spring Cloud Discovery Zookeeper. You should choose the concrete implementation based on your use case. For more details, please refer to the Spring Cloud Discovery doc.
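A sketch of what this abstraction looks like in code (service names are illustrative; the concrete registry, Eureka or Zookeeper, is chosen purely by the binder-style dependency on the classpath):

```java
@SpringBootApplication
@EnableDiscoveryClient   // registers this app and enables lookups, whatever the backend
public class OrderServiceApplication {

    // The common abstraction: the concrete registry is just a dependency choice.
    @Autowired
    private DiscoveryClient discoveryClient;

    public URI paymentServiceUri() {
        // Query the registry for all instances registered under the "payment-service" id.
        List<ServiceInstance> instances = discoveryClient.getInstances("payment-service");
        return instances.isEmpty() ? null : instances.get(0).getUri();
    }

    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}
```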

Message-driven architecture via Spring Cloud Stream

Nowadays, everything could be considered a message, so various messaging middlewares have come into being, each with its own message format and APIs. It's a disaster to make all these middlewares communicate with each other. Actually, solving this problem is easy: just define a unified message interface, and let each middleware provide an adapter that knows how to convert between its own format and the standard message. Now you have grasped the core design principle of Spring Integration. Spring Integration is motivated by the following goals:

  1. Provide a simple model for implementing complex enterprise integration solutions.
  2. Facilitate asynchronous, message-driven behavior within a Spring-based application.
  3. Promote intuitive, incremental adoption for existing Spring users.

And guided by the following principles:

  1. Components should be loosely coupled for modularity and testability.
  2. The framework should enforce separation of concerns between business logic and integration logic.
  3. Extension points should be abstract in nature but within well-defined boundaries to promote reuse and portability.

For more details, please refer to the Spring Integration doc.

However, Spring Integration still works at a fairly low level, and with a not-so-cool name and plenty of confusing terminology, few people would want to use it directly. Spring Cloud Stream, built on the standard message format and the various adapters provided by Spring Integration, works at a higher level of abstraction to produce, process, and consume messages in an easy way. It is like a Unix pipe: you just need to worry about how to process a message; messages will come and go as you expect. For more details, please refer to the Spring Cloud Stream doc.
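The pipe-like programming model can be sketched as follows, using Spring Cloud Stream's annotation-based API of this era (the binding names come from the built-in Processor interface; the transformation is illustrative):

```java
@SpringBootApplication
@EnableBinding(Processor.class)   // binds an input and an output channel to the middleware
public class UppercaseProcessor {

    // Consume from the input binding, transform, and send the result to the output binding.
    // The actual middleware (Kafka, RabbitMQ, Event Hub, ...) is decided by the binder dependency.
    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public String handle(String payload) {
        return payload.toUpperCase();
    }

    public static void main(String[] args) {
        SpringApplication.run(UppercaseProcessor.class, args);
    }
}
```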

Spring Cloud Azure

Spring Cloud Azure follows the best practices and common abstractions provided by Spring Cloud, then takes one step further by providing automatic resource provisioning and auto-configuration of Azure-service-specific properties. With this, users only need a high-level understanding of an Azure service to use it, without touching or suffering from low-level details of configuration and SDK APIs. Take Azure Event Hub as an example: you only need to know that it is a messaging service with a design similar to Kafka's, and then you can use the Spring Cloud Stream Binder for Event Hub to produce to and consume from it.

The detailed design motivations are as follows:

  1. Seamless Spring Cloud integration with Azure. Users can easily adopt Azure services without having to modify existing code. Only dependencies and a few configuration settings are needed.
  2. Least configuration. This is done by taking advantage of Spring Boot auto-configuration to preconfigure default property values based on the Azure Resource Management API. Users can override these properties with their own.
  3. Provision resources automatically. If resources do not exist, our modules will create them under the user-specified subscription and resource group.
  4. No cloud vendor lock-in. With our modules, users can easily consume Azure services and benefit from the conveniences provided by Spring Cloud, without getting locked in to one specific cloud vendor.

Auto config and resource provision with Azure Resource Manager

One of the things developers hate doing is configuration. Before configuring each property, a developer needs to go through the documentation and fully understand its meaning, then carefully copy the value from somewhere and paste it into the application's property file. And it's still not done: they also need to properly comment each property so other developers know which one to change in which scenario, avoiding mistaken changes. That's the pain point we want to solve, so we built auto-configuration based on Spring Boot. If you want to use Event Hub, you don't need to understand what a connection string is; you just fill in the Event Hub namespace (like a Kafka cluster name) and the Event Hub name (like a Kafka topic name), and everything else will be auto-configured. Of course, you still have the ability to provide customized configuration to override the defaults.
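A configuration for an Event Hub consumer could then look roughly like the fragment below. The property keys follow the project's samples of this era and may differ between versions, so treat every key and value here as illustrative and check the docs for the current names:

```yaml
spring:
  cloud:
    azure:
      credential-file-path: my.azureauth   # credential file exported via the Azure CLI
      resource-group: my-resource-group    # provisioned automatically if it does not exist
      region: West US
      eventhub:
        namespace: my-namespace            # like a Kafka cluster name
    stream:
      bindings:
        input:
          destination: my-eventhub         # like a Kafka topic name
          group: my-consumer-group         # consumer group, provisioned if missing
      eventhub:
        checkpoint-storage-account: mycheckpointstorage   # where consumer offsets are kept
```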

One great benefit of the cloud is a programmable API to create and query the resources you own. This is the key to automation. Based on Azure Resource Manager, Spring Cloud Azure provides automatic resource provisioning. "Resource" covers a broader range than you might expect; one example is an Event Hub consumer group. When you add a new service acting as a new consumer group, you really don't want to have to create that consumer group manually.

Spring Cloud Stream Binder with Event Hub

We have talked about the benefits of Spring Cloud Stream. Suppose you are already using it, but you want to migrate to Azure. You might be using the Kafka or RabbitMQ binder, but Azure doesn't provide such a managed Kafka or RabbitMQ offering. How could you migrate with little effort? Actually, you don't care which messaging middleware you use; you just want something that provides similar functionality and meets the same performance requirements. So you can simply change the dependency from the Kafka binder to the Event Hub binder, without any code change, for a smooth cloud migration. If you want to take full advantage of the Event Hub binder afterwards, you should know about the following features:
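In Maven terms, the migration amounts to a dependency swap along these lines (the Event Hub binder coordinates are taken from the project at the time of writing and may change; check the project README for the current ones):

```xml
<!-- before: Kafka binder
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
-->
<!-- after: Event Hub binder -->
<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>spring-cloud-azure-eventhub-stream-binder</artifactId>
</dependency>
```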

Consumer Group

Event Hub provides lightweight consumer group support similar to Apache Kafka's, but with slightly different logic. While Kafka stores all committed offsets in the broker, with Event Hub you have to store the offsets of processed messages yourself. The Event Hub SDK provides the ability to store such offsets inside an Azure Storage Account.

Partitioning Support

Event Hub provides a concept of physical partitions similar to Kafka's. But unlike Kafka's automatic rebalancing between consumers and partitions, Event Hub uses a kind of preemptive mode. The storage account acts as a lease to determine which partition is owned by which consumer. When a new consumer starts, it will try to steal some partitions from the most heavily loaded consumers to achieve workload balancing.

Checkpoint support

In a distributed publish-subscribe messaging system, there are three messaging semantics: at-least-once, at-most-once, and exactly-once. For now, we only consider the consumer side:

  • At least once: the consumer receives the message and processes it; only after processing finishes does it send an acknowledgement to the message broker. If the consumer goes down before sending the acknowledgement, the message will be resent, since the broker has no idea whether the consumer received it; this guarantees at-least-once consumption. In this case, the consumer may process one message multiple times. This relies on the manual checkpoint mode provided by the Event Hub binder: checkpoint manually after processing the message.
  • At most once: the consumer receives the message and sends an acknowledgement to the message broker immediately, then starts processing. If the consumer goes down during processing, the message will not be resent, since the broker thinks the consumer has already received it. In this case, the consumer processes each message at most once, but it may miss some messages due to processing failures. This is the default batch checkpoint mode supported by the Event Hub binder.
  • Exactly once: on top of at-least-once semantics, we can use a unique message id to deduplicate already-processed messages, or make message processing idempotent. Learn more about Exactly once.
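The difference between the first two semantics boils down to where the acknowledgement happens relative to processing. The following toy, framework-free simulation (not the Event Hub API, just the control flow) makes the duplicate-vs-loss trade-off concrete: a consumer crashes exactly once while handling "m2".

```java
import java.util.*;

/** Toy simulation of consumer-side delivery semantics. */
public class DeliverySemanticsDemo {

    /** At-least-once: ack AFTER processing. A crash mid-processing means redelivery, so duplicates. */
    static List<String> atLeastOnce(List<String> messages, String crashWhileProcessing) {
        List<String> processed = new ArrayList<>();
        Deque<String> queue = new ArrayDeque<>(messages);
        boolean crashed = false;
        while (!queue.isEmpty()) {
            String msg = queue.peekFirst();   // broker keeps the message until acked
            processed.add(msg);               // process first...
            if (!crashed && msg.equals(crashWhileProcessing)) {
                crashed = true;               // crash before the ack: broker will redeliver
                continue;
            }
            queue.pollFirst();                // ...then ack: broker discards the message
        }
        return processed;                     // the crashed-on message appears twice
    }

    /** At-most-once: ack BEFORE processing. A crash mid-processing loses the message. */
    static List<String> atMostOnce(List<String> messages, String crashWhileProcessing) {
        List<String> processed = new ArrayList<>();
        Deque<String> queue = new ArrayDeque<>(messages);
        while (!queue.isEmpty()) {
            String msg = queue.pollFirst();   // ack immediately: broker discards it
            if (msg.equals(crashWhileProcessing)) {
                continue;                     // crash during processing: message is lost
            }
            processed.add(msg);
        }
        return processed;
    }

    public static void main(String[] args) {
        List<String> msgs = Arrays.asList("m1", "m2", "m3");
        System.out.println(atLeastOnce(msgs, "m2")); // [m1, m2, m2, m3] -- m2 duplicated
        System.out.println(atMostOnce(msgs, "m2"));  // [m1, m3] -- m2 lost
    }
}
```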

By exposing a Checkpointer through a custom message header, the Event Hub binder can support these different message consuming semantics.
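In manual checkpoint mode that looks roughly like the sketch below (the header constant and Checkpointer type follow the project's samples; the process method is a placeholder):

```java
@StreamListener(Sink.INPUT)
public void onMessage(String message,
                      @Header(AzureHeaders.CHECKPOINTER) Checkpointer checkpointer) {
    process(message);        // do the work first...
    checkpointer.success();  // ...then checkpoint, giving at-least-once semantics
}
```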

For more details, please refer to the Spring Cloud Stream Event Hub binder doc. You can also follow the Sample to try it.

Spring resource with Azure Storage blob

Spring Resource provides a common interface for manipulating stream-based resources, such as UrlResource, ClassPathResource, and FileSystemResource. Azure Storage Blob is obviously a good fit for this as a BlobResource: all implementation details are hidden, and a missing blob will be created automatically.

public interface Resource extends InputStreamSource {

    boolean exists();

    boolean isOpen();

    URL getURL() throws IOException;

    File getFile() throws IOException;

    Resource createRelative(String relativePath) throws IOException;

    String getFilename();

    String getDescription();
}
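Usage then looks like any other Spring Resource. In the sketch below, the "blob://" protocol string, the container and blob names, and the component class are illustrative, following the project's sample of this era:

```java
@Component
public class BlobExample {

    // Inject an Azure blob as a plain Spring Resource.
    @Value("blob://my-container/my-blob.txt")
    private Resource blobResource;

    public String read() throws IOException {
        // Read the blob like any other stream-based Resource.
        return StreamUtils.copyToString(blobResource.getInputStream(), StandardCharsets.UTF_8);
    }

    public void write(String content) throws IOException {
        // The blob Resource is also writable; a missing blob is created automatically.
        try (OutputStream os = ((WritableResource) blobResource).getOutputStream()) {
            os.write(content.getBytes(StandardCharsets.UTF_8));
        }
    }
}
```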

For more details, please refer to the Spring Resource with Azure Storage doc. You can also follow the Sample to try it.

Spring Cloud Azure on Github

This project is open source. You can contribute and submit issues here. Please star it if you like it.