Microservices – Legolizing Software Development II

Welcome to part two of our microservices series. If you missed a previous post, you can read it here:

I) Architecture
II) Caching
III) Security
IV) Continuous Integration
V) Lessons Learned


The microservice structure can generate heavy communication between many services. The worst case is a long tail of dependencies, resulting in high latency for the response to the initial request. This can get even worse, e.g. if the services run on different servers in various data centers. Even if some requests can run in parallel, the response time of the initially requested service is at least the answering time of the dependency tail it waits for.

Request Flow

An example request flow: The user enters a website, which sends a GET request to service A. The requested service depends on data from services B and C, which can be requested in parallel. In turn, service C depends on data from service D. Due to the security setup, every service has to authorize the request by asking the auth service. In sum, the user's single request is followed by seven interservice requests.
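The latency effect of such a dependency tail can be sketched with a small simulation (hypothetical timings, asyncio coroutines instead of real HTTP calls):

```python
import asyncio
import time

# Hypothetical sketch: B and C are requested in parallel, but C must
# wait for D, so the total latency is bounded by the longest chain.

async def service_d():
    await asyncio.sleep(0.1)   # simulated processing/network time
    return "raw-d"

async def service_c():
    d = await service_d()      # C depends on D: its chain costs 0.1 + 0.1 s
    await asyncio.sleep(0.1)
    return f"c({d})"

async def service_b():
    await asyncio.sleep(0.1)
    return "raw-b"

async def service_a():
    # B and C run in parallel; A waits for the slower branch
    b, c = await asyncio.gather(service_b(), service_c())
    return f"a({b},{c})"

start = time.monotonic()
asyncio.run(service_a())
elapsed = time.monotonic() - start
print(round(elapsed, 1))  # ~0.2 s: the B branch hides behind the C→D chain
```

Even though B costs nothing extra here, the response still takes as long as the C→D tail, which is exactly the effect described above.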

This leads to the question: what can we do to improve the response time?

  1. Skip authentication for all services that don't have to be publicly accessible and keep them internal to the architecture.
  2. Keep login information, e.g. session keys, in each service's own database.
  3. Cache every GET request where possible.
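A minimal sketch of point 3, assuming an in-memory TTL cache keyed by path and query parameters (the names are illustrative, not our actual implementation):

```python
import time

class ResponseCache:
    """Minimal TTL cache for GET responses, keyed by path and query parameters."""

    def __init__(self):
        self._store = {}

    def _key(self, path, params):
        # Sort parameters so /users?a=1&b=2 and /users?b=2&a=1 hit the same entry
        return (path, tuple(sorted(params.items())))

    def get(self, path, params):
        entry = self._store.get(self._key(path, params))
        if entry is None:
            return None
        expires_at, body = entry
        if time.time() >= expires_at:
            del self._store[self._key(path, params)]
            return None
        return body

    def put(self, path, params, body, max_age):
        # max_age: lifetime in seconds, decided by the responding service
        self._store[self._key(path, params)] = (time.time() + max_age, body)

cache = ResponseCache()
cache.put("/users", {"page": "1"}, '["alice","bob"]', max_age=60)
print(cache.get("/users", {"page": "1"}))  # cached body
print(cache.get("/users", {"page": "2"}))  # None: different parameters, cache miss
```

Note that slightly changed parameters produce a different key, which is why the examples below still need some interservice requests for new parameter combinations.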

Partially cached request flow

In this example, the user requests similar data with slightly changed parameters. The individual services no longer have to contact the auth service, because they cached the response from the first request. This reduces the seven interservice requests to three.

Full cached request flow

Now let's assume service A processes some raw data from the other services, and the user sends a third request with changed parameters that only affect this processing. Service A still has the auth response and the raw data responses of the other services in its cache. It can simply process the cached results and answer the request directly. In this round, we don't need any interservice communication at all.
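The contrast between the first and the fully cached round can be imitated with plain functions, where `functools.lru_cache` stands in for each service's response cache (a hypothetical sketch, not our actual setup):

```python
import functools

interservice = []  # records every interservice request

def auth_check():
    """Every uncached service call first authorizes against the auth service."""
    interservice.append("auth")

@functools.lru_cache(maxsize=None)
def service_d():
    interservice.append("D")
    auth_check()
    return "raw-d"

@functools.lru_cache(maxsize=None)
def service_b():
    interservice.append("B")
    auth_check()
    return "raw-b"

@functools.lru_cache(maxsize=None)
def service_c():
    interservice.append("C")
    auth_check()
    return "c(" + service_d() + ")"

@functools.lru_cache(maxsize=None)
def service_a():
    auth_check()
    return "a(" + service_b() + "," + service_c() + ")"

service_a()
first = len(interservice)           # 7 interservice requests on the first call
service_a()
second = len(interservice) - first  # 0: service A answers straight from its cache
print(first, second)                # prints "7 0"
```

The first round pays the full price of four auth checks plus three data requests; the repeated round is answered by service A alone.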

Summing up the examples, we can see how to improve response latency and reduce the communication overhead a microservices architecture generates. Of course, caching could be implemented in every interface, but it might be a good solution to use REST with HTTP as the protocol, because HTTP already implements caching.
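For example, HTTP signals cacheability via the standard Cache-Control response header; a client-side sketch for deriving the cache lifetime from it might look like this:

```python
import re

def parse_max_age(cache_control):
    """Extract max-age (seconds) from a Cache-Control header.

    Returns None if the response must not be cached.
    """
    if cache_control is None or "no-store" in cache_control:
        return None
    match = re.search(r"max-age=(\d+)", cache_control)
    return int(match.group(1)) if match else None

print(parse_max_age("public, max-age=300"))  # 300
print(parse_max_age("no-store"))             # None
```

This is what "HTTP already implements it" buys us: the responding service only has to set the header, and any standard-conforming client or proxy can cache accordingly.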

Additionally, we get some nice side effects.

  1. The service that sends the response can decide whether and for how long the data is cacheable. It might also be useful to implement a delete-cache interface for every service, to clear cached entries before they expire.
  2. Database access can be reduced if we cache data that is requested repeatedly.
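The delete-cache interface mentioned in point 1 could, for instance, map a DELETE endpoint onto prefix-based invalidation. A minimal, hypothetical sketch:

```python
class ServiceCache:
    """In-memory cache with an explicit delete interface (illustrative names)."""

    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

    def invalidate(self, prefix=""):
        # Drops all entries whose key starts with prefix, e.g. after the
        # underlying data changed before the cache lifetime ran out.
        for key in [k for k in self._store if k.startswith(prefix)]:
            del self._store[key]

cache = ServiceCache()
cache.put("/users/1", "alice")
cache.put("/users/2", "bob")
cache.put("/orders/9", "pending")
cache.invalidate("/users")   # e.g. triggered by DELETE /cache/users
print(sorted(cache._store))  # ['/orders/9']
```

A service that knows its data just changed can call such an endpoint on its consumers instead of waiting for their caches to expire on their own.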

Security is a topic that always comes up with microservices. In the next blog post we will present our solution for managing both authentication and authorisation at a single point.
Continue with Part III – Security

Kost, Christof [ck154@hdm-stuttgart.de]
Kuhn, Korbinian [kk129@hdm-stuttgart.de]
Schelling, Marc [ms467@hdm-stuttgart.de]
Mauser, Steffen [sm182@hdm-stuttgart.de]
Varatharajah, Calieston [cv015@hdm-stuttgart.de]