This is part two of our series on how we designed and implemented a scalable, highly available and fault-tolerant microservice-based Image Editor. This part describes how we went from a basic Docker Compose setup to running our application on our own »bare-metal« Kubernetes cluster.
This is part one of our series on how we designed and implemented a scalable, highly-available and fault-tolerant microservice-based Image Editor. The series covers the various design choices we made and the difficulties we faced during design and development of our web application. It shows how we set up the scaling infrastructure with Kubernetes and what we learned about designing a distributed system and developing a production-grade Kubernetes cluster running on multiple nodes.
The last two years in software development and operations have been characterized by the emerging idea of “observability”. The need for a new concept to guide our efforts to control our systems arose from accelerating paradigm shifts, driven by the need to scale and by cloud-native technologies. The monitoring landscape, in contrast, stagnated and failed to meet the new challenges posed by our massively more complex applications. Observability therefore evolved into a mission-critical property of modern systems and still attracts much attention. Numerous debates have differentiated monitoring from observability and covered its technical and cultural impact on building and operating systems. At the beginning of 2019, the community reached consensus on the characteristics of observability and elaborated its core principles. Consequently, new tools and SaaS applications appeared, marking the beginning of its commercialization. This post identifies the forces driving the evolution of observability, points out trends we currently perceive, and tries to predict future developments.
Welcome to the final part of our microservices series. If you’ve missed a previous post you can read it here:
Respect for Stumbling Blocks
Hopefully you have enjoyed our blog posts and learned a lot. We answered the following questions in our last four posts:
- How to build a microservices architecture?
- How to use the advantages of caching with microservices?
- How to secure microservices and handle authentication between them?
- How to set up a seamless Continuous Integration workflow for microservices combining Jenkins, Git and Docker?
Welcome to part four of our microservices series. If you’ve missed a previous post you can read it here:
In this fourth part of Microservices – Legolizing Software Development, we will focus on our Continuous Integration environment and how we made its three major parts – Jenkins, Docker and Git – work seamlessly together.
Welcome to part three of our microservices series. If you’ve missed a previous post you can read it here:
Today we want to give you a better understanding of the security side of our application. We will talk about topics like security certificates and give you a deeper insight into our auth service.
Welcome to part two of our microservices series. If you’ve missed a previous post you can read it here:
A microservice architecture can generate heavy communication between many services. The worst case is a long tail of dependencies, resulting in high latency for the response to the initial request. This gets even worse if, for example, the services run on different servers in various data centers. Even if some requests can run in parallel, the response time of the initially requested service is at least the sum of the answering times along the tail of dependencies.
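To make the dependency-tail effect concrete, here is a minimal sketch using Python's asyncio to simulate chained service calls. The service names, latencies and the `call_service` helper are purely hypothetical, not part of our actual application; the point is only that a request cannot answer before its whole dependency tail has answered, so latencies along the chain add up.

```python
import asyncio
import time

async def call_service(name: str, latency: float, dependencies=()):
    # Wait for all downstream dependencies first (independent ones
    # could run in parallel via gather)...
    if dependencies:
        await asyncio.gather(*dependencies)
    # ...then simulate this service's own network + processing time.
    await asyncio.sleep(latency)
    return name

async def main():
    start = time.perf_counter()
    # A three-deep dependency tail A -> B -> C, each taking ~0.1 s.
    # The initial request to A cannot respond before B, which cannot
    # respond before C, so the total is roughly 0.3 s, not 0.1 s.
    c = call_service("C", 0.1)
    b = call_service("B", 0.1, dependencies=[c])
    a = call_service("A", 0.1, dependencies=[b])
    await a
    return time.perf_counter() - start

elapsed = asyncio.run(main())
print(f"chained latency: {elapsed:.2f}s")
```

Caching, which the next post covers, attacks exactly this problem: a cache hit at any link cuts the rest of the tail out of the response path.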
Welcome to our five-part series about microservices and legolized software development. We'd like to share our lessons learned about architecture, development environment and security considerations with you. We will also explain some of the issues we stumbled over and which solutions we chose to solve them.
I) In the first part, we present an example microservice structure with multiple services, a foreign API interface and a reverse proxy that also provides load balancing.
II) Part two takes a closer look at how caching improves the heavy and frequent communication within our setup. [read]
III) Security is a topic that always arises with microservices. We present our solution for managing both authentication and authorization at a single point. [read]
IV) An automated development environment will save you a lot of time. We explain how we set up Jenkins, Docker and Git to work seamlessly together. [read]
V) We finish with a concluding review of the use of microservices in small projects and give an overview of our top stumbling blocks. [read]