Microservices are very popular right now because, supposedly, they let you develop, deploy and maintain each service separately, keeping the architecture clean and avoiding coupling. A contrasting opinion is that a monolith can do all of this as well, while remaining simpler to deploy and support. But it has to be a well-factored monolith, and not many people tell you what that actually looks like. Let's do the exercise of thinking through what a well-factored monolith would look like.
Note: if you’re unfamiliar with these architectural patterns and want to learn more, I have a LinkedIn Learning course on Software Architecture Patterns which explains these (and other) patterns in an easy-to-understand way.
A monolith is an architectural pattern where the (main) application is a single executable that is deployed as a single unit. The code is usually also in a single repository or project. It contains all database, business, infrastructure and user interface logic. Schematically, this is what it looks like:
It’s a single application that we build and deploy. Of course, it can have a database and different connections to external services. But the idea is that it is a (big) executable that contains all the logic required.
This doesn’t really tell us much about the internal quality. Is it a well-factored monolith or not? Let’s dive in.
A typical monolith has many modules. Some have a clear goal, others not so much. Usually, there are "shared" modules with logic that is used by many of the implemented features (things like logging, UI components or common database functions come to mind). Sometimes you see a module (like BusinessLogic) that grew too large, so someone decided to start writing submodules (e.g. BusinessLogic.Customer and BusinessLogic.Product), except the original module never gets split up.
Whatever the case, the dependencies are all over the place and it’s become a mess:
This is difficult to understand, to maintain, and to develop in.
A Well-Factored Monolith
A well-factored monolith would have to start by identifying the different areas in the problem domain. In Domain-Driven Design, these would be our bounded contexts. In a microservice architecture, they could be the individual microservices. In our example, we might want an architecture that looks like this:
We still have a single executable and a single codebase. But at the top, you can distinguish 4 main pillars. These are our sub-domains, our bounded contexts in DDD-speak.
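As a concrete sketch (the folder names here are my own invention, not from the diagram), the four pillars could map to four top-level folders in the single codebase, with a thin composition root on top that wires them into one executable:

```
src/
  customers/   <- bounded context 1
  products/    <- bounded context 2
  ordering/    <- bounded context 3
  billing/     <- bounded context 4
  app/         <- composition root: assembles the contexts into one executable
```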
If you look closely, you'll see that I removed the lines between the different BusinessLogic modules. This now poses a problem.
Communication Between Contexts
What do we do with communication and data flows between these bounded contexts? In a typical monolith, everything is running in a single executable and one piece of code can call another easily:
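For example, here is a minimal sketch in TypeScript of that kind of direct call (all module and function names are hypothetical; in a real monolith these would be separate files importing each other freely):

```typescript
// One file standing in for two modules of a typical monolith.

// --- Customer module (an internal detail of the Customer area) ---
function getCustomerDiscount(customerId: string): number {
  // in reality this would query the shared database
  return customerId.startsWith("vip") ? 0.2 : 0;
}

// --- Order module: reaches straight into Customer internals ---
function priceOrder(customerId: string, subtotal: number): number {
  // nothing stops this code from depending on another area's internals
  return subtotal * (1 - getCustomerDiscount(customerId));
}

console.log(priceOrder("vip-42", 100)); // an ordinary in-process function call
```

Convenient, but this is exactly how the dependency mess above starts: any module can call into any other.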
With microservices, it means making a call to another service through the network:
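The same lookup as a cross-service call might look like this sketch (the service URL and response shape are made up; I assume a Customer service that returns JSON):

```typescript
// Hypothetical microservice version: the Order service asks the
// Customer service over the network instead of calling a function.
async function fetchCustomerDiscount(
  customerId: string,
  // fetch is injectable so the function can be exercised without a live service
  fetchFn: typeof fetch = fetch
): Promise<number> {
  const res = await fetchFn(
    `http://customer-service/customers/${encodeURIComponent(customerId)}/discount`
  );
  if (!res.ok) {
    throw new Error(`customer service responded with ${res.status}`);
  }
  const body = (await res.json()) as { discount: number };
  return body.discount;
}
```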
In a well-factored monolith, I would imagine we do the same. Maybe not over the network (although that's possible), but we should at least make the call through the top-most level of our application, just as if it were a service on a different machine:
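Sketching this idea in TypeScript (the interface and names are my own invention): each context exposes one narrow contract at its top level, and other contexts call only that contract, never the internals behind it.

```typescript
// The Customer context's entire public surface: one small contract.
interface CustomerContext {
  getDiscount(customerId: string): Promise<number>;
}

// In-process implementation, wired up when the monolith starts.
const customerContext: CustomerContext = {
  async getDiscount(customerId: string): Promise<number> {
    // internal Customer logic stays hidden behind the interface
    return customerId.startsWith("vip") ? 0.2 : 0;
  },
};

// The Order context depends only on the contract, never on Customer internals.
async function priceOrder(
  customers: CustomerContext,
  customerId: string,
  subtotal: number
): Promise<number> {
  const discount = await customers.getDiscount(customerId);
  return subtotal * (1 - discount);
}
```

Note the contract is asynchronous even though the implementation runs in-process: that way, swapping in an HTTP-backed implementation later doesn't change the calling code.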
This has the benefit that contexts remain separated from each other, while still offering the benefits of a monolith (single codebase, single deployment). If the monolith grows too large, it becomes easier to split a context off into a real separate service: all that's left to do is send the same communication over the network.
A Word on Shared Modules
Remember those shared modules (logging, database, security)? In the architectural diagram, they don’t pose a problem. But in monolithic codebases, I often see them in the same source code repository as the monolith. These are prime candidates to move to a separate repository and release as software libraries (on npm, NuGet, etc).
Architecturally, and when deployed, they’re there alongside all the other modules of the monolith. But in the version control system, they’re separate entities:
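With npm, for example, the monolith would then consume the shared modules as ordinary versioned dependencies (the package names and versions below are made up for illustration):

```json
{
  "name": "monolith",
  "dependencies": {
    "@acme/logging": "^2.1.0",
    "@acme/db-core": "^1.4.0",
    "@acme/security": "^3.0.2"
  }
}
```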
The advantages of this approach are many:
- reduced build times for the monolith, because it no longer has to compile and test the shared libraries
- separate lifecycles for the common components, with their own versioning and documentation (basically, you can manage each one independently of the others)
- it’s easier to create new applications that use the same common modules
- new applications can live in their own source code repositories, instead of being added to the monolith's repository just because that's where the common modules live
Some would say a disadvantage is that it's harder to debug these common modules in a running application. However, if you approach these modules the way open source projects or software library vendors do, you can manage and release them successfully. That requires a decent test strategy (automated tests!), of course. But this approach pushes you towards better practices, which is hardly a disadvantage.
So those are my ideas on what a well-factored monolith should look like. Let me know if I'm missing anything or if you don't agree. The monolith definitely has advantages if done well. Of course, it has disadvantages too (especially when multiple teams are involved), but so do microservices. It's also possible to combine a monolith with microservices (for smaller functionalities, or during a transition period).
It’s never perfect, but with this approach, I believe the monolith can be more manageable.