This is the first part in my series of articles on moving from macroservices to microservices. Here, I'll present the reasoning behind why a developer or manager would make such a move. Later posts in the series will provide code examples and procedures for building your own solution and, finally, for creating a whole environment in which you gradually replace your macroservice with a bundle of microservices.
Why the Transition from Macro to Micro Can Be Intimidating
One of the more frequent problems I've come across when developing large systems is that the technology the system was originally built with has become obsolete, and the learning curve for adding new features has grown steep. You end up having to learn a great deal about the system before you can safely meddle with it.
Doing a rewrite of the system is one option, but especially with larger systems, that opens a can of worms of its own. Not only are you likely to introduce new bugs on top of existing ones, but you're also stuck maintaining two systems in the meantime. Having your developers work on the new system will (possibly) pay off later when it's online, but until that happens, they're an expense.
A large macroservice also tends to impose a steep learning curve; modules impact other modules, and adding functionality usually requires a good understanding of the system you are about to modify.
For example, a data-exporting feature might be easy to create in theory and only require a few days or a week's worth of programming in practice. However, people who are unfamiliar with the underlying system might spend several weeks just finding all the nooks and crannies that will be impacted by their addition. This leaves systems dependent on the people who are familiar with them, which makes outsourcing a practical impossibility.
Bridge the Gap with Facade Services
The main problem with a particular macroservice system I was working with was its obsolete tech. Some parts of it originated from a ten-year-old design, and dependencies on old systems ran deep. It needed constant maintenance while features were being added or changed, which made it a mess to handle.
Since this system had a lot of old dependencies, it also required some libraries that were no longer maintained or that had had their functionality changed over the years. This forced us to keep the old versions or spend valuable time updating the system to take advantage of new libraries, which often meant doing work that offered no measurable value.
So on to facade services.
Facade services are often used when you have a complex underlying system and need to open it up for other services to use, but you don’t want to open the entire endpoint. So you create a proxy interface instead, which offers just the functionality you want it to. For example, you might create a REST service to read data from a SOAP system.
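To make that concrete, here is a minimal sketch of what such a facade endpoint could look like in Go (the language I eventually settled on later in this post). The legacy SOAP address, the GetCustomer operation, and the envelope shape are all hypothetical stand-ins for whatever your old system actually exposes.

```go
// A minimal sketch of a facade endpoint: a REST call on the outside,
// a SOAP call to the legacy system on the inside. The legacy URL, the
// GetCustomer operation, and the envelope shape are hypothetical.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

const legacySoapURL = "http://legacy.internal/CustomerService" // assumed legacy SOAP endpoint

func getCustomer(w http.ResponseWriter, r *http.Request) {
	id := strings.TrimPrefix(r.URL.Path, "/customers/")

	// Build the request the old system expects (the shape here is illustrative only).
	envelope := fmt.Sprintf(`<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><GetCustomer><Id>%s</Id></GetCustomer></soap:Body>
</soap:Envelope>`, id)

	resp, err := http.Post(legacySoapURL, "text/xml", bytes.NewBufferString(envelope))
	if err != nil {
		http.Error(w, "legacy system unavailable", http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()
	raw, _ := io.ReadAll(resp.Body)

	// A real facade would parse the SOAP XML into proper fields; wrapping it keeps the sketch short.
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]string{"id": id, "legacyResponse": string(raw)})
}

func main() {
	http.HandleFunc("/customers/", getCustomer)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Callers of the facade only ever see the REST endpoint; the SOAP details, and eventually the legacy system itself, can be swapped out behind it.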
My first experience transitioning a system into a facade service was when we did just that: We had a macroservice directly tied to an HTML frontend, and we needed a way to open it up for a mobile app to use.
In this case, we already had an interface that was open to the world, so the idea for the facade service was to initially create a simple proxy that only relayed the requests and didn’t offer any other functionality.
Creating something like this and having a load balancer use it as the primary interface, with the actual API as the secondary, allows for easy testing until the proxy service is stable. Once that's done, it becomes a lot easier to replace old functionality with new code, as well as to add features to the API that didn't exist before.
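As a rough illustration, a pass-through proxy like that can be only a few lines of Go using the standard library's reverse proxy. The address of the old API is, of course, an assumption here.

```go
// A minimal sketch of the initial relay-only proxy, assuming the existing
// macroservice API is reachable at old-api.internal:8000 (hypothetical).
// At this stage it forwards every request untouched.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	oldAPI, err := url.Parse("http://old-api.internal:8000") // assumed address of the old API
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(oldAPI)

	// The load balancer can point at :8080 as primary and at the old API as secondary.
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```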
New Functionality, New Technology, in a Safe Way
After our facade proxy API goes live, it becomes easier to replace existing functionality with new code, as well as to add new features to the system. All you have to do is analyze each incoming request.
For example, does the IP, username, or some other attribute tell us the request comes from a developer? Route it to the new service. If not, use the old one. For users of your system, the addition is completely transparent, aside from perhaps a few milliseconds of extra latency, but from a developer's point of view, it lets you safely test and deploy the new services.
For example, if the endpoint is just a rewrite, you can have 10 percent of requests use the new endpoint and 90 percent use the old one, then analyze speed, stability, and the like. If things go wrong, switching back to only the old system is just a toggle away.
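Here is a sketch of how that routing logic might sit inside the facade, combining the developer check and the percentage split. The backend addresses, the developer IP list, and the toggle are all hypothetical; in practice you'd likely read them from configuration rather than hard-code them.

```go
// A sketch of per-request routing in the facade: requests from known developer
// IPs and a configurable share of ordinary traffic go to the new microservice,
// everything else stays on the old system. All addresses are assumptions.
package main

import (
	"log"
	"math/rand"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
)

var (
	developerIPs   = map[string]bool{"203.0.113.10": true} // assumed developer source IPs
	useNewEndpoint = true                                  // the "toggle": flip to false to roll everything back
	newTrafficPct  = 10                                    // share of ordinary traffic sent to the rewrite
)

func main() {
	oldBackend, _ := url.Parse("http://old-api.internal:8000")     // assumed old API
	newBackend, _ := url.Parse("http://new-service.internal:9000") // assumed new microservice

	oldProxy := httputil.NewSingleHostReverseProxy(oldBackend)
	newProxy := httputil.NewSingleHostReverseProxy(newBackend)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ip, _, _ := net.SplitHostPort(r.RemoteAddr)
		switch {
		case !useNewEndpoint:
			oldProxy.ServeHTTP(w, r) // kill switch: everything back on the old system
		case developerIPs[ip]:
			newProxy.ServeHTTP(w, r) // known developer: always exercise the new service
		case rand.Intn(100) < newTrafficPct:
			newProxy.ServeHTTP(w, r) // e.g. 10 percent of ordinary traffic
		default:
			oldProxy.ServeHTTP(w, r) // the remaining 90 percent
		}
	})

	log.Fatal(http.ListenAndServe(":8080", handler))
}
```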
Having an endpoint such as this also allows you to do a lot of new things that might have previously been impossible.
Another system I was working with had a very spiky load. Sometimes there were a lot of users running complicated requests, which resulted in higher loads and longer response times. The system didn't have a proper way of replicating itself.
Running several instances of it, or building a feature to boot up more instances to run simultaneously, would have been difficult, since the underlying system wasn't designed with that in mind at all. In the end, it would have required a lot of man-hours for something that didn't actually happen that often. It looked bad when it happened, but it wasn't a real problem.
“We can rebuild him. We have the technology.”
So on to new technologies. My choice for building the new architecture was clear: Docker containers. Not only do containers prevent the obvious "it worked on my computer" problems, but having isolated packages that you can multiply at will, and that are modular by design, makes the change from the old system to the new one noticeably easier and cheaper.
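For instance, packaging the Go facade above as a container image can be as small as the following Dockerfile sketch. The Go version, base images, and exposed port are assumptions; the only requirement is that the proxy's source and its go.mod sit in the build context.

```dockerfile
# A minimal sketch of containerizing the facade proxy. Go version, base images,
# and the exposed port are illustrative assumptions.
FROM golang:1.21 AS build
WORKDIR /src
# Assumes the proxy's main.go and go.mod are in the build context.
COPY . .
RUN CGO_ENABLED=0 go build -o /facade .

FROM alpine:3.19
COPY --from=build /facade /facade
EXPOSE 8080
ENTRYPOINT ["/facade"]
```

From there, `docker build -t facade .` and `docker run -p 8080:8080 facade` give you an instance you can start and stop at will, which is what makes spawning extra endpoints during load spikes cheap.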
Instead of having large instances with a lot of resources standing idle during off hours, you can easily spawn more endpoints when needed, saving you money and, in our case, fixing cosmetic problems such as the slowed response times.
Apart from Docker containers, the framework I originally went for when building the new system was Node.js. I had a good deal of experience with it, but after checking where the cutting edge stands, it seemed that Node.js was losing ground fast as the go-to for simple web APIs, and the new king of the hill was Go, to the point that even the main developer of Node.js had jumped ship.
Go is fairly easy to learn if you have a background in C-like languages. It has matured and has a lot of qualities that really make it the language to go to for small- to medium-sized web applications.
With Docker, Facade Services Open Up Possibilities
Because Docker lets you build your microservices with the tools of your choice, you're no longer locked into one language and one framework. Each microservice can use whatever suits its task best.
Instead of having to rely on developers who have learned an old system and its secrets, you can contract a small team to develop a specific piece of functionality and simply hand them the documentation for the old API, along with dummy databases, services, and the like to develop against. The contracted team no longer has to become an expert in the whole application, only in the specific feature it's building. This alone opens the door to new possibilities for the future of your application.
Your in-house developers also won't spend as much time adding features to an obsolete system and creating more dependencies to maintain. Instead, they can build new features that can be deployed as soon as they're written and tested, without having to test the whole system against every change.
There will still be some unprofitable work in creating the skeleton functionality to replace the old one, but building a prototype microservice is far more cost-effective than even starting to arrange meetings about designing a whole new system. And in the end, microservices created this way might require a smaller workforce to maintain and enrich your service.
Conclusion
I think it's apparent that microservices are the answer for today's development needs. Together with Docker containers, they offer fewer dependencies and a higher degree of scalability than you could easily achieve with traditional macroservices. They also make it easier to keep a small, focused development crew and even to outsource future feature development with relative ease.
We'll follow up in the next post in the series with a tutorial for deploying a facade service to AWS as a Docker container.