Following on from yesterday's post, "Do One Thing And Do It Well".
The ASP.NET 4.5, MVC 5 application that I'm migrating from runs as a monolith, but the Web API controllers are split across multiple "area" projects in a Visual Studio solution. At the same time as I ported them to ASP.NET 5 and MVC 6, I was learning about Docker and containers, and how easy they make it to run, deploy and manage multiple services. So instead of a basic, naive port, I added a `Startup.cs` file to each of the "area" projects and got them running in their own containers. OK, it ended up being a bit more complex than that, but the end result is a more modular, maintainable solution.
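For reference, the `Startup.cs` added to each area project doesn't need much. A minimal sketch, assuming RC1-era ASP.NET 5 namespaces (earlier betas used `Microsoft.Framework.DependencyInjection` instead), might look like this:

```csharp
// Minimal Startup.cs for one "area" service (illustrative, not the actual code).
// Registers MVC 6 and routes requests to the attribute-routed Web API
// controllers that were already in the area project.
using Microsoft.AspNet.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvc();
    }
}
```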
My original solution contained several shared libraries for handling things like user authentication, secure credential storage, messaging and so on. These are implemented as you'd expect, with interfaces and implementations to allow mocking in tests.
Migrating these to ASP.NET 5 was very simple: for the most part I just used `yo aspnet` to generate a class library and added the existing `.cs` files to it. Reference the class library from each service project, and done.
But then I got to thinking about this "do one thing" idea, and wondering if it applied to containers as well. If I have half a dozen containers all using the same library to achieve the same thing, have I violated some kind of rule? Or at least, could there be a better way?
As an example, the project uses JSON Web Tokens for authentication. Validating these tokens and extracting the claims from them needs to happen in all the services that make up the solution. The code to do that was in a shared library, referenced by all the service projects.
As I got to the stage of deploying and testing the new code, I noticed a couple of problems with this approach:
- The library used an `InMemoryCache` for a couple of things, and this cache was being created – and consuming memory – in multiple containers, even though the data it held was likely duplicated across all of them
- If I changed the library, I would have to rebuild and redeploy every container that referenced it
The surface API of the library was incredibly basic: a single method which took a `string` argument and either returned a `ClaimsPrincipal` or threw an exception. That's effectively a web service with a single endpoint – which seemed to me to be what all this microservices talk was about.
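In code, the whole surface might look something like this (the interface and method names here are illustrative, not the actual ones from the project):

```csharp
using System.Security.Claims;

// Illustrative shape of the shared library's entire public surface:
// one method that either validates the token or throws.
public interface IJwtValidator
{
    // Returns the principal built from the token's claims,
    // or throws if the token fails validation.
    ClaimsPrincipal ValidateToken(string encodedJwt);
}
```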
Again, the refactoring was very simple. I added a `Startup.cs` file to the JWT handler project and wrapped the exposed method in a POCO controller with a single `POST` action, which takes the base64-encoded token and returns either the JSON representation of the `ClaimsPrincipal` or a 403 status code if the token is invalid.
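The controller is barely more than a try/catch around that one call. A sketch, assuming a hypothetical `IJwtValidator` with a single `ValidateToken(string)` method, and RC1-era MVC 6 types:

```csharp
using System;
using Microsoft.AspNet.Mvc;

// Sketch of the POCO controller: no base class is needed in MVC 6,
// and constructor dependencies are supplied by the built-in DI container.
[Route("check")]
public class JwtController
{
    private readonly IJwtValidator _validator;

    public JwtController(IJwtValidator validator)
    {
        _validator = validator;
    }

    [HttpPost]
    public IActionResult Post([FromBody] string encodedJwt)
    {
        try
        {
            // ObjectResult serializes the ClaimsPrincipal to JSON.
            return new ObjectResult(_validator.ValidateToken(encodedJwt));
        }
        catch (Exception)
        {
            return new HttpStatusCodeResult(403);
        }
    }
}
```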
As a bonus, because the service is so simple, I was able to use the .NET Core runtime instead of the much heavier Mono (most of the rest of the projects still have to run on Mono because they have dependencies that haven't yet been ported to Core).
Consuming this service from the other containers is trivial. I use Docker Compose to spin up the entire solution (I even have live reloading; another post on that soon). In the Compose config file, you can specify a `container_name`, and that name gets added to the `/etc/hosts` file in each container. I give the JWT container the name `jwtauth`, and then I can do `await httpClient.PostAsync<ClaimsPrincipal>("http://jwtauth/check", ...)` from all the other containers. (That's not a bulletproof production-ready solution, but it works for now.)
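The relevant part of the Compose file is only a couple of lines per service. The paths and the second service name below are made up for illustration; `container_name` is the important bit:

```yaml
# docker-compose.yml (v1 format) – illustrative fragment.
jwtauth:
  build: ./src/JwtAuthService
  container_name: jwtauth   # resolvable as http://jwtauth/ from linked containers
orders:
  build: ./src/OrdersService
  links:
    - jwtauth               # adds a jwtauth entry to this container's /etc/hosts
```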
Now the in-memory caches exist in a single, shared container, and if I change the implementation of the JWT authentication, I just deploy a new version of that one container and the others don't know the difference. The overhead of a network call over the Docker network bridge is minimal, although at some point I'd like to replace it with a UNIX domain socket to make it even quicker. Right now, though, the CoreFX `HttpMessageHandler` on Linux doesn't support that (although the Kestrel HTTP server will serve over a domain socket).
All The Things?
Once I realised how frictionless it was to break out even the simplest of libraries into a stand-alone service, I started spotting more places where I could apply the pattern. Sometimes it was a shared library (or some instances of shamefully duplicated code); in one case it was thinking "this would be so much easier in Node..."
Obviously some restraint is needed, otherwise you'll end up with every class or every method of every class running as a service, with all the attendant complexity and process overhead.
Thoughtfully applied and implemented sensibly, though, the microservice pattern makes a lot of sense in a container-based environment where multiple containers need some shared functionality. And it works best when each service does one thing, and does it well.