Why Microservices Should Scare You More

Benjamin Huston · May 14

Can’t I just write lots of small programs?

Image by PublicDomainPictures from Pixabay

Another day passes of you trawling the internet for a new way to rewrite your entire codebase.

You’ve considered using a new language, deploying on a different cloud, and using a niche programming pattern.

Then you find it.

Microservices.

Suddenly you have justification to do all three.

For the uninitiated, “microservices” is an architectural pattern where a system is composed of numerous different applications, all of which run in their own process and communicate with each other over a chosen protocol.

In your opinion, this should solve all of your problems:

Designing new products or features is as easy as creating a new codebase which owns that particular problem.

It’s the ultimate realisation of development modularity.

Large applications with numerous dependencies can be broken down, making it quicker and easier to run test suites and reason about domain logic.

You’ll get to feel like Google.

Microservices work on API contracts using something like RESTful APIs or gRPC.

If you’ve started building PWAs which communicate with backend services using this architecture, microservices feel even more grown-up.

You’ll get to feel like Monzo.

You can use different languages and stacks in each service in order to ‘use the best toolsets for the job’.
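To make the "API contract" idea concrete, here is a minimal sketch of what a single endpoint of a hypothetical image-resizing service might look like in Go. The route, field names and types are invented for illustration; they are not taken from any particular framework or real system.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// ResizeRequest and ResizeResponse are the whole "contract" for this
// endpoint: any caller that speaks JSON over HTTP can use it without
// sharing any code with the implementation.
type ResizeRequest struct {
	ImageURL string `json:"image_url"`
	Width    int    `json:"width"`
	Height   int    `json:"height"`
}

type ResizeResponse struct {
	ResizedURL string `json:"resized_url"`
}

func handleResize(w http.ResponseWriter, r *http.Request) {
	var req ResizeRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "invalid request body", http.StatusBadRequest)
		return
	}
	// Real resizing logic would live here; this just echoes a placeholder URL.
	resp := ResizeResponse{
		ResizedURL: fmt.Sprintf("%s?w=%d&h=%d", req.ImageURL, req.Width, req.Height),
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/v1/resize", handleResize)
	http.ListenAndServe(":8080", nil)
}
```

Any other service, whatever language it is written in, only needs to honour this JSON shape, which is exactly the appeal.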

What could possibly go wrong?

Below I give three arguments why microservices are not an easy win.

I do think that they are powerful for architecting large and scalable systems, but it is essential to recognise the operational and technical debt involved with their adoption.

The Standardisation Problem

The obvious way to get started writing microservices is to take your standard application framework and write a couple of cheeky services with it.

Simple.

This is probably the approach the Django, Rails, .NET Core and Spring Boot crews would advocate, and it’s certainly not considered an anti-pattern.

You might have services for functionality such as push notifications, image resizing, and isolated bits of domain logic.

The issue here is that there are elements across all of the codebases which you may wish to standardise:

- Metrics.
- Managing cloud-specific logic such as using object storage or a key management service.
- Secret management and any use of environment variables.
- Schema definitions for API contracts.
- Server logic including validation, authentication, authorisation, logging and rate limiting.
- Database abstractions.

You could standardise these through creating a shared library or simply reimplementing the logic again and again.
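As a toy illustration of the shared-library route, here is what a tiny in-house Go package for standardised request logging and authentication might look like. The package name, header and behaviour are all assumptions made for the example; a real library would also cover metrics, secrets and the other items listed above.

```go
// Package servicekit is a hypothetical in-house library that every
// service imports so that logging and authentication behave identically.
package servicekit

import (
	"log"
	"net/http"
	"time"
)

// WithStandardMiddleware wraps a handler with the cross-cutting concerns
// each service would otherwise reimplement for itself.
func WithStandardMiddleware(next http.Handler, apiKey string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()

		// Standardised authentication: a shared (illustrative) header check.
		if r.Header.Get("X-Internal-Api-Key") != apiKey {
			http.Error(w, "unauthorised", http.StatusUnauthorized)
			return
		}

		next.ServeHTTP(w, r)

		// Standardised logging: every service emits the same log line format.
		log.Printf("%s %s took %s", r.Method, r.URL.Path, time.Since(start))
	})
}
```

The trouble, as set out below, is that once a dozen services import a package like this, every fix to it has to be rebuilt and redeployed a dozen times.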

There are two considerable issues with this:

- Changes made to a shared library will not automatically be updated across all of your services. This makes things like security fixes slow and unreliable to propagate. It also creates an incentive not to optimise these shared components.
- Adopting new technologies or languages will result in the need to create and maintain a new library for those technologies before you can even get started.

Similar arguments are made in an awesome lecture by Ben Christensen where he discusses the risks of creating a “distributed monolith”.

It’s an unlisted video which is linked to from Monzo’s blog post here.

I think Monzo’s blog also hints at a solution to some of these problems when it mentions that they are considering building services for all “shared infrastructure” such as databases and message queues.

This enables all of the cross-cutting service dependencies to be expressed in API contracts rather than shared codebases.

This is very neat, but it clearly requires a lot of work.
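To illustrate what that might look like from a service’s point of view, here is a hedged sketch of publishing to a queue through a hypothetical internal "queue service" over plain HTTP, rather than through a shared library or cloud SDK. The URL, endpoint and payload shape are made up for the example.

```go
package events

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// publishEvent talks to a hypothetical internal queue service over HTTP.
// The dependency is expressed as an API contract rather than a shared
// library, so the queue team can change their implementation freely.
func publishEvent(topic string, payload any) error {
	body, err := json.Marshal(map[string]any{"topic": topic, "payload": payload})
	if err != nil {
		return err
	}
	resp, err := http.Post("http://queue-service.internal/v1/publish",
		"application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusAccepted {
		return fmt.Errorf("queue service returned %d", resp.StatusCode)
	}
	return nil
}
```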

The DevOps Problem

Needless to say, when you are deploying lots of services, continuous integration and deployment could become a headache.

Here are two examples:

- Sophisticated end-to-end tests might be required to ensure that all services obey their API contracts. But what do you do if you have a service, such as a mailer, which needs to be mocked out for certain test conditions when numerous services are under test? Do you replace the entire service, or have some logic in the service itself to recognise when it is in an end-to-end test scenario? (One option is sketched after this list.)
- The more services you have, the more time you will spend configuring your deployments. For example, if you have an architecture with each service having its own database, different secrets may need to be provisioned for each service. This creates a greater surface area for misconfigurations in production and makes it harder to automate the creation of new environments.
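On the mailer question, one common answer is to stand up a fake mailer for the duration of the test and point everything else at it. Here is a rough sketch using Go’s httptest package; the /v1/send endpoint and the MAILER_URL environment variable are assumptions about how such a system might be wired together.

```go
package e2e

import (
	"net/http"
	"net/http/httptest"
	"os"
	"sync/atomic"
	"testing"
)

// TestSignupSendsWelcomeEmail mocks the mailer out entirely: a fake HTTP
// server stands in for it, and the services under test are pointed at the
// fake through configuration instead of teaching the real mailer about a
// "test mode".
func TestSignupSendsWelcomeEmail(t *testing.T) {
	var sent int64

	fakeMailer := httptest.NewServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			if r.URL.Path == "/v1/send" { // hypothetical mailer endpoint
				atomic.AddInt64(&sent, 1)
			}
			w.WriteHeader(http.StatusAccepted)
		}))
	defer fakeMailer.Close()

	// The system under test reads the mailer's address from configuration,
	// so the test simply points it at the fake (the variable name is illustrative).
	os.Setenv("MAILER_URL", fakeMailer.URL)

	// ... drive the signup flow through the public API here ...

	if got := atomic.LoadInt64(&sent); got != 1 {
		t.Errorf("expected exactly one welcome email, got %d", got)
	}
}
```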

The riposte of the microservice engineer is probably going to involve a shed load of .yaml files and a container orchestrator such as Kubernetes.

But I don’t think I would touch Kubernetes unless I could afford to have a dedicated engineer to manage it.

Here’s why:

- Networking is hard. Distributed systems presume networking between services, but doing this securely and reliably takes some thought. What happens if there is a service timeout? Do we redirect failed HTTP requests? Do we need to encrypt traffic? (A minimal timeout-and-retry sketch follows after this list.)
- Setting up an orchestrator is not easy. Cloud providers will still require you to think about what hardware you are provisioning, and this often feels like a far cry from the equally trendy serverless solutions appearing on the market. In some cases, serverless architectures could be a better choice for an early-stage startup looking for a small amount of deployment configuration.
- Defining services, however much I like the declarative approach of manifest files, is not as simple as some people will say. Doing things properly and defining resource limits as well as scaling policies is rarely obvious and will require experimentation before a production release can be considered ready.
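On the networking point, even something as small as a timeout needs an explicit answer in every calling service. Here is a minimal client-side sketch in Go, assuming a plain HTTP call with a crude bounded retry; a real system would also want proper backoff, retry budgets and a check that the endpoint is safe to retry at all.

```go
package client

import (
	"fmt"
	"net/http"
	"time"
)

// callWithRetry gives an inter-service call an explicit timeout and a
// small, bounded number of retries instead of waiting forever.
func callWithRetry(url string, attempts int) (*http.Response, error) {
	httpClient := &http.Client{Timeout: 2 * time.Second}

	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := httpClient.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		if err != nil {
			lastErr = err
		} else {
			resp.Body.Close()
			lastErr = fmt.Errorf("server error: %d", resp.StatusCode)
		}
		// Crude linear backoff; whether a retry is even safe depends on the
		// endpoint being idempotent, which each service has to decide.
		time.Sleep(time.Duration(i+1) * 200 * time.Millisecond)
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}
```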

None of these problems are insurmountable.

In fact I’m looking forward to Knative products (such as Google Cloud’s Cloud Run) bringing elements of the serverless paradigm to more traditional microservice deployments.

Equally, CI/CD problems can be solved by a smart and slightly neurotic development team.

But, again, it’s not easy and it needs to be properly planned.

The Programming Problem

Changing to a microservice architecture also requires a significant change in the mindset about how you write code.

Here are some examples:

Microservices often presuppose that each application should be written so that it can be horizontally scaled automatically by an orchestrator.

This might require some thought for those coming from monoliths due to issues such as managing connection pools to shared resources and sharing in-memory information.

An orchestrator such as Kubernetes may also frequently shut down or spin up services and expects that any application can handle this.

Despite this, graceful shutdowns are not always easy to reason about and — depending on your stack — may require some thought about logging and ensuring any ongoing computation is completed.
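As a sketch of what handling this can look like in practice, here is roughly how a Go HTTP service might drain in-flight requests when the orchestrator sends it SIGTERM. The 25-second drain window is an arbitrary choice for the example.

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080", Handler: http.DefaultServeMux}

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("server error: %v", err)
		}
	}()

	// Kubernetes (and most orchestrators) signal shutdown with SIGTERM and
	// expect the process to exit cleanly within a grace period.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	log.Println("shutting down: draining in-flight requests")
	ctx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("forced shutdown: %v", err)
	}
}
```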

Finally, many microservice architectures will make use of event-based architectures built with a message queue.

Some message queue technologies will guarantee at least once delivery, requiring handlers to be idempotent in case messages are duplicated.

In a large system, this can be quite a headache to achieve, test and maintain.
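One common way to cope with at-least-once delivery is to record which message IDs have already been handled and drop duplicates. Here is a minimal in-memory sketch in Go; a production version would need a durable store shared across instances, or a unique constraint in the database the handler writes to.

```go
package worker

import (
	"log"
	"sync"
)

// Message is whatever the queue delivers; ID must be stable across redeliveries.
type Message struct {
	ID   string
	Body []byte
}

// dedupHandler makes processing idempotent by remembering which message IDs
// have already been handled. An in-memory map only works for a single
// instance; production systems need a shared, durable record.
type dedupHandler struct {
	mu   sync.Mutex
	seen map[string]bool
}

func newDedupHandler() *dedupHandler {
	return &dedupHandler{seen: make(map[string]bool)}
}

func (h *dedupHandler) Handle(msg Message) {
	h.mu.Lock()
	if h.seen[msg.ID] {
		h.mu.Unlock()
		log.Printf("duplicate message %s ignored", msg.ID)
		return
	}
	// Marking before processing keeps the sketch simple; a real handler also
	// has to decide what happens if processing fails after the ID is recorded.
	h.seen[msg.ID] = true
	h.mu.Unlock()

	// ... actual processing goes here; it now runs at most once per ID ...
}
```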

To be fair, the three points above are probably things to think about in deployments in most architectures.

However, microservices may mean you need to face these problems numerous times for each of your services.

Conclusion

Microservices have their time and place, but their barrier to entry is much higher than I think many realise.

I would need quite a good deal of convincing that the architecture can be adopted gradually or without a great deal of effort and expertise in the design and planning stages.
