Microservices Pattern: Semver Auto Deployment

Use semantic versioning to stay in control while seamlessly deploying microservices to production

Ronny Roeller, Mar 19

Who controls what?

Giving autonomy to service owners is one of the major goals when adopting a microservice architecture.

Developers should improve & operate their service at their own pace with minimal dependency on other services.

Yet, the overall product consists of many microservices — and all of them have to work together to give users a proper experience.

Hence, there will be shared infrastructure like event buses to connect services.

In turn, there will be an Ops/infrastructure team that maintains the shared infrastructure and provides guidance to the service teams.

Soon the question arises: How much control should the service developers have over deployments, and when should the Ops team get involved?

Extreme answers

Let’s consider two extreme cases of how to handle deployments:

- Ticket to Ops team: Service developers create a new release, and then ask the Ops team to deploy it to production.
- Continuous Deployment: Service developers push new code, the CI system builds a new release and automatically deploys it to production.

We found both approaches problematic.

Asking the Ops team to deploy is a sure way back to six-month release cycles: developers hate to ask, Ops hates to execute monotonous tasks, and everybody is perfectly aligned to deploy as seldom as possible.

So, that’s a no-go.

More interestingly, Continuous Deployment turned out to be problematic as well:

- Continuous Deployment requires highly disciplined developers (e.g. review processes) and a mature code base (e.g. broad automated testing) to prevent rolling out broken code to production.
- Sometimes pushing every commit to production just isn’t a good idea. For example, we don’t want to lose browser caching due to a cosmetic commit to our web application.
- Coordinated deployments become really hard, e.g. if an API change requires two services to be deployed in sync. This becomes particularly challenging when a change impacts infrastructure, for which Ops surely wants to be involved.

So, is there something workable in between these two extremes?

Signal impact via Semver

We ended up settling on a compromise that a) gives the service team control over what gets deployed when and b) involves the Ops team when it matters.

Firstly, we auto-deploy to production, but only tagged versions.

This means that service developers keep some control of what gets deployed — without requiring any manual deployment work.
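As a rough illustration, the “only tagged versions” gate could look like the following Python sketch. The environment variable name and the deploy hand-off are assumptions for the example, not our actual tooling.

```python
import os
import re

# A release is anything tagged like v<major>.<minor>.<patch>, e.g. v1.0.1.
RELEASE_TAG = re.compile(r"^v\d+\.\d+\.\d+$")

def is_release_tag(ref):
    """Return True if the CI ref looks like a semver release tag."""
    return bool(ref and RELEASE_TAG.match(ref))

if __name__ == "__main__":
    # Hypothetical variable set by the CI system for tag-triggered builds.
    tag = os.environ.get("CI_TAG", "")
    if is_release_tag(tag):
        print(f"Release tag {tag} detected - handing over to the deploy step")
    else:
        print("Untagged commit - building only, no production deployment")
```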

Secondly, we use semantic versioning to signal the expected impact of a new version.

Semantic versioning is the practice of assigning version numbers based on the severity of the change. For example, v1.0.1 will have only small patches compared to v1.0.0; v1.1.0 will come with new features compared to v1.0.0; and v2.0.0 will contain big, breaking changes.

The beauty of semantic versioning is that it removes the need to write long change descriptions, based on which the Ops team would have to guess the potential impact.

Instead, the service developer gives their estimate of the change impact in a widely used, machine-understandable format (v1.0.1).
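To make that concrete, here is a minimal Python sketch of how a deployment system could derive the impact estimate purely from two version tags; the function names are ours, not part of any particular tool.

```python
import re

def parse_semver(tag):
    """Split a tag like 'v1.0.1' into (major, minor, patch) integers."""
    match = re.match(r"^v(\d+)\.(\d+)\.(\d+)$", tag)
    if not match:
        raise ValueError(f"not a semver tag: {tag}")
    major, minor, patch = (int(part) for part in match.groups())
    return major, minor, patch

def bump_type(previous, new):
    """Classify the expected impact of `new` relative to `previous`."""
    old_major, old_minor, _ = parse_semver(previous)
    new_major, new_minor, _ = parse_semver(new)
    if new_major > old_major:
        return "major"  # big, breaking changes: Ops gets involved
    if new_minor > old_minor:
        return "minor"  # new features, backwards compatible
    return "patch"      # small fixes only

assert bump_type("v1.0.0", "v1.0.1") == "patch"
assert bump_type("v1.0.0", "v1.1.0") == "minor"
assert bump_type("v1.0.0", "v2.0.0") == "major"
```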

Automatically handle predictable changes, collaborate for risky changes

At the start of a service’s life, the team agrees with the Ops team on which kinds of releases should be auto-deployed: only patch releases, or also all minor releases.

By definition, pushing a new major release always requires Ops to be involved.

Based on this discussion, Ops sets up an auto-deploy rule, e.g. v0.x.x, which auto-deploys all patch and minor releases.
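One possible shape for such a rule, again only as a sketch and assuming that “x” acts as a wildcard in the configured pattern:

```python
def matches_rule(rule, tag):
    """Check a tag such as 'v0.3.7' against an auto-deploy rule such as 'v0.x.x'.

    Every 'x' in the rule is a wildcard; fixed numbers must match exactly.
    """
    rule_parts = rule.lstrip("v").split(".")
    tag_parts = tag.lstrip("v").split(".")
    if len(rule_parts) != len(tag_parts):
        return False
    return all(r == "x" or r == t for r, t in zip(rule_parts, tag_parts))

# v0.x.x auto-deploys every patch and minor release within major version 0 ...
assert matches_rule("v0.x.x", "v0.3.7") is True
# ... while a new major version falls outside the rule and needs a ticket to Ops.
assert matches_rule("v0.x.x", "v1.0.0") is False
```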

Here is what’s happening during development: For a small fix, the developer creates a patch tag and our deployment system automatically deploys the new version to production.

Developers can deploy major releases to an integration environment to test them.

Then they open a ticket for Ops, describing how the auto-deploy version should be updated (e.g. from v1.x.x to v2.x.x) and what the change contains (allowing Ops to do their own impact assessment).

The ticket could also request to do a synchronized update with another service (e.g. upgrade service-A from v1.x.x to v2.x.x and service-B from v3.1.x to v3.2.x).

Pitfalls and learnings

We’ve been using semver-based auto deployments for over 2 years.

It led to higher velocity for developers, minimized repetitive work for Ops, and thereby removed a lot of frustration between developers and Ops.

Along the way, we learned two important lessons.

First, developers have to truly understand semantic versioning.

In particular, they need to internalize that “version numbers are cheap”.

When in doubt: make it a minor release instead of a patch release.

In short, seeing version v8.42.231 makes us feel relaxed; v1.0.1 is the red flag.

Second, the goal is in no way to avoid communication between developers and Ops.

Instead, we only want to get rid of the frustrating conversations to which Ops can’t add anything valuable.

We invest the saved time in deeper conversations about the hard cases.

Especially when a new release might impact infrastructure: Ops is the developer’s best friend for a smooth deployment to production.

Happy coding!
