How the Lagom framework enables scalable, reactive Microservices in Java and Scala

Each will incorrectly assume that the data they have written is still present.

Java gives us a number of concurrency primitives to combat this: the synchronized keyword, synchronized blocks, various lock types, atomic variables, futures, and ExecutorService thread pools.
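To make this concrete, here is a minimal sketch (class and field names are my own, for illustration) using two of those primitives: an ExecutorService thread pool driving concurrent updates against both a plain int field and an AtomicInteger. The atomic variable always ends at the expected total; the plain field can silently lose updates.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {
    static int unsafeCount = 0;                                 // plain field: concurrent updates can be lost
    static final AtomicInteger safeCount = new AtomicInteger(); // atomic variable: updates never lost

    public static int run() {
        ExecutorService pool = Executors.newFixedThreadPool(4); // one of Java's concurrency primitives
        for (int i = 0; i < 10_000; i++) {
            pool.submit(() -> {
                unsafeCount++;                     // read-modify-write: not atomic, racy
                safeCount.incrementAndGet();       // atomic read-modify-write
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return safeCount.get();                    // reliably 10_000; unsafeCount may fall short
    }
}
```

The catch is that nothing forces you to pick the AtomicInteger over the plain field — the compiler accepts both, which is exactly the problem the next paragraph describes.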

But ensuring that all those concurrency primitives are correctly applied across your application is a mammoth task, and verifying that the objects in your application are fully and correctly synchronized is a challenge for which few supporting tools exist.

When these types of bugs do arise, often the only solution is a highly caffeinated programmer staring long and hard at the code until the problem becomes clear.

We know that synchronizing data is hard, but let’s presume that somehow you are able to ensure that all thread-shared data is correctly synchronized.

Let’s delve deep into our collective imaginations and envision a world where you have synchronized, locked, and futured your way into a complex multithreaded application which, against all odds, does not have inconsistent shared data.

Even in this magical world of perfect mutexes and locks, you are not yet out of the woods when it comes to concurrency bugs: your application may still experience deadlocks, where two or more threads form a dependency cycle, each holding a shared lock while waiting for a lock that another thread owns.

In the deadlock scenario, both threads are fully dead in the water without manual intervention.
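The classic two-lock deadlock can be reproduced in a few lines. In this sketch (class and method names are illustrative), two threads acquire the same pair of ReentrantLocks in opposite orders, the JVM's built-in ThreadMXBean confirms the resulting cycle, and the "manual intervention" is an interrupt delivered to both threads.

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockDemo {
    static final ReentrantLock lockA = new ReentrantLock();
    static final ReentrantLock lockB = new ReentrantLock();

    // Start a thread that takes `first`, pauses, then blocks trying to take `second`.
    static Thread grabThenGrab(ReentrantLock first, ReentrantLock second) {
        Thread t = new Thread(() -> {
            try {
                first.lockInterruptibly();
                try {
                    Thread.sleep(100);              // give the other thread time to take its first lock
                    second.lockInterruptibly();     // blocks forever: the other thread holds it
                    second.unlock();
                } finally {
                    first.unlock();
                }
            } catch (InterruptedException e) {
                // interrupted: the only way out of the cycle
            }
        });
        t.start();
        return t;
    }

    public static boolean demo() {
        try {
            Thread t1 = grabThenGrab(lockA, lockB);  // t1: A then B
            Thread t2 = grabThenGrab(lockB, lockA);  // t2: B then A -> dependency cycle
            Thread.sleep(500);                       // let both threads block on each other
            long[] deadlocked =
                ManagementFactory.getThreadMXBean().findDeadlockedThreads();
            t1.interrupt();                          // manual intervention breaks the cycle
            t2.interrupt();
            t1.join();
            t2.join();
            return deadlocked != null && deadlocked.length == 2;
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Note that the escape hatch only works because the threads used lockInterruptibly(); a thread blocked in a plain lock() or a synchronized block ignores interrupts entirely.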

Another contention-related issue you will experience even with perfect synchronization is race conditions.

Back to our thread diagram: imagine a scenario where 95% of the time the red thread updates the data before the green thread, but 5% of the time instead the green thread updates the data first.

Timing on thread ordering can affect overall program behaviour, even though the actual code that is executed in both the 95% and 5% scenarios is exactly the same.
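A tiny sketch of this (names are my own) shows two perfectly synchronized threads whose final outcome still depends on which one the scheduler runs last — the code is identical on every run, but the result is not guaranteed to be.

```java
public class OrderingDemo {
    public static String lastWriter() {
        final String[] data = new String[1];
        final Object lock = new Object();
        // Both writes are correctly synchronized: no lost updates, no torn reads.
        Thread red   = new Thread(() -> { synchronized (lock) { data[0] = "red"; } });
        Thread green = new Thread(() -> { synchronized (lock) { data[0] = "green"; } });
        red.start();
        green.start();
        try {
            red.join();
            green.join();
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return data[0];   // ...yet the final value depends entirely on scheduling order
    }
}
```

Run it in a loop and you will usually see one answer with the occasional surprise — exactly the 95%/5% behaviour described above, and exactly the kind of bug that evades tests.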

In contrast to the traditional concurrency model: The actor model

In direct contrast to the traditional model of sharing data between threads is the actor model.

In this model, an ‘actor’ is a service that contains both some local data, and some local code that is executed on that local data.

However, critically, that local code is never allowed to access the data of any other actor.

Nor is the thread that actually runs that actor code allowed to touch any other data that is used by any other thread.

Actor code is always limited only to that actor’s own data.
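These three rules can be sketched in plain JDK code — no Akka required. This is a deliberately minimal, hand-rolled actor (the class and its messages are my own invention, not the Akka API): private state, a mailbox, and a single worker thread that is the only code ever allowed to touch that state.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CountingActor {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private final List<String> seen = new ArrayList<>();  // local data: never shared
    private final Thread worker = new Thread(this::loop); // the only thread touching `seen`

    public CountingActor() { worker.start(); }

    // Other threads interact only by dropping immutable messages into the mailbox.
    public void tell(String message) { mailbox.add(message); }

    private void loop() {
        try {
            while (true) {
                String msg = mailbox.take();   // block until a message arrives
                if (msg.equals("stop")) return;
                seen.add(msg);                 // no lock needed: single-threaded access
            }
        } catch (InterruptedException e) {
            // shut down
        }
    }

    public int stopAndCount() {
        mailbox.add("stop");
        try { worker.join(); } catch (InterruptedException e) { throw new IllegalStateException(e); }
        return seen.size();   // safe: join() guarantees the worker has finished
    }
}
```

Notice there is not a single synchronized block or lock around the actor's state; the thread-safe mailbox is the only point of contact between threads.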

In this diagram, the blue circles represent these ‘actors’.

You can imagine that the top left actor represents a mortgage service which receives customer requests for new mortgages.

In order to process a mortgage, a thread (“thread A”) executes the actor code and checks the actor’s local data to see if the customer already has a mortgage.

Once this check is complete, the mortgage actor needs to request the customer’s credit rating to determine the mortgage interest rate.

However, the customer’s credit rating is stored in a separate actor: the blue circle to the immediate right of the mortgage actor.

Since the mortgage actor needs to request a credit rating check, it sends an immutable message to the credit rating actor.

This immutable credit request message is placed in the incoming mailbox of the credit rating actor.

At this point, the mortgage actor — having other business to complete — continues to execute any remaining code that does not depend on a response from the credit rating actor.

At some point the mortgage actor will receive a response to its credit rating request, but it is not blocked on the request and can process other data during this time.

Independently, on a new thread (“thread B”) the actor code that handles credit ratings is executed.

The credit rating code checks its incoming mailbox, and discovers an incoming message from the mortgage actor.

The credit actor then checks its internal database of credit ratings, locates the customer’s credit rating, and sends a message containing that data back to the mortgage actor.

This new response message is placed in the mortgage actor’s incoming mailbox, which will be processed the next time the mortgage actor code/thread runs.

The credit rating thread then continues its other business.

As you can see, in this scenario these two threads share data between them by passing messages to each other, rather than calling methods on shared Java objects.

Notice that at no point did thread A ever have access to the local data of thread B, and vice versa.

In both cases, data was shared by passing lightweight messages between the two actors.

This lightweight messaging is the foundation of the actor concurrency model: rather than sharing data by having multiple threads call methods on shared objects, data is passed between threads as lightweight messages via a messaging system.
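The mortgage/credit-rating exchange above can be sketched with the same hand-rolled mailbox approach (all names, ratings, and message types here are illustrative, not a real Lagom or Akka API). Each side has its own mailbox, the credit actor's ratings table is touched only by the credit actor's thread, and everything that crosses between threads is an immutable record.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CreditExchange {
    // Immutable messages: the only things allowed to cross between actors.
    record CreditRequest(String customer, BlockingQueue<CreditReply> replyTo) {}
    record CreditReply(String customer, int rating) {}

    public static int lookup(String customer) {
        BlockingQueue<CreditRequest> creditMailbox = new LinkedBlockingQueue<>();
        BlockingQueue<CreditReply> mortgageMailbox = new LinkedBlockingQueue<>();

        Thread creditActor = new Thread(() -> {
            // Local data: only this thread ever reads the ratings table.
            Map<String, Integer> ratings = Map.of("alice", 720, "bob", 640);
            try {
                CreditRequest req = creditMailbox.take();       // check own incoming mailbox
                int rating = ratings.getOrDefault(req.customer(), 0);
                req.replyTo().add(new CreditReply(req.customer(), rating)); // reply by message
            } catch (InterruptedException ignored) {}
        });
        creditActor.start();

        // Mortgage actor: post an immutable request into the credit actor's mailbox.
        creditMailbox.add(new CreditRequest(customer, mortgageMailbox));
        try {
            // Simplification: a real actor would keep working and handle the reply
            // when its own mailbox is next processed, rather than blocking here.
            CreditReply reply = mortgageMailbox.take();
            creditActor.join();
            return reply.rating();
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

At no point does either thread read or write the other's data directly — the two BlockingQueues play the role of the actor mailboxes in the diagram.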

The actor model drives Reactive programming

So that’s the actor concurrency model, and that’s the model that Akka, Lagom, and Play are based on.

By following the tenets of reactive programming via the actor model, you will get Lagom-based microservices that:

- Are lightweight
- Are message-driven
- Use an asynchronous, non-blocking thread model
- Support Apache Kafka for message passing (with general support for non-Kafka message-broker scenarios)
- Provide a holistic solution to developing distributed systems (build tools, test tools, Apache Kafka infrastructure configuration)

When building within the constraints (‘opinions’) imposed by Lagom, a Lagom-based application will therefore necessarily have the desirable reactive qualities: responsiveness, resilience, scalability, and elasticity.

This, combined with the features described above, makes it a compelling choice for moving your application development from a monolithic architecture to a scalable, microservices-based architecture.

Trade-offs

Of course, like everything in software development, there are trade-offs to any technical choice, and Lagom is no exception.

Lagom is a strongly opinionated framework, and like any opinionated framework, the farther you diverge from those opinions the greater the pain.

One instance of this is that it can be more difficult to integrate with other (multithreaded) Java libraries that don’t play well with Akka’s actor model, for example third-party libraries that manage their own threads with thread pools.

These may interfere with Akka’s ability to efficiently pass messages between actors and threads.

Lagom requires you to split your applications into a set of independent services, which will necessarily be more complex than a traditional monolithic application built on a more traditional framework (but of course with a monolithic application you lose the scaling/performance benefits of Lagom).

Lagom requires a greater understanding of the vagaries of distributed computing and concurrent data sharing, in order to avoid the pitfalls/“footguns” inherent to both of these topics.

Finally, Lagom is arguably a less mature technology than an industry bedrock like Java EE or Spring: Lagom dates back to 2016 (as per the GitHub commits), while Akka has been around since 2009, and the Play framework has been around since 2007.

Jump In

If you’re looking to dip your toes into the Lagom framework, try creating a Hello World application using IBM Microclimate.

Microclimate is a free-of-charge, container-based, multi-language, cloud-friendly development IDE that you can download and try right now!

Microclimate does not include the ‘advanced’ features of the IBM Reactive Platform, but it still incorporates Lagom, Akka, and Play, the fundamental open source technologies of Reactive Platform.

If you’re in the market for a microservice platform that is developer-focused, enterprise-friendly, and commercially-supported, check out IBM’s Reactive Platform.

It’s a collaborative development initiative between IBM and Lightbend to provide reactive technology built on Lagom, Akka, and Play, while providing advanced features such as application management, intelligent monitoring, and enterprise integrations.
