How to Synchronize Azure Container Services at Scale

June 28, 2018   //   Application Development, Cloud

We’ve previously talked about containers and how to use them to scale your application. But while you’re building a highly-distributed, scalable system, you need some way to make sure you’re maintaining a consistent database.

Today, mission-critical parts of your system most likely exist on multiple servers across Azure. You’ve taken a monolithic application – one that ran as a single instance with a single thread – and scaled it out across Azure in multiple directions, including VMs, a multi-instance App Service, and/or a swarm of containers. Instead of just one program, you now have multiple instances trying to work on the same row of your database.

This is why locking for the correctness of execution is important. If the lock fails and two nodes concurrently work on the same piece of data, the result is a corrupted file, data loss, permanent inconsistency, or some other serious problem.

How can you prevent this? And how can you maintain control in a world where single-threaded is a thing of the past?

Using a distributed locking scheme

To combat these issues, we use a distributed locking scheme – a staple of the high-performance container world. To help you get started, follow these five steps:

Step 1: Set up a Redis cache instance on Azure

Azure offers Redis caches as a Platform as a Service (PaaS) offering. Azure manages the instance and the underlying hardware for you. Simply pick the appropriate pricing tier, grab your connection information, and go back to coding.

In the Azure portal, the cache’s Access Keys blade holds the info for connecting to the cache – think of this like a database connection string.

Step 2: Add the Redisson library to your Java project

We’ll be using Java as the language here, but the steps are similar for .NET.

Add the Redisson dependency to your project. Redisson is a powerful library that abstracts all Redis management. It lets you write code like you’re talking to just another database. In this example, we only care about the distributed locks offering from this library, but Redisson is a full-featured library offering multiple ways to leverage Redis caching.
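If you’re using Maven, the dependency looks something like the snippet below. The version shown is only an example – check for the latest Redisson release before adding it.

```xml
<dependency>
    <groupId>org.redisson</groupId>
    <artifactId>redisson</artifactId>
    <!-- example version only – use the latest release -->
    <version>3.7.0</version>
</dependency>
```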

Step 3: Create a Redisson client connection to your Redis cache

Creating a client takes only a few lines of code, as sketched after this list. A few important settings are:

  • Connection Pool Size: The maximum number of connections that the client can open to the cache. This number is especially important in the lower pricing tiers, which support only a limited number of concurrent connections.
  • Idle Pool Size: The minimum number of idle connections the client keeps open so new work doesn’t have to wait for a connection. Make this number smaller than the pool size to save memory and some JVM overhead.

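Here’s a minimal sketch of that configuration, assuming a Redisson 3.x single-server setup. The cache address and access key are placeholders for the values from your Access Keys blade, the pool sizes are just example numbers, and the class name is for illustration only.

```java
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonClientFactory {

    public static RedissonClient create() {
        Config config = new Config();
        config.useSingleServer()
              // Azure Redis uses SSL on port 6380; the rediss:// scheme tells Redisson to use TLS.
              .setAddress("rediss://<your-cache-name>.redis.cache.windows.net:6380")
              // The access key from the cache's Access Keys blade.
              .setPassword("<your-access-key>")
              // Connection Pool Size: the most connections the client may open to the cache.
              // Keep this below your pricing tier's concurrent-connection limit.
              .setConnectionPoolSize(16)
              // Idle Pool Size: minimum idle connections kept open so new work
              // doesn't wait for a connection. Keep it smaller than the pool size.
              .setConnectionMinimumIdleSize(4);
        return Redisson.create(config);
    }
}
```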

Step 4: Create a Fair Lock

A Fair Lock ensures that callers requesting the lock receive it in the order they asked for it. This is important to ensure your threads aren’t starved (kept waiting for a lock forever).

Use a single line of code to create your lock. You’ll want to ensure that the name of the lock is the same across all instances of your code – any typo here means you’re not waiting on the same lock.

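A minimal sketch, assuming the RedissonClient from Step 3 is available as redisson; the lock name is just an example, but every instance must use the exact same string.

```java
import org.redisson.api.RLock;

// "invoice-processing-lock" is an example name – every instance must use the identical string.
RLock lock = redisson.getFairLock("invoice-processing-lock");
```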

Step 5: Wait for the lock

Try to obtain the lock, as in the sketch after this list. You can specify important timeouts at this stage. For example:

  • Max Wait Time: How long to wait for the lock before giving up and not running the logic that requires a lock
  • Max Lease Time: How long the code can hold the lock before Redisson forces the lock to be released

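Here’s a minimal sketch that puts it together, using the fair lock from Step 4. The ten-second wait, the thirty-second lease, and the class and method names (including processRecords) are all illustrative placeholders.

```java
import java.util.concurrent.TimeUnit;
import org.redisson.api.RLock;

public class ExclusiveJob {

    public void run(RLock lock) throws InterruptedException {
        // Max Wait Time: give up after 10 seconds if another instance holds the lock.
        // Max Lease Time: Redisson force-releases the lock after 30 seconds.
        boolean acquired = lock.tryLock(10, 30, TimeUnit.SECONDS);
        if (!acquired) {
            return; // couldn't get the lock in time – skip this run or retry later
        }
        try {
            processRecords(); // the mission-critical work – only one instance runs it at a time
        } finally {
            lock.unlock();    // always release the lock when done
        }
    }

    private void processRecords() {
        // ... your single-instance logic here ...
    }
}
```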

After following these steps, you can return mission-critical parts of the system to a safe mode, where only a single instance can run the process at a time. This prevents data duplication and wasted processing time.

Interested in learning more tricks of the trade for custom app development? Contact us to discuss your next project!