
14.0 Zookeeper Distributed Lock Implementation Principle


Distributed locks are a mechanism for controlling synchronous access to shared resources across the processes of a distributed system.

The following explains how Zookeeper implements the two common types of distributed locks: exclusive locks and shared locks.

Exclusive Locks

Exclusive locks are also known as write locks or X locks. If transaction T1 holds an exclusive lock on data object O1, then for the entire duration of the lock only T1 is allowed to read and update O1; no other transaction may read or write it.

Define the lock:

/exclusive_lock/lock

Implementation method:

Zookeeper guarantees that node names are unique among sibling nodes. To acquire the exclusive lock, every client calls the create() interface and tries to create an ephemeral child node /exclusive_lock/lock under the /exclusive_lock node. Only one client can succeed, and that client holds the distributed lock. All clients that failed to create the node register a child-node-change watcher on /exclusive_lock so they can try to acquire the lock again once the lock node is deleted (which also happens automatically when the holder's session ends, because the node is ephemeral).
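This flow can be expressed with the raw Zookeeper Java client. The sketch below is a minimal illustration rather than production code: the class name is hypothetical, error handling is reduced to a retry loop, and for simplicity it watches the lock node directly with exists() instead of registering a child-change watcher on /exclusive_lock.

```
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ExclusiveLockSketch {

    private final ZooKeeper zk;

    public ExclusiveLockSketch(ZooKeeper zk) {
        this.zk = zk;
    }

    /** Blocks until this client manages to create /exclusive_lock/lock. */
    public void acquire() throws Exception {
        while (true) {
            try {
                // Ephemeral node: it disappears automatically if our session dies.
                zk.create("/exclusive_lock/lock", new byte[0],
                        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                return; // we created the node, so we hold the lock
            } catch (KeeperException.NodeExistsException e) {
                // Another client holds the lock: wait until its node is deleted.
                CountDownLatch latch = new CountDownLatch(1);
                Stat stat = zk.exists("/exclusive_lock/lock", event -> {
                    if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
                        latch.countDown();
                    }
                });
                if (stat != null) {
                    latch.await();
                }
                // Loop back and try to create the node again.
            }
        }
    }

    public void release() throws Exception {
        zk.delete("/exclusive_lock/lock", -1);
    }
}
```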

Shared Locks

Shared locks are also known as read locks. If transaction T1 holds a shared lock on data object O1, then T1 may only read O1, and other transactions may only add further shared locks (not exclusive locks) to O1 until all shared locks on it are released.

Define the lock:

/shared_lock/[hostname]-[request type: W or R]-[sequence number]
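For example, a client requesting a read lock and another requesting a write lock might create child nodes like the following (the hostnames and sequence numbers here are purely illustrative):

```
/shared_lock/192.168.3.39-R-0000000001
/shared_lock/192.168.3.40-W-0000000002
```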

Implementation method:

  1. The client calls the create method to create an ephemeral sequential node named according to the lock definition above (type R for a read request, W for a write request).

  2. The client calls the getChildren interface to obtain the list of all created child nodes.

  3. Determine whether the lock has been obtained. For a read request: if every child node with a smaller sequence number is also a read request, or there is no child node with a smaller sequence number, the shared lock has been obtained and the read logic can run; otherwise wait. For a write request: the lock is obtained only if the client's own node has the smallest sequence number; otherwise wait.

  4. If the shared lock has not been obtained, a read request registers a watcher on the last write-request node whose sequence number is smaller than its own, while a write request registers a watcher on the node immediately preceding its own; when that node is removed, the client repeats the check in step 3 (a minimal sketch of that check follows this list).
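The decision in step 3 can be sketched as follows. This is only an illustration: the node-name parsing assumes the [hostname]-[W/R]-[sequence] format defined above, the caller is assumed to pass the children already sorted by sequence number, and the class and method names are hypothetical.

```
import java.util.List;

public class SharedLockCheck {

    // Decide whether ownNode (e.g. "host1-R-0000000003") currently holds the
    // shared lock, given all children of /shared_lock sorted by sequence number.
    static boolean canAcquire(String ownNode, List<String> sortedChildren) {
        boolean ownIsRead = ownNode.contains("-R-");
        for (String child : sortedChildren) {
            if (child.equals(ownNode)) {
                // No conflicting node with a smaller sequence number was found.
                return true;
            }
            if (!ownIsRead) {
                // A write request must be the smallest-numbered node.
                return false;
            }
            if (child.contains("-W-")) {
                // A read request must wait for every earlier write request.
                return false;
            }
        }
        return false; // ownNode was not found among the children
    }
}
```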

In actual development, the APIs encapsulated in the Apache Curator library already implement distributed locks on top of Zookeeper, so we rarely need to write this logic by hand.

<dependency>
  <groupId>org.apache.curator</groupId>
  <artifactId>curator-recipes</artifactId>
  <version>x.x.x</version>
</dependency>

Curator provides several lock recipes:

  1. InterProcessMutex: a distributed reentrant exclusive lock.
  2. InterProcessSemaphoreMutex: a distributed exclusive lock that is not reentrant.
  3. InterProcessReadWriteLock: a distributed reentrant read/write lock.
  4. InterProcessMultiLock: a container that acquires and releases multiple locks as a single entity.

The following example uses the reentrant exclusive lock InterProcessMutex and simulates 50 threads competing for the lock at the same time:

Example

```
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class InterprocessLock {
    public static void main(String[] args) {
        CuratorFramework zkClient = getZkClient();
        String lockPath = "/lock";
        InterProcessMutex lock = new InterProcessMutex(zkClient, lockPath);
        // Simulate 50 threads competing for the lock
        for (int i = 0; i < 50; i++) {
            new Thread(new TestThread(i, lock)).start();
        }
    }

static class TestThread implements Runnable {
    private Integer threadFlag;
    private InterProcessMutex lock;

    public TestThread(Integer threadFlag, InterProcessMutex lock) {
        this.threadFlag = threadFlag;
        this.lock = lock;
    }

    @Override
    public void run() {
        try {
            lock.acquire();
            System.out.println("Thread number " + threadFlag + " has acquired the lock");
            // Wait for 1 second before releasing the lock
            Thread.sleep(1000);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                lock.release();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}

private static CuratorFramework getZkClient() {
    String zkServerAddress = "192.168.3.39:2181";
    ExponentialBackoffRetry retryPolicy = new ExponentialBackoffRetry(1000, 3, 5000);
    CuratorFramework zkClient = CuratorFrameworkFactory.builder()
            .connectString(zkServerAddress)
            .sessionTimeoutMs(5000)
            .connectionTimeoutMs(5000)
            .retryPolicy(retryPolicy)
            .build();
    zkClient.start();
    return zkClient;
}
}
```
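When run against a reachable Zookeeper server (the address 192.168.3.39:2181 above is just an example), the 50 threads print their "has acquired the lock" message one at a time, roughly once per second, because InterProcessMutex grants the lock to only one holder at a time and each holder sleeps for one second before releasing it in the finally block.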