
Deploy a Highly Available Redis Cache Cluster for Gluu

Christopher Blanton June 25, 2018

In this blog we will detail a relatively easy way to get a functioning Redis-cluster caching the session data and tokens generated by the Gluu Server. The lessons here apply to any other application that uses Redis for caching, as long as its client library supports the Redis-cluster protocol (see Redis Clients). Gluu Server uses Jedis.

Before starting, you should already have Gluu Server installed, either standalone or clustered. Each node on which you'd like to deploy Redis-cluster workers should have redis-server installed. We tested this with Redis 4.0.9.

Configure and Deploy Redis-Cluster

Create three configuration files, a.conf, b.conf, and c.conf, on each node you want the Redis-cluster to be on. The three files are identical except for the ports they bind: a.conf uses 6379, b.conf uses 6380, and c.conf uses 6381.
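A minimal sketch of a.conf, assuming a standard cluster-enabled Redis 4.x configuration (the 163* gossip port is derived automatically as port + 10000):

    # a.conf -- b.conf and c.conf are identical except that they
    # use ports 6380 and 6381 respectively
    port 6379
    cluster-enabled yes                   # run this instance in cluster mode
    cluster-config-file nodes-6379.conf   # state file Redis maintains itself
    cluster-node-timeout 5000             # ms before a peer is flagged as failing
    appendonly yes                        # persist writes to an append-only file
    daemonize yes                         # run in the background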

Ideally, you should have three or more Redis-cluster nodes: if a node fails, at least two active masters must remain to form a quorum and promote a slave to master.

Launch a.conf, b.conf, and c.conf on every node:
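A typical invocation, assuming redis-server is on the PATH and the files are in the current directory:

    # start the three cluster workers on this node
    redis-server a.conf
    redis-server b.conf
    redis-server c.conf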

They should be listening on ports 6379, 16379, 6380, 16380, 6381, and 16381 on all servers. Note that the 163* ports are used for Redis-to-Redis gossip communication.
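A quick way to verify, assuming the iproute2 ss utility is available (the pattern also catches the 163* gossip ports):

    # each node should show all six ports in LISTEN state
    ss -tln | grep -E '63(79|80|81)'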

Next, install Ruby and RubyGems for the Redis-cluster Ruby script.
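On a Debian-family host this might look like the following (package names vary by distribution; the redis gem is what redis-trib.rb requires):

    # Ruby plus the redis client gem used by redis-trib.rb
    apt-get install -y ruby
    gem install redis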

After that, locate redis-trib.rb and use it to create the cluster:
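A sketch of this step, assuming the nine workers used in the example below (hosts 1.1.1.1, 2.2.2.2, and 3.3.3.3) and two replicas per master, so that every master ends up with a slave on each of the other two nodes:

    # redis-trib.rb ships in the src/ directory of the Redis source tree
    find / -name redis-trib.rb

    # create the cluster: 3 masters, each replicated by 2 slaves
    ruby redis-trib.rb create --replicas 2 \
        1.1.1.1:6379 1.1.1.1:6380 1.1.1.1:6381 \
        2.2.2.2:6379 2.2.2.2:6380 2.2.2.2:6381 \
        3.3.3.3:6379 3.3.3.3:6380 3.3.3.3:6381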

redis-trib.rb will print the proposed slot allocation and master/slave layout, then prompt you to accept it. Type yes.

Note that there's also a Python Redis-cluster script, redis-trib.py, that can be seen here. I have not tested redis-trib.py.

Configure Gluu Server

In either the oxTrust/Identity GUI (/identity/configuration/update) or LDAP directly, set cacheProviderType to REDIS.

In redisConfiguration:

  • Change redisProviderType to CLUSTER instead of STANDALONE.
  • Add your Redis servers to oxCacheConfig, as sketched after this list; there, ${NODE_N} refers to the servers you have Redis-cluster workers deployed on.
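A sketch of what the resulting oxCacheConfig JSON might look like, trimmed to the relevant fields (field names follow Gluu's cache configuration schema; the expiration value is illustrative):

    {
      "cacheProviderType": "REDIS",
      "redisConfiguration": {
        "redisProviderType": "CLUSTER",
        "servers": "${NODE_1}:6379,${NODE_2}:6379,${NODE_3}:6379",
        "defaultPutExpiration": 60
      }
    }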

Things to be Aware Of

When a node or a worker fails, the Redis-cluster quorum will promote the slave presumed most up to date among those replicating the lost master's share of the keyspace. This can lead to a couple of problems going forward.

In the example provided above, the cluster is configured with one master and two slaves on each node, where each master's slaves live on the other two nodes:

Node 1: one master, plus a slave of each master on Node 2 and Node 3

Node 2: one master, plus a slave of each master on Node 1 and Node 3

Node 3: one master, plus a slave of each master on Node 1 and Node 2

After such a failover there won't be a master on every single node. When a downed node comes back up and rejoins the cluster, you have to manually redistribute masters and slaves around the cluster. Start by inspecting the current layout with the following command:
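CLUSTER NODES, run against any live worker (the port here is an example), lists each worker's node_id, address, role, and the master it replicates, matching the breakdown below:

    redis-cli -p 6379 CLUSTER NODES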

You’ll get an output like the following:
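A trimmed sketch of that output, restricted to the three workers discussed below; the node_ids and addresses are the ones from this example, while timestamps, epochs, and the slot range are illustrative:

    49115bec337bb5194f67595a46ab9c1304f1a5f3 3.3.3.3:6380@16380 master - 0 1529933000000 7 connected 0-5460
    fce995e6a9808b0aee5b8f560705ba7d04fa2d0b 1.1.1.1:6380@16380 slave 49115bec337bb5194f67595a46ab9c1304f1a5f3 0 1529933000001 7 connected
    895513958a17c0ceb3a95512d2bc3611b0c38ad5 2.2.2.2:6381@16381 slave 49115bec337bb5194f67595a46ab9c1304f1a5f3 0 1529933000002 7 connected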

A simple breakdown:

49115bec337bb5194f67595a46ab9c1304f1a5f3 is a node_id: the identifier of the Redis worker running on port 6380 of the server with IP address 3.3.3.3, with 16380 as its gossip port. The master flag shows that it is a master, and the - in the following field, which would otherwise name the master it replicates, shows that it replicates none.

The slaves of this master can be found by locating the entries whose master field contains the aforementioned node_id: fce995e6a9808b0aee5b8f560705ba7d04fa2d0b (on port 6380 of 1.1.1.1) and 895513958a17c0ceb3a95512d2bc3611b0c38ad5 (on port 6381 of 2.2.2.2).

Take this information and determine which server(s) lack the master and slave layout needed to restore the cluster to its original redundant state. Each worker's entry shows which server it runs on and, if it is a slave, which master it is replicating; the worker you queried also identifies itself with the myself flag.

The idea is to make sure that every master has one slave replicating it on each server other than its own. To promote a slave back to master, use the following command.
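CLUSTER FAILOVER, run against a slave, performs a coordinated manual promotion of that slave (the port here is an example; point redis-cli at the worker you want promoted):

    redis-cli -p 6380 CLUSTER FAILOVER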

This brings the Redis worker on that port to master status. This can also be a valid strategy for a cluster that is failing because too many nodes went down and the Redis-cluster could not reach quorum on the slave-to-master transition. In that case, manual intervention can be applied with:
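The TAKEOVER variant of the same command skips cluster consensus entirely (again, run it against the slave to be promoted; the port is an example):

    redis-cli -p 6380 CLUSTER FAILOVER TAKEOVER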

This will bypass the quorum requirement and force the Redis worker on that node to become a master. Additional documentation can be found here.

Now, use the same information gathered previously to determine whether each node's slaves are in the same redundant configuration as before. If you don't have a slave on each node replicating a master on every other node, run the following command to redistribute the cluster:
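CLUSTER REPLICATE reconfigures the worker it is run against to become a slave of the given master (the port and node_id here are the example values from above):

    redis-cli -p 6381 CLUSTER REPLICATE 49115bec337bb5194f67595a46ab9c1304f1a5f3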

Repeat this until you have a master on every node and a slave for each master on each node.

Further reading

Redis Cluster tutorial

Redis Cluster CLI Commands

Redis Cluster Spec

Redis.trib Cluster Cheat Sheet By Will Warren


Christopher Blanton

Chris Blanton is an identity engineer at Gluu, focusing on DevOps to simplify Gluu Server management and operations. Chris is also a veteran of the U.S. Air Force.