Distributed Geo-Replication in glusterfs

How to use new distributed geo-replication in glusterfs-3.5

M S Vishwanath Bhat
Mar 28, 2014

Now that glusterfs-3.5 is being released with the new high-performance distributed geo-replication, let's see how to use it.

NOTE: This article is targeted at users/admins who want to try the new geo-replication, without going deep into the internals and the technology used.

How is it different from earlier geo-replication?

1. Until now, when you started geo-replication, only one node in the master volume participated in syncing data to the slave. The other nodes of the master volume, even though part of the cluster and with access to the data, would sit idle and let this one node take care of syncing. With distributed geo-replication, as you might have guessed from the name, all the nodes of the master volume take responsibility for syncing (their part of) the data to the slave. If there are replica pairs, one of them will be 'Active' and the other 'Passive'. When the 'Active' node of a replica pair goes down, the 'Passive' node takes over.

2. The other thing that has changed is the change detection mechanism. Earlier, geo-replication would crawl through the volume to identify the files that had changed. Now it depends on the changelog xlator to identify them.

3. There is one more syncing method. Until now, rsync was the only syncing method used. But rsync causes a lot of overhead for data sets with a large number of small files. Now there is a tar+ssh method, which makes syncing such data sets much more efficient.

Using Distributed geo-replication:

Prerequisites:

1. At least one node of the master volume must have password-less ssh set up to at least one node of the slave volume. This is the node on which you'll run the geo-rep create command, which creates a geo-rep session for you.
2. To be honest, distributed geo-replication requires password-less ssh from all the nodes in the master to all the nodes in the slave. But setting that up by hand can become tiring very fast. So gluster provides a way to do it, as long as the above prerequisite is met.

gluster system:: execute gsec_create

This creates the secret pem pub file, which contains the RSA public keys of all the nodes in the master volume. This file is later used by geo-rep create to set up password-less ssh from all the nodes in the master to all the nodes in the slave.
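As a rough illustration, assume a hypothetical master node master1 and a hypothetical slave node slave1 (both names are just placeholders). The prerequisite ssh setup followed by the pem file generation, all run on master1, might look like this:

ssh-keygen

ssh-copy-id root@slave1

gluster system:: execute gsec_create

Here ssh-keygen generates an RSA key pair (accept the defaults), and ssh-copy-id installs the public key for root on slave1, so that ssh from master1 to slave1 no longer asks for a password.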

Creating geo-replication session.

Unlike in previous versions, where there was no need to create a session, now you have to. This is the command to create the relationship between master and slave.

gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> create push-pem [force]

The node on which this command is run and the <slave_host> specified in the command should have password-less ssh set up between them. The push-pem option uses the secret pem pub file created earlier and establishes password-less ssh from each node in the master to each node in the slave. The command also expects both the master and slave volumes to be started. If the total available size of the slave volume is less than the total size of the master, the command throws an error message. You'll have to use the force option in both those cases.
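For example, with a hypothetical master volume mastervol and a hypothetical slave volume slavevol hosted on slave1, the session would be created from the master node that has password-less ssh to slave1:

gluster volume geo-replication mastervol slave1::slavevol create push-pem

And if either of the checks mentioned above gets in your way, the same command with force appended at the end overrides them.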

Starting a geo-rep session

There is no change in this command from previous versions.

gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> start

This command actually starts the session, meaning the gsyncd monitor process is started, which in turn spawns gsyncd worker processes whenever required. It also turns on the changelog xlator, which starts recording all the changes on each of the glusterfs bricks. If the master is empty during geo-rep start, the change detection mechanism will be changelog; otherwise it will be xsync (changes are identified by crawling through the filesystem).
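Continuing with the hypothetical mastervol/slavevol session, starting it and doing a quick sanity check on a master node might look like this:

gluster volume geo-replication mastervol slave1::slavevol start

ps aux | grep gsyncd

The second command is just a rough check: once the session is running you should see the gsyncd monitor and worker processes on the master nodes.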

Status of geo-replication

gluster now has variants of the status command.

gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> status

This displays the status of the session from each brick of the master to each brick of the slave.

Think that’s not enough information? Don’t worry, gluster provides more information upon request.

gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> status detail

This command displays extra information like the total number of files synced, the files that still need to be synced, etc.
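With the same hypothetical volume names, the two variants would be invoked like this:

gluster volume geo-replication mastervol slave1::slavevol status

gluster volume geo-replication mastervol slave1::slavevol status detail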

Stopping geo-replication session

The command has not changed from before, if you have worked with previous versions of glusterfs geo-replication.

gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> stop [force]

The force option is to be used when one of the nodes (or glusterd on one of the nodes) is down. Once stopped, the session can be restarted any time. Note that upon restarting the session, the change detection mechanism falls back to xsync mode. This happens even though the changelog keeps generating journals while the geo-rep session is stopped.
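For the hypothetical session used above, a normal stop and a forced stop (when one of the nodes is down) would look like this:

gluster volume geo-replication mastervol slave1::slavevol stop

gluster volume geo-replication mastervol slave1::slavevol stop force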

Deleting geo-replication session

Once the session is stopped, you can also delete the glusterfs geo-rep session.

gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> delete

This deletes all the gsyncd conf files on each of the nodes. It returns a failure if any of the nodes is down. And unlike geo-rep stop, gluster doesn't provide a force option here.
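With the same placeholder names, deleting the (already stopped) session would look like this:

gluster volume geo-replication mastervol slave1::slavevol delete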

Changing the config values

There are some configuration values that can be changed using the CLI. You can see all the current config values with this command.

gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config

But if you want to check only one of them, like log-file or change-detector:

gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config log-file

gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config change-detector

gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config working-dir

How do you change these values? Simple, just provide the new value. Note that not all config values are allowed to be changed.

gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config change-detector xsync

Make sure you provide a proper value for the config option. And if your data set has a large number of small files, you can use the tar+ssh sync method. Note that changing a config value restarts gsyncd if the geo-rep session is running.

gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config use-tarssh true

How do you reset a value? Again, simple.

gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> config \!use-tarssh

That makes the config key (use-tarssh in this case) fall back to its default value.
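Putting it together for the hypothetical mastervol/slavevol session, checking, setting, and resetting the tar+ssh option would look like this:

gluster volume geo-replication mastervol slave1::slavevol config use-tarssh

gluster volume geo-replication mastervol slave1::slavevol config use-tarssh true

gluster volume geo-replication mastervol slave1::slavevol config \!use-tarssh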

Now, below are the screencasts I have captured with my test machines. You can go through them for a more detailed look.

Screencasts will come soon… ☺ Sorry for inconvenience ☺
