This cluster sandbox is orchestrated using Docker Compose, which builds and configures a stack consisting of 5 containers:
- 3 x CouchDB nodes
- 1 x HAProxy load balancer
- 1 x Init container, which configures the nodes as a cluster
The CouchDB nodes are based on an official Docker image, modified with a custom configuration file to ensure the same salted administrator credentials are deployed to each node.
The load balancer service is based on an official HAProxy image, with a custom configuration file containing a 'backend' that includes the 3 nodes.
The 'init' container is a small Alpine image embellished with the 'curl' and 'jq' packages. The container's init script uses these utilities to wait for each CouchDB node to come online and, once this happens, configure the nodes as a cluster.
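As a rough sketch, the wait step likely amounts to polling each node until its root endpoint responds, along these lines (the internal hostname and port here are assumptions, not taken from the actual script):
# Hypothetical wait loop; couchdb-1.sandbox.local:5984 is an assumed internal address
until curl -sf http://couchdb-1.sandbox.local:5984/ > /dev/null; do
  echo "Waiting for node1"
  sleep 2
done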
.
├── cluster-init    Build files for init container
├── cluster-lb      Build files for load balancer container
├── cluster-node    Build files for node containers
└── nodes           Data and config mounts for each node
    ├── 1
    │   ├── data
    │   └── etc
    ├── 2
    │   ├── data
    │   └── etc
    └── 3
        ├── data
        └── etc
Build and start the stack:
docker-compose up -d
Check the logs of the init script to confirm that the cluster initialisation has worked:
docker logs -f cluster-init
Stop and tear down the stack:
docker-compose down
Nuke the node data and config directories:
rm -rf nodes/1/ nodes/2/ nodes/3/
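To rebuild the cluster from scratch, the steps above can be chained together:
docker-compose down && rm -rf nodes/1/ nodes/2/ nodes/3/ && docker-compose up -d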
Sample output for a new cluster:
Initialising a 3-node CouchDB cluster
Check all nodes active
Waiting for node1
Waiting for node2
Waiting for node3
Check cluster status and exit if already set up
Configure consistent UUID on all nodes
"6a8b456660e83bb3730dcfb4fa7c3782"
"107705b1dc3b0c4a034f9e14c79403e6"
"9a6fd000dbca2691187d957f87b2fe0c"
Set up common shared secret
""
""
""
Configure nodes 2 and 3 on node 1
{"ok":true}
{"ok":true}
Add nodes 2 and 3 on node 1
{"ok":true}
{"ok":true}
Finish cluster
{"ok":true}
Check cluster membership
{
"all_nodes": [
"[email protected]",
"[email protected]",
"[email protected]"
],
"cluster_nodes": [
"[email protected]",
"[email protected]",
"[email protected]"
]
}
Done!
Check https://localhost:5984/_haproxy_stats for HAProxy info.
Use https://localhost:5984/_utils for CouchDB admin.
Sample output if the cluster has already been configured:
Initialising a 3-node CouchDB cluster
Check all nodes active
Waiting for node1
Waiting for node2
Waiting for node3
Check cluster status and exit if already set up
CouchDB cluster already set up with 3 nodes
The default administrator credentials are admin and secret.
On the Docker host:
- The load-balanced CouchDB endpoint is exposed as https://localhost:5984.
- Fauxton can be accessed at https://localhost:5984/_utils.
- HAProxy statistics can be accessed at https://localhost:5984/_haproxy_stats.
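For example, the default administrator credentials can be used to query the load-balanced endpoint (the _all_dbs request here is just an illustration):
curl -s https://admin:secret@localhost:5984/_all_dbs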
If the ports are enabled, the nodes can be directly accessed respectively at:
- node 1: https://localhost:59841
- node 2: https://localhost:59842
- node 3: https://localhost:59843
The relevant config lines in docker-compose.yml must be uncommented to enable these ports.
Each node should have the same server UUID and shared secret. Compare the docker.ini files in each node's config mount directory to confirm this:
./nodes/1/etc/docker.ini
./nodes/2/etc/docker.ini
./nodes/3/etc/docker.ini
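One quick way to compare them is to grep the relevant entries (this assumes the values are stored under uuid and secret keys in docker.ini):
grep -E 'uuid|secret' ./nodes/*/etc/docker.ini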
When configured correctly, the UUID reported by each node's root URL should also match. For example:
$ curl -s -X GET https://localhost:59841 | jq -r .uuid
2d964d11d414ecd61d4eceb3fc00024b
$ curl -s -X GET https://localhost:59842 | jq -r .uuid
2d964d11d414ecd61d4eceb3fc00024b
$ curl -s -X GET https://localhost:59843 | jq -r .uuid
2d964d11d414ecd61d4eceb3fc00024b