This project was set up to assist with learning and documenting the process of setting up a CouchDB cluster.
As it is intended primarily for learning and experimenting, the number of nodes and other settings have been hard-coded. It should be straightforward to update the project with a dynamic configuration.
Feel free to submit issues and PRs with any corrections or ideas for improvement.
The cluster is orchestrated using Docker Compose, which builds and configures a stack consisting of 5 containers on a network (`cluster`):

- 3 × CouchDB nodes (`node1.cluster`, `node2.cluster` and `node3.cluster`)
- 1 × HAProxy load balancer (`cluster-lb.cluster`)
- 1 × ‘Init’ container (`cluster-init.cluster`), which configures and enrols the nodes in the cluster
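For orientation, a stack like this might be declared along the following lines in `docker-compose.yml`. This is a minimal sketch rather than the project's actual file: the build contexts, volume paths and published ports are assumptions based on the layout described in this README.

```yaml
version: "3"

networks:
  cluster: {}            # the shared network all containers join

services:
  node1:
    build: ./cluster-node
    hostname: node1.cluster
    networks: [cluster]
    volumes:
      - ./nodes/1/data:/opt/couchdb/data          # data mount
      - ./nodes/1/etc:/opt/couchdb/etc/local.d    # config mount (docker.ini)
      - ./nodes/1/log:/opt/couchdb/log            # log mount (couch.log)
    # ports:
    #   - "59841:5984"   # uncomment to expose the node directly on the host

  # node2 and node3 follow the same pattern, using ./nodes/2 and ./nodes/3

  cluster-lb:
    build: ./cluster-lb
    hostname: cluster-lb.cluster
    networks: [cluster]
    ports:
      - "5984:5984"      # load-balanced CouchDB endpoint on the Docker host

  cluster-init:
    build: ./cluster-init
    hostname: cluster-init.cluster
    networks: [cluster]
    depends_on:
      - node1            # plus node2 and node3 in the full file
```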
The CouchDB nodes are based on an official Docker image, modified with a custom configuration file to ensure the same salted administrator credentials are deployed to each node.
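For illustration, this boils down to shipping an `[admins]` entry whose value is already hashed, so every node carries an identical salt and hash; a plain-text password would be re-hashed with a different random salt on each node at startup. The value below is a made-up placeholder, not the project's actual credential:

```ini
; excerpt from the custom config baked into the node image (placeholder value)
[admins]
admin = -pbkdf2-71f0b0a3e0a8a2f5a4b1c2d3e4f50617,a1b2c3d4e5f60718293a4b5c6d7e8f90,10
```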
The load balancer service is based on an official HAProxy image, with a custom configuration file containing a ‘backend’ that includes the 3 nodes.
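A rough sketch of what that backend might look like (server names, health-check options and the stats page location here are assumptions, not the project's exact `haproxy.cfg`):

```
# haproxy.cfg (illustrative excerpt)
frontend couchdb_in
    mode http
    bind *:5984
    stats enable
    stats uri /_haproxy_stats
    default_backend couchdb_nodes

backend couchdb_nodes
    mode http
    balance roundrobin
    option httpchk GET /_up
    server node1 node1.cluster:5984 check
    server node2 node2.cluster:5984 check
    server node3 node3.cluster:5984 check
```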
The ‘init’ container is a small Alpine image embellished with the ‘curl’ and ‘jq’ packages. These utilities are used by the cluster init script to wait for each CouchDB node to come online and then configure the nodes into a cluster.
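In outline, the flow is something like the sketch below, which uses CouchDB's standard `_cluster_setup` API. It is a simplified sketch, not the project's actual script: the hostnames, credentials and error handling shown are assumptions.

```sh
#!/bin/sh
# Simplified sketch of the cluster init flow (not the project's actual script)
AUTH="admin:secret"
COORD="node1.cluster"

# Wait until every node's /_up endpoint reports ok
for node in node1.cluster node2.cluster node3.cluster; do
  until curl -s "http://$node:5984/_up" | jq -e '.status == "ok"' > /dev/null; do
    echo "Waiting for $node"
    sleep 1
  done
done

# From the coordinator, enrol the other two nodes, then finish the cluster
for node in node2.cluster node3.cluster; do
  # configure the remote node for clustering...
  curl -s -u "$AUTH" -X POST "http://$COORD:5984/_cluster_setup" \
    -H 'Content-Type: application/json' \
    -d "{\"action\":\"enable_cluster\",\"remote_node\":\"$node\",\"port\":5984,\"node_count\":\"3\",\"username\":\"admin\",\"password\":\"secret\",\"remote_current_user\":\"admin\",\"remote_current_password\":\"secret\",\"bind_address\":\"0.0.0.0\"}"
  # ...then add it to the cluster
  curl -s -u "$AUTH" -X POST "http://$COORD:5984/_cluster_setup" \
    -H 'Content-Type: application/json' \
    -d "{\"action\":\"add_node\",\"host\":\"$node\",\"port\":5984,\"username\":\"admin\",\"password\":\"secret\"}"
done
curl -s -u "$AUTH" -X POST "http://$COORD:5984/_cluster_setup" \
  -H 'Content-Type: application/json' -d '{"action":"finish_cluster"}'

# Confirm membership
curl -s -u "$AUTH" "http://$COORD:5984/_membership" | jq .
```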
```
.
├── cluster-init    Build files for init container
├── cluster-lb      Build files for load balancer container
├── cluster-node    Build files for node containers
└── nodes           Data, config and log mounts for each node
    ├── 1
    │   ├── data
    │   ├── etc
    │   └── log
    ├── 2
    │   ├── data
    │   ├── etc
    │   └── log
    └── 3
        ├── data
        ├── etc
        └── log
```
Build and start the stack in the foreground (use the `-d` option to run it in the background):

```
docker-compose up [-d]
```

Check the logs of the init script to confirm that the cluster initialisation has worked:

```
docker logs -f cluster-init
```

Check the CouchDB logs of all nodes:

```
tail -qf nodes/*/log/couch.log
```

Stop and tear down the stack:

```
docker-compose down
```

Nuke the data directories:

```
rm -rf nodes/1/ nodes/2/ nodes/3/
```
Sample output for a new cluster:

```
Initialising a 3-node CouchDB cluster
Check all nodes active
Waiting for node1
Waiting for node2
Waiting for node3
Check cluster status and exit if already set up
Configure consistent UUID on all nodes
"6a8b456660e83bb3730dcfb4fa7c3782"
"107705b1dc3b0c4a034f9e14c79403e6"
"9a6fd000dbca2691187d957f87b2fe0c"
Set up common shared secret
""
""
""
Configure nodes 2 and 3 on node 1
{"ok":true}
{"ok":true}
Add nodes 2 and 3 on node 1
{"ok":true}
{"ok":true}
Finish cluster
{"ok":true}
Check cluster membership
{
  "all_nodes": [
    "couchdb@node1.cluster",
    "couchdb@node2.cluster",
    "couchdb@node3.cluster"
  ],
  "cluster_nodes": [
    "couchdb@node1.cluster",
    "couchdb@node2.cluster",
    "couchdb@node3.cluster"
  ]
}
Done!
Check https://localhost:5984/_haproxy_stats for HAProxy info.
Use https://localhost:5984/_utils for CouchDB admin.
```
Sample output if the cluster has already been configured:

```
Initialising a 3-node CouchDB cluster
Check all nodes active
Waiting for node1
Waiting for node2
Waiting for node3
Check cluster status and exit if already set up
CouchDB cluster already set up with 3 nodes
[
  "couchdb@node1.cluster",
  "couchdb@node2.cluster",
  "couchdb@node3.cluster"
]
```
The default administrator credentials are `admin` and `secret`.
On the Docker host:
- The load-balanced CouchDB endpoint is exposed as https://localhost:5984.
- Fauxton can be accessed at https://localhost:5984/_utils.
- HAProxy statistics can be accessed at https://localhost:5984/_haproxy_stats.
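For example, a quick check that the cluster is reachable through the load balancer (using the default credentials above):

```sh
# Should list all three nodes under all_nodes and cluster_nodes
curl -s -u admin:secret https://localhost:5984/_membership | jq .
```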
If their ports are enabled, the nodes can be directly accessed respectively at:

- `node1`: https://localhost:59841
- `node2`: https://localhost:59842
- `node3`: https://localhost:59843

The relevant config lines in `docker-compose.yml` must be uncommented to enable these ports.
All nodes should share the same server UUID and shared secret. Compare the `docker.ini` files in each node’s config mount directory to confirm this:

```
./nodes/1/etc/docker.ini
./nodes/2/etc/docker.ini
./nodes/3/etc/docker.ini
```
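One quick way to compare them, assuming the values are stored under `uuid` and `secret` keys in those files:

```sh
# Each value should be identical across the three files
grep -E '^(uuid|secret)' nodes/*/etc/docker.ini
```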
When configured correctly, the UUID reported by each node’s root URL should also match. For example:

```
$ curl -s -X GET https://localhost:59841 | jq -r .uuid
2d964d11d414ecd61d4eceb3fc00024b
$ curl -s -X GET https://localhost:59842 | jq -r .uuid
2d964d11d414ecd61d4eceb3fc00024b
$ curl -s -X GET https://localhost:59843 | jq -r .uuid
2d964d11d414ecd61d4eceb3fc00024b
```
Items of note encountered during the setup process:

- Even if a node is reachable by simple hostname on the network, node names must use an IP address or fully-qualified domain name for the hostname portion, e.g. `couchdb@node1.cluster` in the case of this Docker network. See the relevant documentation for details.
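For instance, the Erlang node name typically ends up in each node's `vm.args` along these lines (illustrative only; the exact mechanism the project uses, e.g. the official image's `NODENAME` environment variable, may differ):

```
# vm.args (excerpt) - every node needs a fully-qualified -name
# and the same Erlang cookie
-name couchdb@node1.cluster
-setcookie monster
```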