CouchDB Cluster Sandbox

Outline

This cluster sandbox is orchestrated using Docker Compose, which builds and configures a stack consisting of 5 containers:

  • 3 x CouchDB nodes
  • 1 x HAProxy load balancer
  • 1 x Init container, which configures the nodes as a cluster

The CouchDB nodes are based on an official Docker image, modified with a custom configuration file to ensure the same salted administrator credentials are deployed to each node.

The load balancer service is based on an official HAProxy image, with a custom configuration file containing a 'backend' that includes the 3 nodes.

The 'init' container is a small Alpine image with the 'curl' and 'jq' packages added. The container's init script uses these utilities to wait for each CouchDB node to come online and, once they are all up, to join the nodes into a cluster (sketched below).
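The wait step can be pictured roughly as follows. This is a minimal sketch rather than the shipped init script: the node hostnames are taken from the membership output further down, while plain HTTP on port 5984, the /_up health endpoint and the admin:secret credentials are assumptions.

#!/bin/sh
# Illustrative wait loop only -- not the actual cluster-init script.
for node in node1.cluster node2.cluster node3.cluster; do
  echo "Waiting for ${node}"
  # Poll until the node reports itself up.
  until curl -sf "http://admin:secret@${node}:5984/_up" > /dev/null; do
    sleep 2
  done
done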

Directory structure

.
├── cluster-init            Build files for init container
├── cluster-lb              Build files for load balancer container
├── cluster-node            Build files for node containers
└── nodes                   Data and config mounts for each node
    ├── 1
    │   ├── data
    │   └── etc
    ├── 2
    │   ├── data
    │   └── etc
    └── 3
        ├── data
        └── etc

Commands

Build and start the stack:

docker-compose up -d

Check the logs of the init script to confirm that the cluster initialisation has worked:

docker logs -f cluster-init

Stop and tear down the stack:

docker-compose down

Nuke the data directories:

rm -rf nodes/1/ nodes/2/ nodes/3/

Things to look out for

Cluster Init

Sample output for a new cluster:

Initialising a 3-node CouchDB cluster
Check all nodes active
Waiting for node1
Waiting for node2
Waiting for node3
Check cluster status and exit if already set up
Configure consistent UUID on all nodes
"6a8b456660e83bb3730dcfb4fa7c3782"
"107705b1dc3b0c4a034f9e14c79403e6"
"9a6fd000dbca2691187d957f87b2fe0c"
Set up common shared secret
""
""
""
Configure nodes 2 and 3 on node 1
{"ok":true}
{"ok":true}
Add nodes 2 and 3 on node 1
{"ok":true}
{"ok":true}
Finish cluster
{"ok":true}
Check cluster membership
{
  "all_nodes": [
    "[email protected]",
    "[email protected]",
    "[email protected]"
  ],
  "cluster_nodes": [
    "[email protected]",
    "[email protected]",
    "[email protected]"
  ]
}
Done!
Check https://localhost:5984/_haproxy_stats for HAProxy info.
Use https://localhost:5984/_utils for CouchDB admin.

Sample output if the cluster has already been configured:

Initialising a 3-node CouchDB cluster
Check all nodes active
Waiting for node1
Waiting for node2
Waiting for node3
Check cluster status and exit if already set up
CouchDB cluster already set up with 3 nodes
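For orientation, the flow behind that output can be sketched with the standard CouchDB configuration and /_cluster_setup APIs. This is an approximation of what the init container does, not the real script (see cluster-init for that); the node addresses, port, credentials and example UUID/secret values are assumptions.

# Illustrative sketch only -- not the shipped cluster-init script.
AUTH="admin:secret"
UUID="6a8b456660e83bb3730dcfb4fa7c3782"    # example shared value
SECRET="0123456789abcdef0123456789abcdef"  # example shared value

# Give every node the same server UUID and shared secret.
# (PUT on _config returns the previous value, which is what the log echoes.)
for node in node1.cluster node2.cluster node3.cluster; do
  curl -s -X PUT "http://${AUTH}@${node}:5984/_node/_local/_config/couchdb/uuid" -d "\"${UUID}\""
  curl -s -X PUT "http://${AUTH}@${node}:5984/_node/_local/_config/couch_httpd_auth/secret" -d "\"${SECRET}\""
done

# Configure and add nodes 2 and 3 via node 1, then finish the cluster.
for node in node2.cluster node3.cluster; do
  curl -s -X POST "http://${AUTH}@node1.cluster:5984/_cluster_setup" \
       -H 'Content-Type: application/json' \
       -d "{\"action\":\"enable_cluster\",\"remote_node\":\"${node}\",\"port\":5984,\"username\":\"admin\",\"password\":\"secret\",\"bind_address\":\"0.0.0.0\",\"node_count\":3,\"remote_current_user\":\"admin\",\"remote_current_password\":\"secret\"}"
  curl -s -X POST "http://${AUTH}@node1.cluster:5984/_cluster_setup" \
       -H 'Content-Type: application/json' \
       -d "{\"action\":\"add_node\",\"host\":\"${node}\",\"port\":5984,\"username\":\"admin\",\"password\":\"secret\"}"
done

curl -s -X POST "http://${AUTH}@node1.cluster:5984/_cluster_setup" \
     -H 'Content-Type: application/json' -d '{"action":"finish_cluster"}'
curl -s "http://${AUTH}@node1.cluster:5984/_membership" | jq .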

Endpoints

The default administrator credentials are admin and secret.

On the Docker host, the cluster is reachable through the load balancer at https://localhost:5984 (Fauxton at /_utils, HAProxy stats at /_haproxy_stats, as noted in the init output above).

If the per-node ports are enabled, the nodes can also be accessed directly. The relevant config lines in docker-compose.yml must be uncommented to enable these ports.
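A quick sanity check through the load balancer (the scheme, port and credentials follow the init output and defaults above; adjust if your compose file differs):

curl -s https://admin:secret@localhost:5984/_membership | jq .all_nodes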

Configuration consistency

All nodes should share the same server UUID and shared secret. Compare the docker.ini files in each node's config mount directory to confirm this (a grep one-liner is sketched after the list):

  • ./nodes/1/etc/docker.ini
  • ./nodes/2/etc/docker.ini
  • ./nodes/3/etc/docker.ini
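One way to eyeball this is to grep the relevant keys out of each file. The key names below ('uuid' and 'secret') are assumptions about what docker.ini contains; adjust them to match the actual file:

# Hypothetical key names -- check docker.ini for the exact entries.
grep -E '^(uuid|secret)' nodes/*/etc/docker.ini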

When configured correctly, the UUID reported by each node's root URL should also match. For example:

$ curl -s -X GET https://localhost:59841 | jq -r .uuid
2d964d11d414ecd61d4eceb3fc00024b
$ curl -s -X GET https://localhost:59842 | jq -r .uuid
2d964d11d414ecd61d4eceb3fc00024b
$ curl -s -X GET https://localhost:59843 | jq -r .uuid
2d964d11d414ecd61d4eceb3fc00024b
