Your Environment:

```json
{
  "couchdb": "Welcome",
  "version": "3.1.1",
  "git_sha": "ce596c65d",
  "uuid": "8d406054df5edac06ee4906f3259e62f",
  "features": ["access-ready", "partitioned", "pluggable-storage-engines", "reshard", "scheduler"],
  "vendor": {"name": "The Apache Software Foundation"}
}
```
I have a 3 node CouchDB 3.1.1 cluster with the following configuration:

```ini
[cluster]
q=1
n=2
```
There is a non-partitioned database named `test2`, whose shards reside on `node1` and `node2`. The `test2` database has the following cluster settings:

```json
"cluster": {"q": 1, "n": 2, "w": 2, "r": 2}
```
The `test2` database has a few documents and its `_security` is:

```json
{"admins":{"names":["superuser"],"roles":["admins"]},"members":{"names":["user1","user2"],"roles":["developers"]}}
```
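As a sanity check, this object matches the documented `_security` shape (`admins` and `members` sections, each with optional `names` and `roles` lists of strings). A minimal checker sketch — the helper is my own illustration, not a CouchDB API:

```python
import json

def is_valid_security(sec) -> bool:
    """Minimal check of the documented _security shape: optional "admins"
    and "members" sections, each an object whose optional "names" and
    "roles" entries are lists of strings. (My own helper, not CouchDB's.)"""
    if not isinstance(sec, dict):
        return False
    for section in ("admins", "members"):
        group = sec.get(section, {})
        if not isinstance(group, dict):
            return False
        for key in ("names", "roles"):
            values = group.get(key, [])
            if not (isinstance(values, list)
                    and all(isinstance(v, str) for v in values)):
                return False
    return True

custom = json.loads(
    '{"admins":{"names":["superuser"],"roles":["admins"]},'
    '"members":{"names":["user1","user2"],"roles":["developers"]}}')
print(is_valid_security(custom))  # True: the object itself is well-formed
```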
I'm now running a scenario where one of the nodes' disks crashes. Let's say `node2`'s disk crashes. I have performed the following steps:

- Removed `node2` from the cluster
- Replaced `node2`'s disk with a new blank disk
- Added `node2` back into the cluster

At this stage, the `test2` database's shard file (`test2.1628258896.couch`) can be seen on `node2`.

1: Now, when I retrieve `_security` for the `test2` database from `node2`, it is reset to the default:

```json
{"members":{"roles":["_admin"]},"admins":{"roles":["_admin"]}}
```
If I retrieve `_security` from `node1` or `node3`, it responds with the correct object (the one I set earlier, before the `node2` crash).

2: When I restart all nodes, the following error logs are shown:

```
node2 | [error] 2021-08-06T12:59:21.842335Z couchdb@node2 <0.4465.0> -------- Bad security object in <<"test2">>: [{{[{<<"members">>,{[{<<"roles">>,[<<"_admin">>]}]}},{<<"admins">>,{[{<<"roles">>,[<<"_admin">>]}]}}]},1},{{[{<<"admins">>,{[{<<"names">>,[<<"superuser">>]},{<<"roles">>,[<<"admins">>]}]}},{<<"members">>,{[{<<"names">>,[<<"user1">>,<<"user2">>]},{<<"roles">>,[<<"developers">>]}]}}]},1}]
```
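The Erlang term in that log appears to list two distinct security objects, each with a count of 1 — i.e. the shard copies disagree: one copy holds the reset default, the other the original object. Transcribed by hand into JSON (my transcription of the term, not CouchDB output), the two can be compared directly:

```python
import json

# First object in the logged list: the default that node2 reports
# after the disk swap (hand-transcribed from the Erlang term).
reset_default = json.loads(
    '{"members":{"roles":["_admin"]},"admins":{"roles":["_admin"]}}')

# Second object: the _security that was PUT before the crash.
original = json.loads(
    '{"admins":{"names":["superuser"],"roles":["admins"]},'
    '"members":{"names":["user1","user2"],"roles":["developers"]}}')

# Two different objects, each reported by one shard copy, which is
# the disagreement the "Bad security object" log complains about.
print(reset_default == original)  # False
```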
3: Not relevant to the crash, but relevant to the same `_security` behaviour. When 1 node is down, `PUT _security` fails with `{"error":"error","reason":"no_majority"}`:

- `node2` is down
- Create a database `test3`, whose shards would reside on `node1` and `node2`
- `PUT _security` for `test3`; it would fail with: `{"error":"error","reason":"no_majority"}`

Other observations:

- `_sync_shards` throws the same `Bad security object` error
- The same happens with `q=2`, even after setting `_security` of the `test2` DB again
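The `no_majority` failure in point 3 is consistent with simple majority arithmetic: with `n=2` shard copies and `node2` down, only one copy is reachable, while a strict majority of 2 copies is 2. A small sketch of that reasoning — that `_security` writes require a strict majority of the `n` copies is my reading of the error reason, not something verified against the CouchDB source:

```python
def has_majority(n_copies: int, copies_available: int) -> bool:
    # A strict majority of the n shard copies must be reachable.
    return copies_available > n_copies // 2

n = 2  # from "cluster":{"q":1,"n":2,"w":2,"r":2}
print(has_majority(n, 2))  # True: both nodes up, PUT _security succeeds
print(has_majority(n, 1))  # False: node2 down -> "no_majority"
```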
`_sync_shards` logs:

```
node1 | [notice] 2021-08-06T13:38:55.064067Z couchdb@node1 <0.10379.1> c89bd8ac41 localhost:5984 172.20.0.1 admin POST /test2/_sync_shards 202 ok 2
node2 | [error] 2021-08-06T13:38:55.080064Z couchdb@node2 <0.7232.0> -------- Bad security object in <<"test2">>: [{{[{<<"members">>,{[{<<"roles">>,[<<"_admin">>]}]}},{<<"admins">>,{[{<<"roles">>,[<<"_admin">>]}]}}]},1},{{[{<<"admins">>,{[{<<"names">>,[<<"superuser">>]},{<<"roles">>,[<<"admins">>]}]}},{<<"members">>,{[{<<"names">>,[<<"user1">>,<<"user2">>]},{<<"roles">>,[<<"developers">>]}]}}]},1}]
node1 | [error] 2021-08-06T13:38:55.080503Z couchdb@node1 <0.10417.1> -------- Bad security object in <<"test2">>: [{{[{<<"members">>,{[{<<"roles">>,[<<"_admin">>]}]}},{<<"admins">>,{[{<<"roles">>,[<<"_admin">>]}]}}]},1},{{[{<<"admins">>,{[{<<"names">>,[<<"superuser">>]},{<<"roles">>,[<<"admins">>]}]}},{<<"members">>,{[{<<"names">>,[<<"user1">>,<<"user2">>]},{<<"roles">>,[<<"developers">>]}]}}]},1}]
```
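For reference, the `POST /test2/_sync_shards` call seen in the notice line above can be built with the stdlib `urllib`. A minimal sketch assuming a local cluster at `localhost:5984` (the URL is hypothetical, and the request is only constructed here, not sent):

```python
from urllib import request

def build_sync_request(base_url: str, db: str) -> request.Request:
    # POST /{db}/_sync_shards asks CouchDB to resynchronise the shard
    # copies of `db` across the cluster nodes.
    return request.Request(f"{base_url}/{db}/_sync_shards", method="POST")

req = build_sync_request("http://localhost:5984", "test2")
print(req.method, req.full_url)  # POST http://localhost:5984/test2/_sync_shards
# To actually send it, add an Authorization header and call request.urlopen(req).
```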
This was discussed with @janl and @rnewson on Slack at https://couchdb.slack.com/archives/C49LEE7NW/p1628257123045300, which may be helpful.