Supervisor couch_peruser_sup had child couch_peruser started with couch_peruser:start_link() at <0.30813.1> exit with reason normal in context child_terminated #1741
Comments
I had this issue as well, on CouchDB 2.3.1. Strangely, when it happens I am unable to log out either. In Fauxton I noticed a database that was "unable to load" even though I was logged into Fauxton. As far as I can tell it had something to do with per-user databases getting into a weird state; I'm not entirely sure how I got there. I was messing with permissions on the _users DB, and perhaps that caused it (I was creating users from another _users account). Either a user hadn't been properly deleted (somehow), or there was some kind of conflict between a userdb- database and the corresponding username.

My fix: from Fauxton, find the userdb-xyz database that "cannot be loaded" and manually issue a DELETE from curl, Postman, or the tool of your choice. I used Basic auth for this DELETE command, with my administrator user/password.

UPDATE: I also realized I had some dead nodes left over from playing with clustering earlier; this is perhaps why I was seeing the "cannot load DB" error. http://localhost:5984/_membership showed 3 nodes even though I intended to run just 1. I followed the docs below to axe the extra ones. BE VERY CAREFUL WITH THIS: it can potentially delete data if you axe the wrong cluster, it appears. https://docs.couchdb.org/en/master/cluster/nodes.html
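The manual DELETE described above can be sketched in Python. Everything here is a placeholder for your own setup (the userdb- name, the admin credentials, and the base URL), and the request is built but deliberately not sent:

```python
import base64
import urllib.request


def delete_db_request(base_url: str, db_name: str,
                      user: str, password: str) -> urllib.request.Request:
    """Build an authenticated DELETE request for a database (not sent here)."""
    req = urllib.request.Request(f"{base_url}/{db_name}", method="DELETE")
    # Basic auth header: base64("user:password")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req


# Placeholder values; against a real server you would pass the request
# to urllib.request.urlopen(req) to actually issue the DELETE.
req = delete_db_request("http://localhost:5984", "userdb-616c696365",
                        "admin", "secret")
```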
This problem happens to me every time after deleting a per-user user from the _users database, whether I delete it from the Fauxton UI or via a curl command. couchdb.log writes a new row like this every 5 seconds: … The only way I've found to stop it is to reinstall CouchDB.
I have the same issue. It seems to appear when only a little data has been put into …
Same issue. Any workarounds? Is there a way to identify bad _users entries? |
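On identifying bad entries: couch_peruser derives each database name from the username by hex-encoding its UTF-8 bytes and prefixing `userdb-`, so the mapping can be computed in both directions to cross-check _users docs against the userdb- databases that actually exist. A small sketch (the username `alice` is just an example):

```python
def peruser_db_name(username: str) -> str:
    """Map a username to its couch_peruser database name: userdb-<hex of UTF-8 bytes>."""
    return "userdb-" + username.encode("utf-8").hex()


def username_from_db(db_name: str) -> str:
    """Invert the mapping: recover the username from a userdb-<hex> name."""
    prefix = "userdb-"
    if not db_name.startswith(prefix):
        raise ValueError(f"not a per-user database: {db_name}")
    return bytes.fromhex(db_name[len(prefix):]).decode("utf-8")


print(peruser_db_name("alice"))               # userdb-616c696365
print(username_from_db("userdb-616c696365"))  # alice
```

For the cross-check itself, note that _users doc ids have the form `org.couchdb.user:<name>`, so the username part of each id can be run through `peruser_db_name` and compared against the list from `/_all_dbs`.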
I am seeing this with the official Docker image of CouchDB
I would very much like to make use of the … I should add that after user creation I also store some additional data in each user document. Also, I am currently running a single node without any kind of replication (at this point).
@Crispy1975 can you check if setting the … And can you share the response from …?
@janl here is the output from the CouchDB logs with …

Currently … The output from the …
And this is just during CouchDB startup? Or when you DELETE users?
I will create a fresh install and test the scenarios later on and report back. Also worth adding: since the restart this morning, with debug turned on, I see a lot of crashes. Not sure if this is related, but the process eventually died completely. I created a gist of the output here: https://gist.github.com/Crispy1975/904e4579108ce4c883a3063ee76787e2
I restarted CouchDB after the above crash; perhaps the startup log might also help. That is in this gist: https://gist.github.com/Crispy1975/ea936e3a01a4a83ce6286c2b27219c1e Interestingly, it's still referencing the deleted users and having issues with an unstable shard?
An interesting update: the errors seem to have stopped. I thought that the manual deletion of users and their associated databases was perhaps getting CouchDB into a bad state, as some internal process was not running correctly. I turned on the … The only thing left to do is for CouchDB to purge the deleted docs from the …

So in summary, it seems this error is caused by deleting users and their associated databases manually. Not sure if that is a server bug, but turning on the … is what stopped the errors for me.
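The setting referred to above is elided in this thread; based on CouchDB's documented couch_peruser options it is most likely `delete_dbs`, which makes CouchDB delete a user's userdb- database when the user document is deleted, instead of leaving it behind. Assuming that reading, a sketch of the relevant local.ini section:

```ini
[couch_peruser]
; create and track per-user databases
enable = true
; also delete the userdb- database when its user doc is deleted
delete_dbs = true
```

With `delete_dbs` off, deleting the user doc and then removing the userdb- database by hand is exactly the manual workflow described in this thread.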
Just ran into this as well and can confirm …
Seeing the following messages repeatedly in /var/log/couchdb/couchdb.log:
[error] 2018-11-14T15:54:51.599025Z [email protected] <0.419.0> -------- Supervisor couch_peruser_sup had child couch_peruser started with couch_peruser:start_link() at <0.4622.2> exit with reason normal in context child_terminated
[error] 2018-11-14T15:54:56.606364Z [email protected] <0.419.0> -------- Supervisor couch_peruser_sup had child couch_peruser started with couch_peruser:start_link() at <0.4775.2> exit with reason normal in context child_terminated
[error] 2018-11-14T15:55:01.614952Z [email protected] <0.419.0> -------- Supervisor couch_peruser_sup had child couch_peruser started with couch_peruser:start_link() at <0.4886.2> exit with reason normal in context child_terminated
[error] 2018-11-14T15:55:06.622537Z [email protected] <0.419.0> -------- Supervisor couch_peruser_sup had child couch_peruser started with couch_peruser:start_link() at <0.5031.2> exit with reason normal in context child_terminated
[error] 2018-11-14T15:55:11.631585Z [email protected] <0.419.0> -------- Supervisor couch_peruser_sup had child couch_peruser started with couch_peruser:start_link() at <0.5141.2> exit with reason normal in context child_terminated
[error] 2018-11-14T15:55:16.638075Z [email protected] <0.419.0> -------- Supervisor couch_peruser_sup had child couch_peruser started with couch_peruser:start_link() at <0.5284.2> exit with reason normal in context child_terminated
[error] 2018-11-14T15:55:21.645321Z [email protected] <0.419.0> -------- Supervisor couch_peruser_sup had child couch_peruser started with couch_peruser:start_link() at <0.5396.2> exit with reason normal in context child_terminated
[error] 2018-11-14T15:55:26.652175Z [email protected] <0.419.0> -------- Supervisor couch_peruser_sup had child couch_peruser started with couch_peruser:start_link() at <0.5537.2> exit with reason normal in context child_terminated
[error] 2018-11-14T15:55:31.658958Z [email protected] <0.419.0> -------- Supervisor couch_peruser_sup had child couch_peruser started with couch_peruser:start_link() at <0.5647.2> exit with reason normal in context child_terminated
Expected Behavior
No error messages.
Current Behavior
Many error messages and beam.smp memory usage gradually rises until it crashes.
Otherwise, no visible issues with CouchDB installation.
Possible Solution
Steps to Reproduce (for bugs)
Context
Happy to provide more information.
Your Environment