CouchDB error - Purge checkpoint '_local/purge-mrview-' not updated in 59 seconds #4181
Comments
See the documentation at https://docs.couchdb.org/en/stable/cluster/purging.html#config-settings. Wonder if your …
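For reference, the settings that page documents live under the `[purge]` section of the server config; the values below are the documented defaults (verify against your own local.ini):

```ini
[purge]
; maximum number of document IDs allowed in a single POST /{db}/_purge request
max_document_id_number = 100
; maximum number of accumulated revisions allowed per purge request
max_revisions_number = 1000
; lag threshold (in seconds) before the purge checkpoint warning is logged
index_lag_warn_seconds = 86400
```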
Thank you for the reply. No, the configuration is the default one. However, unlike what is described in the documentation, the seconds in my logs are not 86400. In my example the log says "not updated in 59 seconds", but while testing I have seen other values such as 1335, 1237, 126, and 9 seconds.
@aalegriadg You're right, thanks for checking. It looks like it's a bug in the logging bit: it logs how far over the threshold it is, not the threshold limit itself as the message implied (couchdb/src/couch/src/couch_db.erl, lines 525 to 529 at commit 9621478).
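A simplified sketch of the pattern being described — illustrative only, not the actual couch_db.erl source; the module, function, and variable names are hypothetical:

```erlang
%% Illustrative sketch only -- NOT the actual CouchDB source.
-module(purge_lag_sketch).
-export([maybe_warn/3]).

maybe_warn(CheckpointDocId, UpdatedSec, LimitSec) ->
    NowSec = erlang:system_time(second),
    Lag = NowSec - UpdatedSec,
    case Lag > LimitSec of
        true ->
            %% Pre-fix behaviour as described above: the message reads
            %% as if it printed the limit, but the number logged is the
            %% amount *over* the limit (Lag - Limit), hence values like
            %% "59 seconds" despite an 86400-second limit.
            io:format("Purge checkpoint '~s' not updated in ~p seconds~n",
                      [CheckpointDocId, Lag - LimitSec]);
        false ->
            ok
    end.
```

For example, `maybe_warn("_local/purge-mrview-abc", erlang:system_time(second) - 86459, 86400)` would print "... not updated in 59 seconds", matching the message in this issue's title.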
It does look like a warning only, and indicates that the views have not been updated (yet?) after the documents have been purged from the main db. Try querying the views (which forces an update), or perhaps adjust the auto-indexer (ken) settings. What I suspect might be happening is that we don't have a decent auto-indexer trigger based on purges alone, only on document updates.
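For example, hitting any view in the affected design document forces the index to catch up before the response is returned (database, design document, and view names below are placeholders):

```
GET /mydb/_design/mydesign/_view/myview?limit=0
```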
Thank you for your answer; it is very useful for my team.

Given that it is a bug, this log should only appear if the purge checkpoint is more than 1 day (86400 seconds) behind, which is not going to happen. So, can I stop worrying about these warnings?

As it is a partitioned database, the reindexing only affects the documents belonging to the queried partition, right? I.e., if I purge partition1, the query time of partition2 should not be affected. I ask this because in my project we work with temporary replication windows: documents are stored in daily partitions, and when they are considered old enough they are no longer replicated and, furthermore, they are purged. Queries target the partition with recent documents (those of the current day), not the purged ones. By the way, I also opened this issue about purging.
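For context, a partitioned query addresses a single named partition, e.g. one daily partition as described above (the database, partition key, and view names below are placeholders):

```
GET /mydb/_partition/2022-09-22/_design/mydesign/_view/myview
```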
* Refactor lag logging to a separate function.
* When the purge client validity check throws an error, log the error as well.
* Use the newer `erlang:system_time(second)` call to get the time (see the sketch after this list).
* When warning about update lag, log both the lag and the limit.
* Add a specific log message if the update timestamp is invalid.
* Clarify that an "Invalid purge doc" warning is not always due to a malformed checkpoint document; the most likely case is a stale view client checkpoint which hasn't been cleaned up properly.

Fix: #4181
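A minimal comparison of the two time calls, assuming the older code derived seconds from `os:timestamp/0` (an assumption, not a quote of the original source):

```erlang
%% Older tuple-based pattern:
{Mega, Sec, _Micro} = os:timestamp(),
OldNowSec = Mega * 1000000 + Sec,
%% Newer single call, available since OTP 18:
NewNowSec = erlang:system_time(second).
```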
When I make a purge request of 1000 documents, I get the following error logs:

The purge succeeds, but the logs are unsettling.

These errors only appear when I make a purge request for a large number of documents. If, for example, the request only includes the ids and revs of 100 documents, the error does not appear.
Why do these errors occur? Is this a problem? How can I avoid it?
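For reference, the request shape in question is `POST /{db}/_purge` with document IDs mapped to lists of revisions to purge (the database name, IDs, and revs below are placeholders):

```
POST /mydb/_purge HTTP/1.1
Content-Type: application/json

{
    "doc-id-001": ["3-b06fcd1c1c9e0ec7c480ee8aa467bf3b"],
    "doc-id-002": ["2-7051cbe5c8faecd085a3fa619e6e6337"]
}
```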