beam.smp spikes and eats all available CPU #869
Comments
Keep an eye on the logs for emfile errors; that might mean the process is running out of file descriptors. Also try increasing max_dbs_open if you see those errors. In general, try to see if there is something in the logs around the time this behavior starts. Look for things that look like stack traces (file names and lines of code) as well.
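A quick way to check the limit the advice above refers to is to read the process's file-descriptor rlimits. This is a minimal sketch (Unix-only, using Python's standard `resource` module); the numbers it prints apply to the shell it runs in, so run it as the same user that runs CouchDB:

```python
import resource

# Read the soft and hard limits on open file descriptors for this process.
# Exhausting the soft limit is what surfaces as "emfile" in CouchDB's logs.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft fd limit: {soft}, hard fd limit: {hard}")

# With tens of thousands of databases, a common default soft limit of 1024
# is far too low. Raise it (ulimit -n / limits.conf) and consider bumping
# max_dbs_open in the [couchdb] section of local.ini to match.
```

Note that `max_dbs_open` lives in the `[couchdb]` section of `local.ini`; the fd limit is enforced by the OS, so both need to be raised together.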
@nickva I'm also facing the same issue described above. I'm using CouchDB 1.6.1. In my case I'm doing continuous replication back and forth between two CouchDB instances, for ~20K databases of ~10MB each on average. After a certain time CouchDB crashes and the beam process eats up all the available CPU. Restarting the couch process, or deleting the replications, didn't help. Could you tell me what information I should be looking at, or provide any pointers to solve this issue? The output of
Last lines of the output of
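For reference, a continuous replication like the one described can be triggered by POSTing a small JSON document to CouchDB's `/_replicate` endpoint (or, persistently, to the `_replicator` database). This sketch only builds the request body; the hostnames and database names are placeholders, not values from this thread:

```python
import json

def replication_body(source, target):
    """Build the JSON body for a continuous replication between two
    CouchDB databases, as accepted by POST /_replicate."""
    return {"source": source, "target": target, "continuous": True}

# Hypothetical hosts/databases for illustration only.
body = replication_body("http://host-a:5984/db_0001",
                        "http://host-b:5984/db_0001")
print(json.dumps(body, indent=2))
```

With ~20K databases replicated both ways, that is ~40K such replication tasks, each holding file descriptors and memory, which is why the fd-limit and `max_dbs_open` advice above matters at this scale.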
Just in case anyone is facing the same issue: I managed to bring CPU utilisation back to normal levels by shutting down the CouchDB instance that was running as a service,
and later spawning CouchDB as a background process by using
Somehow, if the CouchDB instance is started again as a background service, it eats up all the available CPU. I didn't get enough time to debug this (my guess is the upstart script needs debugging).
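The workaround described above can be sketched as follows. This assumes the CouchDB 1.x wrapper script, where `-b` spawns a detached background instance and `-d` shuts it down (verify with `couchdb -h` on your install); the `service` name is also an assumption:

```python
import shutil
import subprocess

def restart_couchdb_in_background():
    """Stop the service-managed CouchDB instance and respawn CouchDB
    as a plain background process via the 1.x wrapper's -b flag."""
    if shutil.which("couchdb") is None:
        return "couchdb not installed"
    # Stop the init/upstart-managed instance; ignore failure if not running.
    subprocess.run(["service", "couchdb", "stop"], check=False)
    # Spawn CouchDB detached in the background ("couchdb -d" stops it later).
    subprocess.run(["couchdb", "-b"], check=True)
    return "started in background"

print(restart_couchdb_in_background())
```

This only reproduces the commenter's workaround; it does not explain why the service-managed instance spins, which (as noted) likely needs debugging in the upstart script itself.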
When I tried
This is a production system and I'm a bit frustrated, so sorry if this is in the wrong spot; I don't know where else to look. I have searched high and low for answers, posted in the IRC channel, and found nothing.
In a nutshell: I have a 2-node cluster running, with CPU at normal levels. I do some inserts and queries, seriously not a heavy load at all. The database(s) being used have ~39 million rows and some views, but mostly Mango indexes. After ~12-20 inserts/queries, the beam.smp process takes off, and the insert request that caused the spike never returns and times out.
I have no idea where else to look for clues. The logs are at debug level and verbose, and everything looks pretty normal. The 2 nodes are very large 1 TiB machines with 4 CPUs / 4 cores each, so resources are not an issue at all. Something is fundamentally wrong here, but I don't know where to look. I have tweaked and turned every possible knob in the couch config, with no results. If someone can tell me which additional places to look or log, I just need to understand what rock to look under. I have no problem putting in the work to debug.
Version used: 2.0
Operating System and version (desktop or mobile): Ubuntu 16
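One place to start with a spike like this is to confirm which process (and how hard) is actually burning CPU before diving into CouchDB internals, e.g. with `top -H -p <pid>` to see per-thread usage inside beam.smp. A minimal Linux-only sketch of sampling a process's accumulated CPU ticks from `/proc/<pid>/stat` (here sampling our own PID; in practice point it at beam.smp's):

```python
import os
import time

def cpu_ticks(pid):
    """Return utime + stime (in clock ticks) for a process, read from
    /proc/<pid>/stat on Linux."""
    with open(f"/proc/{pid}/stat") as f:
        # Split after the ") " that closes the comm field; utime and stime
        # are then the 12th and 13th remaining fields (indices 11 and 12).
        rest = f.read().rsplit(") ", 1)[1].split()
    return int(rest[11]) + int(rest[12])

pid = os.getpid()  # substitute beam.smp's PID when diagnosing CouchDB
before = cpu_ticks(pid)
time.sleep(0.5)
after = cpu_ticks(pid)
print(f"CPU ticks consumed in 0.5s: {after - before}")
```

A steadily climbing tick count while the cluster is supposedly idle confirms the spin; from there, per-thread `top -H` output narrows it to Erlang schedulers versus something like a stuck compaction or indexer.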