[BUG] MirthConnect consumes 100% CPU, causing a long queue #6131
Comments
When the CPU is high, take a thread dump from the Java process. That thread dump will show you what CPU-intensive operations Mirth is performing. Thread dumps will not contain PHI or PII, but they will contain channel names. If you're comfortable with disclosing your channel names, you can post the entire dump here as a file attachment. If you are not, look for the threads that are consuming the most CPU. During the CPU spikes, what is your database doing? Any slow-running queries?
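As a rough illustration of that triage, here is a minimal Java sketch, assuming it runs inside the JVM being inspected (it is not Mirth tooling): it ranks live threads by accumulated CPU time, the same signal a thread dump gives you. In practice, `jstack <pid>` or `kill -3 <pid>` against the Mirth process is the simpler way to capture the dump itself.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.Arrays;
import java.util.Comparator;

// Print the ten threads with the most accumulated CPU time.
// Must run inside the JVM being inspected; getThreadCpuTime() returns -1
// for dead threads or when CPU time measurement is unsupported.
public class TopCpuThreads {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        Arrays.stream(mx.getAllThreadIds())
              .boxed()
              .sorted(Comparator.comparingLong((Long id) -> mx.getThreadCpuTime(id)).reversed())
              .limit(10)
              .forEach(id -> {
                  ThreadInfo info = mx.getThreadInfo(id);
                  if (info != null) {
                      System.out.printf("%-50s state=%-13s cpu=%.2fms%n",
                              info.getThreadName(),
                              info.getThreadState(),
                              mx.getThreadCpuTime(id) / 1_000_000.0); // nanoseconds to ms
                  }
              });
    }
}
```

Since thread names include channel names, as noted above, the per-thread CPU figures can be attributed to specific channels.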
I've found something like this:
Queue/channel named 031-queue consumes cpu=1480876.34ms
You have a stuck or slow-running query. You can recover from the condition by having Postgres kill the queries your DB poller is running. Then you can tune your queries to be faster and more efficient on each poll.
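For reference, a minimal sketch of that recovery step, assuming the poller reads from PostgreSQL over JDBC. `pg_stat_activity` and `pg_terminate_backend()` are standard PostgreSQL features, but the connection URL, credentials, and the five-minute threshold here are hypothetical placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

// Terminate source-database queries that have been running longer than a threshold.
public class KillStuckQueries {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; point these at the database your poller reads.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/sourcedb", "mirth", "secret")) {
            String find = "SELECT pid, now() - query_start AS runtime, query "
                        + "FROM pg_stat_activity "
                        + "WHERE state = 'active' "
                        + "AND now() - query_start > interval '5 minutes' "
                        + "AND pid <> pg_backend_pid()";
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(find)) {
                while (rs.next()) {
                    int pid = rs.getInt("pid");
                    System.out.printf("Terminating pid %d (running %s): %s%n",
                            pid, rs.getString("runtime"), rs.getString("query"));
                    // pg_terminate_backend() ends the backend process serving that pid.
                    try (PreparedStatement kill = conn.prepareStatement(
                            "SELECT pg_terminate_backend(?)")) {
                        kill.setInt(1, pid);
                        kill.executeQuery();
                    }
                }
            }
        }
    }
}
```

A less manual guard is to set `statement_timeout` on the polling connection so Postgres cancels runaway queries on its own, then tune the poll query (indexes, narrower predicates) so it stays under that limit.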
OS:
Red Hat Enterprise Linux release 9.3 (Plow)
Java:
java-21-openjdk.x86_64 1:21.0.2.0.13-1.0.1.el9 @ol9_appstream
java-21-openjdk-headless.x86_64 1:21.0.2.0.13-1.0.1.el9 @ol9_appstream
javapackages-filesystem.noarch 6.0.0-4.el9 @ol9_appstream
MirthConnect:
mirthconnect.i386 4.4.2.b326-1 @System
Server has 4 CPUs and 16 GB of RAM.

We have MirthConnect installed on a Linux server (configuration above). Regularly, after a few hours of operation, CPU utilization climbs to 90-100% and event processing time in channels and queues increases up to tenfold, causing very large delays and a growing event queue.
This issue was moved to a discussion. You can continue the conversation there.