[BUG] MirthConnect consumes 100% CPU, which causes a long queue #6131

Closed
KarolLipnicki opened this issue Mar 15, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@KarolLipnicki

KarolLipnicki commented Mar 15, 2024

OS:
Red Hat Enterprise Linux release 9.3 (Plow)

Java:
java-21-openjdk.x86_64 1:21.0.2.0.13-1.0.1.el9 @ol9_appstream
java-21-openjdk-headless.x86_64 1:21.0.2.0.13-1.0.1.el9 @ol9_appstream
javapackages-filesystem.noarch 6.0.0-4.el9 @ol9_appstream

MirthConnect:
mirthconnect.i386 4.4.2.b326-1 @System

The server has 4 CPUs and 16 GB of RAM.

We have MirthConnect installed on a Linux server (configuration above). Regularly, after a few hours of operation, CPU utilization climbs to 90-100% and the event processing time in channels and queues increases up to tenfold, causing very large delays and a growing event queue.

[screenshot attached]

@KarolLipnicki KarolLipnicki added the bug Something isn't working label Mar 15, 2024
@jonbartels
Contributor

When the CPU is high, take a thread dump from the Java process.

That thread dump will show you which CPU-intensive operations Mirth is performing.

Thread dumps will not contain PHI or PII, but they will contain channel names. If you're comfortable disclosing your channel names, you can post the entire dump here as a file attachment.

If you are not, look for the threads that are in the RUNNABLE state and post snippets of their stack traces to find the offending operation.

During the CPU spikes, what is your database doing? Any slow-running queries?
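
For reference, a thread dump can be taken with the stock JDK tools. A minimal sketch, assuming the JDK tools are on the PATH and that "mirth" matches the service's command line (adjust both to your install):

```bash
# Find the Mirth Connect Java process (assumption: "mirth" appears in its command line)
pid=$(pgrep -f mirth | head -n 1)

# Capture a thread dump with jcmd ...
jcmd "$pid" Thread.print > mirth-threaddump.txt

# ... or with jstack, including lock details
jstack -l "$pid" > mirth-threaddump.txt

# Pull out the busy threads: RUNNABLE threads plus a bit of their stack traces
grep -B 1 -A 15 "java.lang.Thread.State: RUNNABLE" mirth-threaddump.txt
```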

@KarolLipnicki
Author

I've found something like this:

"Database Reader Polling Thread on 031-queue (122d9579-b839-4866-baf6-05b4759d3483) < 122d9579-b839-4866-baf6-05b4759d3483_Worker-1" #202 [204782] prio=5 os_prio=0 cpu=1480876.34ms elapsed=78307.11s tid=0x00007f92b801c0b0 nid=204782 waiting on condition [0x00007f924a9f8000]
`"122d9579-b839-4866-baf6-05b4759d3483_QuartzSchedulerThread" #203 [204783] prio=5 os_prio=0 cpu=3143.78ms elapsed=78074.08s tid=0x00007f92b8028e70 nid=204783 in Object.wait() [0x00007f924a8f7000] `
"122d9579-b839-4866-baf6-05b4759d3483_Worker-1" #202 [204782] prio=5 os_prio=0 cpu=1480870.80ms elapsed=78074.08s tid=0x00007f92b801c0b0 nid=204782 in Object.wait() [0x00007f924a9f8000]

The queue/channel named 031-queue has consumed cpu=1480876.34ms.

@jonbartels
Contributor

You have a stuck or slow-running query. You can recover from the condition by having Postgres kill the queries your DB poller is running.

Then you can tune your queries to be faster and more efficient for each poll.
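
The recovery described above might look roughly like this from the shell, assuming the Database Reader polls a PostgreSQL database; the database name and the pid below are placeholders:

```bash
# List active queries ordered by runtime to spot the poller's stuck statement
psql -d yourdb -c "SELECT pid, now() - query_start AS runtime, state, left(query, 80) AS query
                   FROM pg_stat_activity
                   WHERE state = 'active'
                   ORDER BY runtime DESC;"

# Ask the offending backend to cancel its current query (replace 12345 with the pid found above)
psql -d yourdb -c "SELECT pg_cancel_backend(12345);"

# If it refuses to cancel, terminate the backend as a last resort
psql -d yourdb -c "SELECT pg_terminate_backend(12345);"
```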

@pacmano1 pacmano1 converted this issue into discussion #6134 Mar 18, 2024

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
