cpudist stop working when there are too many fork #2567
This change fixes the cpudist tool to avoid issues when too many tasks are running. Fixes iovisor#2567 -- cpudist stop working when there are too many fork
Do you mean …
I meant:

```sh
#!/bin/sh -e
PYTHONUNBUFFERED=1 python3 ./cpudist.py -P 1 > log &
sleep 1
for i in $(seq 10000); do echo fork | dd of=/dev/null &> /dev/null & done
sleep 5
echo "Log count: $(wc -l log)"
kill -9 $(jobs -p)
```

With the PR #2568 applied, the log file contains around …
The program uses map update with replacement enabled: if there is a collision in the hash table, the new entry simply overwrites the old one. In that case, for your use case, I guess it makes sense to add a command line option like `--hash-storage-size` so the user can choose the storage size for the hash table.
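To make the failure mode concrete, here is a minimal sketch in plain Python (not BPF) of the behavior described above: a fixed-capacity table where a colliding insert silently replaces the previous occupant. The `FixedSlotMap` class and its sizes are illustrative assumptions, not taken from cpudist or the kernel's actual hash map implementation.

```python
# Sketch of a fixed-capacity hash table with overwrite-on-collision, to show
# why data for most tasks disappears once the number of PIDs exceeds capacity.
class FixedSlotMap:
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = {}  # slot index -> (key, value)

    def update(self, key, value):
        slot = hash(key) % self.capacity  # colliding keys map to the same slot
        self.slots[slot] = (key, value)   # replacement: old entry is overwritten

    def lookup(self, key):
        entry = self.slots.get(hash(key) % self.capacity)
        return entry[1] if entry and entry[0] == key else None

m = FixedSlotMap(capacity=8)
for pid in range(100):            # far more tasks than slots
    m.update(pid, pid * 10)

survivors = sum(1 for pid in range(100) if m.lookup(pid) is not None)
print(survivors)                  # at most 8 of the 100 entries survive
```

With 100 tasks and 8 slots, at most 8 entries remain; the rest of the histogram data is lost, which matches the "only a few distributions" symptom reported in the issue.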
Running `cpudist -P 1` when a service forks a lot (such as the update-mandb cron job) produces incorrect results: only a handful of processes' distributions are taken into account after the forks die. This can be reproduced by running:

```sh
for i in $(seq 10000); do echo fork | dd of=/dev/null & done
```

After the command completes, cpudist prints only a few distributions and misses most of the load happening afterwards. It seems related to the default BPF_HASH size, and it can be fixed by the proposed PR below, but perhaps there is a better way to avoid the issue?
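One way the suggested `--hash-storage-size` option could be wired up, sketched in plain Python without bcc installed: substitute the user-chosen size into the BPF program text before compilation. The `STORAGE_SIZE` placeholder and the one-line `bpf_text` fragment are hypothetical stand-ins, not the real cpudist.py source; the 10240 default mirrors bcc's documented default table size.

```python
import argparse

# Hypothetical BPF program fragment with a size placeholder; in bcc, the
# fourth BPF_HASH argument sets the table's maximum number of entries.
bpf_text = "BPF_HASH(dist, dist_key_t, unsigned int, STORAGE_SIZE);"

parser = argparse.ArgumentParser()
parser.add_argument("--hash-storage-size", type=int, default=10240,
                    help="maximum number of entries in the BPF hash table")
# Parsed from an explicit list here for demonstration; a real tool would
# call parser.parse_args() with no arguments to read sys.argv.
args = parser.parse_args(["--hash-storage-size", "65536"])

# Substitute the chosen size into the program text before handing it to bcc.
bpf_text = bpf_text.replace("STORAGE_SIZE", str(args.hash_storage_size))
print(bpf_text)  # BPF_HASH(dist, dist_key_t, unsigned int, 65536);
```

This keeps the default behavior unchanged while letting users with fork-heavy workloads raise the table size themselves.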