Initially I ran the scheduler inside the app container and there was no problem with zombie processes. After I moved the scheduler to a separate container, the problem appeared.
As I understand it, in simple words:
Laravel's schedule:run spawns a child process for each runInBackground() task and quits. After completion those tasks become zombies: with their parent gone they are orphaned and re-parented to PID 1, and supercronic (the main init process in the container) does not seem to know how to 'reap' them, or is not obliged to. More on the PID 1 zombie reaping problem: https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/
My suggestion is to run supercronic under supervisord inside the scheduler container (start-container, supervisord.scheduler.conf):
[supervisord]
nodaemon=true
user=%(ENV_NON_ROOT_USER)s
logfile=/var/log/supervisor/supervisord.log
# use /var/log, because the user has no write permission for /var/run (a symlink to /run)
pidfile=/var/log/supervisord.pid
[program:scheduler]
process_name=%(program_name)s_%(process_num)02d
command=supercronic -overlapping /etc/supercronic/laravel
user=%(ENV_NON_ROOT_USER)s
autostart=%(ENV_APP_WITH_SCHEDULER)s
autorestart=true
stdout_logfile=/var/www/html/scheduler.log
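
The reaping duty that supervisord takes over here can be demonstrated with a short Python sketch (an illustration, not part of the original report; it assumes a Linux system with /proc mounted — the fork/waitpid mechanics themselves are standard POSIX):

```python
import os
import time

def spawn_and_reap():
    """Fork a child that exits immediately, observe it linger as a
    zombie ('Z' in /proc), then reap it with waitpid() -- the duty an
    init process such as supervisord performs for orphaned children."""
    pid = os.fork()
    if pid == 0:
        os._exit(0)  # child: terminate immediately

    # Poll until the child shows up as a zombie (terminated, unreaped).
    state = ""
    deadline = time.time() + 5
    while time.time() < deadline:
        with open(f"/proc/{pid}/stat") as f:
            # the field after the ')' closing the command name is the state
            state = f.read().rsplit(")", 1)[1].split()[0]
        if state == "Z":
            break
        time.sleep(0.05)

    os.waitpid(pid, 0)  # reap: collect the exit status, freeing the entry
    return state

if __name__ == "__main__":
    print(spawn_and_reap())  # 'Z' on Linux
```

Until some process calls waitpid() on the dead child, its process-table entry stays around as that 'Z' state; an init that never waits (as supercronic appears not to for re-parented children) accumulates these entries indefinitely.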
I checked this config, and it seems the problem is gone.
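
One way to verify a container really is zombie-free (an illustrative sketch of my own, assuming Linux /proc) is to scan for processes stuck in state 'Z':

```python
import os

def find_zombies():
    """Return (pid, command) pairs for every process currently in
    state 'Z' -- terminated but never reaped by its parent."""
    zombies = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue  # skip non-process entries such as /proc/meminfo
        try:
            with open(f"/proc/{entry}/stat") as f:
                stat = f.read()
        except (FileNotFoundError, ProcessLookupError):
            continue  # process exited while we were scanning
        # stat is "pid (comm) state ..."; comm may itself contain ')'
        comm = stat.split("(", 1)[1].rsplit(")", 1)[0]
        state = stat.rsplit(")", 1)[1].split()[0]
        if state == "Z":
            zombies.append((int(entry), comm))
    return zombies

if __name__ == "__main__":
    print(find_zombies())  # empty list when nothing is left unreaped
```

Running this inside the scheduler container a while after a few runInBackground() tasks have finished should return an empty list with the supervisord setup, whereas the bare-supercronic setup would show the finished tasks piling up.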