Hello all,
"Necessity is the mother of invention"
As always, while I was working on things, I came across some requirements with supervisord. They turned out to be quite interesting and useful, so I thought I would blog about them.
Tweak 1: Send emails from supervisord when something is broken
Step 1: Install sendmail (needed for sending the emails) and supervisor, if they are not installed already
# yum install sendmail supervisor
Step 2: Configure sendmail
# vim /etc/mail/sendmail.mc
Modify from
dnl define(`SMART_HOST', `smtp.your.provider')dnl
DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl
to
define(`SMART_HOST', `<smtp server>')dnl
dnl # DAEMON_OPTIONS(`Port=smtp,Addr=<smtp server>, Name=MTA')dnl
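On most systems the edited sendmail.mc still has to be compiled into sendmail.cf, and sendmail restarted, before the change takes effect. A minimal sketch, assuming the m4 macro files come from the sendmail-cf package:
# yum install sendmail-cf
# m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
# systemctl restart sendmail.service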
Step 3: Install superlance in your system Python or virtual environment
# pip install superlance
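This installs the crashmail event listener script. Its location is needed in the next step, so note where pip placed it (inside the environment's bin directory if you used a virtualenv), for example:
# which crashmail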
Step 4: Configure crashmail as a part of your supervisord configuration
Add the section below to your supervisord conf
[eventlistener:crashmail]
command=/path/to/crashmail -a -m <To email id> -s "/usr/sbin/sendmail -t -i"
events=PROCESS_STATE
buffer_size=500
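The -a flag makes crashmail report an unexpected exit of any program supervisord manages. If you want alerts only for selected programs, crashmail also accepts one or more -p options instead of -a (use the group_name:process_name syntax for processes inside a group); a sketch with a placeholder program name:
[eventlistener:crashmail]
command=/path/to/crashmail -p <program name> -m <To email id> -s "/usr/sbin/sendmail -t -i"
events=PROCESS_STATE
buffer_size=500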
Step 5: Restart supervisord service
# systemctl restart supervisord.service
With the above changes, an email is triggered whenever a process/program mentioned in the supervisord configuration file crashes or moves into the FATAL state.
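To verify the whole chain, you can force an unexpected exit of one of the managed processes and check that the mail arrives. A sketch, assuming supervisorctl can reach the daemon through supervisor.conf and <program name> is one of your configured processes (a plain supervisorctl stop will not trigger a mail, because that counts as an expected exit):
# supervisorctl -c supervisor.conf pid <program name>
# kill -9 <pid printed by the previous command>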
Tweak 2: Creating process groups
Step 1: Let's assume our original supervisord configuration is as mentioned below
[program:process_worker1]
directory=/usr/share/test/
command=/usr/bin/test1
process_name=%(process_num)s
autostart=true
autorestart=true
stdout_logfile=/var/log/test/process_worker1.log
stderr_logfile=/var/log/test/process_worker1_err.log
environment=PYTHONPATH="/usr/share/test/",xyz="/opt/config.ini"
[program:process_worker2]
directory=/usr/share/test/
command=/usr/bin/test2
process_name=%(process_num)s
autostart=true
autorestart=true
stdout_logfile=/var/log/test/process_worker2.log
stderr_logfile=/var/log/test/process_worker2_err.log
environment=PYTHONPATH="/usr/share/test/",xyz="/opt/config.ini"
Step 2: Add the section below to create a group out of the two programs
[group:test_groups]
programs=process_worker1,process_worker2
Step 3: Restart supervisord
# systemctl restart supervisord.service
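Alternatively, supervisorctl can pick up the new group definition without restarting the daemon:
# supervisorctl -c supervisor.conf reread
# supervisorctl -c supervisor.conf update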
Step 4: Now, to stop/start/restart or check the status of the group as a whole, do the following
# supervisorctl -c supervisor.conf restart test_groups:*
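The same test_groups:* target works for the other operations as well, for example:
# supervisorctl -c supervisor.conf status test_groups:*
# supervisorctl -c supervisor.conf stop test_groups:*
# supervisorctl -c supervisor.conf start test_groups:*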
This configuration lets us group multiple programs/processes together in supervisord.
Tweak 3: Limited retries for failed programs
Supervisord tries to keep all processes/programs alive by retrying them constantly. There are a number of use cases where we may need to limit those retries.
The startsecs and startretries values in the section below are the parameters that let us limit the retries
[program:test_worker]
directory=/usr/share/test/
command=/usr/bin/test_worker
process_name=%(process_num)s
autostart=true
autorestart=unexpected
startsecs=10
startretries=3
stdout_logfile=/var/log/test/test_worker.log
stderr_logfile=/var/log/test/test_worker_err.log
environment=PYTHONPATH="/usr/share/test/",xyz="/opt/config.ini"
startsecs - the number of seconds the process must stay up after being started to be considered successfully running
startretries - the number of serial start attempts supervisord will make before giving up; the retries apply only while the process has not yet reached the RUNNING state
So in the example above, if the program crashes in less than 10 seconds after starting, supervisord will retry at most 3 times before marking the process as FATAL
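Once the retries are exhausted, supervisord gives up and leaves the process in the FATAL state; it will not be started again until you do so manually. A sketch of checking and recovering it, assuming the test_worker program above (shown as test_worker:0 because of the process_name setting):
# supervisorctl -c supervisor.conf status test_worker:*
# supervisorctl -c supervisor.conf start test_worker:*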