python – Celeryd multi with supervisord
Trying to run celery multi under supervisord (3.2.2).
It seems that supervisord can't handle it. A single celery worker works fine.
Here is my supervisord configuration.
celery multi v3.1.20 (Cipater)
> Starting nodes...
> celery1@parzee-dev-app-sfo1: OK
Stale pidfile exists. Removing it.
> celery2@parzee-dev-app-sfo1: OK
Stale pidfile exists. Removing it.
celeryd.conf
; ==================================
; celery worker supervisor example
; ==================================
[program:celery]
; Set full path to celery program if using virtualenv
command=/usr/local/src/imbue/application/imbue/supervisorctl/celeryd/celeryd.sh
process_name = %(program_name)s%(process_num)d@%(host_node_name)s
directory=/usr/local/src/imbue/application/imbue/conf/
numprocs=2
stderr_logfile=/usr/local/src/imbue/application/imbue/log/celeryd.err
logfile=/usr/local/src/imbue/application/imbue/log/celeryd.log
stdout_logfile_backups = 10
stderr_logfile_backups = 10
stdout_logfile_maxbytes = 50MB
stderr_logfile_maxbytes = 50MB
autostart=true
autorestart=false
startsecs=10
I used the following supervisord variables to mimic the way I start celery:
>%(program_name)s
>%(process_num)d
> @
>%(host_node_name)s
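For illustration, this is roughly how supervisord expands that process_name template with numprocs=2. The host name is taken from the question's output; the numbering assumes numprocs_start=1 so that names start at celery1, as shown later.

```shell
# Hypothetical expansion of the supervisord template
# %(program_name)s%(process_num)d@%(host_node_name)s with numprocs=2.
program_name=celery
host_node_name=parzee-dev-app-sfo1
for process_num in 1 2; do
    echo "${program_name}${process_num}@${host_node_name}"
done
# celery1@parzee-dev-app-sfo1
# celery2@parzee-dev-app-sfo1
```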
Supervisorctl
supervisorctl
celery:celery1@parzee-dev-app-sfo1 FATAL Exited too quickly (process log may have details)
celery:celery2@parzee-dev-app-sfo1 FATAL Exited too quickly (process log may have details)
I tried changing this value in /usr/local/lib/python2.7/dist-packages/supervisor/options.py from 0 to 1:
numprocs_start = integer(get(section, 'numprocs_start', 1))
I still get:
celery:celery1@parzee-dev-app-sfo1 FATAL Exited too quickly (process log may have details)
celery:celery2@parzee-dev-app-sfo1 EXITED May 14 12:47 AM
Celery is starting, but supervisord is not tracking it.
root@parzee-dev-app-sfo1:/etc/supervisor#
ps -ef | grep celery
root 2728 1 1 00:46 ? 00:00:02 [celeryd: celery1@parzee-dev-app-sfo1:MainProcess] -active- (worker -c 16 -n celery1@parzee-dev-app-sfo1 --loglevel=DEBUG -P processes --logfile=/usr/local/src/imbue/application/imbue/log/celeryd.log --pidfile=/usr/local/src/imbue/application/imbue/log/1.pid)
root 2973 1 1 00:46 ? 00:00:02 [celeryd: celery2@parzee-dev-app-sfo1:MainProcess] -active- (worker -c 16 -n celery2@parzee-dev-app-sfo1 --loglevel=DEBUG -P processes --logfile=/usr/local/src/imbue/application/imbue/log/celeryd.log --pidfile=/usr/local/src/imbue/application/imbue/log/2.pid)
celery.sh
source ~/.profile
CELERY_LOGFILE=/usr/local/src/imbue/application/imbue/log/celeryd.log
CELERYD_OPTS=" --loglevel=DEBUG"
CELERY_WORKERS=2
CELERY_PROCESSES=16
cd /usr/local/src/imbue/application/imbue/conf
exec celery multi start $CELERY_WORKERS -P processes -c $CELERY_PROCESSES -n celeryd@{HOSTNAME} -f $CELERY_LOGFILE $CELERYD_OPTS
Related:
Running celeryd_multi with supervisor
How to use Supervisor + Django + Celery with multiple Queues and Workers?
Solution:
Since supervisor monitors (starts/stops/restarts) its processes, the process should run in the foreground (it should not daemonize).
Celery multi daemonizes itself, so it can't be run under supervisor.
You can create a separate process for each worker and group them:
[program:worker1]
command=celery worker -l info -n worker1
[program:worker2]
command=celery worker -l info -n worker2
[group:workers]
programs=worker1,worker2
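Once the group exists, supervisorctl can manage both workers with a single command. These invocations are illustrative and assume a running supervisord that has loaded the config above:

```shell
supervisorctl status              # shows workers:worker1, workers:worker2
supervisorctl restart workers:*   # restart every process in the group
supervisorctl stop workers:*      # stop the whole group at once
```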
You can also write a shell script like the one below that makes a daemonizing process run in the foreground.
#!/usr/bin/env bash
set -eu

# Pidfile written by the daemonized process; it must match the
# --pidfile that celery is started with.
pidfile="/var/run/your-daemon.pid"
command="celery multi start 2 -l INFO"

# Proxy signals: forward supervisor's stop signal to the daemon.
kill_app() {
    kill "$(cat "$pidfile")"
    exit 0  # exit okay
}
trap kill_app SIGINT SIGTERM

# Launch the daemon (it forks into the background)
$command
sleep 2

# Stay in the foreground for supervisor: loop while the pidfile
# and the process behind it still exist.
while [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")"; do
    sleep 0.5
done

exit 1  # exit unexpected
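Such a wrapper could then be wired into supervisord with a stanza like the following sketch (script path and program name are hypothetical):

```ini
[program:celeryworkers]
; Run the foreground wrapper script, not celery multi directly
command=/usr/local/bin/celery-foreground.sh
autostart=true
autorestart=true
; Give celery multi time to start before supervisord considers it "up"
startsecs=10
stopwaitsecs=30
```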