Celery (distributed task queue)

Celery is a distributed asynchronous message task queue written in Python. With it you can easily run tasks asynchronously.
Scenarios where Celery is useful:

- You want to run a batch command on 100 machines, which may take a long time, but you don't want your program to block waiting for the result. Instead you get back a task ID, and after a while you can use that task ID to fetch the result; while the task is running, you can keep doing other things.
- You want a scheduled job, e.g. check all your customers' records every day and, if today is a customer's birthday, send them a congratulatory text message.
Celery's advantages:

- Simple: once you are familiar with Celery's workflow, it is fairly easy to configure and use.
- Highly available: if a task fails or the connection drops mid-execution, Celery will automatically retry the task (see the retry sketch after this list).
- Fast: a single Celery process can handle millions of tasks per minute (with an optimized setup).
- Flexible: almost every Celery component can be extended or customized.
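As an illustration of the retry behavior, here is a minimal sketch of a task that retries itself on failure. The task name `fetch_url`, the 5-second countdown, and the 3-retry limit are illustrative assumptions, not from the original text:

```python
import urllib.request

from celery import Celery

app = Celery('tasks', broker='redis://localhost')

@app.task(bind=True, max_retries=3)
def fetch_url(self, url):
    # Hypothetical task: retry up to 3 times, waiting 5 seconds
    # between attempts, when the request fails.
    try:
        return urllib.request.urlopen(url).read().decode()
    except OSError as exc:
        raise self.retry(exc=exc, countdown=5)
```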
Celery workflow diagram: (image omitted)
Installing Celery
```
$ pip install celery
```

Create a task file, tasks.py:

```python
from celery import Celery

app = Celery('tasks',
             broker='redis://localhost',
             backend='redis://localhost')

@app.task
def add(x, y):
    # A task the worker can execute.
    print('running .....', x, y)
    return x + y
```

Start a Celery worker to begin listening for and executing tasks:

```
$ celery -A tasks worker --loglevel=info
```
Calling tasks
Open another terminal, start a Python shell, and call the task:

```python
>>> from tasks import add
>>> add.delay(4, 4)
```

Your worker terminal will show that it received a task. If you want to check the task's result, assign the return value to a variable when calling it:

```python
>>> result = add.delay(4, 4)
```

The ready() method returns whether the task has finished processing or not:

```python
>>> result.ready()
False
```

You can wait for the result to complete, but this is rarely used since it turns the asynchronous call into a synchronous one:

```python
>>> result.get(timeout=1)
8
```

In case the task raised an exception, get() will re-raise the exception, but you can override this by specifying the propagate argument:

```python
>>> result.get(propagate=False)
```

If the task raised an exception you can also gain access to the original traceback:

```python
>>> result.traceback
```

Using Celery in a project
You can configure Celery as an application.
The directory layout:

```
proj/__init__.py
    /celery.py
    /tasks.py
```
Contents of proj/celery.py:

```python
from celery import Celery

app = Celery('proj',
             broker='amqp://',
             backend='amqp://',
             include=['proj.tasks'])

# Optional configuration, see the application user guide.
app.conf.update(
    result_expires=3600,
)

if __name__ == '__main__':
    app.start()
```

Contents of proj/tasks.py:

```python
from .celery import app

@app.task
def add(x, y):
    return x + y

@app.task
def mul(x, y):
    return x * y

@app.task
def xsum(numbers):
    return sum(numbers)
```

Start the worker:
```
$ celery -A proj worker -l info
```

Running the worker in the background
In production you'll want to run the worker in the background; this is described in detail in the daemonization tutorial. The daemonization scripts use the `celery multi` command to start one or more workers in the background:

```
$ celery multi start w1 -A proj -l info
celery multi v4.0.0 (latentcall)
> Starting nodes...
    > w1.halcyon.local: OK
```
You can restart it too:

```
$ celery multi restart w1 -A proj -l info
celery multi v4.0.0 (latentcall)
> Stopping nodes...
    > w1.halcyon.local: TERM -> 64024
> Waiting for 1 node.....
    > w1.halcyon.local: OK
> Restarting node w1.halcyon.local: OK
celery multi v4.0.0 (latentcall)
> Stopping nodes...
    > w1.halcyon.local: TERM -> 64052
```
Or stop it:

```
$ celery multi stop w1 -A proj -l info
```

The stop command is asynchronous, so it won't wait for the worker to shut down. You'll probably want to use the stopwait command instead; it ensures all currently executing tasks are completed before exiting:

```
$ celery multi stopwait w1 -A proj -l info
```

Using Celery with Django
Django can easily be combined with Celery to run asynchronous tasks; only a little configuration is needed. If you have a modern Django project layout like:

- proj/
- proj/__init__.py
- proj/settings.py
- proj/urls.py
- manage.py

then the recommended way is to create a new proj/proj/celery.py module that defines the Celery instance:
file: proj/proj/celery.py

```python
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

# Set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

app = Celery('proj')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django app configs.
app.autodiscover_tasks()

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
```
Then you need to import this app in your proj/proj/__init__.py module. This ensures that the app is loaded when Django starts so that the @shared_task decorator (mentioned later) will use it:
proj/proj/__init__.py:

```python
from __future__ import absolute_import, unicode_literals

# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app

__all__ = ['celery_app']
```

Note that this example project layout is suitable for larger projects; for simple projects you may use a single contained module that defines both the app and the tasks, like in the First Steps with Celery tutorial.

Let's break down what happens in the first module. First we import absolute imports from the future, so that our celery.py module won't clash with the library:

```python
from __future__ import absolute_import
```
Then we set the default DJANGO_SETTINGS_MODULE environment variable for the celery command-line program:

```python
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')
```
You don't need this line, but it saves you from always passing in the settings module to the celery program. It must always come before creating the app instance, which is what we do next:

```python
app = Celery('proj')
```
This is our instance of the library. We also add the Django settings module as a configuration source for Celery. This means that you don't have to use multiple configuration files, and can instead configure Celery directly from the Django settings; you can still separate them if you want. The uppercase namespace means that all Celery configuration options must be specified in uppercase instead of lowercase and start with CELERY_, so for example the task_always_eager setting becomes CELERY_TASK_ALWAYS_EAGER, and the broker_url setting becomes CELERY_BROKER_URL.

Configure this in settings.py:

```python
# for celery
CELERY_BROKER_URL = 'redis://localhost'
CELERY_RESULT_BACKEND = 'redis://localhost'
```

You can pass the settings object directly here, but using a string is better since then the worker doesn't have to serialize the object:

```python
app.config_from_object('django.conf:settings', namespace='CELERY')
```
Next, a common practice for reusable apps is to define all tasks in a separate tasks.py module, and Celery has a way to auto-discover these modules:

```python
app.autodiscover_tasks()
```
With the line above Celery will automatically discover tasks from all of your installed apps, following the tasks.py convention:
- app1/
    - tasks.py
    - models.py
- app2/
    - tasks.py
    - models.py

Finally, the debug_task example is a task that dumps its own request information. It uses the bind=True task option introduced in Celery 3.1 to easily refer to the current task instance.

Then write your tasks in the tasks.py of each specific app:

```python
# Create your tasks here
from __future__ import absolute_import, unicode_literals
from celery import shared_task

@shared_task
def add(x, y):
    return x + y

@shared_task
def mul(x, y):
    return x * y

@shared_task
def xsum(numbers):
    return sum(numbers)
```
Call the Celery task from your Django views:

```python
from django.shortcuts import render, HttpResponse

# Create your views here.
from bernard import tasks

def task_test(request):
    res = tasks.add.delay(228, 24)
    print("start running task")
    print("async task res", res.get())
    return HttpResponse('res %s' % res.get())
```

The result can also be fetched later via the task id:

```python
from tasks import add, mul
from celery.result import AsyncResult

result = add.delay(2, 2)
task_id = result.task_id
result.get()

result = AsyncResult(id=task_id)
result.get()

result = add.AsyncResult(id=task_id)
result.get()
```
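Note that calling res.get() inside the view above blocks until the task finishes, which defeats the purpose of the asynchronous call; it is shown only to demonstrate the API. A more typical pattern, sketched below under the assumption of the same `bernard` app (the view names and URL wiring are illustrative, not from the original), returns the task id immediately and lets a second view poll for the result:

```python
from django.http import JsonResponse
from celery.result import AsyncResult

from bernard import tasks

def start_task(request):
    # Kick off the task and return its id right away, without blocking.
    res = tasks.add.delay(228, 24)
    return JsonResponse({'task_id': res.task_id})

def task_status(request, task_id):
    # Poll the result backend by task id (task_id comes from the URL conf).
    result = AsyncResult(id=task_id)
    if result.ready():
        return JsonResponse({'status': 'done', 'result': result.get()})
    return JsonResponse({'status': 'pending'})
```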
Celery periodic tasks

Celery supports periodic tasks: set when a task should run, and Celery will run it for you automatically on schedule. The scheduling module is called celery beat.
Write a script called periodic_task.py:

```python
from celery import Celery
from celery.schedules import crontab

app = Celery()

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Calls test('hello') every 10 seconds.
    sender.add_periodic_task(10.0, test.s('hello'), name='add every 10')

    # Calls test('world') every 30 seconds.
    sender.add_periodic_task(30.0, test.s('world'), expires=10)

    # Executes every Monday morning at 7:30 a.m.
    sender.add_periodic_task(
        crontab(hour=7, minute=30, day_of_week=1),
        test.s('Happy Mondays!'),
    )

@app.task
def test(arg):
    print(arg)
```
add_periodic_task adds a scheduled task. The example above registers periodic tasks by calling a function; you can also declare them in configuration style. The following runs a task every 30 seconds:

```python
app.conf.beat_schedule = {
    'add-every-30-seconds': {
        'task': 'tasks.add',
        'schedule': 30.0,
        'args': (16, 16)
    },
}
app.conf.timezone = 'UTC'
```

More complex schedules
The schedules above are simple: run a task every N seconds. But what if you want an email sent at 8 a.m. every Monday, Wednesday, and Friday? That's easy too: use the crontab feature, which works just like the Linux crontab and lets you customize exactly when a task runs:

```python
from celery.schedules import crontab

app.conf.beat_schedule = {
    # Executes every Monday morning at 7:30 a.m.
    'add-every-monday-morning': {
        'task': 'tasks.add',
        'schedule': crontab(hour=7, minute=30, day_of_week=1),
        'args': (16, 16),
    },
}
```

The entry above runs the tasks.add task every Monday at 7:30 a.m. Once the tasks are registered, Celery needs a separate process to dispatch them on schedule. Note that this process only dispatches tasks, it does not execute them: it continuously checks your schedule, and whenever a task is due it sends a task message for a celery worker to execute. See http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html#solar-schedules for details.
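The page linked above also documents solar schedules, which fire relative to sun events (sunrise, sunset, and so on) at a given latitude and longitude. A minimal sketch, where the entry name and the coordinates (Melbourne's) are arbitrary illustrative choices:

```python
from celery.schedules import solar

app.conf.beat_schedule = {
    'add-at-melbourne-sunset': {
        'task': 'tasks.add',
        # solar(event, latitude, longitude)
        'schedule': solar('sunset', -37.81753, 144.96715),
        'args': (16, 16),
    },
}
```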
Start the task scheduler, celery beat:

```
$ celery -A periodic_task beat
```

Using periodic tasks in Django

First install a package:
```
$ pip install django-celery-beat
```

Then add it to your settings:
```python
INSTALLED_APPS = (
    ...,
    'django_celery_beat',
)
```

Update the database tables:
```
$ python manage.py migrate
```

Start the Django task scheduler:
```
$ celery -A proj beat -l info -S django
```
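With django-celery-beat, schedules live in the database and can be managed from the Django admin or created in code. A sketch of creating one programmatically; the entry name and the dotted task path `bernard.tasks.add` (following the app used earlier) are assumptions for illustration:

```python
from django_celery_beat.models import IntervalSchedule, PeriodicTask

# Define (or reuse) an every-10-seconds interval.
schedule, _ = IntervalSchedule.objects.get_or_create(
    every=10,
    period=IntervalSchedule.SECONDS,
)

# Register a periodic task that celery beat will dispatch on that interval.
PeriodicTask.objects.create(
    interval=schedule,
    name='Add numbers every 10 seconds',
    task='bernard.tasks.add',  # hypothetical dotted path to the task
    args='[16, 16]',           # positional args, JSON-encoded
)
```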