
Starting Worker With Dynamic Routing_key?

I have one queue with several task types, and I need to run a worker for a specific task. Something like: 'celery worker --routing_key task.type1 --app=app'. Queue configuration: CELERY_

Solution 1:

No, you can't bind a worker to a routing_key.

  • Workers consume queues, not routing keys.
  • Producers send messages with a routing_key, which RabbitMQ uses to route them to queues.

It's not possible with pika either.

In the RabbitMQ tutorial, the worker/consumer binds its own queue to the routing key:

  1. The producer emits logs with routing key 'info'.
  2. RabbitMQ discards all of them until a queue is bound to that routing_key.
  3. The receiver creates a Queue A and binds it to routing_key 'info'.
  4. Now RabbitMQ routes logs with routing_key 'info' to Queue A, and the receiver consumes them.
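The topic-exchange matching that drives step 4 can be sketched in plain Python. This is only an illustration of RabbitMQ's matching rules ('*' matches exactly one dot-separated word, '#' matches zero or more), not RabbitMQ code:

```python
import re

def topic_matches(pattern: str, routing_key: str) -> bool:
    """Return True if an AMQP topic binding pattern matches a routing key.

    '*' matches exactly one word; '#' matches zero or more words.
    """
    regex = '^' + re.escape(pattern) + '$'
    # 'logs.#' must also match plain 'logs' (hash matches zero words).
    regex = regex.replace(r'\.\#', r'(\..+)?')
    regex = regex.replace(r'\#', r'.*')
    regex = regex.replace(r'\*', r'[^.]+')
    return re.match(regex, routing_key) is not None

# 'logs.#' catches every log; 'logs.info' catches only info logs.
print(topic_matches('logs.#', 'logs.info'))        # True
print(topic_matches('logs.info', 'logs.error'))    # False
print(topic_matches('logs.*', 'logs.info.extra'))  # False ('*' is one word)
```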

You can reproduce this binding easily with Celery.

For example, you can do it in the Celery configuration file:

 from kombu import Exchange, Queue

 exchange = Exchange('default', type='topic')
 CELERY_QUEUES = (
     Queue('all_logs', exchange, routing_key='logs.#'),
     Queue('info_logs', exchange, routing_key='logs.info'),
 )
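On the producer side, messages must be published with a routing key that matches one of these bindings. A minimal sketch of the routing configuration; the task name app.tasks.log_info is hypothetical:

```python
# Route a hypothetical task with routing_key 'logs.info', so it lands in
# both all_logs (bound to 'logs.#') and info_logs (bound to 'logs.info').
CELERY_DEFAULT_EXCHANGE = 'default'
CELERY_DEFAULT_EXCHANGE_TYPE = 'topic'
CELERY_ROUTES = {
    'app.tasks.log_info': {'routing_key': 'logs.info'},  # hypothetical task name
}
```

The routing key can also be set per call with apply_async(routing_key='logs.info').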

Receive all logs:

$ celery worker -A receive_logs -Q all_logs

Receive only 'info' logs (messages with routing_key=logs.info):

$ celery worker -A receive_logs -Q info_logs

In the end you have started a worker that consumes only messages with a specific routing_key, which is what you want.

Note: 'info' logs are duplicated in both queues, all_logs and info_logs, because both bindings match the routing_key logs.info.

You might be interested in: http://docs.celeryproject.org/en/latest/configuration.html?highlight=direct#std:setting-CELERY_WORKER_DIRECT
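For completeness: CELERY_WORKER_DIRECT gives each worker its own dedicated queue, which is another way to target a specific worker. A minimal sketch, assuming a worker named w1@example.com and a hypothetical task my_task (this fragment needs a running broker, so it is illustrative only):

```python
# In the Celery config: give every worker a dedicated queue.
CELERY_WORKER_DIRECT = True

# When sending, address one worker's dedicated queue directly.
from celery.utils import worker_direct

my_task.apply_async(args=(1, 2), queue=worker_direct('w1@example.com'))
```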
