This document describes the current stable version of Celery (5.2).

Configuration and defaults

This document describes the configuration options available.

If you’re using the default loader, you must create the celeryconfig.py module and make sure it’s available on the Python path.

Example configuration file

This is an example configuration file to get you started. It should contain all you need to run a basic Celery set-up.

## Broker settings.
broker_url = 'amqp://guest:guest@localhost:5672//'

# List of modules to import when the Celery worker starts.
imports = ('myapp.tasks',)

## Using the database to store task state and results.
result_backend = 'db+sqlite:///results.db'

task_annotations = {'tasks.add': {'rate_limit': '10/s'}}

New lowercase settings

Version 4.0 introduced new lower case settings and setting organization.

The major differences from previous versions, apart from the lower case names, are the renaming of some prefixes, like celerybeat_ to beat_ and celeryd_ to worker_, and the fact that most of the top level celery_ settings have been moved into a new task_ prefix.

Warning

Celery will still be able to read old configuration files until Celery 6.0. Afterwards, support for the old configuration files will be removed. We provide the celery upgrade command that should handle plenty of cases (including Django).

Please migrate to the new configuration scheme as soon as possible.

Setting name -> Replace with

CELERY_ACCEPT_CONTENT -> accept_content
CELERY_ENABLE_UTC -> enable_utc
CELERY_IMPORTS -> imports
CELERY_INCLUDE -> include
CELERY_TIMEZONE -> timezone
CELERYBEAT_MAX_LOOP_INTERVAL -> beat_max_loop_interval
CELERYBEAT_SCHEDULE -> beat_schedule
CELERYBEAT_SCHEDULER -> beat_scheduler
CELERYBEAT_SCHEDULE_FILENAME -> beat_schedule_filename
CELERYBEAT_SYNC_EVERY -> beat_sync_every
BROKER_URL -> broker_url
BROKER_TRANSPORT -> broker_transport
BROKER_TRANSPORT_OPTIONS -> broker_transport_options
BROKER_CONNECTION_TIMEOUT -> broker_connection_timeout
BROKER_CONNECTION_RETRY -> broker_connection_retry
BROKER_CONNECTION_MAX_RETRIES -> broker_connection_max_retries
BROKER_FAILOVER_STRATEGY -> broker_failover_strategy
BROKER_HEARTBEAT -> broker_heartbeat
BROKER_LOGIN_METHOD -> broker_login_method
BROKER_POOL_LIMIT -> broker_pool_limit
BROKER_USE_SSL -> broker_use_ssl
CELERY_CACHE_BACKEND -> cache_backend
CELERY_CACHE_BACKEND_OPTIONS -> cache_backend_options
CASSANDRA_COLUMN_FAMILY -> cassandra_table
CASSANDRA_ENTRY_TTL -> cassandra_entry_ttl
CASSANDRA_KEYSPACE -> cassandra_keyspace
CASSANDRA_PORT -> cassandra_port
CASSANDRA_READ_CONSISTENCY -> cassandra_read_consistency
CASSANDRA_SERVERS -> cassandra_servers
CASSANDRA_WRITE_CONSISTENCY -> cassandra_write_consistency
CASSANDRA_OPTIONS -> cassandra_options
S3_ACCESS_KEY_ID -> s3_access_key_id
S3_SECRET_ACCESS_KEY -> s3_secret_access_key
S3_BUCKET -> s3_bucket
S3_BASE_PATH -> s3_base_path
S3_ENDPOINT_URL -> s3_endpoint_url
S3_REGION -> s3_region
CELERY_COUCHBASE_BACKEND_SETTINGS -> couchbase_backend_settings
CELERY_ARANGODB_BACKEND_SETTINGS -> arangodb_backend_settings
CELERY_MONGODB_BACKEND_SETTINGS -> mongodb_backend_settings
CELERY_EVENT_QUEUE_EXPIRES -> event_queue_expires
CELERY_EVENT_QUEUE_TTL -> event_queue_ttl
CELERY_EVENT_QUEUE_PREFIX -> event_queue_prefix
CELERY_EVENT_SERIALIZER -> event_serializer
CELERY_REDIS_DB -> redis_db
CELERY_REDIS_HOST -> redis_host
CELERY_REDIS_MAX_CONNECTIONS -> redis_max_connections
CELERY_REDIS_USERNAME -> redis_username
CELERY_REDIS_PASSWORD -> redis_password
CELERY_REDIS_PORT -> redis_port
CELERY_REDIS_BACKEND_USE_SSL -> redis_backend_use_ssl
CELERY_RESULT_BACKEND -> result_backend
CELERY_MAX_CACHED_RESULTS -> result_cache_max
CELERY_MESSAGE_COMPRESSION -> result_compression
CELERY_RESULT_EXCHANGE -> result_exchange
CELERY_RESULT_EXCHANGE_TYPE -> result_exchange_type
CELERY_RESULT_EXPIRES -> result_expires
CELERY_RESULT_PERSISTENT -> result_persistent
CELERY_RESULT_SERIALIZER -> result_serializer
CELERY_RESULT_DBURI -> Use result_backend instead.
CELERY_RESULT_ENGINE_OPTIONS -> database_engine_options
[...]_DB_SHORT_LIVED_SESSIONS -> database_short_lived_sessions
CELERY_RESULT_DB_TABLE_NAMES -> database_table_names
CELERY_SECURITY_CERTIFICATE -> security_certificate
CELERY_SECURITY_CERT_STORE -> security_cert_store
CELERY_SECURITY_KEY -> security_key
CELERY_ACKS_LATE -> task_acks_late
CELERY_ACKS_ON_FAILURE_OR_TIMEOUT -> task_acks_on_failure_or_timeout
CELERY_ALWAYS_EAGER -> task_always_eager
CELERY_ANNOTATIONS -> task_annotations
CELERY_COMPRESSION -> task_compression
CELERY_CREATE_MISSING_QUEUES -> task_create_missing_queues
CELERY_DEFAULT_DELIVERY_MODE -> task_default_delivery_mode
CELERY_DEFAULT_EXCHANGE -> task_default_exchange
CELERY_DEFAULT_EXCHANGE_TYPE -> task_default_exchange_type
CELERY_DEFAULT_QUEUE -> task_default_queue
CELERY_DEFAULT_RATE_LIMIT -> task_default_rate_limit
CELERY_DEFAULT_ROUTING_KEY -> task_default_routing_key
CELERY_EAGER_PROPAGATES -> task_eager_propagates
CELERY_IGNORE_RESULT -> task_ignore_result
CELERY_PUBLISH_RETRY -> task_publish_retry
CELERY_PUBLISH_RETRY_POLICY -> task_publish_retry_policy
CELERY_QUEUES -> task_queues
CELERY_ROUTES -> task_routes
CELERY_SEND_SENT_EVENT -> task_send_sent_event
CELERY_SERIALIZER -> task_serializer
CELERYD_SOFT_TIME_LIMIT -> task_soft_time_limit
CELERY_TASK_TRACK_STARTED -> task_track_started
CELERY_TASK_REJECT_ON_WORKER_LOST -> task_reject_on_worker_lost
CELERYD_TIME_LIMIT -> task_time_limit
CELERYD_AGENT -> worker_agent
CELERYD_AUTOSCALER -> worker_autoscaler
CELERYD_CONCURRENCY -> worker_concurrency
CELERYD_CONSUMER -> worker_consumer
CELERY_WORKER_DIRECT -> worker_direct
CELERY_DISABLE_RATE_LIMITS -> worker_disable_rate_limits
CELERY_ENABLE_REMOTE_CONTROL -> worker_enable_remote_control
CELERYD_HIJACK_ROOT_LOGGER -> worker_hijack_root_logger
CELERYD_LOG_COLOR -> worker_log_color
CELERYD_LOG_FORMAT -> worker_log_format
CELERYD_WORKER_LOST_WAIT -> worker_lost_wait
CELERYD_MAX_TASKS_PER_CHILD -> worker_max_tasks_per_child
CELERYD_POOL -> worker_pool
CELERYD_POOL_PUTLOCKS -> worker_pool_putlocks
CELERYD_POOL_RESTARTS -> worker_pool_restarts
CELERYD_PREFETCH_MULTIPLIER -> worker_prefetch_multiplier
CELERYD_REDIRECT_STDOUTS -> worker_redirect_stdouts
CELERYD_REDIRECT_STDOUTS_LEVEL -> worker_redirect_stdouts_level
CELERY_SEND_EVENTS -> worker_send_task_events
CELERYD_STATE_DB -> worker_state_db
CELERYD_TASK_LOG_FORMAT -> worker_task_log_format
CELERYD_TIMER -> worker_timer
CELERYD_TIMER_PRECISION -> worker_timer_precision

Configuration Directives

General settings

accept_content

Default: {'json'} (set, list, or tuple).

A white-list of content-types/serializers to allow.

If a message is received that’s not in this list then the message will be discarded with an error.

By default only json is enabled but any content type can be added, including pickle and yaml; when this is the case make sure untrusted parties don’t have access to your broker. See Security for more.

Example:

# using serializer name
accept_content = ['json']

# or the actual content-type (MIME)
accept_content = ['application/json']

result_accept_content

Default: None (can be set, list or tuple).

New in version 4.3.

A white-list of content-types/serializers to allow for the result backend.

If a message is received that’s not in this list then the message will be discarded with an error.

By default it is the same serializer as accept_content. However, a different serializer for accepted content of the result backend can be specified. Usually this is needed if signed messaging is used and the result is stored unsigned in the result backend. See Security for more.

Example:

# using serializer name
result_accept_content = ['json']

# or the actual content-type (MIME)
result_accept_content = ['application/json']

Time and date settings

enable_utc

New in version 2.5.

Default: Enabled by default since version 3.0.

If enabled dates and times in messages will be converted to use the UTC timezone.

Note that workers running Celery versions below 2.5 will assume a local timezone for all messages, so only enable if all workers have been upgraded.

timezone

New in version 2.5.

Default: "UTC".

Configure Celery to use a custom time zone. The timezone value can be any time zone supported by the pytz library.

If not set the UTC timezone is used. For backwards compatibility there’s also an enable_utc setting, and when this is set to false the system local timezone is used instead.
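
For example, a minimal sketch (the app name, broker URL, and zone are placeholders):

from celery import Celery

app = Celery('proj', broker='amqp://')

# Keep UTC timestamps in messages (the default) while interpreting
# schedules and local times in a custom zone.
app.conf.enable_utc = True
app.conf.timezone = 'Europe/London'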

Task settings

task_annotations

New in version 2.5.

Default: None.

This setting can be used to rewrite any task attribute from the configuration. The setting can be a dict, or a list of annotation objects that filter for tasks and return a map of attributes to change.

This will change the rate_limit attribute for the tasks.add task:

task_annotations = {'tasks.add': {'rate_limit': '10/s'}}

or change the same for all tasks:

task_annotations = {'*': {'rate_limit': '10/s'}}

You can change methods too, for example the on_failure handler:

def my_on_failure(self, exc, task_id, args, kwargs, einfo):
    print('Oh no! Task failed: {0!r}'.format(exc))

task_annotations = {'*': {'on_failure': my_on_failure}}

If you need more flexibility then you can use objects instead of a dict to choose the tasks to annotate:

class MyAnnotate:

    def annotate(self, task):
        if task.name.startswith('tasks.'):
            return {'rate_limit': '10/s'}

task_annotations = (MyAnnotate(), {'*': {'rate_limit': '10/s'}})

task_compression

Default: None

Default compression used for task messages. Can be gzip, bzip2 (if available), or any custom compression schemes registered in the Kombu compression registry.

The default is to send uncompressed messages.

task_protocol

Default: 2 (since 4.0).

Set the default task message protocol version used to send tasks. Supports protocols: 1 and 2.

Protocol 2 is supported by 3.1.24 and 4.x+.

task_serializer

Default: "json" (since 4.0, earlier: pickle).

A string identifying the default serialization method to use. Can be json (default), pickle, yaml, msgpack, or any custom serialization methods that have been registered with kombu.serialization.registry.
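
A hedged sketch of registering a custom serializer with Kombu and making it the default; the 'myjson' name is only an illustration, and json.dumps/json.loads do the real work:

import json

from kombu.serialization import register

# Register the encoder/decoder pair under a custom name.
register('myjson', json.dumps, json.loads,
         content_type='application/x-myjson',
         content_encoding='utf-8')

task_serializer = 'myjson'
accept_content = ['json', 'myjson']  # workers must also accept it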

See also

Serializers.

task_publish_retry

New in version 2.2.

Default: Enabled.

Decides if publishing task messages will be retried in the case of connection loss or other connection errors. See also task_publish_retry_policy.

task_publish_retry_policy

New in version 2.2.

Default: See Message Sending Retry.

Defines the default policy when retrying publishing a task message in the case of connection loss or other connection errors.
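
A hedged sketch of a custom policy; the keys shown follow the retry options described under Message Sending Retry:

task_publish_retry_policy = {
    'max_retries': 3,       # give up after three attempts
    'interval_start': 0,    # retry immediately the first time
    'interval_step': 0.2,   # add 200 ms for each subsequent retry
    'interval_max': 0.5,    # never wait more than 500 ms between retries
}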

Task execution settings

task_always_eager

Default: Disabled.

If this is True, all tasks will be executed locally by blocking until the task returns. apply_async() and Task.delay() will return an EagerResult instance, that emulates the API and behavior of AsyncResult, except the result is already evaluated.

That is, tasks will be executed locally instead of being sent to the queue.

task_eager_propagates

Default: Disabled.

If this is True, eagerly executed tasks (applied by task.apply(), or when the task_always_eager setting is enabled), will propagate exceptions.

It’s the same as always running apply() with throw=True.
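
A minimal sketch of using both settings in a test, assuming an in-memory broker; the app name and the add task are made up for illustration:

from celery import Celery

app = Celery('tests', broker='memory://', backend='cache+memory://')
app.conf.task_always_eager = True
app.conf.task_eager_propagates = True

@app.task
def add(x, y):
    return x + y

result = add.delay(2, 2)   # runs inline and returns an EagerResult
assert result.get() == 4   # exceptions, if any, would have propagated here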

task_store_eager_result

New in version 5.1.

Default: Disabled.

If this is True and task_always_eager is True and task_ignore_result is False, the results of eagerly executed tasks will be saved to the backend.

By default, even with task_always_eager set to True and task_ignore_result set to False, the result will not be saved.

task_remote_tracebacks

Default: Disabled.

If enabled, task results will include the worker’s stack trace when re-raising task errors.

This requires the tblib library, that can be installed using pip:

$ pip install celery[tblib]

See Bundles for information on combining multiple extension requirements.

task_ignore_result

Default: Disabled.

Whether to store the task return values or not (tombstones). If you still want to store errors, just not successful return values, you can set task_store_errors_even_if_ignored.

task_store_errors_even_if_ignored

Default: Disabled.

If set, the worker stores all task errors in the result store even if Task.ignore_result is on.

task_track_started

Default: Disabled.

If True the task will report its status as ‘started’ when the task is executed by a worker. The default value is False as the normal behavior is to not report that level of granularity. Tasks are either pending, finished, or waiting to be retried. Having a ‘started’ state can be useful for when there are long running tasks and there’s a need to report what task is currently running.

task_time_limit

Default: No time limit.

Task hard time limit in seconds. The worker processing the task will be killed and replaced with a new one when this is exceeded.

task_soft_time_limit

Default: No soft time limit.

Task soft time limit in seconds.

The SoftTimeLimitExceeded exception will be raised when this is exceeded. For example, the task can catch this to clean up before the hard time limit comes:

from celery.exceptions import SoftTimeLimitExceeded

@app.task
def mytask():
    try:
        return do_work()
    except SoftTimeLimitExceeded:
        cleanup_in_a_hurry()

task_acks_late

Default: Disabled.

Late ack means the task messages will be acknowledged after the task has been executed, not just before (the default behavior).

task_acks_on_failure_or_timeout

Default: Enabled

When enabled messages for all tasks will be acknowledged even if they fail or time out.

Configuring this setting only applies to tasks that are acknowledged after they have been executed and only if task_acks_late is enabled.

task_reject_on_worker_lost

Default: Disabled.

Even if task_acks_late is enabled, the worker will acknowledge tasks when the worker process executing them abruptly exits or is signaled (e.g., KILL/INT, etc).

Setting this to true allows the message to be re-queued instead, so that the task will execute again by the same worker, or another worker.

Warning

Enabling this can cause message loops; make sure you know what you’re doing.

task_default_rate_limit

Default: No rate limit.

The global default rate limit for tasks.

This value is used for tasks that don’t have a custom rate limit.

See also

The worker_disable_rate_limits setting can disable all rate limits.
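
For example, a hedged sketch that caps every task without its own rate_limit at 100 executions per minute, per worker:

task_default_rate_limit = '100/m'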

Task result backend settings

result_backend

Default: No result backend enabled by default.

The backend used to store task results (tombstones). Can be one of the backends described in the sections below (for example the database, RPC, cache, MongoDB, Redis, Cassandra, S3, Azure Block Blob, Elasticsearch, DynamoDB, IronCache, Couchbase, ArangoDB, CosmosDB, CouchDB, file-system, or Consul backends).

result_backend_always_retry

Default: False

If enabled, the backend will try to retry in the event of recoverable exceptions instead of propagating the exception. It will use an exponential backoff sleep time between retries.

result_backend_max_sleep_between_retries_ms

Default: 10000

This specifies the maximum sleep time between two backend operation retries.

result_backend_base_sleep_between_retries_ms

Default: 10

This specifies the base amount of sleep time between two backend operation retries.

result_backend_max_retries

Default: Inf

This is the maximum number of retries in case of recoverable exceptions.
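
Putting the retry settings above together, a hedged sketch that retries recoverable backend errors with exponential backoff:

result_backend_always_retry = True
result_backend_base_sleep_between_retries_ms = 10       # first backoff step
result_backend_max_sleep_between_retries_ms = 10000     # cap at 10 seconds
result_backend_max_retries = 10                         # then give up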

result_backend_transport_options

Default: {} (empty mapping).

A dict of additional options passed to the underlying transport.

See your transport user manual for supported options (if any).

Example setting the visibility timeout (supported by Redis and SQS transports):

result_backend_transport_options = {'visibility_timeout': 18000}  # 5 hours

result_serializer

Default: json since 4.0 (earlier: pickle).

Result serialization format.

See Serializers for information about supported serialization formats.

result_compression

Default: No compression.

Optional compression method used for task results. Supports the same options as the task_compression setting.

result_extended

Default: False

Enables extended task result attributes (name, args, kwargs, worker, retries, queue, delivery_info) to be written to backend.

result_expires

Default: Expire after 1 day.

Time (in seconds, or a timedelta object) after which stored task tombstones will be deleted.

A built-in periodic task will delete the results after this time (celery.backend_cleanup), assuming that celery beat is enabled. The task runs daily at 4am.

A value of None or 0 means results will never expire (depending on backend specifications).
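
For example, a hedged sketch that keeps results for four hours instead of the default one day:

from datetime import timedelta

result_expires = timedelta(hours=4)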

Note

For the moment this only works with the AMQP, database, cache, Couchbase, and Redis backends.

When using the database backend, celery beat must be running for the results to be expired.

result_cache_max

Default: Disabled by default.

Enables client caching of results.

This can be useful for the old deprecated ‘amqp’ backend where the result is unavailable as soon as one result instance consumes it.

This is the total number of results to cache before older results are evicted. A value of 0 or None means no limit, and a value of -1 will disable the cache.

result_chord_join_timeout

Default: 3.0.

The timeout in seconds (int/float) when joining a group’s results within a chord.

result_chord_retry_interval

Default: 1.0.

Default interval for retrying chord tasks.

override_backends

Default: Disabled by default.

Path to the class that implements the backend.

Allows you to override the backend implementation. This can be useful if you need to store additional metadata about executed tasks, override retry policies, etc.

Example:

override_backends = {"db": "custom_module.backend.class"}

Database backend settings

Database URL Examples

To use the database backend you have to configure the result_backend setting with a connection URL and the db+ prefix:

result_backend = 'db+scheme://user:password@host:port/dbname'

Examples:

# sqlite (filename)
result_backend = 'db+sqlite:///results.sqlite'

# mysql
result_backend = 'db+mysql://scott:tiger@localhost/foo'

# postgresql
result_backend = 'db+postgresql://scott:tiger@localhost/mydatabase'

# oracle
result_backend = 'db+oracle://scott:tiger@127.0.0.1:1521/sidname'

Please see Supported Databases for a table of supported databases, and Connection String for more information about connection strings (this is the part of the URI that comes after the db+ prefix).

database_engine_options

Default: {} (empty mapping).

To specify additional SQLAlchemy database engine options you can use the database_engine_options setting:

# echo enables verbose logging from SQLAlchemy.
app.conf.database_engine_options = {'echo': True}

database_short_lived_sessions

Default: Disabled by default.

Short lived sessions are disabled by default. If enabled they can drastically reduce performance, especially on systems processing lots of tasks. This option is useful on low-traffic workers that experience errors as a result of cached database connections going stale through inactivity. For example, intermittent errors like (OperationalError) (2006, ‘MySQL server has gone away’) can be fixed by enabling short lived sessions. This option only affects the database backend.

database_table_schemas

Default: {} (empty mapping).

When SQLAlchemy is configured as the result backend, Celery automatically creates two tables to store result meta-data for tasks. This setting allows you to customize the schema of the tables:

# use custom schema for the database result backend.
database_table_schemas = {
    'task': 'celery',
    'group': 'celery',
}

database_table_names

Default: {} (empty mapping).

When SQLAlchemy is configured as the result backend, Celery automatically creates two tables to store result meta-data for tasks. This setting allows you to customize the table names:

# use custom table names for the database result backend.
database_table_names = {
    'task': 'myapp_taskmeta',
    'group': 'myapp_groupmeta',
}

RPC backend settings

result_persistent

Default: Disabled by default (transient messages).

If set to True, result messages will be persistent. This means the messages won’t be lost after a broker restart.

Example configuration

result_backend = 'rpc://'
result_persistent = False

Please note that using this backend could raise celery.backends.rpc.BacklogLimitExceeded if the task tombstone is too old.

E.g.

for i in range(10000):
    r = debug_task.delay()

print(r.state)  # this would raise celery.backends.rpc.BacklogLimitExceeded

Cache backend settings

Note

The cache backend supports the pylibmc and python-memcached libraries. The latter is used only if pylibmc isn’t installed.

Using a single Memcached server:

result_backend = 'cache+memcached://127.0.0.1:11211/'

Using multiple Memcached servers:

result_backend = """
    cache+memcached://172.19.26.240:11211;172.19.26.242:11211/
""".strip()

The “memory” backend stores the cache in memory only:

result_backend = 'cache'
cache_backend = 'memory'

cache_backend_options

Default: {} (empty mapping).

You can set pylibmc options using the cache_backend_options setting:

cache_backend_options = {
    'binary': True,
    'behaviors': {'tcp_nodelay': True},
}

cache_backend

This setting is no longer used in celery’s builtin backends as it’s now possible to specify the cache backend directly in the result_backend setting.

Note

The django-celery-results library (Using the Django ORM/Cache as a result backend) uses cache_backend for choosing Django caches.

MongoDB backend settings

Note

The MongoDB backend requires the pymongo library: http://github.com/mongodb/mongo-python-driver/tree/master

mongodb_backend_settings

This is a dict supporting the following keys:

  • database

    The database name to connect to. Defaults to celery.

  • taskmeta_collection

    The collection name to store task meta data. Defaults to celery_taskmeta.

  • max_pool_size

    Passed as max_pool_size to PyMongo’s Connection or MongoClient constructor. It is the maximum number of TCP connections to keep open to MongoDB at a given time. If there are more open connections than max_pool_size, sockets will be closed when they are released. Defaults to 10.

  • options

    Additional keyword arguments to pass to the mongodb connection constructor. See the pymongo docs to see a list of arguments supported.

Example configuration

result_backend = 'mongodb://localhost:27017/'
mongodb_backend_settings = {
    'database': 'mydb',
    'taskmeta_collection': 'my_taskmeta_collection',
}

Redis backend settings

Configuring the backend URL

Note

The Redis backend requires the redis library.

To install this package use pip:

$ pip install celery[redis]

See Bundles for information on combining multiple extension requirements.

This backend requires the result_backend setting to be set to a Redis or Redis over TLS URL:

result_backend = 'redis://username:password@host:port/db'

For example:

result_backend = 'redis://localhost/0'

is the same as:

result_backend = 'redis://'

Use the rediss:// protocol to connect to redis over TLS:

result_backend = 'rediss://username:password@host:port/db?ssl_cert_reqs=required'

Note that the ssl_cert_reqs string should be one of required, optional, or none (though, for backwards compatibility, the string may also be one of CERT_REQUIRED, CERT_OPTIONAL, CERT_NONE).

If a Unix socket connection should be used, the URL needs to be in the format:

result_backend = 'socket:///path/to/redis.sock'

The fields of the URL are defined as follows:

  1. username

    New in version 5.1.0.

    Username used to connect to the database.

    Note that this is only supported in Redis>=6.0 and with py-redis>=3.4.0 installed.

    If you use an older database version or an older client version you can omit the username:

    result_backend = 'redis://:password@host:port/db'
    
  2. password

    Password used to connect to the database.

  3. host

    Host name or IP address of the Redis server (e.g., localhost).

  4. port

    Port to the Redis server. Default is 6379.

  5. db

    Database number to use. Default is 0. The db can include an optional leading slash.

When using a TLS connection (protocol is rediss://), you may pass in all values in broker_use_ssl as query parameters. Paths to certificates must be URL encoded, and ssl_cert_reqs is required. Example:

result_backend = 'rediss://:password@host:port/db?\
    ssl_cert_reqs=required\
    &ssl_ca_certs=%2Fvar%2Fssl%2Fmyca.pem\                  # /var/ssl/myca.pem
    &ssl_certfile=%2Fvar%2Fssl%2Fredis-server-cert.pem\     # /var/ssl/redis-server-cert.pem
    &ssl_keyfile=%2Fvar%2Fssl%2Fprivate%2Fworker-key.pem'   # /var/ssl/private/worker-key.pem

Note that the ssl_cert_reqs string should be one of required, optional, or none (though, for backwards compatibility, the string may also be one of CERT_REQUIRED, CERT_OPTIONAL, CERT_NONE).

New in version 5.1.0.

redis_backend_health_check_interval

Default: Not configured

The Redis backend supports health checks. This value must be set as an integer whose value is the number of seconds between health checks. If a ConnectionError or a TimeoutError is encountered during the health check, the connection will be re-established and the command retried exactly once.

redis_backend_use_ssl

Default: Disabled.

The Redis backend supports SSL. This value must be set in the form of a dictionary. The valid key-value pairs are the same as the ones mentioned in the redis sub-section under broker_use_ssl.
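
A hedged sketch mirroring the redis keys listed under broker_use_ssl; the certificate paths are placeholders:

import ssl

redis_backend_use_ssl = {
    'ssl_cert_reqs': ssl.CERT_REQUIRED,
    'ssl_ca_certs': '/var/ssl/myca.pem',
    'ssl_certfile': '/var/ssl/redis-client-cert.pem',
    'ssl_keyfile': '/var/ssl/private/redis-client-key.pem',
}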

redis_max_connections

Default: No limit.

Maximum number of connections available in the Redis connection pool used for sending and retrieving results.

Warning

Redis will raise a ConnectionError if the number of concurrent connections exceeds the maximum.

redis_socket_connect_timeout

New in version 4.0.1.

Default: None

Socket timeout for connections to Redis from the result backend in seconds (int/float).

redis_socket_timeout

Default: 120.0 seconds.

Socket timeout for reading/writing operations to the Redis server in seconds (int/float), used by the redis result backend.

redis_retry_on_timeout

New in version 4.4.1.

Default: False

Whether to retry reading/writing operations on TimeoutError to the Redis server, used by the redis result backend. You shouldn’t set this variable when connecting to Redis via a Unix socket.

redis_socket_keepalive

New in version 4.4.1.

Default: False

Socket TCP keepalive to keep connections healthy to the Redis server, used by the redis result backend.

Cassandra backend settings

Note

This Cassandra backend driver requires cassandra-driver.

To install, use pip:

$ pip install celery[cassandra]

See Bundles for information on combining multiple extension requirements.

This backend requires the following configuration directives to be set.

cassandra_servers

Default: [] (empty list).

List of host Cassandra servers. For example:

cassandra_servers = ['localhost']

cassandra_port

Default: 9042.

Port to contact the Cassandra servers on.

cassandra_keyspace

Default: None.

The key-space in which to store the results. For example:

cassandra_keyspace = 'tasks_keyspace'

cassandra_table

Default: None.

The table (column family) in which to store the results. For example:

cassandra_table = 'tasks'

cassandra_read_consistency

Default: None.

The read consistency used. Values can be ONE, TWO, THREE, QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM, LOCAL_ONE.

cassandra_write_consistency

Default: None.

The write consistency used. Values can be ONE, TWO, THREE, QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM, LOCAL_ONE.

cassandra_entry_ttl

Default: None.

Time-to-live for status entries. They will expire and be removed after that many seconds after adding. A value of None (default) means they will never expire.

cassandra_auth_provider

Default: None.

AuthProvider class within cassandra.auth module to use. Values can be PlainTextAuthProvider or SaslAuthProvider.

cassandra_auth_kwargs

Default: {} (empty mapping).

Named arguments to pass into the authentication provider. For example:

cassandra_auth_kwargs = {
    'username': 'cassandra',
    'password': 'cassandra'
}

cassandra_options

Default: {} (empty mapping).

Named arguments to pass into the cassandra.cluster.Cluster class.

cassandra_options = {
    'cql_version': '3.2.1',
    'protocol_version': 3
}

Example configuration

cassandra_servers = ['localhost']
cassandra_keyspace = 'celery'
cassandra_table = 'tasks'
cassandra_read_consistency = 'ONE'
cassandra_write_consistency = 'ONE'
cassandra_entry_ttl = 86400

S3 backend settings

Note

This S3 backend requires the boto3 library.

To install, use pip:

$ pip install celery[s3]

See Bundles for information on combining multiple extension requirements.

This backend requires the following configuration directives to be set.

s3_access_key_id

Default: None.

The s3 access key id. For example:

s3_access_key_id = 'access_key_id'

s3_secret_access_key

Default: None.

The s3 secret access key. For example:

s3_secret_access_key = 'secret_access_key'

s3_bucket

Default: None.

The s3 bucket name. For example:

s3_bucket = 'bucket_name'

s3_base_path

Default: None.

A base path in the s3 bucket to use to store result keys. For example:

s3_base_path = '/prefix'

s3_endpoint_url

Default: None.

A custom s3 endpoint url. Use it to connect to a custom self-hosted s3 compatible backend (Ceph, Scality…). For example:

s3_endpoint_url = 'https://.s3.custom.url'

s3_region

Default: None.

The s3 aws region. For example:

s3_region = 'us-east-1'

Example configuration

s3_access_key_id = 's3-access-key-id'
s3_secret_access_key = 's3-secret-access-key'
s3_bucket = 'mybucket'
s3_base_path = '/celery_result_backend'
s3_endpoint_url = 'https://endpoint_url'

Azure Block Blob backend settings

To use AzureBlockBlob as the result backend you simply need to configure the result_backend setting with the correct URL.

The required URL format is azureblockblob:// followed by the storage connection string. You can find the storage connection string in the Access Keys pane of your storage account resource in the Azure Portal.

Example configuration

result_backend = 'azureblockblob://DefaultEndpointsProtocol=https;AccountName=somename;AccountKey=Lou...bzg==;EndpointSuffix=core.windows.net'

azureblockblob_container_name

Default: celery.

The name for the storage container in which to store the results.

azureblockblob_base_path

New in version 5.1.

Default: None.

A base path in the storage container to use to store result keys. For example:

azureblockblob_base_path = 'prefix/'

azureblockblob_retry_initial_backoff_sec

Default: 2.

The initial backoff interval, in seconds, for the first retry. Subsequent retries are attempted with an exponential strategy.

azureblockblob_retry_increment_base

Default: 2.

azureblockblob_retry_max_attempts

Default: 3.

The maximum number of retry attempts.

azureblockblob_connection_timeout

Default: 20.

Timeout in seconds for establishing the azure block blob connection.

azureblockblob_read_timeout

Default: 120.

Timeout in seconds for reading of an azure block blob.

Elasticsearch backend settings

To use Elasticsearch as the result backend you simply need to configure the result_backend setting with the correct URL.

Example configuration

result_backend = 'elasticsearch://example.com:9200/index_name/doc_type'

elasticsearch_retry_on_timeout

Default: False

Whether a timeout should trigger a retry on a different node.

elasticsearch_max_retries

Default: 3.

Maximum number of retries before an exception is propagated.

elasticsearch_timeout

Default: 10.0 seconds.

Global timeout, used by the Elasticsearch result backend.

elasticsearch_save_meta_as_text

Default: True

Whether the task meta data should be saved as text or as native JSON. The result is always serialized as text.
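
Putting the Elasticsearch settings above together, a hedged example (the URL is the placeholder used earlier):

result_backend = 'elasticsearch://example.com:9200/index_name/doc_type'
elasticsearch_retry_on_timeout = True
elasticsearch_max_retries = 5
elasticsearch_timeout = 30.0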

AWS DynamoDB backend settings

Note

The Dynamodb backend requires the boto3 library.

To install this package use pip:

$ pip install celery[dynamodb]

See Bundles for information on combining multiple extension requirements.

Warning

The Dynamodb backend is not compatible with tables that have a sort key defined.

If you want to query the results table based on something other than the partition key, please define a global secondary index (GSI) instead.

This backend requires the result_backend setting to be set to a DynamoDB URL:

result_backend = 'dynamodb://aws_access_key_id:aws_secret_access_key@region:port/table?read=n&write=m'

For example, specifying the AWS region and the table name:

result_backend = 'dynamodb://@us-east-1/celery_results'

or retrieving AWS configuration parameters from the environment, using the default table name (celery) and specifying read and write provisioned throughput:

result_backend = 'dynamodb://@/?read=5&write=5'

or using the downloadable version of DynamoDB locally:

result_backend = 'dynamodb://@localhost:8000'

or using the downloadable version, or another service with a conforming API, deployed on any host:

result_backend = 'dynamodb://@us-east-1'
dynamodb_endpoint_url = 'http://192.168.0.40:8000'

The fields of the DynamoDB URL in result_backend are defined as follows:

  1. aws_access_key_id & aws_secret_access_key

    The credentials for accessing AWS API resources. These can also be resolved by the boto3 library from various sources, as described here.

  2. region

    The AWS region, e.g. us-east-1 or localhost for the Downloadable Version. See the boto3 library documentation for definition options.

  3. port

    The listening port of the local DynamoDB instance, if you are using the downloadable version. If you have not specified the region parameter as localhost, setting this parameter has no effect.

  4. table

    Table name to use. Default is celery. See the DynamoDB Naming Rules for information on the allowed characters and length.

  5. read & write

    The Read & Write Capacity Units for the created DynamoDB table. Default is 1 for both read and write. More details can be found in the Provisioned Throughput documentation.

  6. ttl_seconds

    Time-to-live (in seconds) for results before they expire. The default is to not expire results, while also leaving the DynamoDB table’s Time to Live settings untouched. If ttl_seconds is set to a positive value, results will expire after the specified number of seconds. Setting ttl_seconds to a negative value means to not expire results, and also to actively disable the DynamoDB table’s Time to Live setting. Note that trying to change a table’s Time to Live setting multiple times in quick succession will cause a throttling error. More details can be found in the DynamoDB TTL documentation
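
Putting the URL fields above together, a hedged example: a table named celery_results in us-east-1, provisioned with 5 read and 5 write capacity units, and results expiring after one day:

result_backend = (
    'dynamodb://@us-east-1/celery_results?read=5&write=5&ttl_seconds=86400'
)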

IronCache backend settings

Note

The IronCache backend requires the iron_celery library:

To install this package use pip:

$ pip install iron_celery

IronCache is configured via the URL provided in result_backend, for example:

result_backend = 'ironcache://project_id:token@'

Or to change the cache name:

ironcache://project_id:token@/awesomecache

For more information, see: https://github.com/iron-io/iron_celery

Couchbase backend settings

Note

The Couchbase backend requires the couchbase library.

To install this package use pip:

$ pip install celery[couchbase]

See Bundles for information on combining multiple extension requirements.

This backend can be configured by setting result_backend to a Couchbase URL:

result_backend = 'couchbase://username:password@host:port/bucket'

couchbase_backend_settings

Default: {} (empty mapping).

This is a dict supporting the following keys:

  • host

    Host name of the Couchbase server. Defaults to localhost.

  • port

    The port the Couchbase server is listening to. Defaults to 8091.

  • bucket

    The default bucket the Couchbase server is writing to. Defaults to default.

  • username

    User name to authenticate to the Couchbase server as (optional).

  • password

    Password to authenticate to the Couchbase server (optional).
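
A hedged example using the keys above; the values are placeholders for your own Couchbase deployment and supplement the couchbase:// URL configured in result_backend:

couchbase_backend_settings = {
    'host': 'localhost',
    'port': 8091,
    'bucket': 'celery_results',
    'username': 'celery',
    'password': 'secret',
}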

ArangoDB backend settings

Note

The ArangoDB backend requires the pyArango library.

To install this package use pip:

$ pip install celery[arangodb]

See Bundles for information on combining multiple extension requirements.

This backend can be configured by setting result_backend to an ArangoDB URL:

result_backend = 'arangodb://username:password@host:port/database/collection'

arangodb_backend_settings

Default: {} (empty mapping).

This is a dict supporting the following keys:

  • host

    Host name of the ArangoDB server. Defaults to localhost.

  • port

    The port the ArangoDB server is listening to. Defaults to 8529.

  • database

    The default database in the ArangoDB server to write to. Defaults to celery.

  • collection

    The default collection in the ArangoDB server’s database to write to. Defaults to celery.

  • username

    User name to authenticate to the ArangoDB server as (optional).

  • password

    Password to authenticate to the ArangoDB server (optional).

  • http_protocol

    HTTP Protocol in ArangoDB server connection. Defaults to http.

  • verify

    HTTPS Verification check while creating the ArangoDB connection. Defaults to False.

CosmosDB backend settings (experimental)

To use CosmosDB as the result backend, you simply need to configure the result_backend setting with the correct URL.

Example configuration

result_backend = 'cosmosdbsql://:{InsertAccountPrimaryKeyHere}@{InsertAccountNameHere}.documents.azure.com'

cosmosdbsql_database_name

Default: celerydb.

The name for the database in which to store the results.

cosmosdbsql_collection_name

Default: celerycol.

The name of the collection in which to store the results.

cosmosdbsql_consistency_level

Default: Session.

Represents the consistency levels supported for Azure Cosmos DB client operations.

Consistency levels by order of strength are: Strong, BoundedStaleness, Session, ConsistentPrefix and Eventual.

cosmosdbsql_max_retry_attempts

Default: 9.

Maximum number of retries to be performed for a request.

cosmosdbsql_max_retry_wait_time

Default: 30.

Maximum wait time in seconds to wait for a request while the retries are happening.

CouchDB backend settings

Note

The CouchDB backend requires the pycouchdb library:

To install this package use pip:

$ pip install celery[couchdb]

See Bundles for information on combining multiple extension requirements.

This backend can be configured by setting result_backend to a CouchDB URL:

result_backend = 'couchdb://username:password@host:port/container'

The URL is formed out of the following parts:

  • username

    User name to authenticate to the CouchDB server as (optional).

  • password

    Password to authenticate to the CouchDB server (optional).

  • host

    Host name of the CouchDB server. Defaults to localhost.

  • port

    The port the CouchDB server is listening to. Defaults to 5984.

  • container

    The default container the CouchDB server is writing to. Defaults to default.

File-system backend settings

This backend can be configured using a file URL, for example:

CELERY_RESULT_BACKEND = 'file:///var/celery/results'

The configured directory needs to be shared and writable by all servers using the backend.

If you’re trying Celery on a single system you can simply use the backend without any further configuration. For larger clusters you could use NFS, GlusterFS, CIFS, HDFS (using FUSE), or any other file-system.

Consul K/V store backend settings

Note

The Consul backend requires the python-consul2 library:

To install this package use pip:

$ pip install python-consul2

The Consul backend can be configured using a URL, for example:

CELERY_RESULT_BACKEND = 'consul://localhost:8500/'

or:

result_backend = 'consul://localhost:8500/'

The backend will store results in the K/V store of Consul as individual keys. The backend supports auto expire of results using TTLs in Consul. The full syntax of the URL is:

consul://host:port[?one_client=1]

The URL is formed out of the following parts:

  • host

    Host name of the Consul server.

  • port

    The port the Consul server is listening to.

  • one_client

    By default, for correctness, the backend uses a separate client connection per operation. In cases of extreme load, the rate of creation of new connections can cause HTTP 429 “too many connections” error responses from the Consul server when under load. The recommended way to handle this is to enable retries in python-consul2 using the patch at https://github.com/poppyred/python-consul2/pull/31.

    Alternatively, if one_client is set, a single client connection will be used for all operations instead. This should eliminate the HTTP 429 errors, but the storage of results in the backend can become unreliable.
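
For example, a hedged sketch that reuses a single client connection for all operations (see the trade-off above):

result_backend = 'consul://localhost:8500/?one_client=1'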

Message Routing

task_queues

Default: None (queue taken from default queue settings).

Most users will not want to specify this setting and should rather use the automatic routing facilities.

If you really want to configure advanced routing, this setting should be a list of kombu.Queue objects the worker will consume from.

Note that workers can override this setting via the -Q option, and individual queues from this list (by name) can be excluded using the -X option.

Also see Basics for more information.

The default is a queue/exchange/binding key of celery, with exchange type direct.
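
A minimal sketch of explicit queue definitions; the queue and exchange names are placeholders:

from kombu import Exchange, Queue

task_queues = [
    Queue('default', Exchange('default', type='direct'),
          routing_key='default'),
    Queue('cpubound', Exchange('cpubound', type='direct'),
          routing_key='cpubound'),
]
task_default_queue = 'default'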

See also task_routes

task_routes

Default: None.

A list of routers, or a single router used to route tasks to queues. When deciding the final destination of a task the routers are consulted in order.

A router can be specified as either:

  • A function with the signature (name, args, kwargs, options, task=None, **kwargs)

  • A string providing the path to a router function.

  • A dict containing router specification:

    Will be converted to a celery.routes.MapRoute instance.

  • A list of (pattern, route) tuples:

    Will be converted to a celery.routes.MapRoute instance.

Examples:

task_routes = {
    'celery.ping': 'default',
    'mytasks.add': 'cpu-bound',
    'feed.tasks.*': 'feeds',                           # <-- glob pattern
    re.compile(r'(image|video)\.tasks\..*'): 'media',  # <-- regex
    'video.encode': {
        'queue': 'video',
        'exchange': 'media',
        'routing_key': 'media.video.encode',
    },
}

task_routes = ('myapp.tasks.route_task', {'celery.ping': 'default'})

Where myapp.tasks.route_task could be:

def route_task(name, args, kwargs, options, task=None, **kw):
    if name == 'celery.ping':
        return {'queue': 'default'}

route_task may return a string or a dict. A string then means it’s a queue name in task_queues, a dict means it’s a custom route.

When sending tasks, the routers are consulted in order. The first router that doesn’t return None is the route to use. The message options are then merged with the found route settings, where the task’s settings have priority.

For example, if apply_async() has these arguments:

Task.apply_async(immediate=False, exchange='video',
                 routing_key='video.compress')

and a router returns:

{'immediate': True, 'exchange': 'urgent'}

the final message options will be:

immediate=False, exchange='video', routing_key='video.compress'

(and any default message options defined in the Task class)

Values defined in task_routes have precedence over values defined in task_queues when merging the two.

With the following settings:

task_queues = {
    'cpubound': {
        'exchange': 'cpubound',
        'routing_key': 'cpubound',
    },
}

task_routes = {
    'tasks.add': {
        'queue': 'cpubound',
        'routing_key': 'tasks.add',
        'serializer': 'json',
    },
}

The final routing options for tasks.add will become:

{'exchange': 'cpubound',
 'routing_key': 'tasks.add',
 'serializer': 'json'}

See Routers for more examples.

task_queue_max_priority

brokers:

RabbitMQ

Default: None.

See RabbitMQ Message Priorities.

task_default_priority

brokers:

RabbitMQ, Redis

Default: None.

See RabbitMQ Message Priorities.

task_inherit_parent_priority

brokers:

RabbitMQ

Default: False.

If enabled, child tasks will inherit priority of the parent task.

# The last task in chain will also have priority set to 5.
chain = celery.chain(add.s(2) | add.s(2).set(priority=5) | add.s(3))

Priority inheritance also works when calling child tasks from a parent task with delay or apply_async.

See RabbitMQ Message Priorities.

worker_direct

Default: Disabled.

This option enables a dedicated queue for every worker, so that tasks can be routed to specific workers.

The queue name for each worker is automatically generated based on the worker hostname and a .dq suffix, using the C.dq exchange.

For example the queue name for the worker with node name w1@example.com becomes:

w1@example.com.dq

Then you can route a task to a specific worker by using the worker’s hostname as the routing key and the C.dq exchange:

task_routes = {
    'tasks.add': {'exchange': 'C.dq', 'routing_key': 'w1@example.com'}
}

task_create_missing_queues

Default: Enabled.

If enabled (default), any queues specified that aren’t defined in task_queues will be automatically created. See Automatic routing.

task_default_queue

Default: "celery".

The name of the default queue used by .apply_async if the message has no route or no custom queue has been specified.

This queue must be listed in task_queues. If task_queues isn’t specified then it’s automatically created containing one queue entry, where this name is used as the name of that queue.

task_default_exchange

Default: Uses the value set for task_default_queue.

Name of the default exchange to use when no custom exchange is specified for a key in the task_queues setting.

task_default_exchange_type

Default: "direct".

Default exchange type used when no custom exchange type is specified for a key in the task_queues setting.

task_default_routing_key

Default: Uses the value set for task_default_queue.

The default routing key used when no custom routing key is specified for a key in the task_queues setting.

task_default_delivery_mode

Default: "persistent".

Can be transient (messages not written to disk) or persistent (written to disk).

Broker Settings

broker_url

Default: "amqp://"

Default broker URL. This must be a URL in the form of:

transport://userid:password@hostname:port/virtual_host

Only the scheme part (transport://) is required, the rest is optional, and defaults to the specific transport’s default values.

The transport part is the broker implementation to use, and the default is amqp (which uses librabbitmq if installed, or falls back to pyamqp). There are also other choices available, including: redis://, sqs://, and qpid://.

The scheme can also be a fully qualified path to your own transport implementation:

broker_url = 'proj.transports.MyTransport://localhost'

More than one broker URL, of the same transport, can also be specified. The broker URLs can be passed in as a single string that’s semicolon delimited:

broker_url = 'transport://userid:password@hostname:port//;transport://userid:password@hostname:port//'

Or as a list:

broker_url = [
    'transport://userid:password@localhost:port//',
    'transport://userid:password@hostname:port//'
]

The brokers will then be used in the broker_failover_strategy.

See kombu:connection-urls in the Kombu documentation for more information.

broker_read_url / broker_write_url

Default: Taken from broker_url.

These settings can be configured, instead of broker_url to specify different connection parameters for broker connections used for consuming and producing.

Example:

broker_read_url = 'amqp://user:pass@broker.example.com:56721'
broker_write_url = 'amqp://user:pass@broker.example.com:56722'

Both options can also be specified as a list for failover alternates, see broker_url for more information.

broker_failover_strategy

Default: "round-robin".

Default failover strategy for the broker Connection object. If supplied, may map to a key in ‘kombu.connection.failover_strategies’, or be a reference to any method that yields a single item from a supplied list.

Example:

# Random failover strategy
def random_failover_strategy(servers):
    it = list(servers)  # don't modify callers list
    shuffle = random.shuffle
    for _ in repeat(None):
        shuffle(it)
        yield it[0]

broker_failover_strategy = random_failover_strategy

broker_heartbeat

transports supported:

pyamqp

Default: 120.0 (negotiated by server).

Note: This value is only used by the worker, clients do not use a heartbeat at the moment.

It’s not always possible to detect connection loss in a timely manner using TCP/IP alone, so AMQP defines heartbeats, which are used both by the client and the broker to detect whether a connection was closed.

If the heartbeat value is 10 seconds, then the heartbeat will be monitored at the interval specified by the broker_heartbeat_checkrate setting (by default this is set to double the rate of the heartbeat value, so for the 10 seconds, the heartbeat is checked every 5 seconds).

broker_heartbeat_checkrate

transports supported:

pyamqp

Default: 2.0.

At intervals the worker will monitor that the broker hasn’t missed too many heartbeats. The rate at which this is checked is calculated by dividing the broker_heartbeat value with this value, so if the heartbeat is 10.0 and the rate is the default 2.0, the check will be performed every 5 seconds (twice the heartbeat sending rate).
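
For example, a hedged sketch that sends a heartbeat every 10 seconds and checks it at the default rate (every 5 seconds):

broker_heartbeat = 10.0
broker_heartbeat_checkrate = 2.0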

broker_use_ssl

transports supported:

pyamqp, redis

Default: Disabled.

Toggles SSL usage on broker connection and SSL settings.

The valid values for this option vary by transport.

pyamqp

If True the connection will use SSL with default SSL settings. If set to a dict, will configure SSL connection according to the specified policy. The format used is Python’s ssl.wrap_socket() options.

Note that SSL socket is generally served on a separate port by the broker.

Example providing a client cert and validating the server cert against a custom certificate authority:

import ssl

broker_use_ssl = {
  'keyfile': '/var/ssl/private/worker-key.pem',
  'certfile': '/var/ssl/amqp-server-cert.pem',
  'ca_certs': '/var/ssl/myca.pem',
  'cert_reqs': ssl.CERT_REQUIRED
}

Warning

Be careful using broker_use_ssl=True. It’s possible that your default configuration won’t validate the server cert at all. Please read Python ssl module security considerations.

redis

The setting must be a dict with the following keys:

  • ssl_cert_reqs (required): one of the SSLContext.verify_mode values:
    • ssl.CERT_NONE

    • ssl.CERT_OPTIONAL

    • ssl.CERT_REQUIRED

  • ssl_ca_certs (optional): path to the CA certificate

  • ssl_certfile (optional): path to the client certificate

  • ssl_keyfile (optional): path to the client key

broker_pool_limit

New in version 2.3.

Default: 10.

The maximum number of connections that can be open in the connection pool.

The pool is enabled by default since version 2.5, with a default limit of ten connections. This number can be tweaked depending on the number of threads/green-threads (eventlet/gevent) using a connection. For example, when running eventlet with 1000 greenlets that use a connection to the broker, contention can arise and you should consider increasing the limit.

If set to None or 0 the connection pool will be disabled and connections will be established and closed for every use.

broker_connection_timeout

Default: 4.0.

The default timeout in seconds before we give up establishing a connection to the AMQP server. This setting is disabled when using gevent.

Note

The broker connection timeout only applies to a worker attempting to connect to the broker. It does not apply to a producer sending a task; see broker_transport_options for how to provide a timeout for that situation.

broker_connection_retry

Default: Enabled.

Automatically try to re-establish the connection to the AMQP broker if lost.

The time between retries is increased for each retry, and is not exhausted before broker_connection_max_retries is exceeded.

broker_connection_max_retries

Default: 100.

Maximum number of retries before we give up re-establishing a connection to the AMQP broker.

If this is set to 0 or None, we’ll retry forever.

broker_login_method

Default: "AMQPLAIN".

Set custom amqp login method.

broker_transport_options

New in version 2.2.

Default: {} (empty mapping).

A dict of additional options passed to the underlying transport.

See your transport user manual for supported options (if any).

Example setting the visibility timeout (supported by Redis and SQS transports):

broker_transport_options = {'visibility_timeout': 18000}  # 5 hours

Example setting the producer connection maximum number of retries (so producers won’t retry forever if the broker isn’t available at the first task execution):

broker_transport_options = {'max_retries': 5}

Worker

imports

Default: [] (empty list).

A sequence of modules to import when the worker starts.

This is used to specify the task modules to import, but also to import signal handlers and additional remote control commands, etc.

The modules will be imported in the original order.

include

Default: [] (empty list).

Exact same semantics as imports, but can be used as a means to have different import categories.

The modules in this setting are imported after the modules in imports.

worker_deduplicate_successful_tasks

New in version 5.1.

Default: False

Before each task execution, instruct the worker to check if this task is a duplicate message.

Deduplication occurs only with tasks that have the same identifier, enabled late acknowledgment, were redelivered by the message broker and their state is SUCCESS in the result backend.

To avoid overflowing the result backend with queries, a local cache of successfully executed tasks is checked before querying the result backend in case the task was already successfully executed by the same worker that received the task.

This cache can be made persistent by setting the worker_state_db setting.

If the result backend is not persistent (the RPC backend, for example), this setting is ignored.

worker_concurrency

Default: Number of CPU cores.

The number of concurrent worker processes/threads/green threads executing tasks.

If you’re doing mostly I/O you can have more processes, but if mostly CPU-bound, try to keep it close to the number of CPUs on your machine. If not set, the number of CPUs/cores on the host will be used.

worker_prefetch_multiplier

Default: 4.

How many messages to prefetch at a time multiplied by the number of concurrent processes. The default is 4 (four messages for each process). The default setting is usually a good choice, however – if you have very long running tasks waiting in the queue and you have to start the workers, note that the first worker to start will receive four times the number of messages initially. Thus the tasks may not be fairly distributed to the workers.

To disable prefetching, set worker_prefetch_multiplier to 1. Changing that setting to 0 will allow the worker to keep consuming as many messages as it wants.

For more on prefetching, read Prefetch Limits

Note

Tasks with ETA/countdown aren’t affected by prefetch limits.
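
For example, a hedged sketch for an I/O-bound workload: more worker processes than CPU cores, with prefetching effectively disabled for fairer task distribution:

worker_concurrency = 16
worker_prefetch_multiplier = 1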

worker_lost_wait

Default: 10.0 seconds.

In some cases a worker may be killed without proper cleanup, and the worker may have published a result before terminating. This value specifies how long we wait for any missing results before raising a WorkerLostError exception.

worker_max_tasks_per_child

Maximum number of tasks a pool worker process can execute before it’s replaced with a new one. Default is no limit.

worker_max_memory_per_child

Default: No limit. Type: int (kilobytes)

Maximum amount of resident memory, in kilobytes, that may be consumed by a worker before it will be replaced by a new worker. If a single task causes a worker to exceed this limit, the task will be completed, and the worker will be replaced afterwards.

Example:

worker_max_memory_per_child = 12000  # 12MB

worker_disable_rate_limits

Default: Disabled (rate limits enabled).

Disable all rate limits, even if tasks have explicit rate limits set.

worker_state_db

Default: None.

Name of the file used to store persistent worker state (like revoked tasks). Can be a relative or absolute path, but be aware that the suffix .db may be appended to the file name (depending on Python version).

Can also be set via the celery worker --statedb argument.

worker_timer_precision

Default: 1.0 seconds.

Set the maximum time in seconds that the ETA scheduler can sleep between rechecking the schedule.

Setting this value to 1 second means the scheduler’s precision will be 1 second. If you need near millisecond precision you can set this to 0.1.

worker_enable_remote_control

Default: Enabled by default.

Specify if remote control of the workers is enabled.

worker_proc_alive_timeout

Default: 4.0.

The timeout in seconds (int/float) when waiting for a new worker process to start up.

worker_cancel_long_running_tasks_on_connection_loss

New in version 5.1.

Default: Disabled by default.

Kill all long-running tasks with late acknowledgment enabled on connection loss.

Tasks which have not been acknowledged before the connection loss cannot be acknowledged anymore since their channel is gone, and the task is redelivered back to the queue. This is why tasks with late acknowledgment enabled must be idempotent, as they may be executed more than once. In this case, the task is executed twice per connection loss (and sometimes in parallel in other workers).

When turning this option on, those tasks which have not been completed are cancelled and their execution is terminated. Tasks which have completed in any way before the connection loss are recorded as such in the result backend as long as task_ignore_result is not enabled.

Warning

This feature was introduced as a future breaking change. If it is turned off, Celery will emit a warning message.

In Celery 6.0, the worker_cancel_long_running_tasks_on_connection_loss setting will be set to True by default, as the current behavior leads to more problems than it solves.
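
To opt in to the future default now, a one-line sketch:

worker_cancel_long_running_tasks_on_connection_loss = True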

Events

worker_send_task_events

Default: Disabled by default.

Send task-related events so that tasks can be monitored using tools like flower. Sets the default value for the worker’s -E argument.

task_send_sent_event

New in version 2.2.

Default: Disabled by default.

If enabled, a task-sent event will be sent for every task so tasks can be tracked before they’re consumed by a worker.
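
A minimal sketch enabling both worker events and sent events so a monitor such as Flower sees the full task lifecycle:

worker_send_task_events = True
task_send_sent_event = True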

event_queue_ttl

transports supported:

amqp

Default: 5.0 seconds.

Message expiry time in seconds (int/float) for when messages sent to a monitor client’s event queue are deleted (x-message-ttl).

For example, if this value is set to 10 then a message delivered to this queue will be deleted after 10 seconds.

event_queue_expires

transports supported:

amqp

Default: 60.0 seconds.

Expiry time in seconds (int/float) after which an unused monitor client’s event queue will be deleted (x-expires).

event_queue_prefix

Default: "celeryev".

The prefix to use for event receiver queue names.
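
A combined sketch of the event queue options above (the values are illustrative only):

event_queue_ttl = 10.0         # seconds before an event message is deleted
event_queue_expires = 120.0    # seconds before an unused event queue is deleted
event_queue_prefix = 'celeryev'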

event_exchange

Default: "celeryev".

Name of the event exchange.

Warning

This option is at an experimental stage; please use it with caution.

event_serializer

Default: "json".

Message serialization format used when sending event messages.

See also

Serializers.

Remote Control Commands

Note

To disable remote control commands see the worker_enable_remote_control setting.

control_queue_ttl

Default: 300.0

Time in seconds before a message in a remote control command queue will expire.

If using the default of 300 seconds, this means that if a remote control command is sent and no worker picks it up within 300 seconds, the command is discarded.

This setting also applies to remote control reply queues.

control_queue_expires

Default: 10.0

Time in seconds before an unused remote control command queue is deleted from the broker.

This setting also applies to remote control reply queues.
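
A sketch tuning both control queue options (values are illustrative):

control_queue_ttl = 300.0     # discard unclaimed control commands after 5 minutes
control_queue_expires = 10.0  # delete unused control and reply queues after 10 seconds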

control_exchange

Default: "celery".

Name of the control command exchange.

Warning

This option is at an experimental stage; please use it with caution.

Logging

worker_hijack_root_logger

New in version 2.2.

Default: Enabled by default (hijack root logger).

By default any previously configured handlers on the root logger will be removed. If you want to customize your own logging handlers, then you can disable this behavior by setting worker_hijack_root_logger = False.

Note

Logging can also be customized by connecting to the celery.signals.setup_logging signal.
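
For example, a one-line sketch that keeps your own root logger handlers intact (alternatively, connect to the celery.signals.setup_logging signal mentioned in the note above):

worker_hijack_root_logger = False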

worker_log_color

Default: Enabled if app is logging to a terminal.

Enables/disables colors in logging output by the Celery apps.

worker_log_format

Default:

"[%(asctime)s: %(levelname)s/%(processName)s] %(message)s"

The format to use for log messages.

See the Python logging module for more information about log formats.

worker_task_log_format

Default:

"[%(asctime)s: %(levelname)s/%(processName)s]
    %(task_name)s[%(task_id)s]: %(message)s"

The format to use for log messages logged in tasks.

See the Python logging module for more information about log formats.
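
A sketch overriding both formats (the strings are illustrative variations on the defaults):

worker_log_format = '[%(asctime)s: %(levelname)s] %(message)s'
worker_task_log_format = '[%(asctime)s: %(levelname)s] %(task_name)s[%(task_id)s]: %(message)s'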

worker_redirect_stdouts

Default: Enabled by default.

If enabled, stdout and stderr will be redirected to the current logger.

Used by celery worker and celery beat.

worker_redirect_stdouts_level

Default: WARNING.

The log level at which output to stdout and stderr is logged. Can be one of DEBUG, INFO, WARNING, ERROR, or CRITICAL.
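
A sketch lowering the redirect level so printed output is logged as INFO:

worker_redirect_stdouts_level = 'INFO'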

Security

security_key

Default: None.

New in version 2.5.

The relative or absolute path to a file containing the private key used to sign messages when Message Signing is used.

security_certificate

Default: None.

New in version 2.5.

The relative or absolute path to an X.509 certificate file used to sign messages when Message Signing is used.

security_cert_store

Default: None.

New in version 2.5.

The directory containing X.509 certificates used for Message Signing. Can be a glob with wildcards (for example /etc/certs/*.pem).

security_digest

Default: sha256.

New in version 4.3.

A cryptography digest used to sign messages when Message Signing is used. See https://cryptography.io/en/latest/hazmat/primitives/cryptographic-hashes/#module-cryptography.hazmat.primitives.hashes for the available digests.
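
A combined sketch of the message signing settings above (the file paths are hypothetical):

security_key = '/etc/ssl/private/worker.key'
security_certificate = '/etc/ssl/certs/worker.pem'
security_cert_store = '/etc/ssl/certs/*.pem'
security_digest = 'sha256'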

Custom Component Classes (advanced)

worker_pool

Default: "prefork" (celery.concurrency.prefork:TaskPool).

Name of the pool class used by the worker.

Eventlet/Gevent

Never use this option to select the eventlet or gevent pool. You must use the -P option to celery worker instead, to ensure the monkey patches aren’t applied too late, causing things to break in strange ways.

worker_pool_restarts

Default: Disabled by default.

If enabled the worker pool can be restarted using the pool_restart remote control command.
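
A one-line sketch enabling restarts via the pool_restart remote control command:

worker_pool_restarts = True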

worker_autoscaler

New in version 2.2.

Default: "celery.worker.autoscale:Autoscaler".

Name of the autoscaler class to use.

worker_consumer

Default: "celery.worker.consumer:Consumer".

Name of the consumer class used by the worker.

worker_timer

Default: "kombu.asynchronous.hub.timer:Timer".

Name of the ETA scheduler class used by the worker. The default is set by the pool implementation.

Beat Settings (celery beat)

beat_schedule

Default: {} (empty mapping).

The periodic task schedule used by beat. See Entries.
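
A minimal sketch of a schedule with one entry, assuming a tasks.add task exists:

beat_schedule = {
    'add-every-30-seconds': {
        'task': 'tasks.add',
        'schedule': 30.0,   # run every 30 seconds
        'args': (16, 16),
    },
}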

beat_scheduler

Default: "celery.beat:PersistentScheduler".

The default scheduler class. May be set to "django_celery_beat.schedulers:DatabaseScheduler" for instance, if used alongside django-celery-beat extension.

Can also be set via the celery beat -S argument.

beat_schedule_filename

Default: "celerybeat-schedule".

Name of the file used by PersistentScheduler to store the last run times of periodic tasks. Can be a relative or absolute path, but be aware that the suffix .db may be appended to the file name (depending on Python version).

Can also be set via the celery beat --schedule argument.

beat_sync_every

Default: 0.

The number of periodic tasks that can be called before another database sync is issued. A value of 0 (the default) means the sync is timing-based, with a default interval of 3 minutes as determined by scheduler.sync_every. If set to 1, beat will call sync after every task message sent.
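
A one-line sketch forcing a sync after every sent task message:

beat_sync_every = 1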

beat_max_loop_interval

Default: 0.

The maximum number of seconds beat can sleep between checking the schedule.

The default for this value is scheduler specific. For the default Celery beat scheduler the value is 300 (5 minutes), but for the django-celery-beat database scheduler it’s 5 seconds because the schedule may be changed externally, and so it must take changes to the schedule into account.

Also, when running Celery beat embedded (-B) on Jython as a thread, the max interval is overridden and set to 1 so that it’s possible to shut down in a timely manner.