Handlers¶
This documents the base handler interface as well as the provided core handlers. There are additional handlers for special purposes in the logbook.more, logbook.ticketing and logbook.queues modules.
Base Interface¶
- class logbook.Handler(level=0, filter=None, bubble=False)¶
Handler instances dispatch logging events to specific destinations.
The base handler class. Acts as a placeholder which defines the Handler interface. Handlers can optionally use Formatter instances to format records as desired. By default, no formatter is specified; in this case, the ‘raw’ message as determined by record.message is logged.
To bind a handler you can use the push_application(), push_thread() or push_greenlet() methods. This will push the handler on a stack of handlers. To undo this, use the pop_application(), pop_thread() and pop_greenlet() methods:
handler = MyHandler()
handler.push_application()
# all here goes to that handler
handler.pop_application()
By default, messages sent to that handler will not go to a handler on an outer level of the stack once they have been handled. This can be changed by setting bubble to True.
There are also context managers to set up the handler for the duration of a with-block:
with handler.applicationbound():
    ...

with handler.threadbound():
    ...

with handler.greenletbound():
    ...
Because binding to the thread is a common operation, using the handler itself in a with statement is an alias for threadbound() when gevent is not used:
with handler:
    ...
If gevent is enabled, using the handler in a with statement is an alias for greenletbound() instead.
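As an illustration of the interface described above, here is a minimal sketch of a custom handler (the ListHandler name and its messages attribute are invented for this example) that overrides emit() and collects formatted records in a list:
import logbook

class ListHandler(logbook.Handler):
    # hypothetical example handler: collects formatted records in a list
    def __init__(self, *args, **kwargs):
        logbook.Handler.__init__(self, *args, **kwargs)
        self.messages = []

    def emit(self, record):
        # format() falls back to record.message if no formatter is set
        self.messages.append(self.format(record))

handler = ListHandler(level=logbook.INFO)
with handler.applicationbound():
    logbook.Logger('App').info('hello')
assert handler.messages == ['hello']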
- applicationbound()¶
Can be used in combination with the with statement to execute code while the object is bound to the application.
- blackhole = False¶
a flag for this handler that can be set to True for handlers that consume log records but do not actually display them. This flag is set for the NullHandler for instance.
- bubble¶
the bubble flag of this handler
- close()¶
Tidy up any resources used by the handler. This is automatically called by the destructor of the class as well, but explicit calls are encouraged. Make sure that multiple calls to close are possible.
- contextbound()¶
Can be used in combination with the with statement to execute code while the object is bound to the asyncio context.
- emit(record)¶
Emit the specified logging record. This should take the record and deliver it to wherever the handler sends formatted log records.
- emit_batch(records, reason)¶
Some handlers may internally queue up records and want to forward them at once to another handler. For example the FingersCrossedHandler internally buffers records until a level threshold is reached, in which case the buffer is sent to this method and not emit() for each record.
The default behaviour is to call emit() for each record in the buffer, but handlers can use this to optimize log handling. For instance the mail handler will try to batch up items into one mail and not emit a mail for each record in the buffer.
Note that unlike emit() there is no wrapper method like handle() that does error handling. The reason is that this is intended to be used by other handlers which are already protected against internal breakage.
reason is a string that specifies the reason why emit_batch() was called, and not emit(). The following are valid values:
'buffer'
Records were buffered for performance reasons or because the records were sent to another process and buffering was the only possible way. For most handlers this should be equivalent to calling emit() for each record.
'escalation'
Escalation means that records were buffered in case the threshold was exceeded. In this case, the last record in the iterable is the record that triggered the call.
'group'
All the records in the iterable belong to the same logical component and happened in the same process. For example there was a long running computation and the handler is invoked with a bunch of records that happened there. This is similar to the escalation reason, just that the first one is the significant one, not the last.
If a subclass overrides this and does not want to handle a specific reason it must call into the superclass because more reasons might appear in future releases.
Example implementation:
def emit_batch(self, records, reason):
    if reason not in ('escalation', 'group'):
        Handler.emit_batch(self, records, reason)
    ...
- filter¶
the filter to be used with this handler
- format(record)¶
Formats a record with the given formatter. If no formatter is set, the record message is returned. Generally speaking the return value is most likely a unicode string, but nothing in the handler interface requires a formatter to return a unicode string.
The combination of a handler and formatter might have the formatter return an XML element tree for example.
- formatter¶
the formatter to be used on records. This is a function that is passed a log record as first argument and the handler as second and returns something formatted (usually a unicode string)
- greenletbound()¶
Can be used in combination with the with statement to execute code while the object is bound to the greenlet.
- handle(record)¶
Emits the record and falls back. It tries to emit() the record and if that fails, it will call into handle_error() with the record and traceback. This function itself will always emit when called, even if the logger level is higher than the record's level.
If this method returns False it signals to the calling function that no recording took place, in which case it will automatically bubble. This should not be used to signal error situations. The default implementation always returns True.
- handle_error(record, exc_info)¶
Handle errors which occur during an emit() call. The behaviour of this function depends on the current errors setting.
Check Flags for more information.
- level¶
the level for the handler. Defaults to NOTSET which consumes all entries.
- property level_name¶
The level as unicode string
- pop_application()¶
Pops the context object from the stack.
- pop_context()¶
Pops the context object from the stack.
- pop_greenlet()¶
Pops the context object from the stack.
- pop_thread()¶
Pops the context object from the stack.
- push_application()¶
Pushes the context object to the application stack.
- push_context()¶
Pushes the context object to the context stack.
- push_greenlet()¶
Pushes the context object to the greenlet stack.
- push_thread()¶
Pushes the context object to the thread stack.
- should_handle(record)¶
Returns True if this handler wants to handle the record. The default implementation checks the level.
- stack_manager = <logbook._speedups.ContextStackManager object>¶
subclasses have to instantiate a ContextStackManager object on this attribute which is then shared for all the subclasses of it.
- threadbound()¶
Can be used in combination with the with statement to execute code while the object is bound to the thread.
- class logbook.NestedSetup(objects=None)¶
A nested setup can be used to configure multiple handlers and processors at once.
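For example, a minimal sketch (the handler choices and levels here are illustrative): a NullHandler at the bottom of the setup swallows everything that was not already handled, while a StderrHandler on top of it lets warnings and above through:
import logbook

setup = logbook.NestedSetup([
    logbook.NullHandler(),                       # swallows anything not handled above
    logbook.StderrHandler(level=logbook.WARNING),
])
setup.push_application()
# ... records below WARNING are dropped, the rest goes to stderr ...
setup.pop_application()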
- pop_application()¶
Pops the stacked object from the application stack.
- pop_context()¶
Pops the stacked object from the asyncio (via contextvar) stack.
- pop_greenlet()¶
Pops the stacked object from the greenlet stack.
- pop_thread()¶
Pops the stacked object from the thread stack.
- push_application()¶
Pushes the stacked object to the application stack.
- push_context()¶
Pushes the stacked object to the asyncio (via contextvar) stack.
- push_greenlet()¶
Pushes the stacked object to the greenlet stack.
- push_thread()¶
Pushes the stacked object to the thread stack.
- class logbook.StringFormatter(format_string)¶
Many handlers format the log entries to text format. This is done by a callable that is passed a log record and returns a unicode string. The default formatter for this is implemented as a class so that it becomes possible to hook into every aspect of the formatting process.
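In practice you rarely instantiate StringFormatter directly; passing a format_string to a handler creates one for you. A small sketch (the channel name and format are illustrative):
import logbook

handler = logbook.StderrHandler(
    format_string='[{record.time:%H:%M:%S}] {record.level_name}: {record.message}')
with handler.applicationbound():
    logbook.Logger('App').warning('disk almost full')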
Core Handlers¶
- class logbook.StreamHandler(stream, level=0, format_string=None, encoding=None, filter=None, bubble=False)¶
A handler class which writes logging records, appropriately formatted, to a stream. Note that this class does not close the stream, as sys.stdout or sys.stderr may be used.
If a stream handler is used in a with statement directly it will call close() on exit to support this pattern:
with StreamHandler(my_stream):
    pass
Notes on the encoding
On Python 3, the encoding parameter is only used if a stream was passed that was opened in binary mode.
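A short sketch of typical usage (the stream and level are chosen for illustration):
import sys
import logbook

handler = logbook.StreamHandler(sys.stdout, level=logbook.INFO)
with handler.applicationbound():
    logbook.Logger('App').info('written to stdout')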
- close()¶
The default stream handler implementation is not to close the wrapped stream but to flush it.
- emit(record)¶
Emit the specified logging record. This should take the record and deliver it to wherever the handler sends formatted log records.
- encode(msg)¶
Encodes the message to the stream encoding.
- ensure_stream_is_open()¶
This method should be overridden in subclasses to ensure that the inner stream is open.
- flush()¶
Flushes the inner stream.
- write(item)¶
Writes a bytestring to the stream.
- class logbook.FileHandler(filename, mode='a', encoding=None, level=0, format_string=None, delay=False, filter=None, bubble=False)¶
A handler that does the task of opening and closing files for you. By default the file is opened right away, but you can also delay the open to the point where the first message is written.
This is useful when the handler is used with a FingersCrossedHandler or something similar.
- close()¶
The default stream handler implementation is not to close the wrapped stream but to flush it.
- encode(record)¶
Encodes the message to the stream encoding.
- ensure_stream_is_open()¶
This method should be overridden in subclasses to ensure that the inner stream is open.
- write(item)¶
Writes a bytestring to the stream.
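As a sketch of how FileHandler is typically combined with delayed opening (the file name and level are illustrative):
import logbook

# delay=True postpones opening 'app.log' until the first record is written
handler = logbook.FileHandler('app.log', level=logbook.WARNING, delay=True)
with handler.applicationbound():
    logbook.Logger('App').error('this opens the file and gets written')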
- class logbook.MonitoringFileHandler(filename, mode='a', encoding='utf-8', level=0, format_string=None, delay=False, filter=None, bubble=False)¶
A file handler that will check if the file was moved while it was open. This might happen on POSIX systems if an application like logrotate moves the logfile over.
Because of different IO concepts on Windows, this handler will not work on a Windows system.
- emit(record)¶
Emit the specified logging record. This should take the record and deliver it to wherever the handler sends formatted log records.
- class logbook.StderrHandler(level=0, format_string=None, filter=None, bubble=False)¶
A handler that writes to what is currently at stderr. At first glance this appears to just be a StreamHandler with the stream set to sys.stderr, but there is a difference: if the handler is created globally and sys.stderr changes later, this handler will point to the current stderr, whereas a stream handler would still point to the old one.
- class logbook.RotatingFileHandler(filename, mode='a', encoding='utf-8', level=0, format_string=None, delay=False, max_size=1048576, backup_count=5, filter=None, bubble=False)¶
This handler rotates based on file size. Once the maximum size is reached it will reopen the file and start with an empty file again. The old file is moved into a backup copy (named like the file, but with a .backupnumber appended to the file. So if you are logging to mail the first backup copy is called mail.1.)
The default number of backups is 5. Unlike a similar logger from the logging package, the backup count is mandatory because just reopening the file is dangerous as it deletes the log without asking on rollover.
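A short sketch of a size-based rotation setup (the size and backup count are just example values):
import logbook

# rotate after roughly 1 MiB, keeping app.log.1 .. app.log.5 as backups
handler = logbook.RotatingFileHandler('app.log',
                                      max_size=1024 * 1024,
                                      backup_count=5)
handler.push_application()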
- emit(record)¶
Emit the specified logging record. This should take the record and deliver it to wherever the handler sends formatted log records.
- class logbook.TimedRotatingFileHandler(filename, mode='a', encoding='utf-8', level=0, format_string=None, date_format='%Y-%m-%d', backup_count=0, filter=None, bubble=False, timed_filename_for_current=True, rollover_format='{basename}-{timestamp}{ext}')¶
This handler rotates based on dates. It will name the file after the filename you specify and the date_format pattern.
So for example if you configure your handler like this:
handler = TimedRotatingFileHandler('/var/log/foo.log', date_format='%Y-%m-%d')
The filenames for the logfiles will look like this:
/var/log/foo-2010-01-10.log
/var/log/foo-2010-01-11.log
...
By default it will keep all these files around; if you want to limit them, you can specify a backup_count.
You may supply an optional rollover_format. This allows you to specify the format for the filenames of rolled-over files.
So for example if you configure your handler like this:
handler = TimedRotatingFileHandler(
    '/var/log/foo.log',
    date_format='%Y-%m-%d',
    rollover_format='{basename}{ext}.{timestamp}')
The filenames for the logfiles will look like this:
/var/log/foo.log.2010-01-10
/var/log/foo.log.2010-01-11
...
Finally, an optional argument timed_filename_for_current may be set to False if you wish to have the current log file match the supplied filename until it is rolled over.
- emit(record)¶
Emit the specified logging record. This should take the record and deliver it to wherever the handler sends formatted log records.
- files_to_delete()¶
Returns a list with the files that have to be deleted when a rollover occurs.
- generate_timed_filename(timestamp)¶
Produces a filename that includes a timestamp in the format supplied to the handler at init time.
- class logbook.TestHandler(level=0, format_string=None, filter=None, bubble=False, force_heavy_init=False)¶
Like a stream handler but keeps the values in memory. This logger provides some ways to test for the records in memory.
Example usage:
def my_test():
    with logbook.TestHandler() as handler:
        logger.warn('A warning')
        assert handler.has_warning('A warning')
        ...
- close()¶
Close all records down when the handler is closed.
- default_format_string = '[{record.level_name}] {record.channel}: {record.message}'¶
a class attribute for the default format string to use if the constructor was invoked with None.
- emit(record)¶
Emit the specified logging record. This should take the record and deliver it to wherever the handler sends formatted log records.
- property formatted_records¶
Captures the formatted log records as unicode strings.
- has_critical(*args, **kwargs)¶
True if a specific CRITICAL log record exists. See Probe Log Records for more information.
- has_debug(*args, **kwargs)¶
True if a specific DEBUG log record exists. See Probe Log Records for more information.
- has_error(*args, **kwargs)¶
True if a specific ERROR log record exists. See Probe Log Records for more information.
- has_info(*args, **kwargs)¶
True if a specific INFO log record exists. See Probe Log Records for more information.
- has_notice(*args, **kwargs)¶
True if a specific NOTICE log record exists. See Probe Log Records for more information.
- property has_notices¶
True if any NOTICE records were found.
- has_trace(*args, **kwargs)¶
True if a specific TRACE log record exists. See Probe Log Records for more information.
- property has_traces¶
True if any TRACE records were found.
- has_warning(*args, **kwargs)¶
True if a specific WARNING log record exists. See Probe Log Records for more information.
- class logbook.MailHandler(from_addr, recipients, subject=None, server_addr=None, credentials=None, secure=None, record_limit=None, record_delta=None, level=0, format_string=None, related_format_string=None, filter=None, bubble=False, starttls=True)¶
A handler that sends error mails. The format string used by this handler is the contents of the mail plus the headers. This is handy if you want to use a custom subject or X- header:
handler = MailHandler(format_string='''\
Subject: {record.level_name} on My Application

{record.message}
{record.extra[a_custom_injected_record]}
''')
This handler will always emit text-only mails for maximum portability and best performance.
In the default setting it delivers all log records but it can be set up to not send more than n mails for the same record each hour to not overload an inbox and the network in case a message is triggered multiple times a minute. The following example limits it to 60 mails an hour:
from datetime import timedelta
handler = MailHandler(record_limit=1,
                      record_delta=timedelta(minutes=1))
The default timedelta is 60 seconds (one minute).
The mail handler sends mails in a blocking manner. If you are not using some centralized system for logging these messages (with the help of ZeroMQ or others) and the logging system slows you down you can wrap the handler in a logbook.queues.ThreadedWrapperHandler that will then send the mails in a background thread.
server_addr can be a tuple of host and port, or just a string containing the host to use the default port (25, or 465 if connecting securely.)
credentials can be a tuple or dictionary of arguments that will be passed to smtplib.SMTP.login().
secure can be a tuple, dictionary, or boolean. As a boolean, this will simply enable or disable a secure connection. The tuple is unpacked as parameters keyfile, certfile. As a dictionary, secure should contain those keys. For backwards compatibility, secure=() will enable a secure connection. If starttls is enabled (default), these parameters will be passed to smtplib.SMTP.starttls(), otherwise smtplib.SMTP_SSL.
Changed in version 0.3: The handler supports the batching system now.
New in version 1.0: starttls parameter added to allow disabling STARTTLS for SSL connections.
Changed in version 1.0: If server_addr is a string, the default port will be used.
Changed in version 1.0: credentials parameter can now be a dictionary of keyword arguments.
Changed in version 1.0: secure can now be a dictionary or boolean in addition to a tuple.
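Putting the parameters above together, a hedged configuration sketch (all addresses, the server and the credentials are placeholders):
import logbook

handler = logbook.MailHandler(
    'errors@example.com',                  # from_addr
    ['ops@example.com'],                   # recipients
    subject='Application error',
    server_addr=('mail.example.com', 587),
    credentials=('username', 'password'),
    secure=True,                           # boolean form; STARTTLS is on by default
    level=logbook.ERROR,
    bubble=True,
)
handler.push_application()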
- close_connection(con)¶
Closes the connection that was returned by get_connection().
- collapse_mails(mail, related, reason)¶
Called when escalating or grouped mails are sent; collapses the mail for the triggering record and the mails for the related records into a single mail.
- default_format_string = 'Subject: {handler.subject}\n\nMessage type: {record.level_name}\nLocation: {record.filename}:{record.lineno}\nModule: {record.module}\nFunction: {record.func_name}\nTime: {record.time:%Y-%m-%d %H:%M:%S}\n\nMessage:\n\n{record.message}\n'¶
a class attribute for the default format string to use if the constructor was invoked with None.
- deliver(msg, recipients)¶
Delivers the given message to a list of recipients.
- emit(record)¶
Emit the specified logging record. This should take the record and deliver it to wherever the handler sends formatted log records.
- emit_batch(records, reason)¶
Some handlers may internally queue up records and want to forward them at once to another handler. For example the FingersCrossedHandler internally buffers records until a level threshold is reached, in which case the buffer is sent to this method and not emit() for each record.
The default behaviour is to call emit() for each record in the buffer, but handlers can use this to optimize log handling. For instance the mail handler will try to batch up items into one mail and not emit a mail for each record in the buffer.
Note that unlike emit() there is no wrapper method like handle() that does error handling. The reason is that this is intended to be used by other handlers which are already protected against internal breakage.
reason is a string that specifies the reason why emit_batch() was called, and not emit(). The following are valid values:
'buffer'
Records were buffered for performance reasons or because the records were sent to another process and buffering was the only possible way. For most handlers this should be equivalent to calling emit() for each record.
'escalation'
Escalation means that records were buffered in case the threshold was exceeded. In this case, the last record in the iterable is the record that triggered the call.
'group'
All the records in the iterable belong to the same logical component and happened in the same process. For example there was a long running computation and the handler is invoked with a bunch of records that happened there. This is similar to the escalation reason, just that the first one is the significant one, not the last.
If a subclass overrides this and does not want to handle a specific reason it must call into the superclass because more reasons might appear in future releases.
Example implementation:
def emit_batch(self, records, reason):
    if reason not in ('escalation', 'group'):
        Handler.emit_batch(self, records, reason)
    ...
- format_related_record(record)¶
Used to format the records that led up to another record, or records that are related, into strings. Used by the batch formatter.
- generate_mail(record, suppressed=0)¶
Generates the final email (email.message.Message) with headers and date. suppressed is the number of mails that were not sent if the record_limit feature is active.
- get_connection()¶
Returns an SMTP connection. By default it reconnects for each sent mail.
- get_recipients(record)¶
Returns the recipients for a record. By default the recipients attribute is returned for all records.
- max_record_cache = 512¶
the maximum number of record hashes in the cache for the limiting feature. Afterwards, record_cache_prune percent of the oldest entries are removed
- message_from_record(record, suppressed)¶
Creates a new message for a record as email message object (email.message.Message). suppressed is the number of mails not sent if the record_limit feature is active.
- record_cache_prune = 0.333¶
the number of items to prune on a cache overflow in percent.
- class logbook.GMailHandler(account_id, password, recipients, **kw)¶
A customized mail handler class for sending emails via GMail (or Google Apps mail):
handler = GMailHandler(
    "my_user@gmail.com", "mypassword", ["to_user@some_mail.com"],
    ...)  # other arguments same as MailHandler
New in version 0.6.0.
- class logbook.SyslogHandler(application_name=None, address=None, facility='user', socktype=SocketKind.SOCK_DGRAM, level=0, format_string=None, filter=None, bubble=False, record_delimiter=None)¶
A handler class which sends formatted logging records to a syslog server. By default it will send to it via unix socket.
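A sketch of sending to a remote syslog daemon over UDP instead of the default unix socket (the host and port are placeholders):
import logbook

handler = logbook.SyslogHandler('myapp',
                                address=('syslog.example.com', 514))
handler.push_application()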
- close()¶
Tidy up any resources used by the handler. This is automatically called by the destructor of the class as well, but explicit calls are encouraged. Make sure that multiple calls to close are possible.
- default_format_string = '{record.channel}: {record.message}'¶
a class attribute for the default format string to use if the constructor was invoked with None.
- emit(record)¶
Emit the specified logging record. This should take the record and deliver it to wherever the handler sends formatted log records.
- class logbook.NTEventLogHandler(application_name, log_type='Application', level=0, format_string=None, filter=None, bubble=False)¶
A handler that sends to the NT event log system.
- default_format_string = 'Message Level: {record.level_name}\nLocation: {record.filename}:{record.lineno}\nModule: {record.module}\nFunction: {record.func_name}\nExact Time: {record.time:%Y-%m-%d %H:%M:%S}\n\nEvent provided Message:\n\n{record.message}\n'¶
a class attribute for the default format string to use if the constructor was invoked with None.
- emit(record)¶
Emit the specified logging record. This should take the record and deliver it to wherever the handler sends formatted log records.
- get_event_category(record)¶
Returns the event category for the record. Override this if you want to specify your own categories. This version returns 0.
- get_message_id(record)¶
Returns the message ID (EventID) for the record. Override this if you want to specify your own ID. This version returns 1.
- unregister_logger()¶
Removes the application binding from the registry. If you call this, the log viewer will no longer be able to provide any information about the message.
- class logbook.NullHandler(level=0, filter=None)¶
A handler that does nothing.
Useful to silence logs above a certain location in the handler stack:
handler = NullHandler()
handler.push_application()
NullHandlers swallow all logs sent to them, and do not bubble them onwards.
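Combined with a filter, a NullHandler can also be used to silence just one noisy channel; a sketch (the channel name is made up for this example):
import logbook

# swallow records from the hypothetical 'noisy.library' channel only;
# everything else is not handled here and keeps travelling down the stack
quiet = logbook.NullHandler(
    filter=lambda record, handler: record.channel == 'noisy.library')
quiet.push_application()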
- blackhole = True¶
a flag for this handler that can be set to True for handlers that consume log records but do not actually display them. This flag is set for the NullHandler for instance.
- class logbook.WrapperHandler(handler)¶
A class that can wrap another handler and redirect all calls to the wrapped handler:
handler = WrapperHandler(other_handler)
Subclasses should override the _direct_attrs attribute as necessary.
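As a sketch of what such a subclass can look like (the CountingHandler name and its count attribute are invented for this example; it assumes that _direct_attrs lists the attributes stored on the wrapper itself rather than proxied to the wrapped handler):
import logbook

class CountingHandler(logbook.WrapperHandler):
    # store 'handler' and 'count' on the wrapper, proxy everything else
    _direct_attrs = frozenset(['handler', 'count'])

    def __init__(self, handler):
        logbook.WrapperHandler.__init__(self, handler)
        self.count = 0

    def emit(self, record):
        self.count += 1
        self.handler.emit(record)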
- logbook.create_syshandler(application_name, level=0)¶
Creates the handler the operating system provides. On Unix systems this creates a SyslogHandler, on Windows systems it will create a NTEventLogHandler.
Special Handlers¶
- class logbook.FingersCrossedHandler(handler, action_level=14, buffer_size=0, pull_information=True, reset=False, filter=None, bubble=False)¶
This handler wraps another handler and will log everything in memory until a certain level (action_level, defaults to ERROR) is exceeded. When that happens the fingers crossed handler will activate forever and log all buffered records as well as records yet to come into another handler which was passed to the constructor.
Alternatively it’s also possible to pass a factory function to the constructor instead of a handler. That factory is then called with the triggering log entry and the finger crossed handler to create a handler which is then cached.
The idea of this handler is to enable debugging of live systems. For example it might happen that code works perfectly fine 99% of the time, but then some exception happens. But the error that caused the exception alone might not be the interesting bit; the interesting information is the warnings that led to the error.
Here a setup that enables this for a web application:
from logbook import FileHandler
from logbook import FingersCrossedHandler

def issue_logging():
    def factory(record, handler):
        return FileHandler('/var/log/app/issue-%s.log' % record.time)
    return FingersCrossedHandler(factory)

def application(environ, start_response):
    with issue_logging():
        return the_actual_wsgi_application(environ, start_response)
Whenever an error occurs, a new file in /var/log/app is created with all the logging calls that led up to the error, up to the point where the with block is exited.
Please keep in mind that the FingersCrossedHandler handler is a one-time handler. Once triggered, it will not reset. Because of that you will have to re-create it whenever you bind it. In this case the handler is created when it's bound to the thread.
Due to how the handler is implemented, the filter, bubble and level flags of the wrapped handler are ignored.
Changed in version 0.3.
The default behaviour is to buffer up records and then invoke another handler when a severity threshold was reached with the buffer emitting. This now enables this logger to be properly used with the MailHandler. You will now only get one mail for each buffered record. However once the threshold was reached you would still get a mail for each record, which is why the reset flag was added.
When set to True, the handler will instantly reset to the untriggered state and start buffering again:
handler = FingersCrossedHandler(MailHandler(...), buffer_size=10, reset=True)
New in version 0.3: The reset flag was added.
- batch_emit_reason = 'escalation'¶
the reason to be used for the batch emit. The default is 'escalation'.
New in version 0.3.
- buffer_size¶
the maximum number of entries in the buffer. If this is exhausted the oldest entries will be discarded to make place for new ones
- buffered_records¶
the buffered records of the handler. Once the action is triggered (triggered) this list will be None. This attribute can be helpful for the handler factory function to select a proper filename (for example the time of the first log record).
- close()¶
Tidy up any resources used by the handler. This is automatically called by the destructor of the class as well, but explicit calls are encouraged. Make sure that multiple calls to close are possible.
- emit(record)¶
Emit the specified logging record. This should take the record and deliver it to wherever the handler sends formatted log records.
- property triggered¶
This attribute is True when the action was triggered. From this point onwards the finger crossed handler transparently forwards all log records to the inner handler. If the handler resets itself this will always be False.
- class logbook.GroupHandler(handler, pull_information=True)¶
A handler that buffers all messages until it is popped again and then forwards all messages to another handler. This is useful if you for example have an application that does computations and only a result mail is required. A group handler makes sure that only one mail is sent and not multiple. Some other handlers might support this as well, though currently none of the builtins do.
Example:
with GroupHandler(MailHandler(...)):
    # everything here ends up in the mail
The GroupHandler is implemented as a WrapperHandler, thus forwarding all attributes of the wrapped handler.
Notice that this handler really only emits the records when the handler is popped from the stack.
New in version 0.3.
- emit(record)¶
Emit the specified logging record. This should take the record and deliver it to wherever the handler sends formatted log records.
- pop_application()¶
Pops the context object from the stack.
- pop_context()¶
Pops the context object from the stack.
- pop_greenlet()¶
Pops the context object from the stack.
- pop_thread()¶
Pops the context object from the stack.
Mixin Classes¶
- class logbook.StringFormatterHandlerMixin(format_string)¶
A mixin for handlers that provides a default integration for the StringFormatter class. This is used by default for all handlers that log text to a destination.
- default_format_string = '[{record.time:%Y-%m-%d %H:%M:%S.%f%z}] {record.level_name}: {record.channel}: {record.message}'¶
a class attribute for the default format string to use if the constructor was invoked with None.
- property format_string¶
the currently attached format string as new-style format string.
- formatter_class¶
alias of StringFormatter
- class logbook.HashingHandlerMixin¶
Mixin class for handlers that are hashing records.
- hash_record(record)¶
Returns a hash for a record to keep it apart from other records. This is used for the record_limit feature. By default the level, channel, filename and location are hashed.
Calls into hash_record_raw().
- hash_record_raw(record)¶
Returns a hashlib object with the hash of the record.
- class logbook.LimitingHandlerMixin(record_limit, record_delta)¶
Mixin class for handlers that want to limit emitting records.
In the default setting it delivers all log records but it can be set up to not send more than n mails for the same record each hour to not overload an inbox and the network in case a message is triggered multiple times a minute. The following example limits it to 60 mails an hour:
from datetime import timedelta
handler = MailHandler(record_limit=1,
                      record_delta=timedelta(minutes=1))
- check_delivery(record)¶
Helper function to check if data should be delivered by this handler. It returns a tuple in the form (suppression_count, allow). The first one is the number of items that were not delivered so far, the second is a boolean flag if a delivery should happen now.
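A sketch of how a handler built on this mixin might consult check_delivery() in its emit() (the ThrottledHandler class is hypothetical; the mail handler follows roughly this pattern):
import logbook

class ThrottledHandler(logbook.Handler, logbook.LimitingHandlerMixin):
    # hypothetical handler that prints records, dropping repeats over the limit
    def __init__(self, record_limit=None, record_delta=None, **kwargs):
        logbook.Handler.__init__(self, **kwargs)
        logbook.LimitingHandlerMixin.__init__(self, record_limit, record_delta)

    def emit(self, record):
        if self.record_limit is not None:
            suppressed, allow = self.check_delivery(record)
            if not allow:
                return          # limit reached for this record hash
        print(self.format(record))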