You want to set up Django logging to debug a project's workflow or detect potential errors.
Set up loggers in your Django project files using the
getLogger method of the Python
logging package. Next, define logging messages with one of several logger methods (e.g. debug(), info(), warning(), error(), critical()). Set up the
LOGGING variable in
settings.py to process logging messages from your loggers. Set up different handlers in the
LOGGING variable to send logging messages to different destinations (e.g. console, file, email). Set up a logging level in either the loggers or handlers section of the
LOGGING variable to establish a logging threshold. Optionally set up filters in the
LOGGING variable to further refine the type of logging messages that get reported. Optionally set up formatters in the
LOGGING variable to customize the final logging output.
Django is built on top of Python's logging package. The Python logging package provides a robust and flexible way to set up application logging. In case you've never used Python's logging package, I'll provide a brief overview of its core concepts. There are four core concepts in Python logging: loggers, handlers, filters and formatters. Generally, each Python module (i.e.
.py file) has a single logger to assign its logging messages. However, it's also possible to define multiple loggers in the same module (e.g. one logger for business logic, another logger for database logic, etc.). In addition, it's also possible to use the same logger across multiple Python modules or
.py files.
With this brief overview of Python logging concepts, let's jump straight into exploring Django's default logging functionality.
The logging configuration for Django projects is defined in the
LOGGING variable in
settings.py. For the moment, don't even bother opening your project's
settings.py file because you won't see
LOGGING in it. This variable isn't hard-coded when you create a project, but it does have some logging values in effect if it isn't declared. Listing 1 shows the default
LOGGING values that take effect when it isn't declared in settings.py.
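Listing 1 itself is not reproduced here, but based on the behaviors described in this recipe, Django's implicit default configuration (for the Django 1.x series this recipe targets) looks roughly like the following sketch. Treat it as an approximation rebuilt from the prose, not a verbatim copy of Django's source:

```python
# Sketch of Django's implicit default LOGGING (approximation of listing 1).
# Rebuilt from the behaviors described in this recipe, not copied verbatim
# from Django's source code.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'require_debug_false': {'()': 'django.utils.log.RequireDebugFalse'},
        'require_debug_true': {'()': 'django.utils.log.RequireDebugTrue'},
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'filters': ['require_debug_true'],
            'class': 'logging.StreamHandler',
        },
        'null': {'class': 'logging.NullHandler'},
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['require_debug_false'],
            'class': 'django.utils.log.AdminEmailHandler',
        },
    },
    'loggers': {
        'django': {'handlers': ['console']},
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': False,
        },
        'django.security': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': False,
        },
        'py.warnings': {'handlers': ['console']},
    },
}
```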
In summary, the default Django logging settings illustrated in listing 1 have the following logging behaviors:
Logging output to the console handler will only be done when
DEBUG=True, for logging messages worse than
INFO (inclusive) and only for the Python package
django -- and its children
django.request, django.contrib, etc. -- as well as the Python package py.warnings. Logging output to the
mail_admins handler -- which sends emails to
ADMINS -- will only be done when
DEBUG=False, for logging messages worse than
ERROR (inclusive) and only for the Python packages django.request and django.security.
Now let's break down the
handlers section in listing 1. Handlers define locations to send logging messages and there are three in listing 1: console, null and
mail_admins. The handler names by themselves do nothing -- they are simply reference names -- the relevant actions are defined in the associated properties dictionary. All the handlers have a
class property which defines the backing Python class that does the actual work.
The console handler is assigned the
logging.StreamHandler class which is part of the core Python logging package. This class sends logging output to streams such as standard output and standard error, which as the handler name suggests is technically the system console or screen where Django runs. The
null handler is assigned the
logging.NullHandler class which is also part of the core Python logging package and which generates no output. And the
mail_admins handler is assigned the
django.utils.log.AdminEmailHandler class that is a Django custom handler utility, which sends logging output as an email to the people defined in the ADMINS variable in
settings.py -- see the previous recipe Django settings.py for the real world: Security, absolute paths and multiple environments/files for more information on the ADMINS variable.
Another property in Django handlers is
level which defines the threshold level at which the handler will accept logging messages. There are five threshold levels for Python logging; from worst to least worst they are: CRITICAL, ERROR, WARNING, INFO and DEBUG. The
INFO level for the
console handler indicates that all logging messages worse or equal to
INFO -- which is every level, except
DEBUG -- will make it to the handler, a reasonable setting as the console can handle many messages. The
ERROR level for the
mail_admins handler indicates that only messages worse or equal to
ERROR -- which is just ERROR and
CRITICAL -- will make it to the handler, a reasonable setting as only the two worst types of error messages will trigger emails to administrators.
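This threshold behavior can be verified with plain Python logging, outside of Django. In the following sketch (the ListHandler class and the 'demo' logger name are illustrative), a handler set to the INFO level drops DEBUG records but accepts everything worse:

```python
import logging

seen = []

class ListHandler(logging.Handler):
    """Collects the level name of every record this handler accepts."""
    def emit(self, record):
        seen.append(record.levelname)

logger = logging.getLogger('demo')
logger.setLevel(logging.DEBUG)   # the logger itself lets everything through...
logger.propagate = False
logger.addHandler(ListHandler(level=logging.INFO))  # ...handler threshold is INFO

logger.debug('dropped: below the handler threshold')
logger.info('accepted')
logger.error('accepted')

print(seen)  # ['INFO', 'ERROR'] -- the DEBUG record never reached the handler
```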
The other property in Django handlers is
filters which defines an additional layer to restrict logging messages for a handler. Handlers can accept multiple filters, which is why the
filters property accepts a Python list. The
console handler has a single filter
require_debug_true and the
mail_admins handler has a single filter
require_debug_false. Filters are defined in their own block as you can observe in listing 1. The
require_debug_false filter is backed by the
django.utils.log.RequireDebugFalse class which checks if a project has
DEBUG=False, whereas the
require_debug_true filter is backed by the
django.utils.log.RequireDebugTrue class which checks if a project has
DEBUG=True. This means the
console handler will only accept logging messages if a Django project has
DEBUG=True and the
mail_admins handler will only accept logging messages if a Django project has DEBUG=False.
Now that you understand
filters, let's take a look at
loggers. Logger definitions generally map directly to Python packages and have parent-child relationships. For example, Python modules (i.e.
.py files) that belong to a package named
coffeehouse will generally have a logger named
coffeehouse and Python modules that belong to the package
coffeehouse.about will generally have a logger named
coffeehouse.about. The dot notation in logger names also represents a parent-child relationship, so the
coffeehouse.about logger is considered the child of the coffeehouse logger.
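This parent-child relationship can be confirmed with plain Python logging. In the following sketch, a handler attached only to the coffeehouse parent still receives records emitted through the coffeehouse.about child, because records propagate up the dotted hierarchy:

```python
import logging

captured = []

class ListHandler(logging.Handler):
    """Records (logger name, message) for every record it handles."""
    def emit(self, record):
        captured.append((record.name, record.getMessage()))

parent = logging.getLogger('coffeehouse')
parent.addHandler(ListHandler())

child = logging.getLogger('coffeehouse.about')
child.warning('About page misconfigured')

print(child.parent is parent)  # True -- dot notation builds the hierarchy
print(captured)                # [('coffeehouse.about', 'About page misconfigured')]
```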
In listing 1 there are four loggers: django, django.request, django.security and py.warnings. The
django logger indicates that all logging messages associated with it and its children be processed by the console handler. The
django.request logger indicates that all logging messages associated with it and its children be processed by the
mail_admins handler. The
django.request logger also has the
'level':'ERROR' property to provide the threshold level at which the logger will accept logging messages -- a property that overrides the handler
level property. In addition, the
django.request logger also has the
'propagate': False property to indicate the logger should not propagate messages to parent loggers (e.g.
django is the parent of django.request).
Next, we have the
django.security logger which is identical in functionality to the
django.request logger. And the
py.warnings logger indicates that all logging messages associated with it and its children be processed by the console handler.
Finally there are the first two lines in listing 1 which are associated with Python logging in general. The
version key identifies the configuration version as
1, which at present is the only Python logging version. And the
disable_existing_loggers key is used to disable all pre-existing Python loggers. If it's set to
False it keeps the pre-existing loggers and if it's set to
True it disables all pre-existing loggers. Note that if you use
'disable_existing_loggers': False in your own
LOGGING variable you can redefine/override some or all of the pre-existing logger values.
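The effect of disable_existing_loggers can be demonstrated with Python's logging.config module directly, which is what Django uses under the hood to apply LOGGING. In this sketch (the 'legacy' logger name is illustrative), a logger created before dictConfig runs is silenced or kept depending on the flag:

```python
import logging
import logging.config

legacy = logging.getLogger('legacy')  # exists before configuration is applied

logging.config.dictConfig({'version': 1, 'disable_existing_loggers': True})
print(legacy.disabled)   # True -- pre-existing loggers are silenced

logging.config.dictConfig({'version': 1, 'disable_existing_loggers': False})
print(legacy.disabled)   # False -- pre-existing loggers keep working
```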
Now that you have a firm understanding of what Django logging does in its default state, I'll describe how to create logging messages in a Django project and finish the recipe with a custom LOGGING configuration.
At the top of any Python module or
.py file you can create loggers by using the
getLogger method of the
logging package. The
getLogger method receives the name of the logger as its input parameter. Listing 2 illustrates the creation of two logger instances using
__name__ and the hard-coded dba value. The
__name__ syntax used for
getLogger in listing 2 automatically assigns the package name as the logger name. This means if the logger is defined in a module under the application directory
coffeehouse/about/views.py, the logger receives the name
coffeehouse.about.views. So by relying on the
__name__ syntax, loggers are automatically created based on the origin of the logging message.
Don't worry about potentially having dozens or hundreds of loggers in a Django project for each module or
.py file. As described in the past section, Python logging works with inheritance, so you can define a single handler for a parent logger (e.g.
coffeehouse) that handles all child loggers (e.g. coffeehouse.about, coffeehouse.drinks, etc.).
Sometimes it's convenient to define a logger with an explicit name to classify certain types of messages. In listing 2 you can see a logger named
dba that will be used for messages related to databases issues. This way database administrators can consult their own logging stream without the need to see logging messages from other parts of the application.
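Listing 2 itself isn't reproduced here, but based on the description it boils down to two getLogger calls like the following sketch, imagined as sitting at the top of a module such as coffeehouse/about/views.py:

```python
import logging

# Logger named after the module's package path (e.g. 'coffeehouse.about.views'
# when this code lives in coffeehouse/about/views.py).
logger = logging.getLogger(__name__)

# Explicitly named logger for database-related messages, so DBAs can
# consult their own logging stream.
dba_logger = logging.getLogger('dba')
```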
Once you have loggers in a module or
.py file, you can define logging messages with one of several methods depending on the severity of a message that needs to be reported. These methods are illustrated in the following list:
<logger_name>.critical() -- Most severe logging level. Use it to report potentially catastrophic application events (e.g. something that can halt or crash an application).
<logger_name>.error() -- Second most severe logging level. Use it to report important events (e.g. unexpected behaviors or conditions that cause end users to see an error).
<logger_name>.warning() -- Mid-level logging level. Use it to report relatively important events (e.g. unexpected behaviors or conditions that shouldn't happen, yet don't cause end users to notice the issue).
<logger_name>.info() -- Informative logging level. Use it to report informative events in an application (e.g. application milestones or user activity).
<logger_name>.debug() -- Debug logging level. Use it to report step-by-step logic that can be difficult to write (e.g. complex business logic or database queries).
<logger_name>.log() -- Use it to manually emit logging messages with a specific log level.
<logger_name>.exception() -- Use it to create an error-level logging message, wrapped with the current exception stack.
The methods you use to log messages across your project are entirely up to you. As far as the logging levels are concerned, just try to be consistent with the selection criteria. You can always adjust the run-time logging level to deactivate logging messages of a certain level, as described in the previous section. In addition, I recommend you use the most descriptive logging messages possible to maximize the benefits of logging.
Listing 3 illustrates a series of examples using several logging methods and messages:
As you can see in listing 3, there are various logging messages of different levels using both loggers described in listing 2. The logging messages are spread out depending on their level in either the method body or inside try/except blocks.
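Listing 3 isn't reproduced here, but messages of the kind it describes look roughly like this sketch; the store_coffee_order function and its logic are hypothetical stand-ins for real view or business code:

```python
import logging

logger = logging.getLogger(__name__)
dba_logger = logging.getLogger('dba')

def store_coffee_order(order):
    """Hypothetical business method showing the logger methods in context."""
    logger.debug('Entering store_coffee_order with %s', order)
    if not order.get('items'):
        # Unexpected but non-fatal condition
        logger.warning('Order %s arrived with no items', order.get('id'))
    try:
        # Stand-in for a real database call
        total = sum(item['price'] for item in order['items'])
    except KeyError:
        # error-level message wrapped with the current exception stack
        logger.exception('Malformed order %s', order.get('id'))
        dba_logger.error('Order insert failed for %s', order.get('id'))
        return None
    logger.info('Stored order %s with total %s', order.get('id'), total)
    return total
```

Calling store_coffee_order({'id': 1, 'items': [{'price': 3}]}) emits DEBUG and INFO messages, while a malformed order exercises exception() and the dba logger instead.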
If you place loggers and logging statements like the ones in listing 3 in a Django project, you'll see that nothing happens logging-wise. In fact, what you'll see in your logs are messages in the form
'No handlers could be found for logger ...<logger_name>'. This is because by default Django doesn't know anything about your loggers; it only knows about the default loggers described in the past section. Next, let's take a look at a custom
LOGGING configuration so you can see your project's logging messages.
Let's start with a set of logging requirements to define our custom Django logging. The following list illustrates some modified requirements from the default logging behavior and new logging requirements.
Modify the console handler to output logging messages from the DEBUG level, instead of the default INFO level.
Disable the mail_admins handler and instead send logging messages to log files.
Create a development log file for the django logger and its children, as well as py.warnings, for the DEBUG level and that only functions when DEBUG=True.
Create a production log file for the django logger and its children, for the ERROR level and that only functions when DEBUG=False.
Create a log file for the custom dba logger, for the DEBUG level and that functions when either DEBUG=True or DEBUG=False.
Listing 4 illustrates a custom
LOGGING configuration that covers the requirements from the past list.
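Listing 4 isn't reproduced here, but a configuration meeting these requirements looks roughly like the following sketch. The /tmp/django_dev.log path comes from the recipe's own description; the /tmp/django_production.log and /tmp/django_dba.log paths, and the exact format strings, are assumptions for illustration:

```python
# Sketch of a custom LOGGING configuration (approximation of listing 4).
LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'filters': {
        'require_debug_false': {'()': 'django.utils.log.RequireDebugFalse'},
        'require_debug_true': {'()': 'django.utils.log.RequireDebugTrue'},
    },
    'formatters': {
        # Format strings are illustrative; adjust the fields using table 1
        'simple': {'format': '%(levelname)s %(message)s'},
        'verbose': {
            'format': '%(asctime)s [%(levelname)s] %(name)s:%(lineno)s %(message)s',
            'datefmt': '%Y-%m-%d %H:%M:%S',
        },
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'filters': ['require_debug_true'],
            'class': 'logging.StreamHandler',
            'formatter': 'simple',
        },
        'development_logfile': {
            'level': 'DEBUG',
            'filters': ['require_debug_true'],
            'class': 'logging.FileHandler',
            'filename': '/tmp/django_dev.log',
            'formatter': 'verbose',
        },
        'production_logfile': {
            'level': 'ERROR',
            'filters': ['require_debug_false'],
            'class': 'logging.FileHandler',
            'filename': '/tmp/django_production.log',  # hypothetical path
            'formatter': 'simple',
        },
        'dba_logfile': {
            # No 'filters' entry: per the requirements, this handler works
            # whether DEBUG is True or False.
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': '/tmp/django_dba.log',  # hypothetical path
            'formatter': 'simple',
        },
    },
    'loggers': {
        'coffeehouse': {
            'handlers': ['console', 'development_logfile', 'production_logfile'],
        },
        'dba': {'handlers': ['console', 'dba_logfile']},
        'django': {
            'handlers': ['console', 'development_logfile', 'production_logfile'],
        },
        'py.warnings': {'handlers': ['console', 'development_logfile']},
    },
}
```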
The first important definition in listing 4 is
'disable_existing_loggers':True which disables Django's default loggers from listing 1. If you want to keep Django's default loggers from listing 1, set this value to
False and be sure not to override any of these
loggers as we do in listing 4, as any explicit
LOGGING configuration in
settings.py takes precedence over Django defaults even when 'disable_existing_loggers' is set to False.
Next, we have the same default
filters section as listing 1. Note that we need to redefine this section because we will use the same filters in our
handlers section. Then we have the
formatters section that will allow us to generate logging messages with a simpler and more verbose output.
By default, all Python logging messages follow the format
%(levelname)s:%(name)s:%(message)s, which means "Output the log message level, followed by the name of the logger and the log message itself". However, there is a lot more information available through Python logging that can make logging messages more comprehensive. As you can see in listing 4, we define the
simple and verbose formatters using a special syntax and a series of fields that are different from the default. Table 1 illustrates the different Python formatter fields including their syntax and meaning.
|Syntax|Meaning|
|%(name)s|Name of the logger (logging channel)|
|%(levelno)s|Numeric logging level for the message (DEBUG, INFO, WARNING, ERROR, CRITICAL)|
|%(levelname)s|Text logging level for the message ("DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL")|
|%(pathname)s|Full pathname of the source file where the logging call was issued (if available)|
|%(filename)s|Filename portion of pathname|
|%(module)s|Module (name portion of filename)|
|%(lineno)d|Source line number where the logging call was issued (if available)|
|%(created)f|Time when the log record was created (time.time() return value)|
|%(asctime)s|Textual time when the log record was created|
|%(msecs)d|Millisecond portion of the creation time|
|%(relativeCreated)d|Time in milliseconds when the log record was created, relative to the time the logging module was loaded (typically at application startup time)|
|%(thread)d|Thread ID (if available)|
|%(threadName)s|Thread name (if available)|
|%(process)d|Process ID (if available)|
|%(message)s|The result of record.getMessage(), computed just as the record is emitted|
You can add or remove fields in the
format field of each formatter as you see fit, based on the fields in table 1. In addition to the
format field for each formatter, there's also the
datefmt field that allows us to define the date/time format for each logging message, which will correspond to the output of the
%(asctime)s format field. Note that the syntax for the
datefmt field follows Python's strftime() format.
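The relationship between format, %(asctime)s and datefmt can be checked with plain Python logging; the field choices in this sketch are illustrative:

```python
import logging

formatter = logging.Formatter(
    fmt='%(asctime)s [%(levelname)s] %(name)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S',   # controls how %(asctime)s is rendered
)

# Build a record by hand just to preview the formatter's output
record = logging.LogRecord(
    name='coffeehouse', level=logging.INFO, pathname=__file__,
    lineno=10, msg='Open for business', args=None, exc_info=None,
)
print(formatter.format(record))
# e.g. 2025-01-01 12:00:00 [INFO] coffeehouse: Open for business
```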
Next come the four logging
handlers or locations to send logging messages. First is the
console handler which is identical to the default in listing 1, except it's set to the
DEBUG level and uses our custom
simple formatter. The remaining three handlers are all configured for file output via the
'class': 'logging.FileHandler' statement.
The development_logfile handler is configured to work for logging messages worse than
DEBUG (inclusive) -- which is technically all logging messages -- and only when
DEBUG=True due to the
require_debug_true filter. In addition, the
development_logfile handler is set to send output to the file
/tmp/django_dev.log and use our custom verbose formatter. The
production_logfile handler is configured to work for logging messages worse than
ERROR (inclusive) -- which is just ERROR and
CRITICAL logging messages -- and only when
DEBUG=False due to the
require_debug_false filter. In addition, the handler uses the custom simple formatter and is set to send output to a dedicated production log file.
The dba_logfile handler is configured to work for logging messages worse than
DEBUG (inclusive) -- which is technically all logging messages -- and, per the requirements above, regardless of whether
DEBUG=True or DEBUG=False. In addition, the handler uses the custom simple formatter and is set to send output to its own dba log file.
Then we come to the
loggers section. The first logger
coffeehouse links all the logging messages for itself and its children (e.g.
coffeehouse.drinks) to the handlers
console, development_logfile and production_logfile. Recall that by using Python's
__name__ syntax to define loggers -- see listing 2 -- the name of the loggers ends up being based on the Python package structure.
Next, you can see the custom
dba logger which links all its logging messages to the handlers
console and dba_logfile. In this case, the handlers are limited because this is a special logger intended for database administrators, so output will only go to the console and a dedicated file with these types of messages.
Finally, we define the
django and py.warnings loggers to re-establish some of Django's default behavior because we set
'disable_existing_loggers': True. The
django logger links all its logging messages to the handlers
console, development_logfile and production_logfile, since we want logging messages associated with the
django logger and its children (e.g.
django.security) to go to the console and two of the log files. Notice we don't declare explicit loggers for
django.request and django.security unlike the defaults in listing 1, since we will use the same handlers for the
django logger and its children. At the end, the
py.warnings logger links all its logging messages to the handlers
console and development_logfile, as we don't want any trace of
py.warnings logging messages in production log files.
Disable email error notifications for ADMINS
You may be surprised that I turned off Django's default logging email error notification mechanism (i.e. the
mail_admins handler). If you have 100 visitors per hour on a Django site and all of them hit the same error, this means at least 100 email notifications an hour. If you have three people in ADMINS, that means 300 email notifications an hour.
I recommend you stick to the old method of inspecting log files on a constant basis, or use a dedicated reporting system such as Sentry, which I'll describe in a future recipe.