You want to set up a Django application in production with the Nginx web server and uWSGI.
Install the Python uWSGI application server. Configure the uWSGI application server with a basic set of options that include
--processes and other ones depending on your requirements. Start the uWSGI application server with the
--daemonize option or an OS process manager like
supervisor. Configure Nginx as a user-facing web server that connects to the backing uWSGI application server.
Create a separate Nginx instance to serve static content for your Django application and update the application's
STATIC_URL variable to pick up static content from this Nginx instance. Alternatively, you can define Nginx
alias parameters to leverage the same Nginx instance configured with uWSGI to also serve static content for the same Django application.
Nginx is one of the most popular web servers for deploying web applications in production environments. According to Netcraft surveys, it's the second most popular web server, behind only the Apache web server. For Python applications, Nginx supports WSGI through the uWSGI application server. WSGI, or 'Web Server Gateway Interface', is the Python standard that defines a simple and universal interface between web servers and Python web applications or frameworks.
|uWSGI is a protocol and an implementation of WSGI|
WSGI is a specification, which means it only sets forth on paper what an implementation has to accomplish. From the WSGI specification, implementations are born: actual products (i.e. code) that comply with the specification. Some WSGI implementations include the mod_wsgi module, which is designed for the Apache web server -- described in the previous recipe -- as well as uWSGI, which can run with Nginx and other web servers.
What can make uWSGI confusing to understand is that it's both an application server and a protocol. uWSGI is an application server in the sense it implements the WSGI specification and can thus run WSGI compliant apps (e.g. Django). But uWSGI is also a protocol, which means anyone that interacts with it must use the uWSGI protocol. So in order for standard HTTP web protocol requests to be processed by the uWSGI app server, a broker has to be used that can understand both the standard HTTP web protocol and the uWSGI protocol, which as you might have already guessed is: Nginx.
NOTE: It should be noted uWSGI can in fact natively speak HTTP, so it can process standard HTTP web protocol requests directly. However, the recommended choice to run uWSGI is with its native protocol -- for better performance -- and behind a full-fledged web server like Nginx -- to gain additional web application deployment features.
The easiest way to install uWSGI is through the
pip package manager, with:
pip install uWSGI. Once you install uWSGI, confirm its executable is installed correctly by running
which uwsgi; the output should be
<virtual_env_home>/bin/uwsgi. If you see no output, uWSGI wasn't installed correctly and you'll need to review the
pip logs for more details.
Since uWSGI is an application server, there are multiple configuration options you can use -- just like with the Apache or Nginx web servers. The whole gamut of uWSGI configuration flags runs well over 100, so it's almost impossible to create a one-size-fits-all configuration, just as it is for the Apache or Nginx web servers.
Listing 1 illustrates a sample uWSGI command with flags I regularly use to launch small to medium sized Django projects. Be aware you'll most likely need to tweak, remove or add new flags in listing 1 to accommodate the size and requirements of your project -- consult the previous uWSGI configuration link for the most detailed information on these and other flags.
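A launch command along these lines ties the flags discussed in table 1 together -- this is only a sketch with placeholder paths and values, not the book's actual listing 1:

```shell
uwsgi --socket 127.0.0.1:27000 \
      --listen 100 \
      --chdir /www/coffeehouse/ \
      --pythonpath /www/virtualenvs/coffeehouse/lib/python3/site-packages \
      --wsgi-file coffeehouse/wsgi.py \
      --processes 4 \
      --uid www-data --gid www-data \
      --master --vacuum \
      --harakiri 180 --max-requests 5000 \
      --disable-logging --single-interpreter
```

All of these are real uWSGI flags, but the directory paths, process count and user/group names are assumptions you'd adapt to your own project.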
You'll notice that after you run the
uwsgi command in listing 1, uWSGI outputs a series of informative logs, the ids for each of the uWSGI processes and then just waits to either output log messages or for it to be closed. Also notice the uWSGI
--socket option is set to
127.0.0.1:27000. What do you see if you open a browser and point it to this address? Nothing, because this address speaks the uWSGI protocol -- not the standard HTTP web protocol -- so you'll need to set up Nginx to communicate with uWSGI through its protocol -- see the previous sidebar 'uWSGI is a protocol and an implementation of WSGI' for more details on this fact.
Next, table 1 provides additional details on the uWSGI configuration options used in listing 1 so you can get a better feel for what they do.
|It sets the address/port on which uWSGI listens.|
|Sets the socket listen queue size, which is important for highly concurrent apps. Note this value is capped at 128 by default on Linux, but the cap can be raised via /proc/sys/net/core/somaxconn and /proc/sys/net/ipv4/tcp_max_syn_backlog for TCP sockets.|
|Sets a base directory prior to loading apps. For Django projects this is generally the PROJECT_DIR (i.e. where the |
|Defines directory paths separated by : for the Python interpreter. For Django projects this typically points to a Python virtualenv directory -- where Django and other packages reside -- and a Django project's BASE_DIR (e.g where a project's |
|This points to a Django project's wsgi configuration file relative to the PROJECT_DIR -- where |
|Indicates the number of processes/workers.|
|Sets the user id/name that owns the uWSGI processes. Note the underlying Django project files must also have permissions for this id/name|
|Sets the group id/name that owns the uWSGI processes. Note the underlying Django project files must also have permissions for this id/name|
|This is a clean-up option that removes all generated files/sockets (e.g. pid files, Unix sockets) when the server exits.|
|Enables the master process, uWSGI's built-in process manager that supervises the workers; it's the recommended approach for most scenarios.|
|Sets the maximum time (in seconds) to wait for a uWSGI task to complete. This is a self-healing option to avoid run-away tasks. If you're expecting Django tasks to take longer than 3 minutes (180 seconds) -- which in itself is an extremely long wait for web apps -- increase this setting. If you're paranoid about resource consumption you can reduce this setting, just be aware running tasks are killed if they reach this threshold.|
|Sets the maximum number of requests processed by a worker, after which the worker is restarted. This is another self-healing option that recycles workers periodically to avoid long-running workers that may produce memory leaks.|
|It reduces logging overhead, but you can always remove it if you need to log requests (e.g. for debugging).|
|A recommended setting to save memory as it uses a single Python interpreter for all workers.|
|(Not in listing 1) Sets uWSGI to daemon mode, unblocking the console on which it's run and sending output to the specified log.|
Table 1 contains all the configuration options used in listing 1, but in addition table 1 also describes the
--daemonize option. If you run uWSGI without the
--daemonize option (i.e. like listing 1), uWSGI sends all output to the console and blocks until uWSGI is terminated (e.g. with
Ctrl+C or by closing the console). If you've deployed any server-side production environment, you know it's necessary to put tasks into the background or daemon mode so they're not abruptly terminated when you close the console on which they were started, which is the purpose of the
--daemonize option: to send all output to a given file and unblock the console. Although using the
--daemonize option requires you to hunt down the uWSGI process id to kill it, it's the recommended approach for final uWSGI deployments.
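Rather than a long command line, uWSGI can also read the same options from an ini file, launched with uwsgi --ini <file>. The following is a sketch with placeholder paths and values, not a listing from this recipe:

```ini
; hypothetical coffeehouse_uwsgi.ini -- ini-file equivalents of the flags in table 1
[uwsgi]
socket = 127.0.0.1:27000
chdir = /www/coffeehouse/
wsgi-file = coffeehouse/wsgi.py
processes = 4
uid = www-data
gid = www-data
master = true
vacuum = true
harakiri = 180
max-requests = 5000
disable-logging = true
single-interpreter = true
; daemon mode: unblocks the console and sends output to this log
daemonize = /var/log/uwsgi/coffeehouse.log
```

Every command-line flag has an ini-file equivalent with the same name, so either form produces the same server behavior.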
An alternative I like for uWSGI rather than the
--daemonize option is
supervisor -- a Linux operating system utility to monitor and control processes -- which gives you a higher level of control over uWSGI processes (e.g. processes can restart on boot or if they're accidentally killed). Although it would go beyond the scope of this recipe to introduce
supervisor, I'll provide a minimal set of instructions to run uWSGI with
supervisor.
Listing 2 illustrates the script to run uWSGI under supervisor, which you would typically place in
supervisor's configuration directory (e.g. a file named
coffeehouse.conf containing listing 2).
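A supervisor script of this shape might look as follows -- a sketch with assumed paths and option values, not the recipe's actual listing 2; only the bracketed program name coffeehouseuwsgi is taken from the text:

```ini
; hypothetical coffeehouse.conf placed in supervisor's configuration directory
[program:coffeehouseuwsgi]
command=/www/virtualenvs/coffeehouse/bin/uwsgi --socket 127.0.0.1:27000 --chdir /www/coffeehouse/ --wsgi-file coffeehouse/wsgi.py --processes 4 --master --vacuum
user=www-data
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/supervisor/coffeehouseuwsgi.log
```

Note the command omits --daemonize: supervisor expects to manage the process in the foreground and captures its output itself.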
The first line in listing 2, surrounded by brackets
[ ], is the reference name for the process; this is important as it's used to start, stop and get the status of the process. The second line in listing 2,
command represents the process to run, which is identical to the one in listing 1 but uses an absolute path for the main
uwsgi executable. The remaining options in listing 2 are common choices for
supervisor scripts; I recommend you consult the
supervisor documentation if you want more details about them.
After you deploy the
supervisor script in listing 2 and restart the
supervisor process, everything should be ready to run uWSGI under
supervisor. You can type
supervisorctl status to get a list of
supervisor controlled processes, in it you should see a line that says
coffeehouseuwsgi STOPPED <date> -- where
coffeehouseuwsgi is the bracketed name in listing 2 -- indicating the process is stopped. To start a
supervisor process you can type
supervisorctl start <program> (e.g.
supervisorctl start coffeehouseuwsgi), to stop a
supervisor process you can type
supervisorctl stop <program> (e.g.
supervisorctl stop coffeehouseuwsgi) and to re-start a
supervisor process you can type
supervisorctl restart <program> (e.g.
supervisorctl restart coffeehouseuwsgi).
Once you have a running uWSGI application server using any of the previous variations, you can set up an Nginx server configuration to communicate with uWSGI. Listing 3 illustrates an Nginx virtual server site configuration.
The Python/Django specific configuration in listing 3 is enclosed within the
location / brackets, which tells Nginx to apply the configuration at the root URL (i.e.
/). As you can see in listing 3, all of the options contain the
uwsgi prefix which indicates they're all uWSGI options. The first parameter
include uwsgi_params is a short-cut syntax to pass various parameters from Nginx to uWSGI. Listing 4 illustrates the equivalent
uwsgi_param parameters produced by the short-cut
include uwsgi_params parameter.
|uwsgi_param parameters produced by include uwsgi_params|
As you can see in listing 4, the
include uwsgi_params parameter saves you from typing a lot of staple values that Nginx generates and passes to uWSGI.
After the include uwsgi_params parameter in listing 3, you can see there are three timeout parameters all set to 300 seconds. Next comes the
uwsgi_pass 127.0.0.1:27000 parameter which tells Nginx where to connect to uWSGI, a value that matches the
--socket value used by the uWSGI start-up commands in listings 1 and 2. Finally, the
uwsgi_param UWSGI_SCRIPT coffeehouse.wsgi in listing 3 is a standalone uWSGI parameter -- just like those in listing 4 -- that specifies the uWSGI script that corresponds to the Django application, where
coffeehouse is a directory and wsgi is the
wsgi.py file created for all Django projects.
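The uWSGI-related portion of such a site configuration can be sketched as follows -- the server name and timeout values are assumptions, not the recipe's actual listing 3:

```nginx
# hypothetical Nginx site configuration that connects to the backing uWSGI server
server {
    listen 80;
    server_name www.coffeehouse.com;

    location / {
        # pass the staple Nginx-generated values to uWSGI
        include uwsgi_params;
        uwsgi_connect_timeout 300;
        uwsgi_send_timeout 300;
        uwsgi_read_timeout 300;
        # must match the --socket value used to start uWSGI
        uwsgi_pass 127.0.0.1:27000;
        # Django project's wsgi module: coffeehouse/wsgi.py
        uwsgi_param UWSGI_SCRIPT coffeehouse.wsgi;
    }
}
```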
If you run into errors, check the console on which
uwsgi is running for logs if you're using the option from listing 1, or the
supervisor logs if you're using the option from listing 2. Table 2 contains a list of the most common errors and fixes for uWSGI deployments.
|ImportError: No module named django.core.wsgi|
(Cause: uWSGI can't locate the Django installation)
|ImportError: Could not import settings '<project_name>.settings' (Is it on sys.path?): No module named <project_name>.settings|
(Cause: uWSGI can't locate the Django application)
When you use Django's built-in web server (i.e.
python manage.py runserver) and have
DEBUG=True, the application's static resources are mounted as a convenience under the URL definition of the
STATIC_URL variable -- which defaults to
/static/. Once you use
DEBUG=False with Django's built-in web server or change to a different web server altogether (e.g. Nginx), this convenience disappears and no static resources are served automatically.
Next, I'll describe two Nginx alternatives. One uses two separate Nginx server instances -- one for uWSGI and the other for static resources -- which is the option I strongly recommend. The other describes how to set up a single Nginx server instance that connects to uWSGI and also dispatches a Django application's static resources, in case you can't or don't want to run multiple Nginx instances.
Keep in mind the following steps assume you have already consolidated your static resources into a single folder defined in
STATIC_ROOT. If you haven't made this consolidation or don't know what
STATIC_ROOT is, make the consolidation first (e.g. by running python manage.py collectstatic).
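The relevant settings can be sketched as follows -- the folder path is an assumption matching the /www/STORE/coffeestatic/ directory used later in this recipe:

```python
# sketch of the static-files values in a Django project's settings.py
STATIC_URL = '/static/'
STATIC_ROOT = '/www/STORE/coffeestatic/'

# after setting these, consolidate all static resources into STATIC_ROOT with:
#   python manage.py collectstatic
```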
Listing 5 illustrates a modified version of the Nginx site configuration in listing 3. Listing 5 shows multiple Nginx site configurations, one to run the main (uWSGI) Django application and the other to run Django's static resources.
The first site configuration in listing 5 belongs to the uWSGI server instance and is identical to the one described in listing 3, so there isn't much left to explain about this configuration.
The second site configuration in listing 5 is used to dispatch static resources through the
static.coffeehouse.com domain. The
root parameter tells Nginx to dispatch resources located in the server's file system
/www/STORE/coffeestatic/, which represents the consolidation folder for static resources defined in
STATIC_ROOT. This means if a request is made to the
/css/bootstrap.css URL -- or the full URL
http://static.coffeehouse.com/css/bootstrap.css -- Nginx dispatches the file from the server's file system at /www/STORE/coffeestatic/css/bootstrap.css.
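The static-only server configuration can be sketched as follows -- a minimal sketch assuming the static.coffeehouse.com domain and the /www/STORE/coffeestatic/ folder from the text:

```nginx
# hypothetical Nginx site configuration dedicated to static resources
server {
    listen 80;
    server_name static.coffeehouse.com;
    # STATIC_ROOT consolidation folder; /css/bootstrap.css is served
    # from /www/STORE/coffeestatic/css/bootstrap.css
    root /www/STORE/coffeestatic/;
}
```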
Although the previous approach of using separate web servers for dynamic and static content is ideal, sometimes you may not want or can't run multiple Nginx instances. For such a scenario you can use a single web server instance to handle all traffic.
Listing 6 illustrates a modified version of the Nginx site configuration in listing 3, which is the 'quick & dirty' approach of dispatching static resources with the same Nginx instance. Use this option only if you can't or don't want to run multiple instances.
The
location /static line in listing 6 tells Nginx that when a URL request comes in for
/static, it uses the
alias /www/STORE/coffeestatic value to map requests made to the
/static URL to the server's file system path
/www/STORE/coffeestatic/. This means if a request is made to the
/static/bootstrap/bootstrap.css URL, Nginx dispatches the file from the server's file system at /www/STORE/coffeestatic/bootstrap/bootstrap.css.
The
location /robots.txt line in listing 6 tells Nginx that when a URL request comes in for
/robots.txt -- or the full URL
www.coffeehouse.com/robots.txt -- it uses the
alias /www/STORE/coffeestatic/robots.txt value to dispatch the file located at the server's file system path
/www/STORE/coffeestatic/robots.txt -- where
/www/STORE/coffeestatic/ represents the consolidation folder for static resources defined in STATIC_ROOT. The other
alias /www/STORE/coffeestatic/favicon.ico line uses the same technique to dispatch requests made to the /favicon.ico URL.
Essentially what the Nginx
alias parameter does is translate given URLs and dispatch files from the server's file system.
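These alias-based additions to the main site configuration can be sketched as follows -- a sketch of the shape of listing 6's extra location blocks, assuming the paths used in the text:

```nginx
# hypothetical location blocks added to the uWSGI site configuration
location /static {
    alias /www/STORE/coffeestatic;
}
location /robots.txt {
    alias /www/STORE/coffeestatic/robots.txt;
}
location /favicon.ico {
    alias /www/STORE/coffeestatic/favicon.ico;
}
```

Because alias replaces the matched location prefix with the given path, requests under /static resolve directly to files in the consolidation folder, while the two single-file blocks map root-level requests to specific files in that same folder.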
Once you set up Nginx to serve the static resources in either of the previous configurations under
/www/STORE/coffeestatic/, remember you'll also need to update the
STATIC_URL variable in your project's settings.py file.
|AWS S3 and WhiteNoise offer good alternatives to dispatch Django static files bypassing web server configurations|