Django asynchronous tasks without Celery
In this blog post I will show you how to implement Django asynchronous tasks without Celery. First of all, let me define what I mean by the term “asynchronous task”.
What are Django asynchronous tasks?
Suppose that you want to perform a long running task in your Django web app, but you want to give an immediate response to the user without waiting for the task to finish. The task could be: sending an email, building a report, making a request to an external web service, etc. The response given to the user is usually a confirmation message that the task has started.
Every task that could take some time to complete should not block the request-response cycle between the user and your application.
If you Google “Django asynchronous tasks” you’ll probably be directed to Celery as the way to implement asynchronous tasks with Django. Please don’t get me wrong: there is nothing wrong with Celery. Many projects use it successfully in production, I have used it myself in a couple of projects, and it works well. What I find annoying is the additional effort:
Celery is yet another service to configure, launch and maintain.
Enter the uWSGI spooler
As I wrote in my other post Django – NGINX: deploy your Django project on a production server, I like to use the uWSGI application server. Well, it turns out that uWSGI provides a full-featured, production-ready system to implement asynchronous tasks. They call it the uWSGI Spooler. Citing the uWSGI documentation:
The Spooler is a queue manager built into uWSGI that works like a printing/mail system.
You can enqueue massive sending of emails, image processing, video encoding, etc. and let the spooler do the hard work in background while your users get their requests served by normal workers.
A spooler works by defining a directory in which “spool files” will be written, every time the spooler find a file in its directory it will parse it and will run a specific function.
The uWSGI Spooler is very easy to configure!
Install uwsgidecorators package
To interact with the uWSGI spooler from your Python code it is really convenient to install the uwsgidecorators package. Unfortunately there isn’t a recent version of uwsgidecorators on PyPI, but you can install it using apt:
sudo apt install python3-uwsgidecorators
If you use a virtualenv for your Django project (as you should) you can symlink the uwsgidecorators module into your virtualenv:
ln -s /usr/lib/python3/dist-packages/uwsgidecorators.py /path/to/your/virtualenv/lib/python3.6/site-packages
Create the directory for the “spool files”
First of all you have to create the directory where uWSGI will save the “spool files” used by the spooler:
mkdir -p /home/ubuntu/tasks
sudo chown www-data:www-data /home/ubuntu/tasks
Please make sure that the directory is writable by the user running uWSGI, which is www-data in this example.
Create a tasks module in your Django application
Assuming that you have a Django app named app1 you should create a module named tasks.py in the app directory. The content of the module should be something like this:
import logging

try:
    from uwsgidecorators import spool
except ImportError:
    # Fallback when not running inside uWSGI (e.g. ./manage.py runserver):
    # execute the task synchronously, as a plain function call.
    def spool(func):
        def func_wrapper(**arguments):
            return func(arguments)
        return func_wrapper

# The spooler imports this module outside the normal WSGI entry point,
# so initialize Django explicitly before touching the ORM.
import django
django.setup()

from .models import Model1

logger = logging.getLogger(__name__)


@spool
def long_running_task(arguments):
    # The spooler passes all task parameters in a single dictionary.
    id = arguments['id']
    obj1 = Model1.objects.get(pk=id)
    obj1.long_running_model_method()
The try/except construct lets you test the code even when you are not running inside a uWSGI application server, for instance when running locally with ./manage.py runserver. In that case the task is executed synchronously, as if it were a regular Python function call.
The long_running_task function is the task you will invoke using the uWSGI Spooler. As you can see you have to decorate it with the @spool decorator, and the parameters for the task are passed in a dictionary called arguments.
In this example I’m using the Django ORM to get an instance of a model and to call a model method that will take a long time to complete.
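One detail worth noting: the spooler serializes the arguments dictionary to a spool file, so in my experience the values should be plain strings (or bytes); richer objects are better passed by primary key. Exact behaviour may depend on your uWSGI and uwsgidecorators versions, so treat the following as a sketch of the pattern rather than a guarantee. It reuses the placeholder names from the example above:

# Variation of tasks.py: receive the primary key as a string and convert
# it back before hitting the database.
from uwsgidecorators import spool  # same try/except fallback as above
from .models import Model1


@spool
def long_running_task(arguments):
    pk = int(arguments['id'])        # spool file values arrive as text
    obj1 = Model1.objects.get(pk=pk)
    obj1.long_running_model_method()

# Caller side (e.g. in a view): pass the id explicitly as a string.
# long_running_task(id=str(instance.pk))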
Call the asynchronous task from your Django view
Here is how to call the asynchronous task from an example view:
from django.shortcuts import redirect
from django.contrib import messages

from .tasks import long_running_task


def async_view(request, id):
    # Enqueue the task in the uWSGI spooler (or run it synchronously
    # when the local fallback is in use) and respond immediately.
    long_running_task(id=id)
    messages.add_message(request, messages.SUCCESS, 'Task started correctly')
    return redirect('some_other_view')
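In case you are wondering where id comes from: it is captured from the URL. A hypothetical URL pattern for this view (not from the original project, purely for illustration) could look like this:

# urls.py -- illustrative URL configuration for the example view above.
from django.urls import path

from app1 import views

urlpatterns = [
    path('start-task/<int:id>/', views.async_view, name='async_view'),
]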
Configure uWSGI to use the spooler
To configure the uWSGI Spooler you should edit the uWSGI configuration file I presented in the post Django – NGINX: deploy your Django project on a production server. The file is located here: /etc/uwsgi/apps-enabled/django.ini. The content of the file should be something like this:
[uwsgi]
chdir = /home/ubuntu/django_project          # customize with your django installation directory
env = DJANGO_SETTINGS_MODULE=project.settings.production  # customize with your settings module
wsgi-file = project/wsgi.py                  # customize with the relative path to your wsgi.py file
workers = 1
spooler-chdir = /home/ubuntu/django_project  # customize with your django installation directory
spooler = /home/ubuntu/tasks/
import = app1.tasks
Restart uWSGI with:
service uwsgi restart
You will find the uWSGI logs in /var/log/uwsgi/apps/django.log. Check them to see whether the Python process started correctly or whether there are any issues.
In the log file you’ll also find messages about the Spooler, something like this:
Mon Nov 18 14:50:45 2019 - [spooler /home/ubuntu/tasks pid: 8646] managing request uwsgi_spoolfile_on_www.example.com_8647_3_1383635828_1574067630_419736 ...
Conclusion
In this post I’ve shown how to set up Django asynchronous tasks using the uWSGI Spooler. It is a simple and convenient way to perform long-running tasks outside the request-response cycle, especially if you already use the uWSGI application server.
I have used this technique in many different projects, for example an “Instagram-like” photo and video sharing platform I built from scratch for the popular online magazine mtb-mag.com. In that case videos are processed asynchronously to produce two versions suitable for all devices, in standard and high definition.
Please let me know if you have comments or questions by leaving a reply below. Happy coding!
What a neat feature! I’ve never heard of it! Thank you for sharing, Augusto!
Thank you for this writeup, this is really well done. After reading about all the horrible bugs with Celery and RabbitMQ, I really didn’t want to turn those on for async tasks. Plus, last time we tried, our server locked up. So this is a welcome alternative. I’ve implemented it, but ran into a lot of headaches with passing arguments. I finally got it working with the additional help of https://smirnov-am.github.io/background-jobs-with-flask/ and https://pythonise.com/series/learning-flask/exploring-uwsgi-decorators
Once again, thank you!
Hi Amit, thank you very much for your comment! I really appreciate that my blog can be useful to you!
I also had some problems with passing parameters to tasks, but I wanted to keep the post simple to understand for the general case. I plan to write a follow up post with some tips for passing parameters as well as links to a couple of Django apps you can use to simplify the integration of Django with the uWSGI spooler.
> as well as links to a couple of Django apps you can use to simplify the integration of Django with the uWSGI spooler.
That would be really useful. I successfully did all the above setup locally, tested it, and it worked fine. But when it came to deployment on AWS, I simply could not get it to work even after a day of mucking around. I finally had to roll it back and now I’m back to retrying Celery 🙁
Thank you so much. This is a huge help in my project. Again, thank you so, so much, Augusto.
Hi, this is exactly what I am looking for, and I am new to web application development. I have a request that takes almost 8–10 seconds to process, so async tasks are the only solution I could find, but the problem is I cannot find resources on how to send the processed file back once it’s done processing. I process the request asynchronously, but the user is still waiting for my response, so ideally I need to give the user a status bar with time to completion or the percentage of the remaining task. How can I implement this with spoolers?
Hi, I see two options here:
1. If you have an email address for the user you can send them an email with the link to download the processed file, when it is ready. In this case you’ll immediately return a response to the user, something like: “We are processing your file, we’ll send you an email when it is ready”.
2. While the async task is running, you keep saving its status (or completion percentage) on a dedicated Django model. This model will have the task ID as a field (I’m sure you can retrieve the task ID from the spooler) and a status (or percentage) field. From the web interface you poll for the task status (or percentage) using AJAX and a Django view that reads the same model, filtered by the correct task ID.
The second approach is more elegant, but much more complex; a rough sketch of the idea follows below.
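To make the second option more concrete, here is a rough, untested sketch. All names (TaskStatus, task_status_view, the task_id argument) are made up for illustration, and for simplicity the task ID is a string generated by the caller and passed through the spooler arguments rather than retrieved from the spooler itself:

# models.py -- hypothetical model holding the progress of a spooled task.
from django.db import models


class TaskStatus(models.Model):
    task_id = models.CharField(max_length=64, unique=True)
    percentage = models.PositiveSmallIntegerField(default=0)


# tasks.py -- the spooled task updates its TaskStatus row while it works.
import time

from uwsgidecorators import spool  # same try/except fallback as in the post


@spool
def long_running_task(arguments):
    status = TaskStatus.objects.get(task_id=arguments['task_id'])
    for step in range(100):
        time.sleep(1)  # stand-in for one chunk of real work
        status.percentage = step + 1
        status.save(update_fields=['percentage'])


# views.py -- the view polled via AJAX by the frontend.
from django.http import JsonResponse


def task_status_view(request, task_id):
    status = TaskStatus.objects.get(task_id=task_id)
    return JsonResponse({'percentage': status.percentage})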