This is a stupid-quick guide/reference (mostly for myself) on how to get up and running with Django on fly.io. It expands on their official tutorial. We'll be deploying Django with Postgres and Redis.
You can find an accompanying screencast below.
Add Dependencies
I like to use pip-tools to manage dependencies, but how you choose to manage them is up to you. The flyctl command line tool is smart enough to generate the correct Dockerfile for whichever approach you pick.
For example, I'd add the following to a requirements.in file:
celery[redis]
django
django-celery-results
django-environ
gunicorn
psycopg
After installing pip-tools, I'd run pip-compile to generate a requirements.txt that can be used by the Docker image we'll be uploading to fly.io. Then I'd use pip-sync to install all these dependencies locally.
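For reference, the whole workflow looks something like this (assuming requirements.in lives in the project root):
pip install pip-tools
pip-compile requirements.in  # writes a pinned requirements.txt
pip-sync requirements.txt    # installs exactly what's pinned, locally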
django-environ
This is pretty much copied verbatim from the django-environ docs, which I'd encourage you to check out.
Add the following to your settings.py file:
import os
from pathlib import Path

import environ
from django.core.management.utils import get_random_secret_key

env = environ.Env(
    # set casting, default value
    DEBUG=(bool, False)
)

# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent

# Take environment variables from the .env file
environ.Env.read_env(os.path.join(BASE_DIR, ".env"))

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env("SECRET_KEY", default=get_random_secret_key())

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = env("DEBUG")

if not DEBUG:
    ALLOWED_HOSTS = [".fly.dev"]
    CSRF_TRUSTED_ORIGINS = ["https://*.fly.dev"]
Configure the default database:
# Database
DATABASES = {
    "default": env.db(default="sqlite://"),
}
We're providing a fallback DB here because we'll be calling ./manage.py collectstatic inside our Dockerfile, where we won't have access to our prod DB.
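If I'm reading the django-environ docs right, that sqlite:// fallback expands to an in-memory SQLite database, roughly equivalent to:
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": ":memory:",  # no file on disk, so it's safe during the image build
    }
}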
Configure Celery as follows:
CELERY_RESULT_BACKEND = "django-db"
CELERY_BROKER_URL = env("REDIS_URL", default="redis://localhost:6379")
Create a .env file and add any missing configuration values:
DEBUG=1
DATABASE_URL=postgresql:///stuff
SECRET_KEY=1234
REDIS_URL=redis://localhost:6379
Finally, configure how static files will be served:
STATIC_URL = "static/"
STATIC_ROOT = "static"
Configuring your fly application
At this point we're ready to run the fly launch command. This CLI wizard will prompt you to provision infrastructure. Afterwards it will do some clever static analysis of your project and generate a custom Dockerfile and fly.toml configuration file based on your project layout.
fly launch
Modify the default .dockerignore to exclude our .env file and static assets. It's important we don't accidentally include our local .env file and override production secrets:
.git/
- *.sqlite3
+ static
+ .env
You might also want to add other common values to the .dockerignore file.
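For example, something like (purely illustrative):
__pycache__/
*.pyc
.venv/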
Generated Dockerfile
The Dockerfile generated by fly launch might look something like:
ARG PYTHON_VERSION=3.12-slim-bullseye

FROM python:${PYTHON_VERSION}

ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# install psycopg2 dependencies.
RUN apt-get update && apt-get install -y \
    libpq-dev \
    gcc \
    && rm -rf /var/lib/apt/lists/*

RUN mkdir -p /code

WORKDIR /code

COPY requirements.txt /tmp/requirements.txt
RUN set -ex && \
    pip install --upgrade pip && \
    pip install -r /tmp/requirements.txt && \
    rm -rf /root/.cache/
COPY . /code

ENV SECRET_KEY "iJbtJp4nb6XJuPPedO07gXxzOiS0XFrr6ulMUKsnu5fCRpO2yX"
RUN python manage.py collectstatic --noinput

EXPOSE 8000

CMD ["gunicorn", "--bind", ":8000", "--workers", "2", "stuff.wsgi"]
The flyctl command line tool is smart enough to figure out we're using psycopg, and so it installs the libpq developer library inside our container. The hardcoded SECRET_KEY is just a throwaway value so collectstatic can run at build time; the real key we store with fly secrets should override it at runtime. If you use poetry, you might find the output looks slightly different.
Generated fly.toml
The fly.toml
should look something like:
# fly.toml app configuration file generated for jack-stuff on 2023-11-01T18:37:10-03:00
#
# See https://fly.io/docs/reference/configuration/ for information about how to use this file.
#
app = "jack-stuff"
primary_region = "gru"
console_command = "/code/manage.py shell"

[build]

[deploy]
  release_command = "python manage.py migrate"

[env]
  PORT = "8000"

[http_service]
  internal_port = 8000
  force_https = true
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 0
  processes = ["app"]

[[statics]]
  guest_path = "/code/static"
  url_prefix = "/static/"
This handy (auto-generated) section should automatically apply any database migrations on deployment; Fly runs the release command in a temporary machine before the new version goes live:
[deploy]
  release_command = "python manage.py migrate"
You'll notice the Dockerfile has the line RUN python manage.py collectstatic --noinput. This works in tandem with the section below, which should let fly.io serve all our static assets in production, straight from the machine image without ever hitting Django:
[[statics]]
  guest_path = "/code/static"
  url_prefix = "/static/"
Running multiple processes inside our Fly.io App
Fly makes it super easy to run multiple processes, each in its own VM. They even published their own blog post describing how to do this for Django + Celery.
To configure Celery workers to run alongside the gunicorn server in the same app, add the following to the fly.toml configuration file:
[processes]
  app = "python -m gunicorn --bind :8000 --workers 2 stuff.wsgi"
  celery = "python -m celery -A stuff worker -l info -B"
The -B flag above embeds the beat scheduler inside the worker; if you wanted to, you could instead run Celery beat in its own separate process:
[processes]
  app = "python -m gunicorn --bind :8000 --workers 2 stuff.wsgi"
  worker = "python -m celery -A stuff worker -l info"
  beat = "python -m celery -A stuff beat -l info"
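To sanity-check that the worker actually picks up jobs, a trivial task does the trick; a sketch, assuming a hypothetical tasks.py in one of your installed apps:
# tasks.py -- a throwaway task for smoke-testing the worker
from celery import shared_task

@shared_task
def add(x, y):
    return x + y
Calling add.delay(2, 2) from ./manage.py shell should then show up in the worker's logs.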
Fly has some great documentation that goes in depth here.
Deployment
Double-check that the secrets have been configured correctly:
$ fly secrets list
NAME          DIGEST            CREATED AT
DATABASE_URL  f3bf61986914275f  1h12m ago
SECRET_KEY    65c26d0e2503fea1  1h13m ago
REDIS_URL     fbb3395b6181aa6c  1h13m ago
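fly launch should have set DATABASE_URL and REDIS_URL when it provisioned Postgres and Redis; anything missing can be set by hand. For example, to generate and store a proper SECRET_KEY (the Python one-liner is just a convenience, any long random string works):
fly secrets set SECRET_KEY="$(python -c 'from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())')"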
Finally, we're ready to deploy our app:
fly deploy
And with that we're done 🥳
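Once it's live, a couple of handy commands for poking around: fly logs tails the app's output, and fly console should drop you into a Django shell (it runs the console_command we saw in fly.toml):
fly logs
fly console  # runs /code/manage.py shell on a machine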