Django Development With Docker Compose and Machine


Docker is a containerization tool used for spinning up isolated, reproducible application environments. This piece details how to containerize a Django project, Postgres, and Redis for local development, and how to deliver the stack to the cloud via Docker Compose and Docker Machine.


In the end, the stack will include a separate container for each service:

  • 1 web/Django container
  • 1 nginx container
  • 1 Postgres container
  • 1 Redis container
  • 1 data container
[Diagram: container stack]

Interested in creating a similar environment for Flask? Check out this blog post.

Local Setup

Along with Docker (v1.6.1) we will be using –

  • Docker Compose (v1.2.0) for orchestrating a multi-container application into a single app, and
  • Docker Machine (v0.2.0) for creating Docker hosts both locally and in the cloud.

Follow the directions here and here to install Docker Compose and Machine, respectively. Then test out the installs:

$ docker-machine --version
docker-machine version 0.2.0 (8b9eaf2)
$ docker-compose --version
docker-compose 1.2.0

Next clone the project from the repository or create your own project based on the project structure found on the repo:

├── docker-compose.yml
├── nginx
│   ├── Dockerfile
│   └── sites-enabled
│       └── django_project
├── production.yml
└── web
    ├── Dockerfile
    ├── docker_django
    │   ├──
    │   ├── apps
    │   │   ├──
    │   │   └── todo
    │   │       ├──
    │   │       ├──
    │   │       ├──
    │   │       ├── templates
    │   │       │   ├── _base.html
    │   │       │   └── home.html
    │   │       ├──
    │   │       ├──
    │   │       └──
    │   ├──
    │   ├──
    │   └──
    ├── requirements.txt
    └── static
        └── main.css
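The web/Dockerfile itself is not reproduced above. As a rough sketch of what such a Dockerfile typically contains for this kind of setup (the base image and paths are assumptions, not the repo's actual file):

```dockerfile
# Sketch of a Dockerfile for the "web" service (illustrative only)
FROM python:2.7

# Create and switch to the application directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt /usr/src/app/
RUN pip install -r requirements.txt

# Copy in the rest of the project
COPY . /usr/src/app

EXPOSE 8000
```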

We’re now ready to get the containers up and running…

Docker Machine

To start Docker Machine, simply run:

$ docker-machine create -d virtualbox dev
INFO[0000] Creating CA: /Users/michael/.docker/machine/certs/ca.pem
INFO[0000] Creating client certificate: /Users/michael/.docker/machine/certs/cert.pem
INFO[0001] Downloading boot2docker.iso to /Users/michael/.docker/machine/cache/boot2docker.iso...
INFO[0035] Creating SSH key...
INFO[0035] Creating VirtualBox VM...
INFO[0043] Starting VirtualBox VM...
INFO[0044] Waiting for VM to start...
INFO[0094] "dev" has been created and is now the active machine.
INFO[0094] To point your Docker client at it, run this in your shell: eval "$(docker-machine env dev)"

The create command set up a new "Machine" (called dev) for Docker development. In essence, it downloaded boot2docker and started a VM with Docker running. Now just point Docker at the dev machine:

$ eval "$(docker-machine env dev)"
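That eval loads a handful of environment variables into the shell so the Docker client talks to the VM instead of a local daemon. The output of `docker-machine env dev` looks roughly like this (the IP and paths are illustrative):

```
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="/Users/michael/.docker/machine/machines/dev"
export DOCKER_HOST="tcp://192.168.99.100:2376"
```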

Run the following command to view the currently running Machines:

$ docker-machine ls
dev * virtualbox Running tcp://

Next, let’s fire up the containers with Docker Compose and get Django, Postgres, and Redis up and running.

Docker Compose

Let’s take a look at the docker-compose.yml file:

web:
  restart: always
  build: ./web
  env_file: .env
  command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000

nginx:
  restart: always
  build: ./nginx/

postgres:
  restart: always
  image: postgres:latest

redis:
  restart: always
  image: redis:latest

data:
  restart: always
  image: postgres:latest
  command: true

Here, we’re defining five services – web, nginx, postgres, redis, and data.

  1. First, the web service is built via the instructions in the Dockerfile within the "web" directory - where the Python environment is set up, requirements are installed, and the Django application is fired up on port 8000. That port is then forwarded to port 80 on the host environment - e.g., the Docker Machine. This service also adds environment variables to the container that are defined in the .env file.
  2. The nginx service is used for reverse proxy to forward requests either to Django or the static file directory.
  3. Next, the postgres service is built from the official PostgreSQL image from Docker Hub, which installs Postgres and runs the server on the default port 5432.
  4. Likewise, the redis service uses the official Redis image to install, well, Redis and then the service is run on port 6379.
  5. Finally, notice how there is a separate volume container that’s used to store the database data – called data. This helps ensure that the data persists even if the Postgres container is completely destroyed.
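The nginx/sites-enabled/django_project config behind item 2 is not shown in this post. A minimal reverse-proxy config matching that description might look like the following sketch (the upstream name web, the port, and the static path are assumptions, not the repo's actual file):

```nginx
server {
    listen 80;
    server_name localhost;

    # Serve static assets directly from disk
    location /static/ {
        alias /usr/src/app/static/;
    }

    # Forward everything else to the Django/gunicorn container
    location / {
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```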

Now, to get the containers running, build the images and then start the services:

$ docker-compose build
$ docker-compose up -d

Grab a cup of coffee. Or go for a long walk. This will take a while the first time you run it. Subsequent builds run much quicker since Docker caches the results from the first build.

Once the services are running, apply the database migrations:

$ docker-compose run web /usr/local/bin/python manage.py migrate

Grab the IP associated with Docker Machine - docker-machine ip dev - and then navigate to that IP in your browser:

Try refreshing. You should see the counter update. Essentially, we're using the Redis INCR command to increment a counter after each handled request. Check out the code in web/docker_django/apps/todo/ for more info.
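The counter logic amounts to a single atomic INCR per request. A minimal sketch of that idea, using an in-memory stand-in so it runs without a Redis server (the FakeRedis class and count_hit helper are illustrative, not the app's actual code; with redis-py the client exposes the same incr() call):

```python
class FakeRedis:
    """Tiny stand-in exposing only the incr() method used below."""
    def __init__(self):
        self.store = {}

    def incr(self, key):
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]


def count_hit(client, key="hits"):
    # INCR is atomic in real Redis, so concurrent requests can't
    # lose updates; the view would render the returned value.
    return client.incr(key)


r = FakeRedis()
print(count_hit(r))  # 1
print(count_hit(r))  # 2
```

In the real app, `client` would be a `redis.StrictRedis` instance pointed at the redis container.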

Again, this created five services, all running in different containers:

$ docker-compose ps
Name                           Command                          State   Ports
dockerizingdjango_data_1       / true                           Up      5432/tcp
dockerizingdjango_nginx_1      /usr/sbin/nginx                  Up      >80/tcp
dockerizingdjango_postgres_1   / postgres                       Up      >5432/tcp
dockerizingdjango_redis_1      / redis-server                   Up      >6379/tcp
dockerizingdjango_web_1        /usr/local/bin/gunicorn do ...   Up      8000/tcp

To see which environment variables are available to the web service, run:

$ docker-compose run web env
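Those variables come from the .env file referenced by env_file in docker-compose.yml. The file itself is not shown in this post; a typical one for a setup like this might contain entries such as (names and values here are illustrative, not the repo's actual file):

```
# .env (illustrative sketch -- never commit real secrets)
SECRET_KEY=changeme
DEBUG=True
DB_NAME=postgres
DB_USER=postgres
DB_PASS=postgres
```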

To view the logs:

$ docker-compose logs

You can also enter the Postgres Shell – since we forward the port to the host environment in the docker-compose.yml file – to add users/roles as well as databases via:

$ psql -h -p 5432 -U postgres --password

Ready to deploy? Stop the processes via docker-compose stop and let’s get the app up in the cloud!


Deployment

So, with our app running locally, we can now push this exact same environment to a cloud hosting provider with Docker Machine. Let's deploy to a Digital Ocean box.

After you sign up for Digital Ocean, generate a Personal Access Token, and then run the following command:

$ docker-machine create \
-d digitalocean \
--digitalocean-access-token=ADD_YOUR_TOKEN_HERE \
production

This will take a few minutes to provision the droplet and setup a new Docker Machine called production:

INFO[0000] Creating SSH key...
INFO[0001] Creating Digital Ocean droplet...
INFO[0133] "production" has been created and is now the active machine.
INFO[0133] To point your Docker client at it, run this in your shell: eval "$(docker-machine env production)"

Now we have two Machines running, one locally and one on Digital Ocean:

$ docker-machine ls
dev * virtualbox Running tcp://
production digitalocean Running tcp://

Set production as the active machine and load the Docker environment into the shell:

$ docker-machine active production
$ eval "$(docker-machine env production)"

Finally, let's build the Django app again in the cloud. This time we need to use a slightly different Docker Compose file that does not mount a volume in the container. Why? Well, the volume is perfect for local development since we can update our local code in the "web" directory and the changes will immediately take effect in the container. In production, there's no need for this, obviously.
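For illustration, the web service in production.yml might differ from the development file only by omitting that volume mount (this is a sketch, not the repo's actual file):

```yaml
# production.yml -- web service sketch (illustrative)
web:
  restart: always
  build: ./web
  env_file: .env
  command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
  # no "volumes" entry here: the code is baked into the image at build
  # time, so the container doesn't depend on files on the Docker host
```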

$ docker-compose -f production.yml build
$ docker-compose -f production.yml up -d
$ docker-compose -f production.yml run web /usr/local/bin/python manage.py migrate

Grab the IP address associated with the Digital Ocean droplet - docker-machine ip production - and view it in your browser. If all went well, you should see your app running, as it should.


  • Grab the code from the repo (star it too… my self-esteem depends on it!).
  • Comment below with questions.
  • Need a challenge? Try using extends to clean up the repetitive parts of the two Docker Compose configuration files. Keep it DRY!
  • Have a great day!
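As a sketch of what that extends refactor might look like with Compose 1.x (the common.yml filename and the exact shared keys are assumptions):

```yaml
# common.yml -- shared service definition (illustrative)
web:
  restart: always
  build: ./web
  env_file: .env
  command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000

# docker-compose.yml -- inherit the shared definition, add dev-only bits
web:
  extends:
    file: common.yml
    service: web
  volumes:
    - ./web:/usr/src/app
```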