
I'm trying to create a Django app in a Docker container. The app should use a Postgres db with the PostGIS extension, which I have in another container. I'm trying to wire this up with docker-compose but cannot get it working.

I can get the app working outside a container, with the database containerized, just fine. I can also get the app working in a container using an SQLite db (i.e. a file inside the container, with no external container dependencies). But whatever I do, the containerized app can't find the database.

My docker-compose file:

version: '3.7'

services:
    postgis:
        image: kartoza/postgis:12.1
        volumes:
            - postgres_data:/var/lib/postgresql/data/
        ports:
            - "${POSTGRES_PORT}:5432"
        environment:
            - POSTGRES_USER=${POSTGRES_USER}
            - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
            - POSTGRES_DB=${POSTGRES_DB}
        env_file:
            - .env
    web:
        build: .
#        command: sh -c "/wait && python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
        command: sh -c "python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
#        restart: on-failure
        ports:
            - "${APP_PORT}:8000"
        volumes:
            - .:/code
        depends_on:
            - postgis
        env_file:
            - .env
        environment:
            WAIT_HOSTS: 0.0.0.0:${POSTGRES_PORT}
volumes:
    postgres_data:
        name: ${POSTGRES_VOLUME}

My Dockerfile (of the app):

# Pull base image
FROM python:3.7
LABEL maintainer="[email protected]"

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# install dependencies
# RUN pip install pipenv
RUN pip install pipenv
RUN mkdir /code/
COPY . /code
WORKDIR /code/
RUN pipenv install --system
# RUN pipenv install pygdal

RUN apt-get update &&\
    apt-get install -y binutils libproj-dev gdal-bin python-gdal python3-gdal postgresql-client


## Add the wait script to the image
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait /wait
RUN chmod +x /wait

# set work directory
WORKDIR /code/app
# RUN python manage.py migrate --no-input
# CMD ["python", "manage.py", "migrate", "--no-input"]
# RUN cd ${WORKDIR}

# If we want to run docker by itself we need to use below
# but if we want to run from docker-compose we'll set it there
EXPOSE 8000

# CMD /wait && python manage.py migrate --no-input
# CMD ["python", "manage.py", "migrate", "--no-input"]
# CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

My .env file:

# POSTGRES
POSTGRES_PORT=25432
POSTGRES_USER=username
POSTGRES_PASSWORD=pass
POSTGRES_DB=db
POSTGRES_VOLUME=data
POSTGRES_HOST=localhost

# GEOSERVER


# DJANGO
APP_PORT=8000

And finally the DATABASES setting in the settings.py of my Django app:

DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': os.getenv('POSTGRES_DBNAME'),
        'USER': os.getenv('POSTGRES_USER'),
        'PASSWORD': os.getenv('POSTGRES_PASS'),
        'HOST': os.getenv("POSTGRES_HOST", "localhost"),
        'PORT': os.getenv('POSTGRES_PORT')
    }
}

I've tried quite a lot of things (as you can see in some of the comments). I realized that docker-compose doesn't seem to wait until Postgres is fully up and accepting connections, so I tried to build in a waiting step (as suggested on the wait script's website; the enabled variant is shown below for reference). I originally had the migrations and the server start inside the Dockerfile (migrations during the build, runserver as the startup command), but those need Postgres, and since nothing waited for it, that didn't work either. I finally moved it all out to the docker-compose.yml file but still can't get it working.
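For reference, this is what the wait-based variant of the web service looked like when enabled (assembled from the commented-out lines above; shown as context, not as a working fix):

    web:
        build: .
        # the ufoscout wait script blocks until every host:port listed in WAIT_HOSTS accepts TCP connections
        command: sh -c "/wait && python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
        environment:
            WAIT_HOSTS: 0.0.0.0:${POSTGRES_PORT}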

The error I get:

web_1      |    Is the server running on host "localhost" (127.0.0.1) and accepting
web_1      |    TCP/IP connections on port 25432?
web_1      | could not connect to server: Cannot assign requested address
web_1      |    Is the server running on host "localhost" (::1) and accepting
web_1      |    TCP/IP connections on port 25432?

Does anybody have an idea why this isn't working?

3 Comments
  • I think your db host should be the name of the database container, because within the Docker network containers can reach each other by container name. Try using postgis for your database HOST. Commented Mar 17, 2020 at 17:51
  • Why don't you put them in a network? Commented Mar 17, 2020 at 19:23
  • @SadanA. I've tried using postgis, both in the Django settings and in the docker-compose file (environment: WAIT_HOSTS: postgis:${POSTGRES_PORT}), sadly to no avail. Andrey Nelubin: will look into networks, good idea. Commented Mar 17, 2020 at 20:19

2 Answers


I see that in your settings.py of the django app, you are connecting to Postgres via

'HOST': os.getenv("POSTGRES_HOST", "localhost"),

While in .env you are setting the value of POSTGRES_HOST to localhost. This means the web container is trying to reach the Postgres server postgis at localhost, which should not be the case.

In order to solve this problem, simply update your .env file to be like this:

POSTGRES_PORT=5432
...
POSTGRES_HOST=postgis
...

The reason is that in your case docker-compose brings up two containers, postgis and web, inside the same Docker network, and they can reach each other via their service names, i.e. postgis and web respectively.

Regarding the port: the web container can reach postgis on port 5432 but not on 25432, while your host machine can reach the database on port 25432 but not on 5432.
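In other words, the left-hand side of a ports mapping only exists on the host, while other containers on the compose network use the service name plus the container-internal port. Annotated with the values from the question (just an illustration, not a change you need to make):

    postgis:
        ports:
            # "<port on the host>:<port inside the container>"
            #   from the host machine:  localhost:25432
            #   from other containers:  postgis:5432
            - "${POSTGRES_PORT}:5432"   # i.e. 25432:5432 with the original .env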


4 Comments

I don't fully understand: I am running Postgres in a separate container using a docker-compose file, so that is your suggestion number 1. I haven't installed Postgres locally on my Mac.
I have changed my code (in .env, POSTGRES_HOST is now postgis), but it is still not working. I'll update my code in the OP to reflect the changes.
I have updated my answer. The POSTGRES_PORT needs to be 5432
Thanks, I got it working finally. Didn't realize that the internal port is used by other containers while the external port can be used outside the network.

You cannot use localhost from inside the Docker containers: there it resolves to the container itself, not to the host running the containers. Switch to the service name instead (a quick connectivity check is sketched after the compose file below).

To fix the issue, change your .env to

# POSTGRES
POSTGRES_PORT=5432
POSTGRES_USER=username
POSTGRES_PASSWORD=pass
POSTGRES_DB=db
POSTGRES_VOLUME=data
POSTGRES_HOST=postgis

# DJANGO
APP_PORT=8000

and your compose file to

version: '3.7'

services:
    postgis:
        image: kartoza/postgis:12.1
        volumes:
            - postgres_data:/var/lib/postgresql/data/
        environment:
            - POSTGRES_USER=${POSTGRES_USER}
            - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
            - POSTGRES_DB=${POSTGRES_DB}
        env_file:
            - .env
    web:
        build: .
#        command: sh -c "/wait && python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
        command: sh -c "python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
#        restart: on-failure
        ports:
            - "${APP_PORT}:8000"
        volumes:
            - .:/code
        depends_on:
            - postgis
        env_file:
            - .env
        environment:
            WAIT_HOSTS: postgis:${POSTGRES_PORT}
volumes:
    postgres_data:
        name: ${POSTGRES_VOLUME}
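If the web container still cannot reach the database, a quick sanity check (a sketch; pg_isready is part of the postgresql-client package that the question's Dockerfile already installs) is to test reachability from a throwaway web container:

# start only the database, then probe it by service name and internal port
docker-compose up -d postgis
docker-compose run --rm web pg_isready -h postgis -p 5432
# expected output: postgis:5432 - accepting connections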

1 Comment

Thanks for the comment, I still get this message: web_1 | Is the server running on host "postgis" (192.168.192.2) and accepting web_1 | TCP/IP connections on port 25432?
