
So I've managed to build my Docker image locally using docker-compose build and I've pushed the image to my Docker Hub repository.
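
For reference, the build and push went roughly like this (the local image name Compose produced is from memory; the pushed name matches the run command below):

docker-compose build
# tag the image Compose built (local name assumed) and push it to Docker Hub
docker tag mellon_app mycommand/myapp:1.1
docker push mycommand/myapp:1.1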

Now I'm trying to get it working on DigitalOcean so I can host it. I've pulled the correct version of the image and I am trying to run it with the following command:

root@my-droplet:~# docker run --rm -it -p 8000:8000/tcp mycommand/myapp:1.1 (yes, 1.1)

However I soon run into these two errors:

...
File "/usr/local/lib/python3.8/dist-packages/django/db/backends/postgresql/base.py",
line 185, in get_new_connection
    connection = Database.connect(**conn_params)   File "/usr/local/lib/python3.8/dist-packages/psycopg2/__init__.py", line
127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync) 
psycopg2.OperationalError: could not translate host name "postgres" to address: Name or service not known ```
... 
File "/usr/local/lib/python3.8/dist-packages/django/db/backends/base/base.py",
line 197, in connect
    self.connection = self.get_new_connection(conn_params)   File "/usr/local/lib/python3.8/dist-packages/django/utils/asyncio.py", line
26, in inner
    return func(*args, **kwargs)   File "/usr/local/lib/python3.8/dist-packages/django/db/backends/postgresql/base.py",
line 185, in get_new_connection
    connection = Database.connect(**conn_params)   File "/usr/local/lib/python3.8/dist-packages/psycopg2/__init__.py", line
127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync) 
django.db.utils.OperationalError: could not translate host name "postgres" to address: Name or service not known ```

This may be due to how I have divided my application (using docker-compose.yml) into two services and have only pushed the image of the app since my previous post.

Here is my docker-compose.yml file:

version: '3'

services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=(adminname)
      - POSTGRES_PASSWORD=(adminpassword)
      - CLOUDINARY_URL=(cloudinarykey)
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db

Here is my Dockerfile:

FROM (MY FRIENDS ACCOUNT)/django-npm:latest

RUN mkdir usr/src/mprova

WORKDIR /usr/src/mprova

COPY frontend ./frontend
COPY backend ./backend

WORKDIR /usr/src/mprova/frontend

RUN npm install
RUN npm run build

WORKDIR /usr/src/mprova/backend

ENV DJANGO_PRODUCTION=True
RUN pip3 install -r requirements.txt

EXPOSE 8000

CMD python3 manage.py collectstatic && \
    python3 manage.py makemigrations && \
    python3 manage.py migrate && \
    gunicorn mellon.wsgi --bind 0.0.0.0:8000

and here is a snippet of my settings.py file:

...
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'admin',
        'PASSWORD': '[THE PASSWORD]',
        'HOST': 'db',
        'PORT': 5432,
    }
}
...

How can I fix this issue? I've looked around but can't see what I'm doing wrong.


From what I saw online, I added the following to my project, but it doesn't work either:

Here is my base.py (new file):

import psycopg2

conn = psycopg2.connect("dbname='postgres' user='admin' host='db' password='[THE PASSWORD]'")

Additions to Dockerfile:

FROM (MY FRIENDS ACCOUNT)/django-npm:latest

RUN pip3 install psycopg2-binary

COPY base.py base.py

RUN python3 base.py

...

Yet the build fails due to this error:

Traceback (most recent call last):
  File "base.py", line 3, in <module>
    conn = psycopg2.connect("dbname='postgres' user='admin' host='db' password='[THE PASSWORD]'")
  File "/usr/local/lib/python3.8/dist-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
        Is the server running on host "db" ([SOME-IP-ADDRESS]) and accepting
        TCP/IP connections on port 5432?

ERROR: Service 'app' failed to build: The command '/bin/sh -c python3 base.py' returned a non-zero code: 1
me@My-MacBook-Pro-5 mellon %

I'm unsure as to what to try next but I feel like I am making a simple mistake. Why is the connection being refused at the moment?

3 Answers


When you launch a docker-compose.yml file, Compose will automatically create a Docker network for you and attach containers to it. Absent any specific networks: declarations in the file, Compose will name the network default, and then most things Compose creates are prefixed with the current directory name. This setup is described in more detail in Networking in Compose.

docker run doesn't look at the Compose settings at all, so anything you manually do with docker run needs to replicate the settings docker-compose.yml has or would automatically generate. To make a docker run container connect to a Compose network, you need to do something like:

docker network ls              # looking for somename_default
docker run \
  --rm -it -p 8000:8000/tcp \
  --net somename_default \     # <-- add this option
  mycommand/myapp:1.1

(The specific error message could not translate host name "postgres" to address hints at a Docker networking setup issue: the database client container isn't on the same network as the database server and so the hostname resolution fails. Contrast with a "connection refused" type error, which would suggest one container is finding the other but either the database isn't running yet or the port number is wrong.)
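
If the database itself is not already running on the droplet (the question mentions only the app image was pushed), it needs the same treatment: start it on that network under the name the app expects. A rough sketch, with the environment values elided the same way as in the compose file:

# the container name doubles as the DNS name "db" on the shared network
docker run -d \
  --net somename_default \
  --name db \
  -e POSTGRES_DB=postgres \
  -e POSTGRES_USER=(adminname) \
  -e POSTGRES_PASSWORD=(adminpassword) \
  postgres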

In your docker build example, code in a Dockerfile can never connect to a database. At a mechanical level, there's no way to attach it to a Docker network as we did with docker run. At a conceptual level, building an image produces a reusable artifact; if I docker push the image to a registry and docker run it on a different host, that won't have the database setup, or similarly if I delete and recreate the database container locally.
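
If you do want a connectivity check like the base.py from the question, run it when the container starts rather than when the image builds, for example by folding it into the existing CMD (a sketch, otherwise unchanged from the question's Dockerfile):

COPY base.py base.py
# no "RUN python3 base.py" at build time; check the database at container start instead:
CMD python3 base.py && \
    python3 manage.py collectstatic && \
    python3 manage.py makemigrations && \
    python3 manage.py migrate && \
    gunicorn mellon.wsgi --bind 0.0.0.0:8000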


5 Comments

I missed the run part. Good explanation :)
When I do docker volume ls, there are no volumes available. Why is this the case?
I meant docker network ls, sorry; I've fixed the answer.
@DavidMaze Thank you. There are three options: bridge, host and none. How do I know which one to select?
If you've run docker-compose up -d, there should be one named somename_default, where somename is the current directory's name. Use that one.

I am trying to run it with the following command:

root@my-droplet:~# docker run --rm -it -p 8000:8000/tcp mycommand/myapp:1.1 (yes, 1.1)

docker-compose lets you run a composition of several containers together.
Running the Django container on its own with docker run defeats its purpose.

What you want is to run:

docker-compose build

docker-compose up

or, in a single command:

docker-compose up --build                    

Why is the connection being refused at the moment?

Probably because the connection attempt is not made at the right moment.

I am not a Django specialist, but your second try looks OK, although your django service is missing the dockerfile attribute in the compose declaration (a typing error, I assume).

For your first try, this error:

django.db.utils.OperationalError: could not translate host name "postgres" to address: Name or service not known

means that Django is using "postgres" as the database host.
But according to your compose file, the database is reachable under the host name "db". These don't match.
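
One way to keep the two in line is to read the host from an environment variable in settings.py, so the same image works wherever the database lives. A sketch (the DB_HOST variable name is mine; it would also need to be set in docker-compose.yml):

import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'admin',
        'PASSWORD': '[THE PASSWORD]',
        # default to "db", the service name from docker-compose.yml
        'HOST': os.environ.get('DB_HOST', 'db'),
        'PORT': 5432,
    }
}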

Regarding your second try, which defines the database properties, it looks fine and it also matches what the official doc states:

In this section, you set up the database connection for Django.

In your project directory, edit the composeexample/settings.py file.

Replace the DATABASES = ... with the following:

settings.py

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': 'db',
        'PORT': 5432,
    }
}

These settings are determined by the postgres Docker image specified in docker-compose.yml.

For your second try, this error:

    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
        Is the server running on host "db" ([SOME-IP-ADDRESS]) and accepting
        TCP/IP connections on port 5432?

may mean that the Postgres listener is not ready yet when Django tries to open the connection.

The depends_on: attribute in your compose file defines the order in which containers start, but it does not wait for the application behind a container to be "ready".

Here are two ways to address that kind of scenario:

  • either add a sleep of a few seconds before starting your django service,
  • or configure your django service to restart on failure.

For the first way, configure your django service with a command such as the following (not tried):

   app:
     build: .
     ports:
       - "8000:8000"
     depends_on:
       - db
     command: /bin/bash -c "sleep 30; python3 manage.py collectstatic && python3 manage.py makemigrations && python3 manage.py migrate && gunicorn mellon.wsgi --bind 0.0.0.0:8000 "

For the second way, try something like this:

  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
    restart: on-failure
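
A third option, instead of a fixed sleep, is to retry the connection in a small script before starting Django. A sketch using psycopg2, which is already installed (the wait_for_db.py file name and the retry limit are mine):

# wait_for_db.py
import time
import psycopg2

# poll Postgres until it accepts connections, then exit so the real command can run
for attempt in range(30):
    try:
        psycopg2.connect(dbname='postgres', user='admin',
                         password='[THE PASSWORD]', host='db', port=5432).close()
        break
    except psycopg2.OperationalError:
        time.sleep(1)
else:
    raise SystemExit('database never became available')

You would then call it first in the service's command, e.g. command: /bin/bash -c "python3 wait_for_db.py && python3 manage.py migrate && gunicorn mellon.wsgi --bind 0.0.0.0:8000".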



I had not touched my Django application for a long time. When I ran python manage.py createsuperuser, I got:

django.db.utils.OperationalError: could not translate host name "postgres" to address: Name or service not known

Instead, I needed to use:

docker-compose exec [Your Django App] python manage.py createsuperuser

The command needs to run inside the container, then it works!

