I can't connect to the Postgres Docker container outside of a Python shell.

I have a script which downloads a Postgres Docker image and rolls it out. One of the steps is that I would like to create some tables within the database.

If I run setup.py, it downloads the image, sets up the Docker container on the local machine, and waits for connections. If I then open a Python shell and try to connect to the DB, everything works fine.

My Dockerfile:

FROM postgres:11.1
RUN apt-get update
EXPOSE 5432
ENV POSTGRES_DB monitoring
ENV POSTGRES_USER process_monitor

My setup.py file:

import subprocess
import psycopg2


def update_apt():
    subprocess.run(['sudo', 'apt-get', '-y', 'update'])

def install_docker():
    subprocess.run(['sudo', 'apt-get', '-y', 'install', 'docker.io'])

def install_docker_image():
    subprocess.run(['sudo','docker', 'build', '-t', 'postgres-image', '.'])

def docker_run():
    subprocess.run(['sudo', 'docker', 'run', '-p', '5432:5432', '-v', '/var/run/postgresql:/var/run/postgresql', '-d', 'postgres-image'])


if __name__ == "__main__":
    update_apt()
    install_docker()
    install_docker_image()
    docker_run()

Here is what I get in the CLI when I run this:

Successfully built d7a66c60b43e
Successfully tagged postgres-image:latest
2d1c667a84513943dad359b73962b6314590b1bf79ab4376aa7b513684629e5f

--- Successfully built and ran the container ---

(venv) simon@simon-HP-EliteBook-840-G2:~/Desktop/Projects/x5-test-task$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
2d1c667a8451        postgres-image      "docker-entrypoint.s…"   7 seconds ago       Up 5 seconds        0.0.0.0:5432->5432/tcp   loving_shtern

--- Check if it works ---

(venv) simon@simon-HP-EliteBook-840-G2:~/Desktop/Projects/x5-test-task$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import psycopg2
>>> conn = psycopg2.connect(dbname='monitoring', user='process_monitor')
>>> cur = conn.cursor()
>>> cur.execute('SELECT datname FROM pg_database')
>>> cur.fetchall()
[('postgres',), ('monitoring',), ('template1',), ('template0',)]
>>> exit()

--- as you can see, there is a monitoring database I need ---

As you can see, the connection from the Python shell is successful.

But when I add the same code that I checked in the shell to the script, I get this error:

if __name__ == "__main__":
    update_apt()
    install_docker()
    install_docker_image()
    docker_run()
    conn = psycopg2.connect(dbname='monitoring', user='process_monitor')
Traceback (most recent call last):
  File "setup.py", line 58, in <module>
    conn = psycopg2.connect(dbname='monitoring', user='process_monitor')
  File "/home/simon/Desktop/Projects/x5-test-task/venv/lib/python3.6/site-packages/psycopg2/__init__.py", line 130, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

I can't figure out why I can't connect from within the same script. Should I add some sleep time?

Edit 1: it seems like it is a timing problem. I tried sleep(60) and the connection succeeded.
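
For reference, this is roughly what that workaround looks like in setup.py (60 seconds is just the value I tried, not a tuned number):

import time

if __name__ == "__main__":
    update_apt()
    install_docker()
    install_docker_image()
    docker_run()
    time.sleep(60)  # crude wait until Postgres inside the container finishes initializing
    conn = psycopg2.connect(dbname='monitoring', user='process_monitor')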

3 Answers

The problem is that you try to connect right after starting the database. There is some delay between the container starting and the moment the database starts accepting connections. You can either wait a fixed amount of time with sleep, or write a loop that checks whether the DB port is open.
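
A minimal sketch of the polling approach, assuming the container publishes 5432 on localhost (the host, port and timeout values here are only illustrative):

import socket
import time

def wait_for_port(host='127.0.0.1', port=5432, timeout=60):
    # Poll until a TCP connection to host:port succeeds or the timeout expires.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(1)  # port not open yet, try again shortly
    return False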

1 Comment

Yep, that is correct. Also, that led me to properly creating the database via SQL scripts inside Docker.

After checking the official documentation, I found this:

If there is no database when postgres starts in a container, then postgres will create the default database for you. While this is the expected behavior of postgres, this means that it will not accept incoming connections during that time. This may cause issues when using automation tools, such as docker-compose, that start several containers simultaneously.

That means that for several seconds right after the container starts, you can't connect to Postgres.
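
Based on that, one way to handle it in setup.py is to retry the connection until the server is ready. This is just a sketch using the DB name and user from the question, with arbitrary retry and delay values:

import time
import psycopg2

def connect_with_retry(retries=30, delay=2):
    # Keep retrying until Postgres has finished its first-run initialization.
    for attempt in range(retries):
        try:
            return psycopg2.connect(dbname='monitoring', user='process_monitor')
        except psycopg2.OperationalError:
            time.sleep(delay)  # server not accepting connections yet
    raise RuntimeError('Postgres did not become ready in time')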

Check this link for a Docker setup of a Python API with Postgres: https://github.com/ajayrawat12/DockerPythonFalconPostgres

You can expose the DB port in the docker-compose.yml file and then connect to it from anywhere with a DB client at 127.0.0.1:5432.
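
With the port published, a TCP connection from the script would look roughly like this (assuming the container's authentication settings allow it; dbname and user are taken from the question):

import psycopg2

# Connect over TCP to the published container port instead of the Unix socket.
conn = psycopg2.connect(
    dbname='monitoring',
    user='process_monitor',
    host='127.0.0.1',
    port=5432,
)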

1 Comment

Sorry, that is not what this question was about.
