I am trying to set up a PostgreSQL container (https://hub.docker.com/_/postgres/). I have data from an existing PostgreSQL instance: I copied the contents of its /var/lib/postgresql/data directory and want to mount it as a volume in the PostgreSQL container.

The relevant part of my docker-compose.yml:

db:
    image: postgres:9.4
    ports:
        - 5432:5432
    environment:
        POSTGRES_PASSWORD: postgres
        POSTGRES_USER: postgres
        PGDATA: /var/lib/postgresql/data
    volumes:
        - /projects/own/docker_php/pgdata:/var/lib/postgresql/data

When I run docker-compose up, I get this message:

db_1  | initdb: directory "/var/lib/postgresql/data" exists but is not empty
db_1  | If you want to create a new database system, either remove or empty
db_1  | the directory "/var/lib/postgresql/data" or run initdb
db_1  | with an argument other than "/var/lib/postgresql/data".

I also tried creating my own image with the data included, so my Dockerfile is:

FROM postgres:9.4
COPY pgdata /var/lib/postgresql/data

But I got the same error. What am I doing wrong?

Update

I created a SQL dump using pg_dumpall and put it in /docker-entrypoint-initdb.d, but the file is executed every time I run docker-compose up.
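For reference, I mount the dump into the image's init directory roughly like this (the local path is simplified):

db:
    image: postgres:9.4
    volumes:
        - ./dump_all.sql:/docker-entrypoint-initdb.d/dump_all.sql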

  • Have you ever solved this with data volumes instead of data-only containers?

3 Answers

To build on irakli's answer, here's an updated solution:

  • uses the newer version 2 Compose file format
  • a separate volumes section
  • extra settings removed

docker-compose.yml

version: '2'

services:
  postgres9:
    image: postgres:9.4
    expose:
      - 5432
    volumes:
      - data:/var/lib/postgresql/data

volumes:
  data: {}
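A note on the named volume: data: {} declares a Docker-managed volume, and Compose prefixes its name with the project (directory) name. To see where it actually lives on the host, inspect it; the myproject_ prefix below is illustrative:

$ docker volume ls
$ docker volume inspect myproject_data --format '{{ .Mountpoint }}'
/var/lib/docker/volumes/myproject_data/_data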

demo

Start the Postgres database server:

$ docker-compose up

In another terminal, talk to the container's Postgres and show all tables in the database:

$ docker exec -it $(docker-compose ps -q postgres9) psql -Upostgres -c '\z'

It'll show nothing, as the database is blank. Create a table:

$ docker exec -it $(docker-compose ps -q postgres9) psql -Upostgres -c 'create table beer()'

List the newly-created table:

$ docker exec -it $(docker-compose ps -q postgres9) psql -Upostgres -c '\z'

                         Access privileges
 Schema |   Name    | Type  | Access privileges | Column access privileges 
--------+-----------+-------+-------------------+--------------------------
 public | beer      | table |                   | 

Yay! We've now started a Postgres database using a shared storage volume, and stored some data in it. The next step is to check that the data actually sticks around after the server stops.

Now, kill the Postgres server container:

$ docker-compose stop

Start up the Postgres container again:

$ docker-compose up

We expect that the database server will re-use the storage, so our very important data is still there. Check:

$ docker exec -it $(docker-compose ps -q postgres9) psql -Upostgres -c '\z'
                         Access privileges
 Schema |   Name    | Type  | Access privileges | Column access privileges 
--------+-----------+-------+-------------------+--------------------------
 public | beer      | table |                   | 

We've successfully used a new-style Docker Compose file to run a Postgres database using an external data volume, and checked that it keeps our data safe and sound.
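One caveat the demo doesn't exercise: named volumes survive docker-compose stop and docker-compose down, but passing -v to down removes them, and the data with them:

$ docker-compose down        # removes containers, keeps the "data" volume
$ docker-compose down -v     # removes the named volume too: data is gone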

backing up and restoring data

First, make a backup, storing our data on the host.

Note: use plain "docker exec" without "-t" here, otherwise the allocated TTY will add carriage returns to the dump.

$ docker exec $(docker-compose ps -q postgres9) pg_dump -Upostgres > backup.sql

Zap our data from the guest database:

$ docker exec -it $(docker-compose ps -q postgres9) psql -Upostgres -c 'drop table beer'

Restore our backup (stored on the host) into the Postgres container.

Note: use "exec -i", not "-it", otherwise you'll get an "input device is not a TTY" error.

$ docker exec -i $(docker-compose ps -q postgres9) psql -Upostgres < backup.sql

List the tables to verify the restore worked:

$ docker exec -it $(docker-compose ps -q postgres9) psql -Upostgres -c '\z'
                         Access privileges
 Schema |   Name    | Type  | Access privileges | Column access privileges 
--------+-----------+-------+-------------------+--------------------------
 public | beer      | table |                   | 

To sum up, we've verified that we can start a database, the data persists after a restart, and we can restore a backup into it from the host.
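As an aside, if you need every database and role rather than a single database, pg_dumpall drops in the same way (the filename is just an example):

$ docker exec $(docker-compose ps -q postgres9) pg_dumpall -Upostgres > backup_all.sql
$ docker exec -i $(docker-compose ps -q postgres9) psql -Upostgres < backup_all.sql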

Thanks Tomasz!

4 Comments

Great example, thank you, one observation under "Zap our data ... " there should be 'drop table beer' rather than 'delete table beer', shouldn't it?
Hi @johntellsall, thanks for the demo. Could you let me know which host machine directory the volumes data: {} actually points to?
For what it's worth, in case your data is not persisting, you can look here for answers
I had the same issue, but I was getting problems with a clean database. I had to put - PGDATA=/tmp to solve it.

It looks like the PostgreSQL image has issues with mounted volumes. FWIW, it is probably more of a PostgreSQL issue than Docker's, but that doesn't matter, because mounting disks is not a recommended way of persisting database files anyway.

You should be creating data-only Docker containers, instead. Like this:

postgres9:
  image: postgres:9.4
  ports:
    - 5432:5432
  volumes_from:
    - pg_data
  environment:
    POSTGRES_PASSWORD: postgres
    POSTGRES_USER: postgres
    PGDATA: /var/lib/postgresql/data/pgdata

pg_data:
  image: alpine:latest
  volumes:
    - /var/lib/postgresql/data/pgdata
  command: "true"

which I tested and worked fine. You can read more about data-only containers here: Why Docker Data Containers (Volumes!) are Good

As for: how to import initial data, you can either:

  1. docker cp, into the data-only container of the setup (see the sketch after this list), or
  2. Use an SQL dump of the data, instead of moving binary files around (which is what I would do).
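For option 1, the copy might look roughly like this, assuming the old data sits in ./pgdata on the host. Note that Compose prefixes container names, so look up the real name of the data-only container first:

$ docker ps -a --format '{{.Names}}' | grep pg_data
$ docker cp ./pgdata/. myproject_pg_data_1:/var/lib/postgresql/data/pgdata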

5 Comments

Thanks, it seems to be correct. But according to UPDATE 5 at stackoverflow.com/questions/18496940/…, data containers are not needed now; instead we should use the new volume API: docs.docker.com/engine/reference/commandline/volume_create. How can we interpret your example with this new API?
Any idea how to take a SQL dump if your Postgres Docker container is not starting due to the error mentioned by the OP?
Update: volumes_from is deprecated. github.com/wodby/docker4drupal/issues/92
There's little benefit to this. One could just make a persistent volume, reference it from two other containers, and accomplish all the same things mentioned in that blog post. Adding complexity for possible future gains that may never be realized is the downfall of us developers. We've all done it and need to stop.
Personally, I like mounting my volumes to disk rather than container volumes because I can more easily access the files in the mounted volume.

To restore your data from an existing SQL dump (here called your_dump.sql):

Create your docker-compose.yml:

version: "3"
services:
 
  postgres:
    image: postgres
    container_name: postgres
    environment:
      POSTGRES_DB: YOUR_DATABASE
      POSTGRES_USER: your_username_if_needed
      POSTGRES_PASSWORD: your_password_if_needed
    expose:
      - 5432
    volumes:
      - data:/var/lib/postgresql/data

volumes:
  data: {}

Launch it and then stop it:

docker-compose up
docker-compose stop
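If you still need to produce your_dump.sql from the original (non-Docker) instance, a plain pg_dump on the host should do it; the connection flags are illustrative:

pg_dump -U your_username_if_needed -d YOUR_DATABASE > your_dump.sql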

Then migrate your data from that dump:

docker exec -i YOUR_CONTAINER_NAME psql -U your_username_if_needed -W -d YOUR_DATABASE < your_dump.sql

-W is only needed if you set a password.

You should now see the import running in your console.

When it's done, you're good to go: docker-compose up
