
I need to synchronize a test environment with production data (n8n workflows, user info, etc.). Both servers run Postgres databases and reside on the same intranet

(e.g. server A: 10.0.0.10, server B: 10.0.0.20).

What I need is an easy, automated way of transferring all databases from production to the test environment with no margin for error.

PS: I am aware of how to dump ONE database to a remote server; my question is about dumping N databases efficiently into another server.

  • The docs are your friend: 1) logical replication, 2) postgres_fdw, 3) for dumping/restoring the entire cluster: a) text-format pg_dumpall, b) binary pg_basebackup. That should get you started (see the sketch after these comments). Commented Oct 28 at 18:07
  • I was able to do it with a relatively straightforward bash script using pg_dumpall together with a remote psql connection, thanks a lot! Commented Nov 6 at 12:13
  • Why not post the script as an answer so the question can remain open? Commented Nov 11 at 13:21
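
For reference, the simplest form of option 3a from the first comment is a direct pipe between the two clusters. A minimal sketch, assuming pg_dumpall and psql are installed on the production host, both servers accept these connections, and credentials come from ~/.pgpass or PGPASSWORD:

# Stream the entire production cluster (10.0.0.10) straight into the test
# server (10.0.0.20) with no intermediate file. Hostnames and the user are
# placeholders; adjust to your environment.
pg_dumpall -h 10.0.0.10 -U postgres | psql -h 10.0.0.20 -U postgres -d postgres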

1 Answer


I was able to do it with a relatively straightforward bash script using pg_dumpall together with a remote psql connection. The script runs on the production server, dumps the whole cluster from the local Postgres container, and pushes it to the test server over SSH:

#!/usr/bin/env bash
# export_dump.sh
# Usage: edit the simple variables below (or set them in the environment before running).
# This version uses pg_dumpall locally and restores via psql remotely.

set -euo pipefail

# ---------------------------
# Simple external variables
# Edit these directly or export them in the environment before running.
# ---------------------------
LOCAL_CONTAINER="postgres"
LOCAL_DB_USER="USER"
LOCAL_DB_PASS=""        # optional

REMOTE_HOST="10.0.0.XXX"   # required
REMOTE_SSH_USER="root"
SSH_PASSWORD="password"                    # optional: password for SSH (requires sshpass)
REMOTE_CONTAINER="postgres"
REMOTE_DB_USER="USER"          # remote psql user
REMOTE_DB_PASS=""               # optional for psql

DUMP_FILE="dump_chatbot_completo.sql"                 # local dump filename

# ---------------------------
# Basic validation
# ---------------------------
if [[ -z "$REMOTE_HOST" ]]; then echo "REMOTE_HOST is required"; exit 1; fi

SSH_OPTS=()
# Add any desired default ssh options here, e.g.:
SSH_OPTS+=("-o" "StrictHostKeyChecking=no")
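# NOTE: StrictHostKeyChecking=no skips host key verification; this is a
# convenience on a trusted intranet, not something to use over the internet.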

SSH_TARGET="${REMOTE_SSH_USER}@${REMOTE_HOST}"

# NOTE: If SSH_PASSWORD is provided, ensure sshpass is available
SSHPASS_CMD=()
if [[ -n "${SSH_PASSWORD}" ]]; then
    if ! command -v sshpass >/dev/null 2>&1; then
        echo "sshpass is required for password-based SSH. Install sshpass or remove SSH_PASSWORD." >&2
        exit 1
    fi
    SSHPASS_CMD=("sshpass" "-p" "${SSH_PASSWORD}")
fi


# ---------------------------
# 1. Create local dump using pg_dumpall
# ---------------------------
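# pg_dumpall writes plain SQL that includes CREATE ROLE and CREATE DATABASE
# statements, so a single file captures every database in the cluster.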
echo "Creating dump from local container '$LOCAL_CONTAINER' using pg_dumpall..."

if [[ -n "$LOCAL_DB_PASS" ]]; then
    docker exec -i "$LOCAL_CONTAINER" sh -c "PGPASSWORD='${LOCAL_DB_PASS}' pg_dumpall -U '${LOCAL_DB_USER}'" > "$DUMP_FILE"
else
    docker exec -i "$LOCAL_CONTAINER" pg_dumpall -U "${LOCAL_DB_USER}" > "$DUMP_FILE"
fi

echo "Dump created: $DUMP_FILE ($(du -h "$DUMP_FILE" | cut -f1))"

# ---------------------------
# 2. Upload dump to remote host
# ---------------------------
REMOTE_TMP="/tmp/${DUMP_FILE}"
echo "Uploading $DUMP_FILE to ${SSH_TARGET}:${REMOTE_TMP}..."

if [[ -n "${SSHPASS_CMD[*]:-}" ]]; then
    "${SSHPASS_CMD[@]}" scp "${SSH_OPTS[@]}" "$DUMP_FILE" "${SSH_TARGET}:${REMOTE_TMP}"
else
    scp "${SSH_OPTS[@]}" "$DUMP_FILE" "${SSH_TARGET}:${REMOTE_TMP}"
fi

echo "Verifying dump file exists on remote host..."

if [[ -n "${SSHPASS_CMD[*]:-}" ]]; then
    "${SSHPASS_CMD[@]}" ssh "${SSH_OPTS[@]}" "${SSH_TARGET}" "test -f '${REMOTE_TMP}' && echo 'File exists on remote host' || { echo 'File not found on remote host' >&2; exit 1; }"
else
    ssh "${SSH_OPTS[@]}" "${SSH_TARGET}" "test -f '${REMOTE_TMP}' && echo 'File exists on remote host' || { echo 'File not found on remote host' >&2; exit 1; }"
fi

# ---------------------------
# 2.5. Drop all databases on remote host
# ---------------------------
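# DROP DATABASE ... WITH (FORCE) requires PostgreSQL 13+; on older servers,
# terminate the backends first (pg_terminate_backend) before dropping.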
echo "Dropping all databases on remote container '$REMOTE_CONTAINER'..."

if [[ -n "$REMOTE_DB_PASS" ]]; then
    remote_drop_command="docker exec -i ${REMOTE_CONTAINER} bash -c \"export PGPASSWORD='${REMOTE_DB_PASS}'; psql -U '${REMOTE_DB_USER}' -t -c \\\"SELECT 'DROP DATABASE \\\\\\\"' || datname || '\\\\\\\" WITH (FORCE);' FROM pg_database WHERE datistemplate = false AND datname != 'postgres';\\\" | psql -U '${REMOTE_DB_USER}'\""
else
    remote_drop_command="docker exec -i ${REMOTE_CONTAINER} bash -c \"psql -U '${REMOTE_DB_USER}' -t -c \\\"SELECT 'DROP DATABASE \\\\\\\"' || datname || '\\\\\\\" WITH (FORCE);' FROM pg_database WHERE datistemplate = false AND datname != 'postgres';\\\" | psql -U '${REMOTE_DB_USER}'\""
fi

if [[ -n "${SSHPASS_CMD[*]:-}" ]]; then
    "${SSHPASS_CMD[@]}" ssh "${SSH_OPTS[@]}" "${SSH_TARGET}" "${remote_drop_command}"
else
    ssh "${SSH_OPTS[@]}" "${SSH_TARGET}" "${remote_drop_command}"
fi

echo "All databases dropped successfully."

# ---------------------------
# 3. Restore on remote host using psql
# ---------------------------
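# The restore connects to the maintenance database 'postgres' (everything
# else was just dropped); the dump itself recreates and switches databases.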
echo "Restoring dump on remote container '$REMOTE_CONTAINER' using psql -U ${REMOTE_DB_USER}..."

if [[ -n "$REMOTE_DB_PASS" ]]; then
    remote_cmd="export PGPASSWORD='${REMOTE_DB_PASS}'; cat '${REMOTE_TMP}' | docker exec -i ${REMOTE_CONTAINER} psql -U '${REMOTE_DB_USER}'; rm -f '${REMOTE_TMP}'"
else
    remote_cmd="cat '${REMOTE_TMP}' | docker exec -i ${REMOTE_CONTAINER} psql -U '${REMOTE_DB_USER}'; rm -f '${REMOTE_TMP}'"
fi

if [[ -n "${SSHPASS_CMD[*]:-}" ]]; then
    "${SSHPASS_CMD[@]}" ssh "${SSH_OPTS[@]}" "${SSH_TARGET}" "${remote_cmd}"
else
    ssh "${SSH_OPTS[@]}" "${SSH_TARGET}" "${remote_cmd}"
fi

echo "Restore complete."
exit 0
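
As a sanity check after a run, comparing the database lists on both sides is cheap. A minimal sketch using the same variables as the script, assuming key-based SSH (the sshpass branch is omitted for brevity):

# List non-template databases on each side and diff the results; identical
# output suggests every database made it across.
local_dbs=$(docker exec -i "$LOCAL_CONTAINER" psql -U "$LOCAL_DB_USER" -d postgres -Atc \
    "SELECT datname FROM pg_database WHERE datistemplate = false ORDER BY 1")
remote_dbs=$(ssh "${SSH_TARGET}" "docker exec -i ${REMOTE_CONTAINER} psql -U '${REMOTE_DB_USER}' -d postgres -Atc \"SELECT datname FROM pg_database WHERE datistemplate = false ORDER BY 1\"")
if [[ "$local_dbs" == "$remote_dbs" ]]; then
    echo "Database lists match."
else
    diff <(echo "$local_dbs") <(echo "$remote_dbs")
fi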
