
We manage several Java+PostgreSQL environments in Jelastic. Our PaaS provider uses Jelastic platform version 5.4.

In each environment, a cron task calls a shell script that generates a daily gzipped database backup via pg_dump for PostgreSQL 9.4. This script has literally been running for years, but it recently stopped working. The script looks like this:

#!/bin/bash
DATE=`date +"%Y-%m-%d_%H-%M-%S"`
DB_NAME="my-backup"
FILE="$DB_NAME-$DATE.backup.gz"
BASE_DIR="/var/lib/jelastic/backup"
BACKUP_FILE_PATH=$BASE_DIR/$FILE
pg_dump --verbose --format=custom $DB_NAME | gzip > $BACKUP_FILE_PATH
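
Note that, because the dump is piped through gzip, the script's exit status is gzip's, so a failing pg_dump goes unnoticed by cron. A stricter variant (a sketch of the same script with error handling added; the log path is illustrative) would at least surface such failures:

#!/bin/bash
# Abort on any error, on unset variables, and on a failure in any stage
# of a pipeline (so a failing pg_dump is no longer masked by gzip).
set -euo pipefail

DATE=$(date +"%Y-%m-%d_%H-%M-%S")
DB_NAME="my-backup"
FILE="$DB_NAME-$DATE.backup.gz"
BASE_DIR="/var/lib/jelastic/backup"
BACKUP_FILE_PATH="$BASE_DIR/$FILE"

# Keep pg_dump's stderr in a file so unattended cron runs leave a trace.
pg_dump --verbose --format=custom "$DB_NAME" 2>"$BASE_DIR/pg_dump-$DATE.log" \
  | gzip > "$BACKUP_FILE_PATH"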

The only thing that changed recently is an increased shared_buffers value in postgresql.conf, a change we made following instructions provided by Jelastic. We did try reverting shared_buffers to its default value for Jelastic environments, with no positive effect on the backups.

Now, the generated backup file (gzipped) is just 20 bytes long, and the whole backup process takes much less time than we would expect, as our database is large (over 1.5GB) and contains BLOBs. The file extracted from the GZIP is empty.
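
Incidentally, 20 bytes is exactly the size of the gzip stream produced from empty input (header plus trailer around an empty deflate block), which suggests pg_dump writes nothing at all to stdout rather than being cut off partway:

$ printf '' | gzip | wc -c
20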

In pg_dump verbose output there's nothing out of the ordinary:

pg_dump: reading schemas
pg_dump: reading user-defined tables
pg_dump: reading extensions
pg_dump: reading user-defined functions
pg_dump: reading user-defined types
pg_dump: reading procedural languages
pg_dump: reading user-defined aggregate functions
pg_dump: reading user-defined operators
pg_dump: reading user-defined operator classes
pg_dump: reading user-defined operator families
pg_dump: reading user-defined text search parsers
pg_dump: reading user-defined text search templates
pg_dump: reading user-defined text search dictionaries
pg_dump: reading user-defined text search configurations
pg_dump: reading user-defined foreign-data wrappers
pg_dump: reading user-defined foreign servers
pg_dump: reading default privileges
pg_dump: reading user-defined collations
pg_dump: reading user-defined conversions
pg_dump: reading type casts
pg_dump: reading table inheritance information
pg_dump: reading event triggers
pg_dump: finding extension members
pg_dump: finding inheritance relationships
pg_dump: reading column info for interesting tables
pg_dump: finding the columns and types of table "databasechangeloglock"
pg_dump: finding the columns and types of table "databasechangelog"
pg_dump: finding the columns and types of table "compania"
pg_dump: finding default expressions of table "compania"
pg_dump: finding the columns and types of table "comprobante"
pg_dump: finding default expressions of table "comprobante"
pg_dump: finding the columns and types of table "estado_comprobante"
pg_dump: finding default expressions of table "estado_comprobante"
pg_dump: finding the columns and types of table "usuario"
pg_dump: finding default expressions of table "usuario"
pg_dump: finding the columns and types of table "clave_contingencia"
pg_dump: finding default expressions of table "clave_contingencia"
pg_dump: finding the columns and types of table "latido_integrador"
pg_dump: finding default expressions of table "latido_integrador"
pg_dump: finding the columns and types of table "comprobante_importado"
pg_dump: finding default expressions of table "comprobante_importado"
pg_dump: finding the columns and types of table "estado_comprobante_importado"
pg_dump: finding default expressions of table "estado_comprobante_importado"
pg_dump: finding the columns and types of table "tarea_comprobante_importado"
pg_dump: finding default expressions of table "tarea_comprobante_importado"
pg_dump: finding the columns and types of table "usuario_compania"
pg_dump: finding default expressions of table "usuario_compania"
pg_dump: flagging inherited columns in subtables
pg_dump: reading indexes
pg_dump: reading indexes for table "databasechangeloglock"
pg_dump: reading indexes for table "compania"
pg_dump: reading indexes for table "comprobante"
pg_dump: reading indexes for table "estado_comprobante"
pg_dump: reading indexes for table "usuario"
pg_dump: reading indexes for table "clave_contingencia"
pg_dump: reading indexes for table "latido_integrador"
pg_dump: reading indexes for table "comprobante_importado"
pg_dump: reading indexes for table "estado_comprobante_importado"
pg_dump: reading indexes for table "tarea_comprobante_importado"
pg_dump: reading indexes for table "usuario_compania"
pg_dump: reading constraints
pg_dump: reading foreign key constraints for table "compania"
pg_dump: reading foreign key constraints for table "comprobante"
pg_dump: reading foreign key constraints for table "estado_comprobante"
pg_dump: reading foreign key constraints for table "usuario"
pg_dump: reading foreign key constraints for table "clave_contingencia"
pg_dump: reading foreign key constraints for table "latido_integrador"
pg_dump: reading foreign key constraints for table "comprobante_importado"
pg_dump: reading foreign key constraints for table "estado_comprobante_importado"
pg_dump: reading foreign key constraints for table "tarea_comprobante_importado"
pg_dump: reading foreign key constraints for table "usuario_compania"
pg_dump: reading triggers
pg_dump: reading triggers for table "compania"
pg_dump: reading triggers for table "comprobante"
pg_dump: reading triggers for table "estado_comprobante"
pg_dump: reading triggers for table "usuario"
pg_dump: reading triggers for table "clave_contingencia"
pg_dump: reading triggers for table "latido_integrador"
pg_dump: reading triggers for table "comprobante_importado"
pg_dump: reading triggers for table "estado_comprobante_importado"
pg_dump: reading triggers for table "tarea_comprobante_importado"
pg_dump: reading triggers for table "usuario_compania"
pg_dump: reading rewrite rules
pg_dump: reading large objects
pg_dump: reading dependency data
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = on
pg_dump: saving database definition

Also, the PostgreSQL 9.4 logs that can be accessed on Jelastic show no relevant messages that might give us a clue as to what is happening.

In an attempt to "fix" this, we have performed PostgreSQL maintenance procedures, including vacuumlo and vacuumdb --full, to no avail. There is plenty of storage space available for the backup file, so that should not be the cause of the problem.
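
For reference, a minimal way to isolate pg_dump from both cron and the gzip pipe, so its exit code and stderr are not masked (a sketch; the /tmp paths are just examples):

# Run interactively as the same user the cron job uses:
pg_dump --verbose --format=custom my-backup > /tmp/manual.backup 2> /tmp/pg_dump.err
echo "pg_dump exit code: $?"
ls -l /tmp/manual.backup   # a healthy custom-format dump should be roughly the database's size
tail /tmp/pg_dump.err      # connection or permission errors would show up here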

Any ideas about why this might be happening? What should we look for, and where? As this is a critical issue, we would appreciate any pointers.

5 Comments
  • Is this just happening in one environment of your several? Commented Oct 1, 2018 at 23:08
  • reading your post again, try specifying user for pg_dump like "pg_dump -U postgres ... etc" Commented Oct 2, 2018 at 2:12
  • @Slumdog It is happening in a couple of environments where the only change has been the modification of the shared-buffers value. Commented Oct 3, 2018 at 14:12
  • @Slumdog Also, I do not specify the -U switch because I have made some changes to pg_hba.conf to allow for unsupervised backup generation via the cron job. As I mentioned in the original post, the script has always worked, and it is exactly the same as the one that works just fine in other environments (my deployment procedure is automated.) The only difference is the shared-buffers value. The databases are large (several GB in size), but that didn't seem to cause any trouble before. Commented Oct 3, 2018 at 14:21
  • Hi @Esteban, what does the postgres file in your cron folder look like? Commented Jan 23, 2019 at 21:56

1 Answer


The first thing you need to do is change pg_hba.conf to allow password-less connections through local sockets:

local    all all                  trust
host     all all     127.0.0.1/32 ident
host     all all     ::1/128      ident
host     all all     0.0.0.0/0    md5

After that, restart the "postgresql" service:

service postgresql restart
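
To confirm the trust rule took effect, a quick check (a sketch; substitute your actual login and database name) is to connect over the local socket and verify that no password prompt appears:

psql -U db_login -d db_name -c 'SELECT current_user;'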

Then, if your database login is different from "root", use this modified script:

CT-42366 ~# cat ./backup.sh 
#!/bin/bash
DATE=$(date +"%Y-%m-%d_%H-%M-%S")
DB_NAME="db_name"
LOGIN="db_login"
FILE="$DB_NAME-$DATE.backup.gz"
BASE_DIR="/var/lib/jelastic/backup"
BACKUP_FILE_PATH=$BASE_DIR/$FILE
pg_dump --user=$LOGIN --format=custom --dbname=$DB_NAME | gzip > $BACKUP_FILE_PATH
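
After a run, it is also worth checking that the archive is restorable, not merely non-empty (a sketch reusing the variables from the script above; pg_restore --list only reads the archive's table of contents and does not restore anything):

gzip -t "$BACKUP_FILE_PATH"
gunzip -c "$BACKUP_FILE_PATH" | pg_restore --list | head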

We hope this helps you solve your problem.


5 Comments

Thanks for the modified script, but I don't think it is a solution in my case. As explained in the original post, the script has not changed and has been working for a few years. Originally I did follow the instructions on the Jelastic documentation, which included a change to pg_hba.conf. I am not getting permission errors with pg_dump, and therefore I don't need the modified script; actually, pg_dump produces a "normal" output (which I included above). The only change was the shared_buffers value, but even after reverting that change, pg_dump is generating empty backup files.
You still need to try variations directly in PostgreSQL and tell us what you have tried; we can't tell from your comments. So far, all I can read from your replies is what you think is best rather than actual results. Try at least logging connection info in PostgreSQL and comparing it to a working server.
Your problem is that pg_hba.conf still has its default configuration in the environments where backups do not work. Check it and, if necessary, change it according to the answer I gave above.
Hi @Jelastic, according to docs.jelastic.com/postgresql-backup-restore, what should the postgres file in the cron folder look like in order to run the bash script?
@meyquel, it's enough to specify the path to your script in the postgres cron file. * * * * * /path_to_script/script.sh works fine for us (script.sh should be executable). If that approach doesn't work, please refer to the official cron documentation.
