A simple docker container that runs PostgreSQL / PostGIS backups (PostGIS is not required; it will back up any PostgreSQL database). It is primarily intended to be used with our kartoza/postgis docker image. By default, it will create a backup once per night (at 23h00) in a nicely ordered directory tree by year / month.
- Visit our page on the docker hub at: https://registry.hub.docker.com/u/kartoza/pg-backup/
- Visit our page on GitHub at: https://github.com/kartoza/docker-pg-backup
There are various ways to get the image onto your system:
The preferred way (although it uses the most bandwidth for the initial image) is to pull our docker trusted build like this:
docker pull kartoza/pg-backup:$POSTGRES_MAJOR_VERSION-$POSTGIS_MAJOR_VERSION.${POSTGIS_MINOR_RELEASE}
Where the environment variables are
POSTGRES_MAJOR_VERSION=13
POSTGIS_MAJOR_VERSION=3
POSTGIS_MINOR_RELEASE=1
We highly suggest that you use a tagged image that matches the PostgreSQL image you are running (e.g. kartoza/pg-backup:13-3.1 for backing up a kartoza/postgis:13-3.1 database). The latest tag may change and may not successfully back up your database.
To build the image yourself do:
git clone https://github.com/kartoza/docker-pg-backup.git
cd docker-pg-backup
./build.sh # Builds the latest version, corresponding to the latest PostgreSQL version
To create a running container do:
POSTGRES_MAJOR_VERSION=13
POSTGIS_MAJOR_VERSION=3
POSTGIS_MINOR_RELEASE=1
docker run --name "db" -p 25432:5432 -d -t kartoza/postgis:$POSTGRES_MAJOR_VERSION-$POSTGIS_MAJOR_VERSION.${POSTGIS_MINOR_RELEASE}
docker run --name="backups" --link db:db -v `pwd`/backups:/backups -d kartoza/pg-backup:$POSTGRES_MAJOR_VERSION-$POSTGIS_MAJOR_VERSION.${POSTGIS_MINOR_RELEASE}
You can also use the following environment variables to pass a username, password, and other settings for the database connection:
- `POSTGRES_USER`: if not set, defaults to: docker
- `POSTGRES_PASS`: if not set, defaults to: docker
- `POSTGRES_PORT`: if not set, defaults to: 5432
- `POSTGRES_HOST`: if not set, defaults to: db
- `ARCHIVE_FILENAME`: you can use your specified filename format here; defaults to empty, which means the default filename format is used.
- `DBLIST`: a space-separated list of databases to back up, e.g. `gis data`. Default is all databases.
- `REMOVE_BEFORE`: remove all old backups older than the specified number of days, e.g. `30` would only keep backup files younger than 30 days. Default: no files are ever removed.
- `DUMP_ARGS`: the default dump arguments, based on the official PostgreSQL dump options.
- `RESTORE_ARGS`: additional restore arguments, based on the official PostgreSQL restore options.
- `STORAGE_BACKEND`: the backend used to store the backup files. It can be either `FILE` (the default) or `S3` (for example a MinIO or Amazon bucket).
- `DB_TABLES`: a boolean variable to specify whether to dump the database as individual tables. Defaults to `No`.
- `CRON_SCHEDULE`: specifies the cron schedule on which the backup runs. Defaults to midnight daily.
Note: to avoid interpolation issues with the environment variable `CRON_SCHEDULE`, you will
need to provide the value as a quoted string, i.e. `CRON_SCHEDULE='*/1 * * * *'`
or `CRON_SCHEDULE="*/1 * * * *"`
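To illustrate the quoting, a minimal sketch (the image tag and run flags in the comment are illustrative):

```shell
# Quote the cron expression so the shell does not glob-expand the asterisks
CRON_SCHEDULE='*/1 * * * *'
echo "$CRON_SCHEDULE"
# It can then be passed through to the container, e.g.:
# docker run -e CRON_SCHEDULE="${CRON_SCHEDULE}" ... kartoza/pg-backup:13-3.1
```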
Here is a more typical example using docker-compose:
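A minimal docker-compose sketch using the environment variables documented above (the service names, `DUMPPREFIX` value, and volume paths are illustrative; the `docker-compose.yml` shipped in the repository may differ):

```yaml
version: '3.9'

services:
  db:
    image: kartoza/postgis:13-3.1
    environment:
      - POSTGRES_USER=docker
      - POSTGRES_PASS=docker

  dbbackups:
    image: kartoza/pg-backup:13-3.1
    volumes:
      - ./backups:/backups
    environment:
      - POSTGRES_HOST=db
      - POSTGRES_USER=docker
      - POSTGRES_PASS=docker
      - POSTGRES_PORT=5432
      - DUMPPREFIX=PG
    depends_on:
      - db
```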
The default backup archive generated will be stored in the /backups directory (inside the container):
/backups/$(date +%Y)/$(date +%B)/${DUMPPREFIX}_${DB}.$(date +%d-%B-%Y).dmp
As a concrete example, with `DUMPPREFIX=PG` and a database named `gis`, the backup archive would be something like:
/backups/2019/February/PG_gis.13-February-2019.dmp
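The path layout above can be reproduced in shell (the `DUMPPREFIX` and `DB` values are the ones from the example):

```shell
# Build the default archive path: /backups/<year>/<Month>/<prefix>_<db>.<dd-Month-yyyy>.dmp
DUMPPREFIX=PG
DB=gis
BACKUP_FILE="/backups/$(date +%Y)/$(date +%B)/${DUMPPREFIX}_${DB}.$(date +%d-%B-%Y).dmp"
echo "$BACKUP_FILE"
```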
If you specify `ARCHIVE_FILENAME` instead (the default value is empty), the
filename will be fixed according to this prefix.
Let's assume ARCHIVE_FILENAME=latest
The backup archive would be something like
/backups/latest.gis.dmp
The script uses `s3cmd` to back up files to an S3 bucket. The relevant environment variables are:
- `ACCESS_KEY_ID`: access key for the bucket
- `SECRET_ACCESS_KEY`: secret access key for the bucket
- `DEFAULT_REGION`: defaults to `us-west-2`
- `HOST_BASE`
- `HOST_BUCKET`
- `SSL_SECURE`: determines whether SSL is used when connecting to the S3 bucket
- `BUCKET`: indicates the name of the bucket that will be created
You can read more about these configuration options in the s3cmd documentation.
For a typical usage example, look at the docker-compose-s3.yml
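As a sketch of the S3 variables in compose form (all values here are placeholders, and the `HOST_BASE`/`HOST_BUCKET` values follow `s3cmd` conventions; consult the repository's `docker-compose-s3.yml` for the authoritative version):

```yaml
  dbbackups:
    image: kartoza/pg-backup:13-3.1
    environment:
      - STORAGE_BACKEND=S3
      - ACCESS_KEY_ID=changeme
      - SECRET_ACCESS_KEY=changeme
      - DEFAULT_REGION=us-west-2
      - BUCKET=backups
      - HOST_BASE=s3.amazonaws.com
      - HOST_BUCKET=%(bucket)s.s3.amazonaws.com
      - SSL_SECURE=True
```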
The image supports mounting the following configs:
- `s3cfg`: used when backing up to the S3 backend
- `backup-cron`: for any custom cron configuration you need to specify in the file

The environment variable `EXTRA_CONFIG_DIR` controls the location of the folder containing these files.
If you need to mount an `s3cfg` file, you can run the container with:
-e ${EXTRA_CONFIG_DIR}=/settings
-v /data:/settings
where the `s3cfg` file is located in `/data` on the host.
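The same mount can be expressed in compose form (the service name and host path are illustrative):

```yaml
  dbbackups:
    image: kartoza/pg-backup:13-3.1
    environment:
      - EXTRA_CONFIG_DIR=/settings
    volumes:
      # host directory /data contains the s3cfg file
      - /data:/settings
```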
The image provides a simple restore script. You need to specify some environment variables first:
- `TARGET_DB`: the database name to restore
- `WITH_POSTGIS`: Kartoza-specific; generates the PostGIS extension along with the restore process
- `TARGET_ARCHIVE`: the full path of the archive to restore
Note: the restore script will try to delete the `TARGET_DB` if it matches an existing database,
so make sure you know what you are doing.
It will then create a new database and restore its content from `TARGET_ARCHIVE`.
It is generally good practice to restore into a new, empty database and then manually drop and rename the databases,
e.g. if your original database is named `gis`, you can restore it into a new database called `gis_restore`.
If you specify these environment variables using docker-compose.yml file, then you can execute a restore process like this:
docker-compose exec dbbackups /backup-scripts/restore.sh
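A sketch of how the restore variables might appear on the backup service in `docker-compose.yml` (the database name, archive path, and `WITH_POSTGIS` value are illustrative):

```yaml
  dbbackups:
    image: kartoza/pg-backup:13-3.1
    environment:
      - TARGET_DB=gis_restore
      - WITH_POSTGIS=1
      - TARGET_ARCHIVE=/backups/latest.gis.dmp
```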
Tim Sutton ([email protected])
Admire Nyakudya ([email protected])
Rizky Maulana ([email protected]) July 2021