System Backup

This article provides an overview and recommendations for backing up your Brainspace instance for Disaster Recovery purposes.

A thorough backup of all important data and files for your Brainspace instance should include the following, all taken at the same time:

  • Builds information (by default, the /data/brainspace volume and subdirectories)

  • Datasets information (by default, the /localdata/brainspace volume and subdirectories on the Application server)

  • PostgreSQL Brainspace database data (by default on the Application server)

  • System and application files (root partition on all servers)

The frequency of your backups, and whether you use full, incremental, or differential backups, snapshots, or deduplication technology, should be determined by your business needs. At a minimum, a daily or nightly full backup of all important data volumes and a weekly backup of application/system volumes is recommended.
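
For illustration, the minimum schedule above could be implemented with a root crontab similar to the sketch below. The /backups destination, the exact times, and the system paths shown are assumptions to adjust for your environment; note that percent signs must be escaped as \% in crontab entries:

  # Illustrative root crontab (edit with: crontab -e)
  # Nightly at 01:00: archive the Builds and Datasets volumes (assumed destination: /backups)
  0 1 * * * tar cvfz /backups/builds-$(date +\%Y\%m\%d\%H\%M\%S).tar.gz /data/brainspace
  0 1 * * * tar cvfz /backups/datasets-$(date +\%Y\%m\%d\%H\%M\%S).tar.gz /localdata/brainspace
  # Weekly on Sunday at 02:00: archive application/system files (/etc and /opt shown as examples)
  0 2 * * 0 tar cvfz /backups/system-$(date +\%Y\%m\%d\%H\%M\%S).tar.gz /etc /opt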

See Dataset Archive and Restore Options for an overview of options to disable or archive your datasets and manage Brainspace license and system memory utilization.

Backing up your Builds information

  1. Run the following command as root to make an archive copy of your Builds information:

  • tar cvfz builds-$(date +%Y%m%d%H%M%S).tar.gz /data/brainspace

    • This will create a compressed archive file in the current directory named builds-<timestamp>.tar.gz

    • This command can be run from a cron job or systemd timer on a schedule that meets your business needs; a systemd timer sketch follows below.
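
As an alternative to cron, the same archive can be scheduled with a systemd timer. The unit names, the /backups destination, and the 01:00 schedule below are illustrative assumptions; percent signs must be doubled as %% in unit files:

  # /etc/systemd/system/brainspace-builds-backup.service (illustrative name)
  [Unit]
  Description=Archive Brainspace Builds volume

  [Service]
  Type=oneshot
  ExecStart=/bin/sh -c 'tar cvfz /backups/builds-$(date +%%Y%%m%%d%%H%%M%%S).tar.gz /data/brainspace'

  # /etc/systemd/system/brainspace-builds-backup.timer
  [Unit]
  Description=Nightly Brainspace Builds archive

  [Timer]
  OnCalendar=*-*-* 01:00:00
  Persistent=true

  [Install]
  WantedBy=timers.target

After creating both files, run systemctl daemon-reload and then systemctl enable --now brainspace-builds-backup.timer to activate the schedule.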

Backing up your Datasets information

  1. Run the following command as root to make an archive copy of your Datasets information:

  • tar cvfz datasets-$(date +%Y%m%d%H%M%S).tar.gz /localdata/brainspace

    • This will create a compressed archive file in the current directory named datasets-<timestamp>.tar.gz

    • This command can be run from a cron job or systemd timer on a schedule that meets your business needs.
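
Whichever scheduling method you use, it is good practice to confirm that an archive is readable before relying on it. Listing its contents forces a full decompression, so a non-zero exit status indicates a damaged file; for example:

  # Verify an archive (replace <timestamp> with the actual value)
  tar tzf datasets-<timestamp>.tar.gz > /dev/null && echo "archive OK"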

Archiving your PostgreSQL Brainspace database

  1. As root, switch to the postgres user:

    • su - postgres

  2. Run pg_dump to make a point-in-time copy of your database:

    • pg_dump --clean --create --if-exists --compress=5 --quote-all-identifiers brainspace > brainspace-db-$(date +%Y%m%d%H%M%S).sql.gz

    • This will create a gzip-compressed file in the current directory named brainspace-db-<timestamp>.sql.gz

      Note

      If records in your database are in use while the copy is being made, the copy may not include the most recent database updates. It is recommended to make the copy after hours when your Brainspace instance is not in use, to create copies frequently enough to capture recent changes, or to leverage the Continuous Archiving capability in PostgreSQL for the most up-to-date backups.
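
If you choose Continuous Archiving, the relevant settings are in postgresql.conf. The sketch below assumes a /backups/wal directory writable by the postgres user and uses the example archive_command from the PostgreSQL documentation; enabling archive_mode requires a server restart:

  # postgresql.conf (illustrative excerpt)
  wal_level = replica
  archive_mode = on
  archive_command = 'test ! -f /backups/wal/%f && cp %p /backups/wal/%f'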

System Backups

  1. Use the archival procedures above to save your important Builds, Datasets, and PostgreSQL data, along with your system files, then back them up using your organization's preferred backup solution.

  2. You might choose to run a cron job or systemd timer to automate the archival and backup procedures on a regular schedule, as in the sketch below.
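
As a sketch of such an automated job, the script below takes all three archives together so that they share a single timestamp, in keeping with the recommendation that all backups be taken at the same time. The /backups destination is an assumption; adjust the paths to your environment:

  #!/bin/sh
  # Illustrative backup wrapper; run as root.
  set -e
  TS=$(date +%Y%m%d%H%M%S)
  DEST=/backups

  # Archive the Builds and Datasets volumes
  tar cfz "$DEST/builds-$TS.tar.gz" /data/brainspace
  tar cfz "$DEST/datasets-$TS.tar.gz" /localdata/brainspace

  # Dump the Brainspace database as the postgres user
  su - postgres -c "pg_dump --clean --create --if-exists --compress=5 --quote-all-identifiers brainspace" > "$DEST/brainspace-db-$TS.sql.gz"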

Restoring data from Backup

  1. If you need to restore your Brainspace instance from a saved backup, be sure to use the backup copies taken at the same time (the restored data files must match the state of the restored database).

  2. Enter the following command:

    • su - postgres

      Note

      The file to be restored needs to be readable by the `postgres` user.

  3. If needed, run the following command to restore the PostgreSQL brainspace database from a backup file:

    • gunzip -c brainspace-db-<timestamp>.sql.gz | psql
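
Putting these steps together, a complete restore might look like the following sketch, run as root with the Brainspace application stopped (service names vary by installation). The /backups location matches the assumptions in the earlier examples; because tar stores archive members with relative paths, extracting with -C / returns them to their original locations:

  # Restore the data volumes from archives taken at the same time
  tar xvfz /backups/builds-<timestamp>.tar.gz -C /
  tar xvfz /backups/datasets-<timestamp>.tar.gz -C /

  # Restore the database as the postgres user
  # (the dump file must be readable by the postgres user, per the note above)
  su - postgres -c "gunzip -c /backups/brainspace-db-<timestamp>.sql.gz | psql"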

