BACKUP DATABASE
In SingleStore Managed Service, creating or restoring a backup to/from a local filesystem or network drive is not supported.
Backs up a database. The backup files are saved on the master aggregator and on the leaves. Refer to the Backing Up and Restoring Data guide for additional information.
Syntax
Backup to a Local or Network Drive
BACKUP [DATABASE] db_name [WITH { INIT | DIFFERENTIAL | SPLIT PARTITIONS [BY 2] }] TO "backup_path" [(OPTION resource_pool = resource_pool_name)]
Backup to S3
BACKUP [DATABASE] db_name [WITH { INIT | DIFFERENTIAL | SPLIT PARTITIONS [BY 2] }] TO S3 "bucket/path" [CONFIG configuration_json] CREDENTIALS credentials_json [(OPTION resource_pool = resource_pool_name)]
configuration_json:
'{"region":"your_region",
"endpoint_url":"http://other_endpoint"
[,"x-amz-server-side-encryption":"<encryption_type>" [, "x-amz-server-side-encryption-aws-kms-key-id":"<optional_key>" ] |
"x-amz-server-side-encryption-customer-algorithm":"<encryption_type>",
"x-amz-server-side-encryption-customer-key":"<encrypted_or_unencrypted_key>",
"x-amz-server-side-encryption-customer-key-MD5":"<key>"
]
}'
credentials_json:
'{"aws_access_key_id": "replace_with_your_access_key_id",
"aws_secret_access_key": "replace_with_your_secret_access_key",
["aws_session_token": "replace_with_your_temp_session_token",]
["role_arn":"replace_with_your_role_arn"]
}'
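For illustration, a backup that authenticates with temporary credentials issued by the AWS Security Token Service might look like the following sketch; the bucket, path, and credential values are placeholders.
BACKUP DATABASE memsql_demo TO S3 "backup_bucket/backups/sts_example"
CONFIG '{"region":"us-east-1"}'
CREDENTIALS '{"aws_access_key_id":"your_access_key_id","aws_secret_access_key":"your_secret_access_key","aws_session_token":"your_temp_session_token"}';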
Backup to an S3 Compatible Storage Provider
BACKUP [DATABASE] db_name [WITH { INIT | DIFFERENTIAL | SPLIT PARTITIONS [BY 2] }] TO S3 "bucket/path" [CONFIG configuration_json] CREDENTIALS credentials_json [(OPTION resource_pool = resource_pool_name)]
configuration_json:
'{"endpoint_url":"http://storage_provider_endpoint"}'
credentials_json:
'{"aws_access_key_id": "replace_with_your_access_key_id",
"aws_secret_access_key": "replace_with_your_secret_access_key"
}'
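For illustration, a backup to a self-hosted S3-compatible store might look like the following sketch; the endpoint URL, bucket, and keys are placeholders for your provider's values.
BACKUP DATABASE memsql_demo TO S3 "backup_bucket/backups/6_1_2018"
CONFIG '{"endpoint_url":"http://storage_provider_endpoint:9000"}'
CREDENTIALS '{"aws_access_key_id":"your_access_key_id","aws_secret_access_key":"your_secret_access_key"}';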
Backup to Google Cloud Storage
BACKUP [DATABASE] db_name [WITH { INIT | DIFFERENTIAL | SPLIT PARTITIONS [BY 2] }] TO GCS "bucket/path" [CONFIG configuration_json] CREDENTIALS credentials_json [(OPTION resource_pool = resource_pool_name)]
configuration_json:
'{}'
credentials_json:
'{"access_id": "replace_with_your_google_access_key",
"secret_key": "replace_with_your_google_secret_key"
}'
See the remarks for more details.
Backup to Azure Blobs
BACKUP [DATABASE] db_name [WITH { INIT | DIFFERENTIAL | SPLIT PARTITIONS [BY 2] }] TO AZURE "container/blob-prefix" CREDENTIALS credentials_json [(OPTION resource_pool = resource_pool_name)]
credentials_json:
'{"account_name": "your_account_name_here",
"account_key": "your_account_key_here"
}'
General Arguments
db_name
is the name of a SingleStore DB database.
resource_pool_name
is optional. It is the name of the resource pool where the backup operation runs.
- Use the WITH { INIT | DIFFERENTIAL } clause to make an incremental backup. If you don’t specify this clause, a full backup is made.
- Use the WITH { SPLIT PARTITIONS [BY 2] } clause to double the number of partitions in your database in an offline manner. See the partition split topic for details. A sketch of both clauses follows this list.
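The following sketch illustrates both clauses; the database name, bucket, and credential values are placeholders. Because incremental backups cannot target a local file system (see Local Backups below), the incremental series here is written to S3.
--- Start an incremental backup series with a full backup ---
BACKUP DATABASE memsql_demo WITH INIT TO S3 "backup_bucket/backups/incremental"
CREDENTIALS '{"aws_access_key_id":"your_access_key_id","aws_secret_access_key":"your_secret_access_key"}';
--- Take a subsequent incremental backup in the same series ---
BACKUP DATABASE memsql_demo WITH DIFFERENTIAL TO S3 "backup_bucket/backups/incremental"
CREDENTIALS '{"aws_access_key_id":"your_access_key_id","aws_secret_access_key":"your_secret_access_key"}';
--- Take a full backup that doubles the number of partitions ---
BACKUP DATABASE memsql_demo WITH SPLIT PARTITIONS TO "/my-backup-dir";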
General Remarks
- Because backups go directly from each node to the backup destination, all nodes must have write access to the destination.
- The BACKUP DATABASE command writes a consistent snapshot of the database to disk. This means a backup stores the data in the database as it existed at the time the BACKUP DATABASE operation started (all transactions committed at the time the BACKUP DATABASE started will be stored in the backup). Backup files containing the database’s reference tables are saved on the master aggregator. Backup files are also saved on the leaves.
- This command must be run on the master aggregator node. For more information, see Node Requirements for SingleStore DB Commands.
- See Backing Up and Restoring Data for a list of items that are included in or excluded from a backup.
- BACKUP DATABASE is an online operation, which means that writes will continue to run while the BACKUP DATABASE is in progress. Any writes executed after the BACKUP DATABASE has started will not be reflected in the stored backup.
- All partitions of the database need to be in the online or replicating state to run a BACKUP DATABASE operation.
- The backup file is created on the server, not the client, unlike mysqldump.
- The aggregator needs to briefly block new queries from running across the cluster to start a BACKUP DATABASE. Once the BACKUP DATABASE is running, queries can run against the cluster normally. If there is a long-running write query executing at the time a BACKUP DATABASE operation is run, the BACKUP DATABASE will wait for the query to finish.
- Incremental backups contain incremental data only for columnstore tables. For rowstore tables, an incremental backup contains a full copy of the rowstore data (instead of incremental data).
- A backup of system databases cannot be created.
- Remote databases, i.e. databases on a secondary cluster to which the primary database is being replicated, cannot be backed up with the BACKUP DATABASE command.
Back up to a Local or Network Drive
In SingleStore Managed Service, creating or restoring a backup to/from a local filesystem or network drive is not supported.
- When a local backup is made, the backup files are saved on the SingleStore DB master aggregator node and leaf nodes.
- Backing up to a network drive is preferable to backing up locally, because if any backup files stored locally on the SingleStore DB nodes become corrupted or are lost, the entire backup cannot be restored.
Local Backups
- The path backup_path on each node needs to be accessible by the memsqld process.
- The backup_path is relative to the directory stored in the datadir engine variable, by default. You can include / at the front of the backup_path to override the default and use an absolute path.
Incremental backups cannot be stored on a local file system.
Backup to S3
- BACKUP ... TO S3 writes the backup files to the specified S3 bucket.
- SingleStore DB backup will add db_name.backup to the path and put the objects in the path path/db_name.backup/.
- bucket/path/db_name.backup must not currently exist; otherwise, an error will be returned.
- aws_session_token is optional. This is only needed if your AWS account uses the AWS Security Token Service.
- region is optional. It is the AWS region where the S3 bucket resides.
- endpoint_url is optional.
- role_arn is optional and can be used instead of specifying aws_access_key_id and aws_secret_access_key values. This is only needed if your AWS security is set up to require a role.
- Backup to an S3 bucket requires the following permissions (a minimal policy sketch follows this list):
s3::GetObject s3::PutObject s3::ListBucket
- You should not specify the compatibility_mode setting, as you would when backing up to Google Cloud Storage using the S3 interface.
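For reference, a minimal IAM policy granting these permissions might look like the following sketch. Note that IAM policy documents spell the actions with a single colon (for example, s3:GetObject), and the bucket name is a placeholder.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::backup_bucket", "arn:aws:s3:::backup_bucket/*"]
    }
  ]
}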
Backup to an S3 Compatible Storage Provider
- BACKUP ... TO S3 with the endpoint_url set to the URL of a storage provider creates an S3-compatible backup at the storage provider.
- SingleStore DB backup will add db_name.backup to the path and put the objects in the path path/db_name.backup/.
- bucket/path/db_name.backup must not currently exist; otherwise, an error will be returned.
- aws_access_key_id is optional. It is the access key id for accessing the storage provider.
- aws_secret_access_key is optional. It is the secret access key for accessing the storage provider.
- You should not specify the compatibility_mode setting, as you would when backing up to Google Cloud Storage using the S3 interface.
- When backing up to Dell EMC ECS object storage, you should use the compatibility_mode setting; a sketch follows this list.
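As a sketch, a backup to Dell EMC ECS might therefore look like the following; the endpoint URL, bucket, and keys are placeholders, and compatibility_mode is assumed here to take a boolean value.
BACKUP DATABASE memsql_demo TO S3 "backup_bucket/backups/6_1_2018"
CONFIG '{"endpoint_url":"http://storage_provider_endpoint", "compatibility_mode":true}'
CREDENTIALS '{"aws_access_key_id":"your_access_key_id","aws_secret_access_key":"your_secret_access_key"}';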
Backup to Google Cloud Storage
- The CONFIG clause may optionally specify an endpoint_url.
- The CREDENTIALS clause is required.
- We support only HMAC keys.
- The CREDENTIALS clause should be a JSON object with two fields:
access_id: usually a 24 or 60 character alphanumeric string, which is linked to the Google account, typically all uppercase and starts with GOOG.
secret_key: usually a 40 character Base-64 encoded string that is linked to a specific access_id.
Back up to Azure Blobs
- BACKUP ... TO AZURE writes the backup files to the specified Azure container.
- SingleStore DB backup will append db_name.backup to the blob prefix and will put objects into the “path” blob-prefix/db_name.backup/.
- container/blob-prefix/db_name.backup must not currently exist; otherwise, an error will be returned.
Examples
Making an Incremental Backup
The Backing Up and Restoring Data topic provides an example of making incremental backups.
All of the examples shown below make full backups.
Local Backup
The following example backs up a database to /my-backup-dir.
BACKUP DATABASE memsql_demo to "/my-backup-dir";
The following example backs up a database to the directory stored in the datadir engine variable.
BACKUP DATABASE memsql_demo to "./";
The following example backs up a database to the my-relative-backup-dir directory, which is relative to the directory stored in the datadir engine variable. In this example, if datadir contained the directory /var/lib/memsql/leaf-3308-aaaaaaaaaa/data, the backup would be saved to the /var/lib/memsql/leaf-3308-aaaaaaaaaa/data/my-relative-backup-dir directory.
BACKUP DATABASE memsql_demo to "./my-relative-backup-dir";
S3 Backup
The following example backs up a database to an S3 bucket.
BACKUP DATABASE memsql_demo TO S3 "backup_bucket/backups/6_1_2018"
CONFIG '{"region":"us-east-1"}'
CREDENTIALS '{"aws_access_key_id":"your_access_key_id","aws_secret_access_key":"your_secret_access_key"}';
The following example backs up a database to an S3 bucket using an Amazon Resource Name (ARN) for AWS Identity and Access Management (IAM).
BACKUP DATABASE mydatabase TO S3 'my-bucket/memsql-backup/'
CONFIG '{"region":"us-east-1"}'
CREDENTIALS '{"role_arn": "arn:aws:iam::<AccountID>:role/EC2AmazonS3FullAccess"}';
The following examples demonstrate the CONFIG clause for backing up to encrypted buckets.
--- For Server-Side Encryption (SSE) ---
CONFIG '{"region":"us-east-1", "x-amz-server-side-encryption":"AES256"}'
--- For Server-Side Encryption - Key Management Service (SSE-KMS) ---
CONFIG '{"region":"us-east-1",
"x-amz-server-side-encryption":"aws:kms",
"x-amz-server-side-encryption-aws-kms-key-id": "<optional_key>"}'
--- For Server-Side Encryption with Customer-Provided Encryption Keys (SSE-C) ---
CONFIG '{"region":"us-east-1", "endpoint_url":"<target_endpoint_url>",
"x-amz-server-side-encryption-customer-algorithm":"AES256",
"x-amz-server-side-encryption-customer-key":"<key>",
"x-amz-server-side-encryption-customer-key-MD5":"<key>"}'
Backup to Google Cloud Storage
The following example backs up a database to a Google Cloud Storage bucket.
BACKUP DATABASE memsql_demo TO GCS "backup_bucket/backups/6_1_2018"
CREDENTIALS '{"access_id":"your_google_access_key","secret_key":"your_google_secret_key"}';
Azure Blobs Backup
The following example backs up a database to an Azure container.
BACKUP DATABASE memsql_demo TO AZURE "backup_container/backups/6_1_2018"
CREDENTIALS '{"account_name":"your_account_name_here","account_key":"your_account_key_here"}';
Backup Using a Resource Pool
The following example backs up a database to /my-backup-dir, using an existing resource pool named general.
BACKUP DATABASE memsql_demo TO "/my-backup-dir" OPTION (resource_pool = general);
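If the general pool does not already exist, it must be created first. A minimal sketch, assuming a memory limit is the only pool attribute you want to set (the percentage is an arbitrary placeholder):
CREATE RESOURCE POOL general WITH MEMORY_PERCENTAGE = 50;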