MemSQL replication replicates databases between MemSQL clusters at the partition level. It is simple, robust, and fast. This topic describes how to use replication in MemSQL.
MemSQL replication is fully online. In the middle of a continuous write workload, you can start replication to a secondary (replica) cluster without pausing the primary (source) cluster. Replication then creates a read-only database replica that can be used for disaster recovery or to serve additional reads.
Replication across clusters, including cross-datacenter replication, supports only asynchronous mode. In asynchronous mode, writes on the primary cluster never wait to be replicated to the secondary cluster, and secondary cluster failures never block the primary.
Databases are replicated at the leaf level, which implies that a leaf in the secondary cluster replicates data directly from a leaf in the primary cluster; therefore, when connecting a secondary cluster to the primary cluster, the leaves in the primary and secondary cluster must be able to communicate with each other. They should not be blocked by firewall or network rules.
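As a quick sanity check, you can verify leaf-to-leaf connectivity before starting replication. The host and port below are placeholders for one of your primary cluster's leaf nodes:

```shell
# Run from a leaf host in the secondary cluster. primary-L1:3306 is a
# placeholder; substitute each primary leaf's actual host and MemSQL port.
nc -z -v primary-L1 3306
```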
To replicate a database, the secondary cluster user must have CREATE DATABASE privileges, and the primary cluster user (the one specified in REPLICATE DATABASE) must have REPLICATION privileges on the primary cluster’s master aggregator.
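Assuming MySQL-style GRANT syntax, the required privileges might be granted as follows. The user name repl_user and the host pattern are illustrative, and exact privilege names can vary by MemSQL version, so consult the GRANT documentation for your release:

```sql
-- On the secondary cluster's master aggregator: the user who will run
-- REPLICATE DATABASE there must be able to create the replica database.
GRANT CREATE DATABASE ON *.* TO 'repl_user'@'%';

-- On the primary cluster's master aggregator: the user named in the
-- REPLICATE DATABASE command needs replication privileges.
GRANT REPLICATION ON *.* TO 'repl_user'@'%';
```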
Replication Commands
Replication in MemSQL is controlled entirely from the secondary cluster, where the following commands are run:
- REPLICATE DATABASE
- PAUSE REPLICATING
- CONTINUE REPLICATING
- STOP REPLICATING
- SHOW REPLICATION STATUS
- SHOW PARTITIONS
- SHOW CLUSTER STATUS
Note that most of these commands are also applicable to within-cluster replication.
Setting Up Replication
This example will guide you through setting up replication of a database. These instructions assume that you have two MemSQL clusters running. The following host:port combinations represent the master aggregators of the primary and secondary clusters:
primary-MA:3306
secondary-MA:3306
Note that the primary and secondary clusters need not have identical topologies; MemSQL automatically manages sharding of replica data on the secondary cluster. In this example, primary-MA has a root user whose password is root_password.
To begin replicating the database db_name from primary-MA, run the following command on secondary-MA:
REPLICATE DATABASE db_name FROM root:root_password@primary-MA:3306;
Note that multiple secondary clusters can replicate from a single primary cluster. To do this, run REPLICATE DATABASE on the master aggregator of each replica cluster.
Pausing and Stopping Replication
MemSQL allows users to pause and resume online replication with single commands.
PAUSE REPLICATING db_name;
Query OK, 1 row affected (0.06 sec)

CONTINUE REPLICATING db_name;
Query OK, 1 row affected (0.96 sec)
PAUSE REPLICATING temporarily pauses replication while maintaining the replication relationship between the primary and secondary databases. To begin replicating from a different primary cluster, you must start a new REPLICATE DATABASE process.
STOP REPLICATING db_name halts replication of db_name and automatically promotes the db_name instance on the secondary cluster to a “full” MemSQL database with all read, write, DDL, and DML operations. Once replication on a cluster has been stopped, it cannot be restarted.
Monitoring Replication
SHOW PARTITIONS EXTENDED
Running SHOW PARTITIONS EXTENDED on secondary-MA displays, for each partition, its replication role, its location, whether it is locked, and other details.
SHOW CLUSTER STATUS
Running SHOW CLUSTER STATUS provides information such as the log replay position and detailed information about all databases in the cluster.
SHOW DATABASES EXTENDED
SHOW DATABASES EXTENDED
is another useful command for monitoring replication status. The output summarizes the replication status and other information about the state of the databases present in a MemSQL cluster.
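Each of these commands is issued from a client connected to the node you want to inspect; for example, on secondary-MA:

```sql
SHOW PARTITIONS EXTENDED;   -- per-partition role, host, port, and lock state
SHOW CLUSTER STATUS;        -- per-database state, including log replay position
SHOW DATABASES EXTENDED;    -- per-database summary, including replication state
```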
SHOW REPLICATION STATUS
Running SHOW REPLICATION STATUS on a node shows the status of every replication process running on that node. The following is example output from SHOW REPLICATION STATUS run on secondary-MA. Note that this example follows the naming conventions established in Setting Up Replication.
SHOW REPLICATION STATUS;
+-------------+------------------------------+-------------------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+----------------------------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+
| Role | Database | Master_URI | Master_State | Master_CommitLSN | Master_HardenedLSN | Master_ReplayLSN | Master_TailLSN | Master_Commits | Connected | Slave_URI | Slave_State | Slave_CommitLSN | Slave_HardenedLSN | Slave_ReplayLSN | Slave_TailLSN | Slave_Commits |
+-------------+------------------------------+-------------------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+----------------------------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+
| master | cluster | NULL | online | 0:37 | 0:37 | 0:0 | 0:37 | 34 | yes | 127.0.0.1:20002/cluster | replicating | 0:36 | 0:37 | 0:36 | 0:37 | 17 |
| master | cluster | NULL | online | 0:37 | 0:37 | 0:0 | 0:37 | 34 | yes | 127.0.0.1:20001/cluster | replicating | 0:36 | 0:37 | 0:36 | 0:37 | 17 |
| async slave | cluster_17639882876507016380 | 127.0.0.1:10000/cluster | online | 0:37 | 0:37 | 0:0 | 0:37 | 33 | yes | NULL | replicating | 0:36 | 0:37 | 0:36 | 0:37 | 16 |
| master | cluster_17639882876507016380 | NULL | replicating | 0:36 | 0:37 | 0:36 | 0:37 | 16 | yes | 127.0.0.1:20002/cluster_17639882876507016380 | replicating | 0:36 | 0:37 | 0:36 | 0:37 | 16 |
| master | cluster_17639882876507016380 | NULL | replicating | 0:36 | 0:37 | 0:36 | 0:37 | 16 | yes | 127.0.0.1:20001/cluster_17639882876507016380 | replicating | 0:36 | 0:37 | 0:36 | 0:37 | 16 |
| async slave | db_name | 127.0.0.1:10000/db_name | online | 0:683 | 0:683 | 0:683 | 0:683 | 8 | yes | NULL | replicating | 0:683 | 0:683 | 0:683 | 0:683 | 8 |
| master | db_name | NULL | replicating | 0:683 | 0:683 | 0:683 | 0:683 | 8 | yes | 127.0.0.1:20002/db_name | replicating | 0:683 | 0:683 | 0:683 | 0:683 | 8 |
| master | db_name | NULL | replicating | 0:683 | 0:683 | 0:683 | 0:683 | 8 | yes | 127.0.0.1:20001/db_name | replicating | 0:683 | 0:683 | 0:683 | 0:683 | 8 |
+-------------+------------------------------+-------------------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+----------------------------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+
8 rows in set (0.03 sec)
In this example, the first line describes replication of the sharding database on primary-MA to the cluster_17639882876507016380 database on secondary-MA. The sharding database exists on the master aggregator and stores metadata that defines how data is partitioned. REPLICATE DATABASE automatically creates a cluster_[hash] database on the secondary cluster, which stores partition metadata about the primary cluster. The second line describes replication of metadata and reference tables for the db_name database in the secondary cluster. This data is replicated asynchronously to all aggregators and to all leaves. The third and fourth lines describe replication of db_name metadata and reference tables from secondary-MA to the secondary cluster’s two leaf nodes (secondary-L1 and secondary-L2).
NetworkPosition uses the format [log file ordinal]:[byte offset into log file].
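The LSN values in the output above (for example, 0:30423) use this format. As an illustration (this helper is not part of MemSQL), a position can be split and compared like so:

```python
def parse_network_position(position: str) -> tuple[int, int]:
    """Split a position like '0:30423' into (log file ordinal, byte offset)."""
    ordinal, offset = position.split(":")
    return int(ordinal), int(offset)

def caught_up(slave_tail: str, master_tail: str) -> bool:
    """A replica has caught up when its tail position reaches the master's."""
    return parse_network_position(slave_tail) >= parse_network_position(master_tail)

print(parse_network_position("0:30423"))  # → (0, 30423)
print(caught_up("0:37", "0:36"))          # → True
```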
The following is the output of SHOW REPLICATION STATUS run on secondary-L1. In this example, db_name_[ordinal] refers to a partition of the sharded db_name database.
SHOW REPLICATION STATUS;
+-------------+------------------------------+----------------------------------------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+----------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+
| Role | Database | Master_URI | Master_State | Master_CommitLSN | Master_HardenedLSN | Master_ReplayLSN | Master_TailLSN | Master_Commits | Connected | Slave_URI | Slave_State | Slave_CommitLSN | Slave_HardenedLSN | Slave_ReplayLSN | Slave_TailLSN | Slave_Commits |
+-------------+------------------------------+----------------------------------------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+----------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+
| async slave | cluster | 127.0.0.1:20000/cluster | online | 0:37 | 0:37 | 0:0 | 0:37 | 34 | yes | NULL | replicating | 0:36 | 0:37 | 0:36 | 0:37 | 17 |
| async slave | cluster_17639882876507016380 | 127.0.0.1:20000/cluster_17639882876507016380 | replicating | 0:36 | 0:37 | 0:36 | 0:37 | 16 | yes | NULL | replicating | 0:36 | 0:37 | 0:36 | 0:37 | 16 |
| async slave | db_name | 127.0.0.1:20000/db_name | replicating | 0:683 | 0:683 | 0:683 | 0:683 | 8 | yes | NULL | replicating | 0:683 | 0:683 | 0:683 | 0:683 | 8 |
| async slave | db_name_0 | 127.0.0.1:10001/db_name_0 | online | 0:30423 | 0:30423 | 0:30423 | 0:30423 | 1778 | yes | NULL | replicating | 0:30423 | 0:30423 | 0:30423 | 0:30423 | 1778 |
| master | db_name_0 | NULL | replicating | 0:30423 | 0:30423 | 0:30423 | 0:30423 | 1778 | yes | 127.0.0.1:20001/db_name_0 | replicating | 0:30423 | 0:30423 | 0:30423 | 0:30423 | 1778 |
| async slave | db_name_0_AUTO_SLAVE | 127.0.0.1:20001/db_name_0_AUTO_SLAVE | replicating | 0:30423 | 0:30423 | 0:30423 | 0:30423 | 1778 | yes | NULL | replicating | 0:30423 | 0:30423 | 0:30423 | 0:30423 | 1778 |
| async slave | db_name_10 | 127.0.0.1:10001/db_name_10 | online | 0:29835 | 0:29835 | 0:29835 | 0:29835 | 1766 | yes | NULL | replicating | 0:29835 | 0:29835 | 0:29835 | 0:29835 | 1766 |
| master | db_name_10 | NULL | replicating | 0:29835 | 0:29835 | 0:29835 | 0:29835 | 1766 | yes | 127.0.0.1:20001/db_name_10 | replicating | 0:29835 | 0:29835 | 0:29835 | 0:29835 | 1766 |
| async slave | db_name_10_AUTO_SLAVE | 127.0.0.1:20001/db_name_10_AUTO_SLAVE | replicating | 0:29835 | 0:29835 | 0:29835 | 0:29835 | 1766 | yes | NULL | replicating | 0:29835 | 0:29835 | 0:29835 | 0:29835 | 1766 |
| async slave | db_name_12 | 127.0.0.1:10001/db_name_12 | online | 0:29773 | 0:29773 | 0:29773 | 0:29773 | 1747 | yes | NULL | replicating | 0:29773 | 0:29773 | 0:29773 | 0:29773 | 1747 |
| master | db_name_12 | NULL | replicating | 0:29773 | 0:29773 | 0:29773 | 0:29773 | 1747 | yes | 127.0.0.1:20001/db_name_12 | replicating | 0:29773 | 0:29773 | 0:29773 | 0:29773 | 1747 |
| async slave | db_name_12_AUTO_SLAVE | 127.0.0.1:20001/db_name_12_AUTO_SLAVE | replicating | 0:29773 | 0:29773 | 0:29773 | 0:29773 | 1747 | yes | NULL | replicating | 0:29773 | 0:29773 | 0:29773 | 0:29773 | 1747 |
| async slave | db_name_14 | 127.0.0.1:10001/db_name_14 | online | 0:29476 | 0:29476 | 0:29476 | 0:29476 | 1736 | yes | NULL | replicating | 0:29476 | 0:29476 | 0:29476 | 0:29476 | 1736 |
| master | db_name_14 | NULL | replicating | 0:29476 | 0:29476 | 0:29476 | 0:29476 | 1736 | yes | 127.0.0.1:20001/db_name_14 | replicating | 0:29476 | 0:29476 | 0:29476 | 0:29476 | 1736 |
| async slave | db_name_14_AUTO_SLAVE | 127.0.0.1:20001/db_name_14_AUTO_SLAVE | replicating | 0:29476 | 0:29476 | 0:29476 | 0:29476 | 1736 | yes | NULL | replicating | 0:29476 | 0:29476 | 0:29476 | 0:29476 | 1736 |
| async slave | db_name_2 | 127.0.0.1:10001/db_name_2 | online | 0:29188 | 0:29188 | 0:29188 | 0:29188 | 1696 | yes | NULL | replicating | 0:29188 | 0:29188 | 0:29188 | 0:29188 | 1696 |
| master | db_name_2 | NULL | replicating | 0:29188 | 0:29188 | 0:29188 | 0:29188 | 1696 | yes | 127.0.0.1:20001/db_name_2 | replicating | 0:29188 | 0:29188 | 0:29188 | 0:29188 | 1696 |
| async slave | db_name_2_AUTO_SLAVE | 127.0.0.1:20001/db_name_2_AUTO_SLAVE | replicating | 0:29188 | 0:29188 | 0:29188 | 0:29188 | 1696 | yes | NULL | replicating | 0:29188 | 0:29188 | 0:29188 | 0:29188 | 1696 |
| async slave | db_name_4 | 127.0.0.1:10001/db_name_4 | online | 0:30611 | 0:30611 | 0:30611 | 0:30611 | 1798 | yes | NULL | replicating | 0:30611 | 0:30611 | 0:30611 | 0:30611 | 1798 |
| master | db_name_4 | NULL | replicating | 0:30611 | 0:30611 | 0:30611 | 0:30611 | 1798 | yes | 127.0.0.1:20001/db_name_4 | replicating | 0:30611 | 0:30611 | 0:30611 | 0:30611 | 1798 |
| async slave | db_name_4_AUTO_SLAVE | 127.0.0.1:20001/db_name_4_AUTO_SLAVE | replicating | 0:30611 | 0:30611 | 0:30611 | 0:30611 | 1798 | yes | NULL | replicating | 0:30611 | 0:30611 | 0:30611 | 0:30611 | 1798 |
| async slave | db_name_6 | 127.0.0.1:10001/db_name_6 | online | 0:30573 | 0:30573 | 0:30573 | 0:30573 | 1797 | yes | NULL | replicating | 0:30573 | 0:30573 | 0:30573 | 0:30573 | 1797 |
| master | db_name_6 | NULL | replicating | 0:30573 | 0:30573 | 0:30573 | 0:30573 | 1797 | yes | 127.0.0.1:20001/db_name_6 | replicating | 0:30573 | 0:30573 | 0:30573 | 0:30573 | 1797 |
| async slave | db_name_6_AUTO_SLAVE | 127.0.0.1:20001/db_name_6_AUTO_SLAVE | replicating | 0:30573 | 0:30573 | 0:30573 | 0:30573 | 1797 | yes | NULL | replicating | 0:30573 | 0:30573 | 0:30573 | 0:30573 | 1797 |
| async slave | db_name_8 | 127.0.0.1:10001/db_name_8 | online | 0:29812 | 0:29812 | 0:29812 | 0:29812 | 1735 | yes | NULL | replicating | 0:29812 | 0:29812 | 0:29812 | 0:29812 | 1735 |
| master | db_name_8 | NULL | replicating | 0:29812 | 0:29812 | 0:29812 | 0:29812 | 1735 | yes | 127.0.0.1:20001/db_name_8 | replicating | 0:29812 | 0:29812 | 0:29812 | 0:29812 | 1735 |
| async slave | db_name_8_AUTO_SLAVE | 127.0.0.1:20001/db_name_8_AUTO_SLAVE | replicating | 0:29812 | 0:29812 | 0:29812 | 0:29812 | 1735 | yes | NULL | replicating | 0:29812 | 0:29812 | 0:29812 | 0:29812 | 1735 |
+-------------+------------------------------+----------------------------------------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+----------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+
In this sample output, the first line refers to replication of the reference database (metadata) for db_name. This data is replicated from primary-MA to secondary-MA, from which it is replicated to each leaf node in the secondary cluster. The remaining lines refer to replication of the partitions of the sharded database db_name. As the output shows, partition data is replicated directly from leaf nodes in the primary cluster to leaf nodes in the secondary cluster. In this example, secondary-L1 is receiving data from both primary-L1 and primary-L2.
Finally, note that MemSQL automatically takes the steps necessary to ensure the secondary cluster is consistent with the primary cluster. For example, if a leaf node fails in a primary cluster with redundancy 2, and a replica partition on the secondary cluster has gotten ahead of the corresponding replica partition on the primary cluster (due to network or other irregularity), MemSQL will automatically drop and reprovision the replica partition on the secondary cluster so that it is consistent with the recently promoted master partition on the primary cluster. Note that a replica partition that has been dropped or is being reprovisioned will not appear in the SHOW REPLICATION STATUS output.
Replication Compatibility Between Different Cluster Versions
In general, you may replicate data between two different versions of MemSQL if the MemSQL version on the source cluster is earlier than the version on the destination cluster. However, there are some exceptions:
- You may not replicate data from a source cluster running MemSQL 5.8 or earlier to a destination cluster running MemSQL 6.0 or later, due to changes in how replication works in MemSQL 6.0. You must first upgrade your source cluster to MemSQL 6.0 or later. Refer to Upgrading to MemSQL 6.0 for more information.
- You may not replicate data from a source cluster running MemSQL 6.8 or earlier to a destination cluster running MemSQL 7.0 or later, due to changes in how replication works in MemSQL 7.0. You must first upgrade your source cluster to MemSQL 7.0 or later. Refer to Upgrading to MemSQL 7.0 for more information.
Failover and Promotion
There are a number of failure cases to consider when discussing MemSQL replication.
Primary Cluster Master Aggregator Failure
When the master aggregator fails on the disaster recovery (DR) primary cluster, no changes to reference data or schema can be made. Data Manipulation Language (DML) changes to distributed tables can be made through child aggregators and are replicated to the DR secondary cluster. To resolve this, you can either restart the master aggregator node, or promote a child aggregator in the primary cluster to master. For more information, see AGGREGATOR SET AS MASTER.
Secondary Cluster Master Aggregator Failure
If the master aggregator for the secondary cluster fails, replication of reference tables and metadata will stop, but distributed tables will continue replicating because this data flows directly from leaves on the primary cluster to leaves on the secondary cluster. To resume replication of reference tables and metadata, you may restart your master aggregator node, or promote a child aggregator in the secondary cluster to master. For more information, see AGGREGATOR SET AS MASTER.
Once the new master aggregator is started, replication continues automatically. Missing changes to reference data, schema, and distributed tables will be replicated from the primary cluster.
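For example, to promote a surviving child aggregator to master, connect directly to that child aggregator and run:

```sql
-- Run on the child aggregator that should become the new master aggregator:
AGGREGATOR SET AS MASTER;
```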
Failover to a Secondary Cluster
If the primary cluster fails and it becomes necessary to failover to the secondary cluster, do the following:
- Stop replication by running STOP REPLICATING db_name on the secondary cluster’s master aggregator. The secondary cluster is now a “full” MemSQL cluster with all read, write, DDL, and DML capabilities.
- Re-point your application at an aggregator in the secondary cluster.

Note that, after running STOP REPLICATING, you cannot resume replicating from the primary cluster.
Tuning Replication
MemSQL Replication is tuned by default to be efficient and fast for most workloads. However, MemSQL exposes several configuration hooks to help you fine-tune performance.
Snapshot and Log Files
MemSQL replication works by transferring snapshot and log files from master to replica. When replication is initiated, the replica requests the snapshot file from the master and proceeds to provision from the snapshot. Once provisioning is complete, replication proceeds by shipping transactions directly from the log file.
As described in the durability configuration documentation, MemSQL exposes the snapshots-to-keep variable to let you tune how many old versions of the snapshot and log files to keep around. MemSQL automatically deletes files that fall out of this window. In the context of replication, if a network outage causes the replica to fall so far behind that its position in the log has been rotated out, the replica is re-provisioned from the current snapshot so that replication can proceed from an existing log file.
You can tune snapshot-trigger-size and snapshots-to-keep to optimize the server for your network. A larger snapshot-trigger-size increases the length of each log and therefore offers more tolerance in the event of a sluggish network. However, it also decreases the frequency at which snapshots are taken and increases MemSQL recovery time, because the logs are larger (snapshot recovery is parallel; log recovery is single-threaded).
By increasing snapshots-to-keep, you can effectively increase how long log files are kept around. If you increase these parameters, make sure to allocate enough disk space to account for the larger (snapshot-trigger-size) and extra (snapshots-to-keep) files. If a replica partition falls more than snapshots-to-keep snapshots behind the master partition, the primary cluster’s master aggregator will automatically reprovision that replica partition.
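These settings typically live in each node’s memsql.cnf; the values below are illustrative, not recommendations, and the exact file location depends on your installation:

```ini
; Take a new snapshot (and rotate the log) once the log reaches ~2 GB
snapshot-trigger-size = 2147483648

; Keep the three most recent snapshots and their log files on disk
snapshots-to-keep = 3
```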
Using mysqldump to Extract Data From a Secondary Database
When mysqldump is run on a secondary database following the instructions in Exporting Data From MemSQL, an error will occur. This happens because mysqldump runs LOCK TABLES, which is not permitted on a secondary database. mysqldump can be configured to avoid locking tables by passing the option --lock-tables=false. To take a consistent mysqldump of a secondary database called secondary_db, we recommend the following:
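A sketch of that sequence, with host names and credentials as placeholders:

```shell
# 1. Optionally pause replication so the dump is consistent despite
#    concurrent writes on the primary (run against secondary-MA):
mysql -h secondary-MA -P 3306 -u root -e "PAUSE REPLICATING secondary_db;"

# 2. Dump without LOCK TABLES, which is not permitted on a replica:
mysqldump -h secondary-MA -P 3306 -u root --lock-tables=false \
    secondary_db > secondary_db.sql

# 3. Resume replication:
mysql -h secondary-MA -P 3306 -u root -e "CONTINUE REPLICATING secondary_db;"
```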
Note that pausing replication is only required if you want a consistent mysqldump while concurrent writes are happening on the primary.