Using Replication


MemSQL replication allows database replication between MemSQL instances and is executed at the partition level. It is simple, robust, and fast. This topic describes how to use replication in MemSQL.

MemSQL replication is fully online. In the middle of a continuous write workload, you can start replication to a secondary (replica) cluster without pausing the primary (source) cluster. Replication then creates a read-only database replica that can be used for disaster recovery or to serve additional reads.

Replication across clusters, including cross-datacenter replication, supports only asynchronous mode. In asynchronous mode, writes on the primary cluster never wait to be replicated to the secondary cluster, and secondary cluster failures never block the primary cluster.

Info

To replicate a database, the secondary cluster user must have CREATE DATABASE privileges and the primary cluster user (the one specified in REPLICATE DATABASE) must have REPLICATION privileges on the primary cluster’s master aggregator.
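The following is a hypothetical sketch of granting these privileges using MySQL-style GRANT syntax; the exact privilege names and grammar may differ across MemSQL versions, and repl_user and its password are placeholders. On the primary cluster’s master aggregator:

memsql> GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'%' IDENTIFIED BY 'repl_password';

On the secondary cluster’s master aggregator:

memsql> GRANT CREATE ON *.* TO 'repl_user'@'%';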

Replication Commands

Replication in MemSQL is controlled entirely from the secondary cluster, where the following commands (described later in this topic) are run:

  • REPLICATE DATABASE
  • PAUSE REPLICATING
  • CONTINUE REPLICATING
  • STOP REPLICATING

Note that most of these commands are also applicable to within-cluster replication.

Setting Up Replication

This example will guide you through setting up replication of a database. These instructions assume that you have two MemSQL clusters running. The following host:port combinations represent the master aggregators of the primary and secondary clusters:

  • primary-MA:3306
  • secondary-MA:3306

Note that the primary and secondary clusters need not have identical topologies. MemSQL will automatically manage sharding of replica data on the secondary cluster. In this example, primary-MA has a root user with the password root_password.

To begin replicating the database db_name from primary-MA, run the following command on secondary-MA:

memsql> REPLICATE DATABASE db_name FROM root:root_password@primary-MA:3306;

Note that multiple secondary clusters can replicate from a single primary cluster. To do this, run REPLICATE DATABASE on the master aggregator of each of the replica clusters.
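For example, with two replica clusters whose master aggregators are secondary1-MA and secondary2-MA (hypothetical hostnames), run the same command on each of them:

memsql> REPLICATE DATABASE db_name FROM root:root_password@primary-MA:3306;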

Pausing and Stopping Replication

MemSQL allows users to pause and resume online replication with single commands.

memsql> PAUSE REPLICATING db_name;
Query OK, 1 row affected (0.06 sec)

memsql> CONTINUE REPLICATING db_name;
Query OK, 1 row affected (0.96 sec)

PAUSE REPLICATING temporarily pauses replication but maintains the replication relationship between the primary and secondary databases; CONTINUE REPLICATING resumes it. To begin replicating from a different primary cluster, you must start a new REPLICATE DATABASE process.

STOP REPLICATING db_name halts replication of db_name and automatically promotes the db_name instance on the secondary cluster to a “full” MemSQL database with all read, write, DDL, and DML operations. Once replication of a database has been stopped, it cannot be restarted.
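For example (the timing shown is illustrative):

memsql> STOP REPLICATING db_name;
Query OK, 1 row affected (0.09 sec)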

Monitoring Replication

SHOW PARTITIONS EXTENDED

Running SHOW PARTITIONS EXTENDED on secondary-MA displays the replication role of each partition, its location, whether it is locked, the last command run on it, and other details.

memsql> SHOW PARTITIONS EXTENDED;

+---------+-----------+-------+--------+--------+------+------------------+------------+--------------+
| Ordinal | Host      | Port  | Role   | Locked | Info | Last Command     | Last Error | Last Message |
+---------+-----------+-------+--------+--------+------+------------------+------------+--------------+
|       0 | 127.0.0.1 | 20001 | Master |      0 | NULL | CREATE PARTITION |          0 |              |
|       1 | 127.0.0.1 | 20002 | Master |      0 | NULL | CREATE PARTITION |          0 |              |
|       2 | 127.0.0.1 | 20001 | Master |      0 | NULL | CREATE PARTITION |          0 |              |
|       3 | 127.0.0.1 | 20002 | Master |      0 | NULL | CREATE PARTITION |          0 |              |
|       4 | 127.0.0.1 | 20001 | Master |      0 | NULL | CREATE PARTITION |          0 |              |
|       5 | 127.0.0.1 | 20002 | Master |      0 | NULL | CREATE PARTITION |          0 |              |
|       6 | 127.0.0.1 | 20001 | Master |      0 | NULL | CREATE PARTITION |          0 |              |
|       7 | 127.0.0.1 | 20002 | Master |      0 | NULL | CREATE PARTITION |          0 |              |
|       8 | 127.0.0.1 | 20001 | Master |      0 | NULL | CREATE PARTITION |          0 |              |
|       9 | 127.0.0.1 | 20002 | Master |      0 | NULL | CREATE PARTITION |          0 |              |
|      10 | 127.0.0.1 | 20001 | Master |      0 | NULL | CREATE PARTITION |          0 |              |
|      11 | 127.0.0.1 | 20002 | Master |      0 | NULL | CREATE PARTITION |          0 |              |
|      12 | 127.0.0.1 | 20001 | Master |      0 | NULL | CREATE PARTITION |          0 |              |
|      13 | 127.0.0.1 | 20002 | Master |      0 | NULL | CREATE PARTITION |          0 |              |
|      14 | 127.0.0.1 | 20001 | Master |      0 | NULL | CREATE PARTITION |          0 |              |
|      15 | 127.0.0.1 | 20002 | Master |      0 | NULL | CREATE PARTITION |          0 |              |
+---------+-----------+-------+--------+--------+------+------------------+------------+--------------+

Info

The Info column is NULL for backwards compatibility.

SHOW CLUSTER STATUS

Running SHOW CLUSTER STATUS displays log replay positions and detailed status information for every database in the cluster.

memsql> SHOW CLUSTER STATUS;

+---------+-----------+-------+------------------------------+-------------+-------------+----------+-------------+-------------+-------------------------+----------------------+----------------------+---------------+-------------------------------------------------+
| Node ID | Host      | Port  | Database                     | Role        | State       | Position | Master Host | Master Port | Metadata Master Node ID | Metadata Master Host | Metadata Master Port | Metadata Role | Details                                         |
+---------+-----------+-------+------------------------------+-------------+-------------+----------+-------------+-------------+-------------------------+----------------------+----------------------+---------------+-------------------------------------------------+
|       1 | 127.0.0.1 | 20000 | cluster                      | master      | online      | 0:37     | NULL        |        NULL |                    NULL | NULL                 |                 NULL | Reference     |                                                 |
|       1 | 127.0.0.1 | 20000 | cluster_17639882876507016380 | async slave | replicating | 0:36     | 127.0.0.1   |       10000 |                       1 | 127.0.0.1            |                10000 | Reference     | stage: packet wait, state: x_streaming, err: no |
|       1 | 127.0.0.1 | 20000 | db_name                      | async slave | replicating | 0:683    | 127.0.0.1   |       10000 |                       1 | 127.0.0.1            |                10000 | Reference     |                                                 |
|       2 | 127.0.0.1 | 20001 | cluster                      | async slave | replicating | 0:36     | 127.0.0.1   |       20000 |                       1 | 127.0.0.1            |                20000 | Reference     | stage: packet wait, state: x_streaming, err: no |
|       2 | 127.0.0.1 | 20001 | cluster_17639882876507016380 | async slave | replicating | 0:36     | 127.0.0.1   |       20000 |                       1 | 127.0.0.1            |                20000 | Reference     | stage: packet wait, state: x_streaming, err: no |
|       2 | 127.0.0.1 | 20001 | db_name                      | async slave | replicating | 0:683    | 127.0.0.1   |       20000 |                       1 | 127.0.0.1            |                20000 | Reference     |                                                 |
|       2 | 127.0.0.1 | 20001 | db_name_0                    | async slave | replicating | 0:30423  | 127.0.0.1   |       10001 |             -2147483646 | 127.0.0.1            |                10001 | Slave         |                                                 |
|       2 | 127.0.0.1 | 20001 | db_name_0_AUTO_SLAVE         | async slave | replicating | 0:30423  | 127.0.0.1   |       20001 |                    NULL | NULL                 |                 NULL | Orphan        |                                                 |
|       2 | 127.0.0.1 | 20001 | db_name_10                   | async slave | replicating | 0:29835  | 127.0.0.1   |       10001 |             -2147483646 | 127.0.0.1            |                10001 | Slave         |                                                 |
|       2 | 127.0.0.1 | 20001 | db_name_10_AUTO_SLAVE        | async slave | replicating | 0:29835  | 127.0.0.1   |       20001 |                    NULL | NULL                 |                 NULL | Orphan        |                                                 |
|       2 | 127.0.0.1 | 20001 | db_name_12                   | async slave | replicating | 0:29773  | 127.0.0.1   |       10001 |             -2147483646 | 127.0.0.1            |                10001 | Slave         |                                                 |
|       2 | 127.0.0.1 | 20001 | db_name_12_AUTO_SLAVE        | async slave | replicating | 0:29773  | 127.0.0.1   |       20001 |                    NULL | NULL                 |                 NULL | Orphan        |                                                 |
|       2 | 127.0.0.1 | 20001 | db_name_14                   | async slave | replicating | 0:29476  | 127.0.0.1   |       10001 |             -2147483646 | 127.0.0.1            |                10001 | Slave         |                                                 |
|       2 | 127.0.0.1 | 20001 | db_name_14_AUTO_SLAVE        | async slave | replicating | 0:29476  | 127.0.0.1   |       20001 |                    NULL | NULL                 |                 NULL | Orphan        |                                                 |
|       2 | 127.0.0.1 | 20001 | db_name_2                    | async slave | replicating | 0:29188  | 127.0.0.1   |       10001 |             -2147483646 | 127.0.0.1            |                10001 | Slave         |                                                 |
|       2 | 127.0.0.1 | 20001 | db_name_2_AUTO_SLAVE         | async slave | replicating | 0:29188  | 127.0.0.1   |       20001 |                    NULL | NULL                 |                 NULL | Orphan        |                                                 |
|       2 | 127.0.0.1 | 20001 | db_name_4                    | async slave | replicating | 0:30611  | 127.0.0.1   |       10001 |             -2147483646 | 127.0.0.1            |                10001 | Slave         |                                                 |
|       2 | 127.0.0.1 | 20001 | db_name_4_AUTO_SLAVE         | async slave | replicating | 0:30611  | 127.0.0.1   |       20001 |                    NULL | NULL                 |                 NULL | Orphan        |                                                 |
|       2 | 127.0.0.1 | 20001 | db_name_6                    | async slave | replicating | 0:30573  | 127.0.0.1   |       10001 |             -2147483646 | 127.0.0.1            |                10001 | Slave         |                                                 |
|       2 | 127.0.0.1 | 20001 | db_name_6_AUTO_SLAVE         | async slave | replicating | 0:30573  | 127.0.0.1   |       20001 |                    NULL | NULL                 |                 NULL | Orphan        |                                                 |
|       2 | 127.0.0.1 | 20001 | db_name_8                    | async slave | replicating | 0:29812  | 127.0.0.1   |       10001 |             -2147483646 | 127.0.0.1            |                10001 | Slave         |                                                 |
|       2 | 127.0.0.1 | 20001 | db_name_8_AUTO_SLAVE         | async slave | replicating | 0:29812  | 127.0.0.1   |       20001 |                    NULL | NULL                 |                 NULL | Orphan        |                                                 |
|       3 | 127.0.0.1 | 20002 | cluster                      | async slave | replicating | 0:36     | 127.0.0.1   |       20000 |                       1 | 127.0.0.1            |                20000 | Reference     | stage: packet wait, state: x_streaming, err: no |
|       3 | 127.0.0.1 | 20002 | cluster_17639882876507016380 | async slave | replicating | 0:36     | 127.0.0.1   |       20000 |                       1 | 127.0.0.1            |                20000 | Reference     | stage: packet wait, state: x_streaming, err: no |
|       3 | 127.0.0.1 | 20002 | db_name                      | async slave | replicating | 0:683    | 127.0.0.1   |       20000 |                       1 | 127.0.0.1            |                20000 | Reference     |                                                 |
|       3 | 127.0.0.1 | 20002 | db_name_1                    | async slave | replicating | 0:30454  | 127.0.0.1   |       10002 |             -2147483645 | 127.0.0.1            |                10002 | Slave         |                                                 |
|       3 | 127.0.0.1 | 20002 | db_name_11                   | async slave | replicating | 0:30546  | 127.0.0.1   |       10002 |             -2147483645 | 127.0.0.1            |                10002 | Slave         |                                                 |
|       3 | 127.0.0.1 | 20002 | db_name_11_AUTO_SLAVE        | async slave | replicating | 0:30546  | 127.0.0.1   |       20002 |                    NULL | NULL                 |                 NULL | Orphan        |                                                 |
|       3 | 127.0.0.1 | 20002 | db_name_13                   | async slave | replicating | 0:30048  | 127.0.0.1   |       10002 |             -2147483645 | 127.0.0.1            |                10002 | Slave         |                                                 |
|       3 | 127.0.0.1 | 20002 | db_name_13_AUTO_SLAVE        | async slave | replicating | 0:30048  | 127.0.0.1   |       20002 |                    NULL | NULL                 |                 NULL | Orphan        |                                                 |
|       3 | 127.0.0.1 | 20002 | db_name_15                   | async slave | replicating | 0:30609  | 127.0.0.1   |       10002 |             -2147483645 | 127.0.0.1            |                10002 | Slave         |                                                 |
|       3 | 127.0.0.1 | 20002 | db_name_15_AUTO_SLAVE        | async slave | replicating | 0:30609  | 127.0.0.1   |       20002 |                    NULL | NULL                 |                 NULL | Orphan        |                                                 |
|       3 | 127.0.0.1 | 20002 | db_name_1_AUTO_SLAVE         | async slave | replicating | 0:30454  | 127.0.0.1   |       20002 |                    NULL | NULL                 |                 NULL | Orphan        |                                                 |
|       3 | 127.0.0.1 | 20002 | db_name_3                    | async slave | replicating | 0:30158  | 127.0.0.1   |       10002 |             -2147483645 | 127.0.0.1            |                10002 | Slave         |                                                 |
|       3 | 127.0.0.1 | 20002 | db_name_3_AUTO_SLAVE         | async slave | replicating | 0:30158  | 127.0.0.1   |       20002 |                    NULL | NULL                 |                 NULL | Orphan        |                                                 |
|       3 | 127.0.0.1 | 20002 | db_name_5                    | async slave | replicating | 0:30342  | 127.0.0.1   |       10002 |             -2147483645 | 127.0.0.1            |                10002 | Slave         |                                                 |
|       3 | 127.0.0.1 | 20002 | db_name_5_AUTO_SLAVE         | async slave | replicating | 0:30342  | 127.0.0.1   |       20002 |                    NULL | NULL                 |                 NULL | Orphan        |                                                 |
|       3 | 127.0.0.1 | 20002 | db_name_7                    | async slave | replicating | 0:29773  | 127.0.0.1   |       10002 |             -2147483645 | 127.0.0.1            |                10002 | Slave         |                                                 |
|       3 | 127.0.0.1 | 20002 | db_name_7_AUTO_SLAVE         | async slave | replicating | 0:29773  | 127.0.0.1   |       20002 |                    NULL | NULL                 |                 NULL | Orphan        |                                                 |
|       3 | 127.0.0.1 | 20002 | db_name_9                    | async slave | replicating | 0:30270  | 127.0.0.1   |       10002 |             -2147483645 | 127.0.0.1            |                10002 | Slave         |                                                 |
|       3 | 127.0.0.1 | 20002 | db_name_9_AUTO_SLAVE         | async slave | replicating | 0:30270  | 127.0.0.1   |       20002 |                    NULL | NULL                 |                 NULL | Orphan        |                                                 |
+---------+-----------+-------+------------------------------+-------------+-------------+----------+-------------+-------------+-------------------------+----------------------+----------------------+---------------+-------------------------------------------------+

SHOW DATABASES EXTENDED

SHOW DATABASES EXTENDED is another useful command for monitoring replication status. The output summarizes the replication status and other information about the state of the databases present in a MemSQL cluster.

memsql> SHOW DATABASES EXTENDED;

+------------------------------+---------+-------------+-------------+----------+---------+-------------+------------+-----------------+-------------------+------------------+----------------+------+--------------+--------------+
| Database                     | Commits | Role        | State       | Position | Details | AsyncSlaves | SyncSlaves | ConsensusSlaves | CommittedPosition | HardenedPosition | ReplayPosition | Term | LastPageTerm | Memory (MBs) |
+------------------------------+---------+-------------+-------------+----------+---------+-------------+------------+-----------------+-------------------+------------------+----------------+------+--------------+--------------+
| cluster                      |      34 | master      | online      | 0:37     |         |           2 | 0          |               0 | 0:37              | 0:37             | NULL           |    2 |            0 |         0.00 |
| cluster_17639882876507016380 |      16 | async slave | replicating | 0:37     |         |           2 | 0          |               0 | 0:36              | 0:37             | 0:36           |    2 |            2 |         0.00 |
| db_name                      |       8 | async slave | replicating | 0:683    |         |           2 | 0          |               0 | 0:683             | 0:683            | 0:683          |    1 |            0 |         0.00 |
| information_schema           |      87 | master      | online      | 0:10     |         |           0 | 0          |               0 | 0:10              | 0:10             | NULL           |    1 |            0 |         0.00 |
| memsql                       |      18 | master      | online      | 0:1085   |         |           0 | 0          |               0 | 0:1085            | 0:1085           | NULL           |    2 |            0 |         0.00 |
+------------------------------+---------+-------------+-------------+----------+---------+-------------+------------+-----------------+-------------------+------------------+----------------+------+--------------+--------------+
5 rows in set (0.00 sec)

SHOW REPLICATION STATUS

Running SHOW REPLICATION STATUS on a node shows the status of every replication process running on that node. The following is an example of the output of SHOW REPLICATION STATUS run on secondary-MA. Note that this example follows the naming conventions established in Setting Up Replication.

memsql> SHOW REPLICATION STATUS;

+-------------+------------------------------+-------------------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+----------------------------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+
| Role        | Database                     | Master_URI              | Master_State | Master_CommitLSN | Master_HardenedLSN | Master_ReplayLSN | Master_TailLSN | Master_Commits | Connected | Slave_URI                                    | Slave_State | Slave_CommitLSN | Slave_HardenedLSN | Slave_ReplayLSN | Slave_TailLSN | Slave_Commits |
+-------------+------------------------------+-------------------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+----------------------------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+
| master      | cluster                      | NULL                    | online       | 0:37             | 0:37               | 0:0              | 0:37           |             34 | yes       | 127.0.0.1:20002/cluster                      | replicating | 0:36            | 0:37              | 0:36            | 0:37          |            17 |
| master      | cluster                      | NULL                    | online       | 0:37             | 0:37               | 0:0              | 0:37           |             34 | yes       | 127.0.0.1:20001/cluster                      | replicating | 0:36            | 0:37              | 0:36            | 0:37          |            17 |
| async slave | cluster_17639882876507016380 | 127.0.0.1:10000/cluster | online       | 0:37             | 0:37               | 0:0              | 0:37           |             33 | yes       | NULL                                         | replicating | 0:36            | 0:37              | 0:36            | 0:37          |            16 |
| master      | cluster_17639882876507016380 | NULL                    | replicating  | 0:36             | 0:37               | 0:36             | 0:37           |             16 | yes       | 127.0.0.1:20002/cluster_17639882876507016380 | replicating | 0:36            | 0:37              | 0:36            | 0:37          |            16 |
| master      | cluster_17639882876507016380 | NULL                    | replicating  | 0:36             | 0:37               | 0:36             | 0:37           |             16 | yes       | 127.0.0.1:20001/cluster_17639882876507016380 | replicating | 0:36            | 0:37              | 0:36            | 0:37          |            16 |
| async slave | db_name                      | 127.0.0.1:10000/db_name | online       | 0:683            | 0:683              | 0:683            | 0:683          |              8 | yes       | NULL                                         | replicating | 0:683           | 0:683             | 0:683           | 0:683         |             8 |
| master      | db_name                      | NULL                    | replicating  | 0:683            | 0:683              | 0:683            | 0:683          |              8 | yes       | 127.0.0.1:20002/db_name                      | replicating | 0:683           | 0:683             | 0:683           | 0:683         |             8 |
| master      | db_name                      | NULL                    | replicating  | 0:683            | 0:683              | 0:683            | 0:683          |              8 | yes       | 127.0.0.1:20001/db_name                      | replicating | 0:683           | 0:683             | 0:683           | 0:683         |             8 |
+-------------+------------------------------+-------------------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+----------------------------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+

8 rows in set (0.03 sec)

In this example, the third line describes replication of the cluster database on primary-MA to the cluster_17639882876507016380 database on secondary-MA. The cluster database exists on the master aggregator and stores metadata that defines how data is partitioned; REPLICATE DATABASE automatically creates a cluster_[hash] database on the secondary cluster which stores partition metadata about the primary cluster. The sixth line describes replication of metadata and reference tables for the db_name database from primary-MA. This data is replicated synchronously to all aggregators and asynchronously to all leaves. The last two lines describe replication of db_name metadata and reference tables from secondary-MA to the secondary cluster’s two leaf nodes (secondary-L1 and secondary-L2).

LSN and Position values use the format [log file ordinal]:[byte offset into log file]. For example, 0:683 refers to byte offset 683 in log file 0.

The following is the output of SHOW REPLICATION STATUS run on secondary-L1. In this example, db_name_[ordinal] refers to a partition of the sharded db_name database.

memsql> SHOW REPLICATION STATUS;

+-------------+------------------------------+----------------------------------------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+----------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+
| Role        | Database                     | Master_URI                                   | Master_State | Master_CommitLSN | Master_HardenedLSN | Master_ReplayLSN | Master_TailLSN | Master_Commits | Connected | Slave_URI                  | Slave_State | Slave_CommitLSN | Slave_HardenedLSN | Slave_ReplayLSN | Slave_TailLSN | Slave_Commits |
+-------------+------------------------------+----------------------------------------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+----------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+
| async slave | cluster                      | 127.0.0.1:20000/cluster                      | online       | 0:37             | 0:37               | 0:0              | 0:37           |             34 | yes       | NULL                       | replicating | 0:36            | 0:37              | 0:36            | 0:37          |            17 |
| async slave | cluster_17639882876507016380 | 127.0.0.1:20000/cluster_17639882876507016380 | replicating  | 0:36             | 0:37               | 0:36             | 0:37           |             16 | yes       | NULL                       | replicating | 0:36            | 0:37              | 0:36            | 0:37          |            16 |
| async slave | db_name                      | 127.0.0.1:20000/db_name                      | replicating  | 0:683            | 0:683              | 0:683            | 0:683          |              8 | yes       | NULL                       | replicating | 0:683           | 0:683             | 0:683           | 0:683         |             8 |
| async slave | db_name_0                    | 127.0.0.1:10001/db_name_0                    | online       | 0:30423          | 0:30423            | 0:30423          | 0:30423        |           1778 | yes       | NULL                       | replicating | 0:30423         | 0:30423           | 0:30423         | 0:30423       |          1778 |
| master      | db_name_0                    | NULL                                         | replicating  | 0:30423          | 0:30423            | 0:30423          | 0:30423        |           1778 | yes       | 127.0.0.1:20001/db_name_0  | replicating | 0:30423         | 0:30423           | 0:30423         | 0:30423       |          1778 |
| async slave | db_name_0_AUTO_SLAVE         | 127.0.0.1:20001/db_name_0_AUTO_SLAVE         | replicating  | 0:30423          | 0:30423            | 0:30423          | 0:30423        |           1778 | yes       | NULL                       | replicating | 0:30423         | 0:30423           | 0:30423         | 0:30423       |          1778 |
| async slave | db_name_10                   | 127.0.0.1:10001/db_name_10                   | online       | 0:29835          | 0:29835            | 0:29835          | 0:29835        |           1766 | yes       | NULL                       | replicating | 0:29835         | 0:29835           | 0:29835         | 0:29835       |          1766 |
| master      | db_name_10                   | NULL                                         | replicating  | 0:29835          | 0:29835            | 0:29835          | 0:29835        |           1766 | yes       | 127.0.0.1:20001/db_name_10 | replicating | 0:29835         | 0:29835           | 0:29835         | 0:29835       |          1766 |
| async slave | db_name_10_AUTO_SLAVE        | 127.0.0.1:20001/db_name_10_AUTO_SLAVE        | replicating  | 0:29835          | 0:29835            | 0:29835          | 0:29835        |           1766 | yes       | NULL                       | replicating | 0:29835         | 0:29835           | 0:29835         | 0:29835       |          1766 |
| async slave | db_name_12                   | 127.0.0.1:10001/db_name_12                   | online       | 0:29773          | 0:29773            | 0:29773          | 0:29773        |           1747 | yes       | NULL                       | replicating | 0:29773         | 0:29773           | 0:29773         | 0:29773       |          1747 |
| master      | db_name_12                   | NULL                                         | replicating  | 0:29773          | 0:29773            | 0:29773          | 0:29773        |           1747 | yes       | 127.0.0.1:20001/db_name_12 | replicating | 0:29773         | 0:29773           | 0:29773         | 0:29773       |          1747 |
| async slave | db_name_12_AUTO_SLAVE        | 127.0.0.1:20001/db_name_12_AUTO_SLAVE        | replicating  | 0:29773          | 0:29773            | 0:29773          | 0:29773        |           1747 | yes       | NULL                       | replicating | 0:29773         | 0:29773           | 0:29773         | 0:29773       |          1747 |
| async slave | db_name_14                   | 127.0.0.1:10001/db_name_14                   | online       | 0:29476          | 0:29476            | 0:29476          | 0:29476        |           1736 | yes       | NULL                       | replicating | 0:29476         | 0:29476           | 0:29476         | 0:29476       |          1736 |
| master      | db_name_14                   | NULL                                         | replicating  | 0:29476          | 0:29476            | 0:29476          | 0:29476        |           1736 | yes       | 127.0.0.1:20001/db_name_14 | replicating | 0:29476         | 0:29476           | 0:29476         | 0:29476       |          1736 |
| async slave | db_name_14_AUTO_SLAVE        | 127.0.0.1:20001/db_name_14_AUTO_SLAVE        | replicating  | 0:29476          | 0:29476            | 0:29476          | 0:29476        |           1736 | yes       | NULL                       | replicating | 0:29476         | 0:29476           | 0:29476         | 0:29476       |          1736 |
| async slave | db_name_2                    | 127.0.0.1:10001/db_name_2                    | online       | 0:29188          | 0:29188            | 0:29188          | 0:29188        |           1696 | yes       | NULL                       | replicating | 0:29188         | 0:29188           | 0:29188         | 0:29188       |          1696 |
| master      | db_name_2                    | NULL                                         | replicating  | 0:29188          | 0:29188            | 0:29188          | 0:29188        |           1696 | yes       | 127.0.0.1:20001/db_name_2  | replicating | 0:29188         | 0:29188           | 0:29188         | 0:29188       |          1696 |
| async slave | db_name_2_AUTO_SLAVE         | 127.0.0.1:20001/db_name_2_AUTO_SLAVE         | replicating  | 0:29188          | 0:29188            | 0:29188          | 0:29188        |           1696 | yes       | NULL                       | replicating | 0:29188         | 0:29188           | 0:29188         | 0:29188       |          1696 |
| async slave | db_name_4                    | 127.0.0.1:10001/db_name_4                    | online       | 0:30611          | 0:30611            | 0:30611          | 0:30611        |           1798 | yes       | NULL                       | replicating | 0:30611         | 0:30611           | 0:30611         | 0:30611       |          1798 |
| master      | db_name_4                    | NULL                                         | replicating  | 0:30611          | 0:30611            | 0:30611          | 0:30611        |           1798 | yes       | 127.0.0.1:20001/db_name_4  | replicating | 0:30611         | 0:30611           | 0:30611         | 0:30611       |          1798 |
| async slave | db_name_4_AUTO_SLAVE         | 127.0.0.1:20001/db_name_4_AUTO_SLAVE         | replicating  | 0:30611          | 0:30611            | 0:30611          | 0:30611        |           1798 | yes       | NULL                       | replicating | 0:30611         | 0:30611           | 0:30611         | 0:30611       |          1798 |
| async slave | db_name_6                    | 127.0.0.1:10001/db_name_6                    | online       | 0:30573          | 0:30573            | 0:30573          | 0:30573        |           1797 | yes       | NULL                       | replicating | 0:30573         | 0:30573           | 0:30573         | 0:30573       |          1797 |
| master      | db_name_6                    | NULL                                         | replicating  | 0:30573          | 0:30573            | 0:30573          | 0:30573        |           1797 | yes       | 127.0.0.1:20001/db_name_6  | replicating | 0:30573         | 0:30573           | 0:30573         | 0:30573       |          1797 |
| async slave | db_name_6_AUTO_SLAVE         | 127.0.0.1:20001/db_name_6_AUTO_SLAVE         | replicating  | 0:30573          | 0:30573            | 0:30573          | 0:30573        |           1797 | yes       | NULL                       | replicating | 0:30573         | 0:30573           | 0:30573         | 0:30573       |          1797 |
| async slave | db_name_8                    | 127.0.0.1:10001/db_name_8                    | online       | 0:29812          | 0:29812            | 0:29812          | 0:29812        |           1735 | yes       | NULL                       | replicating | 0:29812         | 0:29812           | 0:29812         | 0:29812       |          1735 |
| master      | db_name_8                    | NULL                                         | replicating  | 0:29812          | 0:29812            | 0:29812          | 0:29812        |           1735 | yes       | 127.0.0.1:20001/db_name_8  | replicating | 0:29812         | 0:29812           | 0:29812         | 0:29812       |          1735 |
| async slave | db_name_8_AUTO_SLAVE         | 127.0.0.1:20001/db_name_8_AUTO_SLAVE         | replicating  | 0:29812          | 0:29812            | 0:29812          | 0:29812        |           1735 | yes       | NULL                       | replicating | 0:29812         | 0:29812           | 0:29812         | 0:29812       |          1735 |
+-------------+------------------------------+----------------------------------------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+----------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+

In this sample output, the db_name line refers to replication of the reference database (metadata) for db_name. This data is replicated from primary-MA to secondary-MA, from which it is replicated to each leaf node in the secondary cluster. The db_name_[ordinal] lines refer to replication of the partitions of the sharded database db_name. As you can see, partition data is replicated directly from leaf nodes in the primary cluster to leaf nodes in the secondary cluster; in this example, secondary-L1 receives the even-numbered partitions directly from the primary leaf at 127.0.0.1:10001 (primary-L1).

Finally, note that MemSQL will automatically take the steps necessary to ensure the secondary cluster is consistent with the primary cluster. For example, if a leaf node fails in a primary cluster with redundancy 2, a replica partition on the secondary cluster may have gotten ahead of the newly promoted master partition on the primary cluster (due to network or other irregularity). In that case, MemSQL will automatically drop and reprovision the replica partition on the secondary cluster so that it is consistent with the promoted master partition on the primary cluster.

Replication Compatibility Between Different Cluster Versions

In general, you can replicate between different versions of MemSQL provided the following are true:

  • The version on the source cluster (e.g. 5.5) is older than the version deployed on the destination cluster (e.g. 5.8)

    Newer versions of MemSQL can introduce replication functionality changes that are incompatible with older versions, which is why you cannot replicate from a newer version of MemSQL to an older version.

  • You do not try to replicate from a source cluster running 5.8 or older to a destination cluster running 6.0 or later

    This restriction is due to functionality changes introduced in 6.0. You must first upgrade your source cluster to 6.0 or later.
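Before setting up replication, you can check which version each cluster is running. A minimal sketch, assuming the standard memsql_version variable is exposed on your version; run it on each cluster’s master aggregator:

memsql> SHOW VARIABLES LIKE 'memsql_version';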

Failover and Promotion

There are a number of failure cases to consider when discussing MemSQL replication.

Primary Cluster Master Aggregator Failure

If the master aggregator for the primary cluster fails, replication of reference tables and metadata will stop, but distributed tables will continue replicating because this data flows directly from leaves on the primary cluster to leaves on the secondary cluster. To resume replication of reference tables and metadata, do the following:

  1. Pause replication from the master aggregator of the secondary cluster using PAUSE REPLICATING db_name.
  2. Promote a child aggregator in the primary cluster to master. For more information see AGGREGATOR SET AS MASTER.
  3. Resume replication from the master aggregator of the secondary cluster pointed at the new primary cluster master aggregator using the following command:
memsql> CONTINUE REPLICATING db_name FROM primary_user[:primary_password]@primary-MA[:port][/db_name];
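For example, if the newly promoted master aggregator is reachable as new-primary-MA (a hypothetical hostname) and the root user has the password root_password:

memsql> CONTINUE REPLICATING db_name FROM root:root_password@new-primary-MA:3306;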

Secondary Cluster Master Aggregator Failure

If the master aggregator for the secondary cluster fails, then, as in the previous failure case, replication of reference tables and metadata will stop, but distributed tables will continue replicating because this data flows directly from leaves on the primary cluster to leaves on the secondary cluster. To resume replication of reference tables and metadata, do the following:

  1. Promote a child aggregator in the secondary cluster to master. For more information see AGGREGATOR SET AS MASTER.
  2. Pause replication from the new master aggregator of the secondary cluster using PAUSE REPLICATING db_name.
  3. Resume replication from the new master aggregator of the secondary cluster pointed at the primary cluster master aggregator using the following command:
memsql> CONTINUE REPLICATING db_name FROM primary_user[:primary_password]@primary-MA[:port][/db_name];

Failover to a Secondary Cluster

If the primary cluster fails and it becomes necessary to failover to the secondary cluster, do the following:

  1. Stop replication by running STOP REPLICATING db_name on the secondary cluster master aggregator. The secondary cluster is now a “full” MemSQL cluster with full read, write, DDL, and DML capabilities.
  2. Re-point your application at an aggregator in the secondary cluster. Note that, after running STOP REPLICATING, you cannot resume replicating from the primary cluster.

Tuning Replication

MemSQL Replication is tuned by default to be efficient and fast for most workloads. However, MemSQL exposes several configuration hooks to help you fine-tune performance.

Snapshot and Log Files

MemSQL replication works by transferring snapshot and log files from master to replica. When replication is initiated, the replica requests the snapshot file from the master and proceeds to provision from the snapshot. Once provisioning is complete, replication proceeds by shipping transactions directly from the log file.
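You can also trigger a snapshot manually, for example to compact the log before initiating replication. A minimal sketch, assuming the SNAPSHOT DATABASE command is available on your version:

memsql> SNAPSHOT DATABASE db_name;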

As described in Durability Configuration, MemSQL exposes the snapshots-to-keep variable to let you tune how many old versions of the snapshot and log files to keep around. MemSQL will automatically delete files that fall out of this window. In the context of replication, if a network outage causes the replica to fall so far behind that its position in the log has been rotated out, then it is re-provisioned from the current snapshot so that replication can proceed from an existing log file.

You can tune snapshot-trigger-size and snapshots-to-keep to optimize the server for your network. A larger snapshot-trigger-size increases the length of each log and therefore offers more tolerance in the event of a sluggish network. However, it will decrease the frequency at which snapshots are taken and increase MemSQL recovery time due to larger logs (snapshot recovery is parallel, log recovery is single threaded).

By increasing snapshots-to-keep, you can effectively increase how long log files are kept around. If you increase these parameters, make sure to allocate enough disk space to account for the larger (snapshot-trigger-size) and extra (snapshots-to-keep) files. If a replica partition falls more than snapshots-to-keep snapshots behind the master partition, the primary cluster master aggregator will automatically reprovision that replica partition.
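A minimal sketch of adjusting these settings, assuming they are exposed as the engine variables snapshot_trigger_size (in bytes) and snapshots_to_keep and can be set at runtime with SET GLOBAL on your version; otherwise, set them in memsql.cnf and restart the node:

memsql> SET GLOBAL snapshot_trigger_size = 2147483648;
memsql> SET GLOBAL snapshots_to_keep = 3;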

Using mysqldump to Extract Data From a Secondary Database

When mysqldump is run on a secondary database following the instructions in Exporting Data From MemSQL, an error will occur. This error happens because mysqldump runs LOCK TABLES, which isn’t permitted on a secondary database. mysqldump can be configured to avoid locking tables by passing the option --lock-tables=false. So, to take a consistent mysqldump of a secondary database called secondary_db, we recommend the following:
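The following is a sketch of the recommended sequence, assuming the secondary cluster’s master aggregator is reachable as secondary-MA; the user and dump file name are placeholders:

memsql> PAUSE REPLICATING secondary_db;

$ mysqldump -h secondary-MA -u root --lock-tables=false secondary_db > secondary_db.sql

memsql> CONTINUE REPLICATING secondary_db;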

Note that pausing replication is only required if you want a consistent mysqldump when concurrent writes are happening on the master.