This topic does not apply to MemSQL Helios.
If you are managing your cluster with MemSQL Ops, go here instead.
If your MemSQL workload changes over time, the MemSQL management tools make it easy to efficiently scale cluster size to meet increasing performance and sizing demands. You can add more nodes to your MemSQL cluster by following the steps below.
Be aware of the following when adding leaf nodes:
Once a database has been created, you cannot change the number of partitions it was originally created with.
This means:
- To maintain the current number of database partitions per leaf, you must export all data, delete the database, recreate the database (with a partition count suited to the new number of leaves) and all tables, and load the data back into the database’s tables.
- To maintain an equal number of partitions per leaf, the total number of partitions must be divisible by the new total number of leaves. For example, suppose a four-leaf cluster has been created with four partitions per leaf (16 partitions total). If only two leaf nodes are added, the total number of leaves is six; however, 16 / 6 is not an integer, meaning there cannot be an equal number of partitions on each leaf. This will cause data skew. If four additional leaves are added instead, 16 / 8 = 2, which means there can be an equal number of partitions on each leaf (two).
Note that if you keep the existing number of partitions, you don’t have to do any reloading; only a rebalance is required.
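To check how many partitions an existing database was created with, you can run SHOW PARTITIONS ON <database_name> and count the ordinals (as in the examples later in this topic), or query the information schema, assuming your version exposes NUM_PARTITIONS in DISTRIBUTED_DATABASES:
SELECT DATABASE_NAME, NUM_PARTITIONS
FROM INFORMATION_SCHEMA.DISTRIBUTED_DATABASES;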
Before you begin
Confirm host machine and leaf configurations are the same across new and existing leaves. Configurations for sizing (RAM, cores) and OS tuning (ulimit settings, THP settings, etc.) should match to ensure uniform query performance and cluster health.
See System Requirements and Recommendations for more details.
It’s also important to consider which non-sync engine variables have been modified when increasing your cluster size. Modified non-sync variable settings are not automatically applied to new nodes when they are added to the cluster; therefore, when scaling your cluster, each new node’s non-sync variables must be updated with the same values as on the other nodes.
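For example, you can compare settings by running SHOW GLOBAL VARIABLES on an existing node and on each new node, then apply any differing non-sync values to the new nodes. A sketch using Toolbox, assuming your version of sdb-admin supports update-config with these flags:
sdb-admin update-config --memsql-id <MemSQL_ID> --key <variable_name> --value <value> --set-global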
Also, generate a cluster report using the sdb-report collect command prior to adding nodes. This report will be useful for troubleshooting if any issues arise during the cluster expansion process.
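For example:
sdb-report collect
This typically writes a timestamped report archive to the local directory; keep it until the expansion is complete.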
Add more leaf nodes to your cluster
- Log into the host machine that has the MemSQL management tools installed on it. You will use this host machine to add new nodes to your cluster.
- If your new nodes will reside on a new host machine (or set of machines) that wasn’t previously part of the cluster, then you must register the host machine(s). If you use a certificate to SSH into your machines, you can use it here with the optional --identity-file flag, and sdb-toolbox-config will use it when connecting to the host machine.
sdb-toolbox-config register-host --host <hostname|IP>
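For example, to register a new host over SSH with a key file (the IP address here is illustrative):
sdb-toolbox-config register-host --host 172.17.0.3 --identity-file ~/.ssh/id_rsa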
- Download and install the memsql-server package (containing the MemSQL engine and memsqlctl) onto the target host(s).
sdb-deploy install --host <hostname-or-IP-address> --version <memsql-server-version>
Note:
- If you don’t specify a value for the --version flag, the latest released version of the memsql-server package will be installed.
- If you added more than one host to your cluster, you can pass in additional --host flags (e.g. --host 172.17.0.1 --host 172.17.0.2).
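For instance, to install the same engine version as the existing nodes (7.0.19 in the example outputs later in this topic) on two new hosts at once:
sdb-deploy install --host 172.17.0.1 --host 172.17.0.2 --version 7.0.19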
- Run sdb-admin create-node. Specify the hostname that you registered earlier and set a secure password for the root database user. If the default port of 3306 is already being used by a MemSQL node on the target host machine, specify another port with --port <port>.
sdb-admin create-node --host <hostname|IP> --password <secure_pass>
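For example, to create the node on port 3307 instead of the default:
sdb-admin create-node --host <hostname|IP> --password <secure_pass> --port 3307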
- Assign the new node a role such as aggregator or leaf using either add-aggregator or add-leaf, respectively. If you are running in high availability (i.e., your master aggregator is set to --redundancy-level 2), you can specify an availability group for your new leaf nodes with --availability-group {1,2}, or let it be auto-assigned to the smaller of the two groups.
sdb-admin add-leaf --memsql-id <MemSQL_ID>
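For example, to pin the new leaf to availability group 2 explicitly:
sdb-admin add-leaf --memsql-id <MemSQL_ID> --availability-group 2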
- Verify that the new node has been added successfully by running sdb-admin list-nodes again.
Redistribute data across the cluster
- Once you have successfully added your new nodes, connect to the host machine running the master aggregator. The singlestore-client package contains a client application that you can use to connect to your database by simply running singlestore at the command prompt.
- As a best practice, run the SNAPSHOT DATABASE command to bring transaction logs up to date. As a general rule, taking a snapshot is helpful before any maintenance/clustering operation where data is being moved or recovered as it compresses files by combining multiple statements into one.
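For example (the database name is a placeholder):
SNAPSHOT DATABASE <database_name>;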
- Run REBALANCE PARTITIONS; on all databases.
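If you have many databases, recent engine versions also support rebalancing all of them in one statement; verify that your version supports this form:
REBALANCE ALL DATABASES;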
- Check memory and disk usage through SingleStore DB Studio.
- On any aggregator, enter the SQL interface and output SHOW CLUSTER STATUS into a file you can examine to make sure partitions are evenly distributed (see the example at the end of this section).
- Check for data skew by running the following query in the SQL interface on any aggregator:
SELECT
DATABASE_NAME,
TABLE_NAME,
FLOOR(AVG(ROWS)) AS avg_rows,
ROUND(STDDEV(ROWS)/AVG(ROWS),3) * 100 AS row_skew,
FLOOR(AVG(MEMORY_USE)) AS avg_memory,
ROUND(STDDEV(MEMORY_USE)/AVG(MEMORY_USE),3) * 100 AS memory_skew
FROM INFORMATION_SCHEMA.TABLE_STATISTICS
GROUP BY 1, 2
HAVING SUM(ROWS) > 10000
ORDER BY row_skew DESC;
For more information about detecting and measuring data skew, see the Data Skew topic.
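To capture SHOW CLUSTER STATUS output into a file, as mentioned above, one option (assuming the singlestore client accepts the familiar mysql-style -e flag along with your usual connection options) is:
singlestore -e "SHOW CLUSTER STATUS" > cluster_status.txt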
Cluster Expansion in DR Replication
Expanding either the primary or secondary cluster in disaster recovery (DR) replication is the same process as if DR replication were not enabled. Either cluster can be expanded without impacting the other.
These steps apply to clusters that are managed with SingleStore DB Tools.
In this example, the primary cluster has 2 hosts, 2 leaves, and 8 partitions. Note that the following commands are run from the command line on the Master Aggregator host (54.90.20.80 in this example cluster).
sdb-admin list-nodes
****
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID | Role | Host | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| D3F1BD9E0E | Master | 54.90.20.80 | 3306 | Running | True | 7.0.19 | Online | | 0.0.0.0 |
| 5FF108F2F0 | Leaf | 100.26.238.187 | 3307 | Running | True | 7.0.19 | Online | 2 | 0.0.0.0 |
| 2B397AE7A1 | Leaf | 54.90.20.80 | 3307 | Running | True | 7.0.19 | Online | 1 | 0.0.0.0 |
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
show leaves;
****
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
| Host | Port | Availability_Group | Pair_Host | Pair_Port | State | Opened_Connections | Average_Roundtrip_Latency_ms | NodeId | Grace_Period_In_seconds |
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
| 100.26.238.187 | 3307 | 2 | 54.90.20.80 | 3307 | online | 15 | 0.213 | 3 | NULL |
| 54.90.20.80 | 3307 | 1 | 100.26.238.187 | 3307 | online | 9 | 0.177 | 4 | NULL |
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
show partitions on memsql_demo;
****
+---------+----------------+------+--------+--------+
| Ordinal | Host | Port | Role | Locked |
+---------+----------------+------+--------+--------+
| 0 | 54.90.20.80 | 3307 | Master | 0 |
| 0 | 100.26.238.187 | 3307 | Slave | 0 |
| 1 | 54.90.20.80 | 3307 | Master | 0 |
| 1 | 100.26.238.187 | 3307 | Slave | 0 |
| 2 | 54.90.20.80 | 3307 | Master | 0 |
| 2 | 100.26.238.187 | 3307 | Slave | 0 |
| 3 | 54.90.20.80 | 3307 | Master | 0 |
| 3 | 100.26.238.187 | 3307 | Slave | 0 |
| 4 | 54.90.20.80 | 3307 | Master | 0 |
| 4 | 100.26.238.187 | 3307 | Slave | 0 |
| 5 | 54.90.20.80 | 3307 | Master | 0 |
| 5 | 100.26.238.187 | 3307 | Slave | 0 |
| 6 | 54.90.20.80 | 3307 | Master | 0 |
| 6 | 100.26.238.187 | 3307 | Slave | 0 |
| 7 | 54.90.20.80 | 3307 | Master | 0 |
| 7 | 100.26.238.187 | 3307 | Slave | 0 |
+---------+----------------+------+--------+--------+
Add new nodes
Add two new nodes to the primary cluster.
sdb-admin create-node --host 54.90.20.80 -pmemsql --port 3308
****
Toolbox is about to perform the following actions on host 54.90.20.80:
· Run 'memsqlctl create-node --port 3308 --password ●●●●●●'
Would you like to continue? [y/N]: y
+------------------------------------------+-------------+
| MemSQL ID | Host |
+------------------------------------------+-------------+
| EC862275668F1529AC8F9D78F87896E04BD5DD84 | 54.90.20.80 |
+------------------------------------------+-------------+
sdb-admin create-node --host 100.26.238.187 -pmemsql --port 3308
****
Toolbox is about to perform the following actions on host 100.26.238.187:
· Run 'memsqlctl create-node --port 3308 --password ●●●●●●'
Would you like to continue? [y/N]: y
+------------------------------------------+----------------+
| MemSQL ID | Host |
+------------------------------------------+----------------+
| B8C6534BAE86981FDA49C337390CCA00F89478C9 | 100.26.238.187 |
+------------------------------------------+----------------+
Designate these new nodes as leaves
sdb-admin list-nodes
****
+------------+---------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID | Role | Host | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+---------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| D3F1BD9E0E | Master | 54.90.20.80 | 3306 | Running | True | 7.0.19 | Online | | 0.0.0.0 |
| 5FF108F2F0 | Leaf | 100.26.238.187 | 3307 | Running | True | 7.0.19 | Online | 2 | 0.0.0.0 |
| 2B397AE7A1 | Leaf | 54.90.20.80 | 3307 | Running | True | 7.0.19 | Online | 1 | 0.0.0.0 |
| B8C6534BAE | Unknown | 100.26.238.187 | 3308 | Running | True | 7.0.19 | Online | | 0.0.0.0 |
| EC86227566 | Unknown | 54.90.20.80 | 3308 | Running | True | 7.0.19 | Online | | 0.0.0.0 |
+------------+---------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
sdb-admin add-leaf -pmemsql --memsql-id B8C6534BAE
****
Toolbox will perform the following actions on host 54.90.20.80:
· Run 'memsqlctl add-leaf --host 100.26.238.187 --port 3308 --user root --password ●●●●●●'
Would you like to continue? [y/N]: y
✓ Successfully ran 'memsqlctl add-leaf'
Operation completed successfully
sdb-admin add-leaf -pmemsql --memsql-id EC86227566
****
Toolbox will perform the following actions on host 54.90.20.80:
· Run 'memsqlctl add-leaf --host 54.90.20.80 --port 3308 --user root --password ●●●●●●'
Would you like to continue? [y/N]: y
✓ Successfully ran 'memsqlctl add-leaf'
Operation completed successfully
There are now two additional leaves on the primary cluster, bringing the total to four. However, all of the cluster’s partitions still reside on the original two leaves; the two new leaves don’t yet contain any data.
sdb-admin list-nodes
****
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID | Role | Host | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| D3F1BD9E0E | Master | 54.90.20.80 | 3306 | Running | True | 7.0.19 | Online | | 0.0.0.0 |
| 5FF108F2F0 | Leaf | 100.26.238.187 | 3307 | Running | True | 7.0.19 | Online | 2 | 0.0.0.0 |
| B8C6534BAE | Leaf | 100.26.238.187 | 3308 | Running | True | 7.0.19 | Online | 1 | 0.0.0.0 |
| 2B397AE7A1 | Leaf | 54.90.20.80 | 3307 | Running | True | 7.0.19 | Online | 1 | 0.0.0.0 |
| EC86227566 | Leaf | 54.90.20.80 | 3308 | Running | True | 7.0.19 | Online | 2 | 0.0.0.0 |
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
show leaves;
****
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
| Host | Port | Availability_Group | Pair_Host | Pair_Port | State | Opened_Connections | Average_Roundtrip_Latency_ms | NodeId | Grace_Period_In_seconds |
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
| 100.26.238.187 | 3307 | 2 | 54.90.20.80 | 3307 | online | 13 | 0.238 | 3 | NULL |
| 54.90.20.80 | 3307 | 1 | 100.26.238.187 | 3307 | online | 7 | 0.180 | 4 | NULL |
| 100.26.238.187 | 3308 | 1 | 54.90.20.80 | 3308 | online | 2 | 0.167 | 5 | NULL |
| 54.90.20.80 | 3308 | 2 | 100.26.238.187 | 3308 | online | 2 | 0.134 | 6 | NULL |
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
show partitions on memsql_demo;
****
+---------+----------------+------+--------+--------+
| Ordinal | Host | Port | Role | Locked |
+---------+----------------+------+--------+--------+
| 0 | 100.26.238.187 | 3307 | Master | 0 |
| 0 | 54.90.20.80 | 3307 | Slave | 0 |
| 1 | 100.26.238.187 | 3307 | Master | 0 |
| 1 | 54.90.20.80 | 3307 | Slave | 0 |
| 2 | 100.26.238.187 | 3307 | Master | 0 |
| 2 | 54.90.20.80 | 3307 | Slave | 0 |
| 3 | 100.26.238.187 | 3307 | Master | 0 |
| 3 | 54.90.20.80 | 3307 | Slave | 0 |
| 4 | 100.26.238.187 | 3307 | Master | 0 |
| 4 | 54.90.20.80 | 3307 | Slave | 0 |
| 5 | 100.26.238.187 | 3307 | Master | 0 |
| 5 | 54.90.20.80 | 3307 | Slave | 0 |
| 6 | 100.26.238.187 | 3307 | Master | 0 |
| 6 | 54.90.20.80 | 3307 | Slave | 0 |
| 7 | 100.26.238.187 | 3307 | Master | 0 |
| 7 | 54.90.20.80 | 3307 | Slave | 0 |
+---------+----------------+------+--------+--------+
Deploy data to the new leaves
As a best practice, run the SNAPSHOT DATABASE command to bring transaction logs up to date. As a general rule, taking a snapshot is helpful before any maintenance/clustering operation where data is being moved or recovered as it compresses files by combining multiple statements into one.
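For the example primary cluster, this would be:
SNAPSHOT DATABASE memsql_demo;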
Rebalance the partitions on the primary cluster to deploy them to the new leaves.
While the primary cluster is rebalancing, the secondary cluster is connected, but replication is momentarily paused as the primary cluster switches masters. After this pause, writes and replication will proceed as usual through the remainder of the rebalance. This is the same behavior as if the primary cluster were restarted.
rebalance partitions on memsql_demo;
****
Query OK, 1 row affected (22.43 sec)
show partitions on memsql_demo;
****
+---------+----------------+------+--------+--------+
| Ordinal | Host | Port | Role | Locked |
+---------+----------------+------+--------+--------+
| 0 | 100.26.238.187 | 3308 | Master | 0 |
| 0 | 54.90.20.80 | 3308 | Slave | 0 |
| 1 | 54.90.20.80 | 3308 | Master | 0 |
| 1 | 100.26.238.187 | 3308 | Slave | 0 |
| 2 | 100.26.238.187 | 3308 | Master | 0 |
| 2 | 54.90.20.80 | 3308 | Slave | 0 |
| 3 | 54.90.20.80 | 3308 | Master | 0 |
| 3 | 100.26.238.187 | 3308 | Slave | 0 |
| 4 | 100.26.238.187 | 3307 | Slave | 0 |
| 4 | 54.90.20.80 | 3307 | Master | 0 |
| 5 | 100.26.238.187 | 3307 | Slave | 0 |
| 5 | 54.90.20.80 | 3307 | Master | 0 |
| 6 | 100.26.238.187 | 3307 | Master | 0 |
| 6 | 54.90.20.80 | 3307 | Slave | 0 |
| 7 | 100.26.238.187 | 3307 | Master | 0 |
| 7 | 54.90.20.80 | 3307 | Slave | 0 |
+---------+----------------+------+--------+--------+
The rebalance has no effect on the secondary cluster: the database still has 8 partitions, and they remain on the same leaves of the secondary cluster.
Secondary cluster before the rebalance:
show partitions on memsql_repl;
****
+---------+----------------+------+--------+--------+
| Ordinal | Host | Port | Role | Locked |
+---------+----------------+------+--------+--------+
| 0 | 54.236.46.17 | 3307 | Master | 0 |
| 0 | 18.234.106.201 | 3307 | Slave | 0 |
| 1 | 54.236.46.17 | 3307 | Master | 0 |
| 1 | 18.234.106.201 | 3307 | Slave | 0 |
| 2 | 54.236.46.17 | 3307 | Master | 0 |
| 2 | 18.234.106.201 | 3307 | Slave | 0 |
| 3 | 54.236.46.17 | 3307 | Master | 0 |
| 3 | 18.234.106.201 | 3307 | Slave | 0 |
| 4 | 54.236.46.17 | 3307 | Master | 0 |
| 4 | 18.234.106.201 | 3307 | Slave | 0 |
| 5 | 54.236.46.17 | 3307 | Master | 0 |
| 5 | 18.234.106.201 | 3307 | Slave | 0 |
| 6 | 54.236.46.17 | 3307 | Master | 0 |
| 6 | 18.234.106.201 | 3307 | Slave | 0 |
| 7 | 54.236.46.17 | 3307 | Master | 0 |
| 7 | 18.234.106.201 | 3307 | Slave | 0 |
+---------+----------------+------+--------+--------+
Secondary cluster after the rebalance:
show partitions on memsql_repl;
****
+---------+----------------+------+--------+--------+
| Ordinal | Host | Port | Role | Locked |
+---------+----------------+------+--------+--------+
| 0 | 54.236.46.17 | 3307 | Master | 0 |
| 0 | 18.234.106.201 | 3307 | Slave | 0 |
| 1 | 54.236.46.17 | 3307 | Master | 0 |
| 1 | 18.234.106.201 | 3307 | Slave | 0 |
| 2 | 54.236.46.17 | 3307 | Master | 0 |
| 2 | 18.234.106.201 | 3307 | Slave | 0 |
| 3 | 54.236.46.17 | 3307 | Master | 0 |
| 3 | 18.234.106.201 | 3307 | Slave | 0 |
| 4 | 54.236.46.17 | 3307 | Master | 0 |
| 4 | 18.234.106.201 | 3307 | Slave | 0 |
| 5 | 54.236.46.17 | 3307 | Master | 0 |
| 5 | 18.234.106.201 | 3307 | Slave | 0 |
| 6 | 54.236.46.17 | 3307 | Master | 0 |
| 6 | 18.234.106.201 | 3307 | Slave | 0 |
| 7 | 54.236.46.17 | 3307 | Master | 0 |
| 7 | 18.234.106.201 | 3307 | Slave | 0 |
+---------+----------------+------+--------+--------+
As a rule, if you expand the primary cluster’s storage capacity and/or memory, you must do the same to the secondary cluster.
As the primary cluster has been expanded, the secondary cluster will require additional space to store the additional data from the primary cluster.
Expanding the secondary cluster also provides more cores for read/analytics workloads.
In the example above, memsql_demo from the primary cluster was replicated to memsql_repl on the secondary cluster. The secondary cluster has 2 hosts and 2 leaves. As memsql_demo has 8 partitions, memsql_repl also has 8 partitions.
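For reference, a replication pair like this is typically set up by running REPLICATE DATABASE on the secondary cluster’s master aggregator; a sketch with placeholder credentials might look like:
REPLICATE DATABASE memsql_repl FROM root:'<secure_pass>'@54.90.20.80:3306/memsql_demo;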
sdb-admin list-nodes
****
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID | Role | Host | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| A5B544AC0D | Master | 18.234.106.201 | 3306 | Running | True | 7.0.19 | Online | | 0.0.0.0 |
| CAE7ABC3BF | Leaf | 18.234.106.201 | 3307 | Running | True | 7.0.19 | Online | 1 | 0.0.0.0 |
| 6CF5699EE0 | Leaf | 54.236.46.17 | 3307 | Running | True | 7.0.19 | Online | 2 | 0.0.0.0 |
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
show leaves;
****
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
| Host | Port | Availability_Group | Pair_Host | Pair_Port | State | Opened_Connections | Average_Roundtrip_Latency_ms | NodeId | Grace_Period_In_seconds |
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
| 54.236.46.17 | 3307 | 2 | 18.234.106.201 | 3307 | online | 4 | 0.193 | 3 | NULL |
| 18.234.106.201 | 3307 | 1 | 54.236.46.17 | 3307 | online | 2 | 0.136 | 4 | NULL |
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
show partitions on memsql_repl;
****
+---------+----------------+------+--------+--------+
| Ordinal | Host | Port | Role | Locked |
+---------+----------------+------+--------+--------+
| 0 | 54.236.46.17 | 3307 | Master | 0 |
| 0 | 18.234.106.201 | 3307 | Slave | 0 |
| 1 | 54.236.46.17 | 3307 | Master | 0 |
| 1 | 18.234.106.201 | 3307 | Slave | 0 |
| 2 | 54.236.46.17 | 3307 | Master | 0 |
| 2 | 18.234.106.201 | 3307 | Slave | 0 |
| 3 | 54.236.46.17 | 3307 | Master | 0 |
| 3 | 18.234.106.201 | 3307 | Slave | 0 |
| 4 | 54.236.46.17 | 3307 | Master | 0 |
| 4 | 18.234.106.201 | 3307 | Slave | 0 |
| 5 | 54.236.46.17 | 3307 | Master | 0 |
| 5 | 18.234.106.201 | 3307 | Slave | 0 |
| 6 | 54.236.46.17 | 3307 | Master | 0 |
| 6 | 18.234.106.201 | 3307 | Slave | 0 |
| 7 | 54.236.46.17 | 3307 | Master | 0 |
| 7 | 18.234.106.201 | 3307 | Slave | 0 |
+---------+----------------+------+--------+--------+
Add new nodes
Add two new nodes to the secondary cluster.
sdb-admin create-node --host 18.234.106.201 -pmemsql --port 3308
****
Toolbox is about to perform the following actions on host 18.234.106.201:
· Run 'memsqlctl create-node --port 3308 --password ●●●●●●'
Would you like to continue? [y/N]: y
+------------------------------------------+----------------+
| MemSQL ID | Host |
+------------------------------------------+----------------+
| C43AAB03E4174A660CA1AE2B98C60A4D2CD59D68 | 18.234.106.201 |
+------------------------------------------+----------------+
sdb-admin create-node --host 18.234.106.201 -pmemsql --port 3309
****
Toolbox is about to perform the following actions on host 18.234.106.201:
· Run 'memsqlctl create-node --port 3309 --password ●●●●●●'
Would you like to continue? [y/N]: y
+------------------------------------------+----------------+
| MemSQL ID | Host |
+------------------------------------------+----------------+
| 183E23A7577F113A7918B0E87299832D83A96F9B | 18.234.106.201 |
+------------------------------------------+----------------+
sdb-admin list-nodes
****
+------------+---------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID | Role | Host | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+---------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| A5B544AC0D | Master | 18.234.106.201 | 3306 | Running | True | 7.0.19 | Online | | 0.0.0.0 |
| CAE7ABC3BF | Leaf | 18.234.106.201 | 3307 | Running | True | 7.0.19 | Online | 1 | 0.0.0.0 |
| 6CF5699EE0 | Leaf | 54.236.46.17 | 3307 | Running | True | 7.0.19 | Online | 2 | 0.0.0.0 |
| C43AAB03E4 | Unknown | 18.234.106.201 | 3308 | Running | True | 7.0.19 | Online | | 0.0.0.0 |
| 183E23A757 | Unknown | 18.234.106.201 | 3309 | Running | True | 7.0.19 | Online | | 0.0.0.0 |
+------------+---------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
Designate these new nodes as leaves
sdb-admin add-leaf -pmemsql --memsql-id C43AAB03E4
****
Toolbox will perform the following actions on host 18.234.106.201:
· Run 'memsqlctl add-leaf --host 18.234.106.201 --port 3308 --user root --password ●●●●●●'
Would you like to continue? [y/N]: y
✓ Successfully ran 'memsqlctl add-leaf'
Operation completed successfully
sdb-admin add-leaf -pmemsql --memsql-id 183E23A757
****
Toolbox will perform the following actions on host 18.234.106.201:
· Run 'memsqlctl add-leaf --host 18.234.106.201 --port 3309 --user root --password ●●●●●●'
Would you like to continue? [y/N]: y
✓ Successfully ran 'memsqlctl add-leaf'
Operation completed successfully
sdb-admin list-nodes
****
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID | Role | Host | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| A5B544AC0D | Master | 18.234.106.201 | 3306 | Running | True | 7.0.19 | Online | | 0.0.0.0 |
| CAE7ABC3BF | Leaf | 18.234.106.201 | 3307 | Running | True | 7.0.19 | Online | 1 | 0.0.0.0 |
| C43AAB03E4 | Leaf | 18.234.106.201 | 3308 | Running | True | 7.0.19 | Online | 1 | 0.0.0.0 |
| 183E23A757 | Leaf | 18.234.106.201 | 3309 | Running | True | 7.0.19 | Online | 2 | 0.0.0.0 |
| 6CF5699EE0 | Leaf | 54.236.46.17 | 3307 | Running | True | 7.0.19 | Online | 2 | 0.0.0.0 |
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
show leaves;
****
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
| Host | Port | Availability_Group | Pair_Host | Pair_Port | State | Opened_Connections | Average_Roundtrip_Latency_ms | NodeId | Grace_Period_In_seconds |
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
| 54.236.46.17 | 3307 | 2 | 18.234.106.201 | 3307 | online | 2 | 0.199 | 3 | NULL |
| 18.234.106.201 | 3307 | 1 | 54.236.46.17 | 3307 | online | 1 | 0.133 | 4 | NULL |
| 18.234.106.201 | 3308 | 1 | 18.234.106.201 | 3309 | online | 1 | 0.257 | 5 | NULL |
| 18.234.106.201 | 3309 | 2 | 18.234.106.201 | 3308 | online | 2 | 1.290 | 6 | NULL |
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
Even though the primary cluster has been rebalanced, and there are new leaves on the secondary cluster, the new leaves don’t have any partitions and thus cannot store any data.
show partitions on memsql_repl;
****
+---------+----------------+------+--------+--------+
| Ordinal | Host | Port | Role | Locked |
+---------+----------------+------+--------+--------+
| 0 | 54.236.46.17 | 3307 | Master | 0 |
| 0 | 18.234.106.201 | 3307 | Slave | 0 |
| 1 | 54.236.46.17 | 3307 | Master | 0 |
| 1 | 18.234.106.201 | 3307 | Slave | 0 |
| 2 | 54.236.46.17 | 3307 | Master | 0 |
| 2 | 18.234.106.201 | 3307 | Slave | 0 |
| 3 | 54.236.46.17 | 3307 | Master | 0 |
| 3 | 18.234.106.201 | 3307 | Slave | 0 |
| 4 | 54.236.46.17 | 3307 | Master | 0 |
| 4 | 18.234.106.201 | 3307 | Slave | 0 |
| 5 | 54.236.46.17 | 3307 | Master | 0 |
| 5 | 18.234.106.201 | 3307 | Slave | 0 |
| 6 | 54.236.46.17 | 3307 | Master | 0 |
| 6 | 18.234.106.201 | 3307 | Slave | 0 |
| 7 | 54.236.46.17 | 3307 | Master | 0 |
| 7 | 18.234.106.201 | 3307 | Slave | 0 |
+---------+----------------+------+--------+--------+
Deploy data to the new leaves
As a best practice, run the SNAPSHOT DATABASE command to bring transaction logs up to date. As a general rule, taking a snapshot is helpful before any maintenance/clustering operation where data is being moved or recovered as it compresses files by combining multiple statements into one.
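Following the step above, on the secondary cluster’s master aggregator this would be:
SNAPSHOT DATABASE memsql_repl;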
Rebalance the partitions on the secondary cluster to deploy them to the new leaves.
While the secondary cluster is rebalancing, replication is momentarily paused. After this pause, writes and replication will proceed as usual through the remainder of the rebalance. This is the same behavior as if the secondary cluster were restarted, but will likely take longer depending on how much columnstore data is in the database.
rebalance partitions on memsql_repl;
****
Query OK, 1 row affected (8.44 sec)
show partitions on memsql_repl;
****
+---------+----------------+------+--------+--------+
| Ordinal | Host | Port | Role | Locked |
+---------+----------------+------+--------+--------+
| 0 | 18.234.106.201 | 3308 | Master | 0 |
| 0 | 18.234.106.201 | 3309 | Slave | 0 |
| 1 | 18.234.106.201 | 3309 | Master | 0 |
| 1 | 18.234.106.201 | 3308 | Slave | 0 |
| 2 | 18.234.106.201 | 3308 | Master | 0 |
| 2 | 18.234.106.201 | 3309 | Slave | 0 |
| 3 | 18.234.106.201 | 3309 | Master | 0 |
| 3 | 18.234.106.201 | 3308 | Slave | 0 |
| 4 | 54.236.46.17 | 3307 | Slave | 0 |
| 4 | 18.234.106.201 | 3307 | Master | 0 |
| 5 | 54.236.46.17 | 3307 | Slave | 0 |
| 5 | 18.234.106.201 | 3307 | Master | 0 |
| 6 | 54.236.46.17 | 3307 | Master | 0 |
| 6 | 18.234.106.201 | 3307 | Slave | 0 |
| 7 | 54.236.46.17 | 3307 | Master | 0 |
| 7 | 18.234.106.201 | 3307 | Slave | 0 |
+---------+----------------+------+--------+--------+
Note that the partition count remains the same.
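To double-check the count programmatically, one option is to query the information schema, assuming your version exposes INFORMATION_SCHEMA.DISTRIBUTED_PARTITIONS with these columns:
SELECT COUNT(DISTINCT ORDINAL) AS num_partitions
FROM INFORMATION_SCHEMA.DISTRIBUTED_PARTITIONS
WHERE DATABASE_NAME = 'memsql_repl';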