This topic does not apply to SingleStore Managed Service.
Orphaned partitions are partition databases present on leaf nodes that the master aggregator node does not consider part of the cluster (they are neither master nor replica partition databases). The master aggregator will drop an orphaned partition database whenever it interferes with an operation the aggregator needs to perform.
In a crash scenario, an orphaned partition can be the last valid copy of a partition. In this case, you will want to attach the orphan back into the cluster immediately.
Identifying Orphaned Partitions
To easily determine if your cluster has any orphaned partitions, run:
EXPLAIN REBALANCE PARTITIONS ON <database name>;
If there are any unbalanced partitions, a non-empty result set will be returned. If this is the case, run SHOW CLUSTER STATUS and SHOW PARTITIONS to see the details of your partitions:
Example:
SHOW CLUSTER STATUS;
+-----------+-------+----------+-----------+-------------+----------+---------+
| Host      | Port  | Database | Role      | State       | Position | Details |
+-----------+-------+----------+-----------+-------------+----------+---------+
| 127.0.0.1 | 10001 | x_test   | Reference | replicating | 0:57     |         |
| 127.0.0.1 | 10001 | x_test_0 | Master    | online      | 0:10     |         |
| 127.0.0.1 | 10001 | x_test_1 | Slave     | replicating | 0:10     |         |
| 127.0.0.1 | 10002 | x_test   | Reference | replicating | 0:57     |         |
| 127.0.0.1 | 10002 | x_test_0 | Slave     | replicating | 0:10     |         |
| 127.0.0.1 | 10002 | x_test_1 | Orphan    | online      | 0:10     |         |
+-----------+-------+----------+-----------+-------------+----------+---------+
SHOW PARTITIONS;
+---------+-----------+-------+--------+--------+
| Ordinal | Host      | Port  | Role   | Locked |
+---------+-----------+-------+--------+--------+
| 0       | 127.0.0.1 | 10001 | Master | 0      |
| 0       | 127.0.0.1 | 10002 | Slave  | 0      |
| 1       | 127.0.0.1 | 10001 | Slave  | 0      |
+---------+-----------+-------+--------+--------+
Addressing Orphans by Attaching New Partitions
Look at the results of SHOW CLUSTER STATUS and SHOW PARTITIONS from the previous section. We are missing a master for partition 1, but we have an online orphan for partition 1. Assuming this orphan contains data we need, we will want to attach it as the master:
ATTACH PARTITION x_test:1 ON '127.0.0.1':10002;
SHOW CLUSTER STATUS;
+-----------+-------+----------+-----------+-------------+----------+---------+
| Host      | Port  | Database | Role      | State       | Position | Details |
+-----------+-------+----------+-----------+-------------+----------+---------+
| 127.0.0.1 | 10001 | x_test   | Reference | replicating | 0:57     |         |
| 127.0.0.1 | 10001 | x_test_0 | Master    | online      | 0:10     |         |
| 127.0.0.1 | 10001 | x_test_1 | Slave     | replicating | 0:10     |         |
| 127.0.0.1 | 10002 | x_test   | Reference | replicating | 0:57     |         |
| 127.0.0.1 | 10002 | x_test_0 | Slave     | replicating | 0:10     |         |
| 127.0.0.1 | 10002 | x_test_1 | Master    | online      | 0:10     |         |
+-----------+-------+----------+-----------+-------------+----------+---------+
SHOW PARTITIONS;
+---------+-----------+-------+--------+--------+
| Ordinal | Host      | Port  | Role   | Locked |
+---------+-----------+-------+--------+--------+
| 0       | 127.0.0.1 | 10001 | Master | 0      |
| 0       | 127.0.0.1 | 10002 | Slave  | 0      |
| 1       | 127.0.0.1 | 10001 | Slave  | 0      |
| 1       | 127.0.0.1 | 10002 | Master | 0      |
+---------+-----------+-------+--------+--------+
Do not run high-level commands (REBALANCE PARTITIONS, RESTORE REDUNDANCY, DETACH LEAF, etc.) to attach the orphan into the cluster. High-level commands may drop the orphan.
If your cluster has many partitions, you may want to count its partitions by role to help identify any orphaned partitions.
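For example, a query along the following lines counts partition databases by role. This is a minimal sketch, not verified output from this cluster; it assumes the INFORMATION_SCHEMA.MV_CLUSTER_STATUS management view (available in recent SingleStore versions), which exposes the same rows as SHOW CLUSTER STATUS, and assumes its DATABASE_NAME and ROLE columns:
-- Hedged sketch: count partition databases of x_test by role.
-- MV_CLUSTER_STATUS and its column names are assumptions here.
SELECT ROLE, COUNT(*) AS num_partitions
FROM INFORMATION_SCHEMA.MV_CLUSTER_STATUS
WHERE DATABASE_NAME LIKE 'x_test%'  -- matches x_test and x_test_0, x_test_1, ...
GROUP BY ROLE;
A nonzero Orphan count, or a Master count lower than the database's partition count, points to partitions that need attention.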
Removing Orphaned Partitions That Are Not Needed
If you have orphaned partitions that you no longer need, you can remove all of them at once. To do so, first check which partitions will be removed:
EXPLAIN CLEAR ORPHAN DATABASES;
To remove all orphaned partitions, run:
CLEAR ORPHAN DATABASES;
Before doing this, make sure you have run SHOW PARTITIONS and SHOW CLUSTER STATUS and verified that all partitions are accounted for. CLEAR ORPHAN DATABASES cannot be undone.
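As a final check before clearing, a query along these lines can confirm that every partition of the example database has a master. This is a hedged sketch: it assumes the INFORMATION_SCHEMA.DISTRIBUTED_PARTITIONS view, which mirrors SHOW PARTITIONS output (orphans are not listed there, so an ordinal with no master row may exist only as an orphan):
-- Hedged sketch: list ordinals of x_test that have no master partition.
SELECT ORDINAL
FROM INFORMATION_SCHEMA.DISTRIBUTED_PARTITIONS
WHERE DATABASE_NAME = 'x_test'
GROUP BY ORDINAL
HAVING SUM(ROLE = 'Master') = 0;
If this returns any rows, attach the corresponding orphans first rather than clearing them.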