High Availability
An availability group is a set of leaves that store data redundantly to ensure high availability. Each availability group contains a copy of every partition in the system, some as masters and some as replicas. Currently, SingleStore DB supports up to two availability groups. You can set the number of availability groups via the `redundancy_level` variable on the master aggregator. From this point forward, we'll discuss the redundancy-2 case.
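As a sketch, enabling two availability groups involves setting `redundancy_level` on the master aggregator (the exact workflow, such as whether a restart or rebalance is needed, may vary by version):

```sql
-- On the master aggregator: enable two availability groups.
-- (Sketch only; changing redundancy_level on a live cluster may
-- require additional steps depending on your version.)
SET @@GLOBAL.redundancy_level = 2;

-- Verify the setting.
SELECT @@GLOBAL.redundancy_level;
```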
The placement of replica partitions in a cluster can be specified via the `leaf_failover_fanout` variable. SingleStore DB supports two modes for partition placement: `paired` and `load_balanced`. In `paired` mode, each leaf in an availability group has a corresponding pair node in the other availability group. Each leaf has its own master partitions, which SingleStore DB synchronizes to its pair as replica partitions. In other words, each leaf backs up its pair and vice versa; as a result, each leaf stores both master and replica partitions. In the event of a failure, SingleStore DB automatically promotes the replica partitions on the failed leaf's pair. In `load_balanced` mode, master partitions are placed evenly across the leaves, and the replicas of each leaf's master partitions are spread evenly among a set of leaves in the opposite availability group. For more information, see Managing High Availability.
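For illustration, the placement mode can be switched via the `leaf_failover_fanout` variable on the master aggregator (a sketch; accepted values and side effects on existing partitions depend on your version):

```sql
-- On the master aggregator: choose how replica partitions are placed.
-- 'paired' pairs each leaf with one node in the other group;
-- 'load_balanced' spreads replicas evenly across the opposite group.
SET @@GLOBAL.leaf_failover_fanout = 'load_balanced';
```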
By default, the ADD LEAF command adds a leaf to the smaller of the two groups. However, if you know your cluster's topology in advance, you can specify the group explicitly with the `INTO GROUP N` suffix. By grouping together machines that share resources such as a network switch or power supply, you can isolate common hardware failures to a single group and dramatically improve the cluster's uptime.
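As an example of this topology-aware placement, leaves on separate racks can be added into separate groups (the hostnames, user, and port below are hypothetical, and the exact ADD LEAF syntax may differ by version):

```sql
-- Group leaves by shared hardware: rack1 hosts into group 1 and
-- rack2 hosts into group 2, so a rack-level failure affects
-- only one availability group.
ADD LEAF 'root'@'rack1-host1':3306 INTO GROUP 1;
ADD LEAF 'root'@'rack2-host1':3306 INTO GROUP 2;
```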
The SHOW LEAVES command displays which availability group each leaf belongs to.
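To inspect group membership, run SHOW LEAVES on the master aggregator (a sketch; the exact column names in the output may vary by version):

```sql
-- List all leaves in the cluster; the output includes each leaf's
-- availability group and, in paired mode, its pair node.
SHOW LEAVES;
```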