Workload Management
MemSQL Helios does not support setting the engine variables that are discussed in this topic.
MemSQL’s workload manager automatically manages cluster workloads by limiting the execution of queries that require fully distributed execution, ensuring they are matched with available system resources. The workload manager improves overall query execution efficiency and prevents workload surges from overwhelming the system. When the system is low on connections, threads, or memory, queries are queued so they can run later, when adequate resources are available.
Related to workload management is the concept of resource governance. Resource governance allows queries to be run in resource pools. Optionally, you can configure queries to queue in resource pools, up to a limit that you specify. For more information, see Setting Resource Limits.
Configuring Workload Management
The workload manager can be configured using engine variables, and the default settings are sufficient for most users regardless of cluster size. However, these engine variables can be tuned to fully utilize system resources for high concurrency workloads. These variables are discussed in the sections that follow.
Before changing any of these values, ensure that you understand each of these variables and have evaluated their impact on your workload.
All of the workload management engine variables are sync variables. To configure these variables, first read about sync variables to learn how to set them. Then, refer to Workload Management Sync Variables for information on specific variables.
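For example, once you are familiar with sync variables, a tuning change might look like the following. The values shown are arbitrary illustrations, not recommendations:

```sql
-- Run on the master aggregator; sync variable changes propagate
-- to the rest of the cluster.
-- The values below are example assumptions; tune for your workload.
SET GLOBAL workload_management_expected_aggregators = 4;
SET GLOBAL workload_management_max_queue_depth = 200;
```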
Query Classification
The workload manager classifies queries by their size: small, medium, or large. Queries are classified to determine their queueing policy.
Small Queries
Small queries do not exceed the workload_management_memory_queue_threshold, which is a small amount of memory relative to the total amount of leaf memory available to queries.
Small queries also do not use distributed joins. For this reason, they do not require threads or connections on the leaves.
Medium Queries
Medium queries fit within the following lower and upper resource thresholds. These resources are threads used per leaf, connections used per leaf, and memory used per leaf.
Lower Thresholds
- Threads used per leaf: 1
- Connections used per leaf: 1
- Memory used per leaf: Total leaf memory * workload_management_memory_queue_threshold
Upper Thresholds
- Threads used per leaf: 50% * workload_management_max_threads_per_leaf / workload_management_expected_aggregators
- Connections used per leaf: 50% * workload_management_max_connections_per_leaf / workload_management_expected_aggregators
- Memory used per leaf: Total leaf memory / workload_management_expected_aggregators
Large Queries
Queries that exceed the upper threshold for medium queries are classified as large queries.
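As an illustrative sketch (hypothetical Python, not the engine's implementation), the classification rules above can be expressed as follows; the parameters mirror the engine variables named in this section:

```python
def classify_query(mem_per_leaf_mb, threads_per_leaf, connections_per_leaf,
                   uses_distributed_join, total_leaf_memory_mb,
                   memory_queue_threshold, max_threads_per_leaf,
                   max_connections_per_leaf, expected_aggregators):
    """Sketch of the small/medium/large classification described above."""
    # Small: below the memory threshold and no distributed joins,
    # so no threads or connections are needed on the leaves.
    small_memory_limit = total_leaf_memory_mb * memory_queue_threshold
    if mem_per_leaf_mb <= small_memory_limit and not uses_distributed_join:
        return "small"
    # Medium: within the upper thresholds derived from the engine variables.
    thread_limit = 0.5 * max_threads_per_leaf / expected_aggregators
    connection_limit = 0.5 * max_connections_per_leaf / expected_aggregators
    memory_limit = total_leaf_memory_mb / expected_aggregators
    if (threads_per_leaf <= thread_limit
            and connections_per_leaf <= connection_limit
            and mem_per_leaf_mb <= memory_limit):
        return "medium"
    # Large: exceeds an upper threshold for medium queries.
    return "large"
```

With a total leaf memory of 6673 MB and an assumed memory_queue_threshold of 0.01, the small-query memory limit works out to roughly 66 MB.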
Queueing Policy
Queries are queued based on their size, as determined by the thresholds in the previous section, Query Classification.
Small Queries
Small queries are never queued. They use a negligible amount of leaf resources, which are not tracked.
Medium Queries
At regular intervals, the master aggregator collects statistics from the child aggregators. These statistics include the maximum resources used (threads per leaf, connections per leaf, and memory per leaf) during the interval and a rolling average of the maximum resources required during the previous intervals.
At the end of each interval, the master aggregator uses the statistics gathered from the child aggregators to reallocate resources to each aggregator, if necessary.
If the resource requirements for a new medium query exceed the total amount of resources allocated to all aggregators for all medium queries, the new query is queued in a local queue that is maintained by an individual aggregator. Each aggregator decides when to dequeue the queries in its local queue.
Fifty percent of the total leaf resources are reserved for medium queries.
Large Queries
Up to four large queries in the cluster will run concurrently. Any additional queries are queued in the global queue.
Fifty percent of the total leaf resources are reserved for large queries.
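To make the policy concrete, here is a toy model (hypothetical code, not the engine's implementation) of large-query admission, where at most four large queries run and the rest wait in a global queue:

```python
from collections import deque

MAX_CONCURRENT_LARGE_QUERIES = 4  # per the rule described above

class LargeQueryAdmission:
    """Toy model: admit up to four large queries; queue the rest globally."""

    def __init__(self):
        self.running = 0
        self.global_queue = deque()

    def submit(self, query_id):
        # Admit immediately if a large-query slot is free.
        if self.running < MAX_CONCURRENT_LARGE_QUERIES:
            self.running += 1
            return "running"
        # Otherwise the query waits in the global queue.
        self.global_queue.append(query_id)
        return "queued"

    def finish_one(self):
        # A running query completed; dequeue the next waiter, if any.
        self.running -= 1
        if self.global_queue:
            self.global_queue.popleft()
            self.running += 1
```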
Errors and Warnings
Queries return configurable errors if too many queries are queued or a query has been queued for too long. Queries also return configurable warnings when they were queued for a significant time relative to their actual execution time. This allows users to identify when their cluster resources are insufficient to meet the load of incoming queries.
These errors and warnings are:
ER_TOO_MANY_QUEUED_QUERIES
: When the workload_management_max_queue_depth value is reached, MemSQL will return this error instead of adding a new query to the queue. If you encounter this error, the maximum queue depth may need to be increased to accommodate the load. This error may also indicate that your cluster’s resources need to be increased or the load of incoming queries needs to be reduced.
ER_QUERY_QUEUE_TIMEOUT
: When a query has been queued longer than the workload_management_queue_timeout value, it will return this error and will be removed from the queue. This error indicates that your cluster’s resources or the load of incoming queries need to be adjusted to successfully process incoming queries.
ER_QUERY_QUEUED_WARNING
: When the ratio of the time a query spent in the queue to its actual execution time exceeds the workload_management_queue_time_warning_ratio value, the query will return this warning. The warning reports the amount of time spent queued and the amount of time spent executing, to help you understand the sources of query latency. Encountering this warning is normal in many workloads, but if query latency is too high, your cluster’s resources may not be sufficient to process incoming queries in a timely manner.
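The warning condition can be sketched as a simple ratio check (hypothetical code; the ratio value passed in is an assumption, not the variable's default):

```python
def queue_warning_triggered(queued_seconds, execution_seconds,
                            queue_time_warning_ratio):
    """Mirrors the ER_QUERY_QUEUED_WARNING condition described above:
    warn when queue time is large relative to execution time."""
    if execution_seconds <= 0:
        # Degenerate case: any queue time dwarfs a zero execution time.
        return queued_seconds > 0
    return queued_seconds / execution_seconds > queue_time_warning_ratio
```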
Observing the Status of Workload Management
To see the current state of the workload management system, including the number of running and queued queries, resource usage, and thresholds, you can run the SHOW WORKLOAD MANAGEMENT STATUS command. You can also run the query SELECT * FROM information_schema.MV_WORKLOAD_MANAGEMENT_STATUS;, which returns the same information as the previous command, but per aggregator.
The following example shows sample output from a small, lightly loaded system. The memory thresholds are reported in megabytes.
SHOW WORKLOAD MANAGEMENT STATUS;
+-------------------------------------------------------+-------+
| Stat | Value |
+-------------------------------------------------------+-------+
| Queued Queries (from global queue) | 0 |
| Queued Queries (from local queue) | 0 |
| Total Queued Queries Since Startup | 0 |
| Running Queries (from global queue) | 0 |
| Running Queries (from local queue) | 0 |
| Running Memory (MB) On Leaves (from global queue) | 0 |
| Running Memory (MB) On Leaves (from local queue) | 0 |
| Allocated Memory (MB) On Leaves (for local queue) | 6673 |
| Required Memory (MB) On Leaves (from local queue) | 0 |
| Avg Required Memory (MB) On Leaves (from local queue) | 0 |
| Running Threads Per Leaf (from local queue) | 0 |
| Allocated Threads Per Leaf (for local queue) | 2048 |
| Required Threads Per Leaf (from local queue) | 0 |
| Avg Required Threads Per Leaf (from local queue) | 0 |
| Running Connections Per Leaf (from local queue) | 0 |
| Allocated Connections Per Leaf (for local queue) | 2500 |
| Required Connections Per Leaf (from local queue) | 0 |
| Avg Required Connections Per Leaf (from local queue) | 0 |
| Memory Threshold (MB) to Queue Locally | 66 |
| Memory Threshold (MB) to Queue Globally | 3337 |
| Connections Threshold to Queue Globally | 2500 |
| Threads Threshold to Queue Globally | 2048 |
+-------------------------------------------------------+-------+
The following table provides a general description of some of the status fields. For information on the local queue and the global queue, see the Queueing Policy section.
| Status | Description |
| --- | --- |
| Queued Queries | Total number of queries queued due to workload management on this aggregator |
| Running Queries | Total number of queries currently running through this aggregator, based on the number and resource intensity of other queries on this aggregator |
| Running Memory On Leaves | Total estimated memory used per leaf by queries run through this aggregator, reported separately for queries from the local queue and the global queue |
| Running Threads Per Leaf | Total number of threads used per leaf by queries run through this aggregator |
| Running Connections Per Leaf | Total number of connections established per leaf by queries run through this aggregator |
| Memory Threshold to Queue Locally | Estimated memory usage above which a query is queued locally |
| Memory Threshold to Queue Globally | Estimated memory usage above which a query is queued globally and must wait for multi-aggregator coordination to run |
| Connections Threshold to Queue Globally | Number of connections above which a query is queued globally and must wait for multi-aggregator coordination to run |
| Threads Threshold to Queue Globally | Number of threads above which a query is queued globally and must wait for multi-aggregator coordination to run |