Addressing the "Maximum Table Memory" Error


Info

SingleStore Managed Service does not support setting the engine variables that are discussed in this topic.

If you see the error message below, the leaf at 'leafhost':leafport has reached maximum_table_memory, which is the maximum cumulative memory that all tables on a node may use. Once maximum_table_memory is reached, SingleStore DB no longer allows writes to any table (it becomes read-only).

Leaf Error (leafhost:leafport): Memory usage by SingleStore DB for tables (##### MB) has reached the value of 'maximum_table_memory' global variable (##### MB). This query cannot be executed.

You can resolve this issue with the steps below.

Check for Data Skew

First, determine which nodes are reaching their memory capacity by checking for data skew across your nodes. If skew is present, see the data skew topic for how to address it.
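
The query below is a minimal sketch of one way to compare per-leaf table memory, assuming your version exposes the INFORMATION_SCHEMA.TABLE_STATISTICS view with HOST, PORT, and MEMORY_USE columns (MEMORY_USE is assumed to be reported in bytes); adjust the names to match your cluster.

    memsql> SELECT HOST, PORT, SUM(MEMORY_USE) / (1024 * 1024) AS table_memory_mb
         -> FROM INFORMATION_SCHEMA.TABLE_STATISTICS
         -> GROUP BY HOST, PORT
         -> ORDER BY table_memory_mb DESC;

If one leaf reports substantially more table memory than its peers, address the skew before changing any memory settings.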

If Data Skew Is Not Present

If nodes are reaching maximum_table_memory without skew, the cluster itself is nearing its maximum memory capacity. To resolve this, you need to either add capacity or delete data. To add capacity, see Administering a Cluster. If additional leaf nodes are not available, use the instructions below to delete data in batches. DELETE queries use 50 bytes of memory per deleted record until the DELETE commits, so maximum_table_memory must be temporarily increased on all nodes while the deletes run; the estimate below shows why batching keeps this overhead small.
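
As a worked estimate, a batch that deletes 100,000 rows holds about 100,000 × 50 bytes ≈ 5 MB of extra memory until it commits, while deleting 10 million rows in a single statement would hold roughly 500 MB. Small batches plus the temporary headroom in maximum_table_memory keep this transient overhead from pushing a leaf over its limit.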

On each leaf node in your cluster, do the following:

  1. Pause write workloads from an aggregator or your application.
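
    How you pause writes depends on your application, so the check below is only a sketch: inspect the process list on an aggregator and confirm that no INSERT, UPDATE, DELETE, or LOAD DATA statements are still running before you continue.

    memsql> SHOW PROCESSLIST;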

  2. Check the current values of maximum_memory and maximum_table_memory (both values are in megabytes).

    memsql> select @@global.maximum_memory;
    +-------------------------+
    | @@global.maximum_memory |
    +-------------------------+
    |                   65536 |
    +-------------------------+
    1 row in set (0.00 sec)
    
    memsql> select @@global.maximum_table_memory;
    +-------------------------------+
    | @@global.maximum_table_memory |
    +-------------------------------+
    |                         57927 |
    +-------------------------------+
    1 row in set (0.00 sec)
    
  3. Increase maximum_table_memory to 95% of maximum_memory; in this example, 65536 MB × 0.95 ≈ 62259 MB. The default value is 90% of maximum_memory.

    memsql> set @@global.maximum_table_memory=62259;
    Query OK, 0 rows affected (0.00 sec)
    
  4. Execute DELETE queries in small batches. Limit each batch to 100,000 records or fewer to reduce the memory held until the DELETE commits.

    memsql> DELETE FROM mytable LIMIT 100000;
    Query OK, 0 rows affected (1.64 sec)
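
    Repeat the batched DELETE until it reports that no rows were affected, which means the targeted data is gone; each batch commits independently, so the per-batch memory overhead is released before the next batch starts. The statement below is a sketch that scopes the delete with a WHERE clause; the created_at column and cutoff date are hypothetical placeholders, so substitute whatever predicate identifies the rows you actually intend to remove.

    -- created_at and the cutoff date are hypothetical placeholders
    memsql> DELETE FROM mytable WHERE created_at < '2020-01-01' LIMIT 100000;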
    
  5. Restore maximum_table_memory to its original value.

    memsql> set @@global.maximum_table_memory=57927;
    Query OK, 0 rows affected (0.00 sec)