MemSQL Kubernetes Operator Release Notes
The changelog for the Operator is listed here. To learn how to use the Operator, see the Kubernetes deployment guide.
SingleStore’s Docker images can be found on Docker Hub.
2022-06-19 Version 2.0.23
- This version of the Operator requires that the `startupProbe` flag be enabled in Kubernetes 1.17 - 1.20. For Kubernetes 1.20 and later, `startupProbe` is enabled by default.
- To promote safe restarts, the Master Aggregator, child aggregator, and leaf nodes now wait for `terminationGracePeriodSeconds`, allowing all in-flight queries to complete (see the sketch below).
- Supports SingleStore node Docker images without Python installed.
- Various minor bug fixes.
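As context for the graceful-restart change above, `terminationGracePeriodSeconds` is a standard Kubernetes PodSpec field that controls how long the kubelet waits between SIGTERM and SIGKILL. A minimal generic sketch follows; the Pod name, image, and 600-second value are illustrative placeholders, not values set by the Operator.

```yaml
# Generic illustration of where terminationGracePeriodSeconds lives in a PodSpec.
# The Operator manages this field on the node Pods it creates; values here are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: memsql-node-example          # hypothetical name
spec:
  terminationGracePeriodSeconds: 600 # window for in-flight queries to drain before SIGKILL
  containers:
    - name: node
      image: singlestore/node        # placeholder image reference
```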
2022-05-25 Version 2.0.8
- Leaf nodes now automatically detach on container termination and wait for queries to drain before terminating
- Leaf nodes now automatically attach themselves on container start if they were detached previously
- `ms-pusher` resources are now configurable via the memsql CR.
- `imagePullSecrets` was added to the memsql CR (see the sketch below).
- The memsql CRD version was upgraded from v1beta1 to v1.
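A minimal sketch of the new `imagePullSecrets` field, assuming it is a top-level list of Secret references in the MemsqlCluster spec; the apiVersion, cluster name, and Secret name are placeholders, so check the installed CRD for the exact schema.

```yaml
# Sketch only: assumes imagePullSecrets is a list of Secret references in the CR spec.
apiVersion: memsql.com/v1alpha1      # verify against your installed CRD
kind: MemsqlCluster
metadata:
  name: example-cluster
spec:
  imagePullSecrets:
    - name: registry-credentials     # hypothetical pre-created docker-registry Secret
```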
2022-04-29 Version 1.2.8
- Updated the Operator’s container base image to use AlmaLinux
- Changed parked PVC deletion to take cluster metadata into account
- The Operator now waits for the first aggregator Pod before creating a leaf StatefulSet
- The Operator now waits for the reconciliation loop to finish before declaring a cluster active
- Added server-side filtering to the Operator client cache to prevent out of memory (OOM) errors
- The Operator will not restart a cluster when updating labels
- Made the collocated CA affinity rule preferred, but not required
- Improved the liveness and readiness probes
- Added the Operator version to the cluster status
2022-03-11 Version 1.2.7
- Fixes a bug where configuring the SSL and/or the Disaster Recovery spec would cause the SingleStore DB deployment to fail.
- The required `--cluster-id` argument has been added to the Operator `deployment.yaml` file.
- Updates the RBAC permissions to include two new API groups, `networking.k8s.io` and `coordination.k8s.io`, which have been added to the `rbac.yaml` file (a sketch follows below).
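The RBAC change corresponds roughly to rules like the following. The API groups come from the release note, but the resources and verbs shown are assumptions for illustration; treat the shipped `rbac.yaml` as authoritative.

```yaml
# Illustrative only: resources and verbs are assumptions; see rbac.yaml for the real rules.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: memsql-operator-extra        # hypothetical name
rules:
  - apiGroups: ["networking.k8s.io"]
    resources: ["networkpolicies"]   # example resource
    verbs: ["get", "list", "watch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]            # leases are commonly used for leader election
    verbs: ["get", "create", "update"]
```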
2021-05-04 Version 1.2.5
- Changes the image pull policy from `Always` to `IfNotPresent`.
- Changes `PodDisruptionBudget` to use `maxUnavailable` instead of `minAvailable` to avoid an unnecessary `PodDisruptionBudget` update when horizontally scaling a cluster up or down (see the sketch below).
- Allows the `admin` password to be updated either via MySQL command or the Operator. Previously, if the `admin` password was updated via MySQL command, it would be reverted by the Operator.
- For engine version 7.3.3 and later, the value of the global variable `failover_on_low_disk` is `OFF` by default, which prevents a cluster from failing over when out of disk space.
- Allows the `admin` user to configure and use connection links on engine versions 7.3.2 and later.
- Child aggregators will now be added before leaf nodes when a new cluster is deployed, which allows child aggregators to have smaller node IDs.
- Fixes a bug where the Operator failed to close a connection to the Master Aggregator.
- Improves the rebalance logic to avoid unnecessary addition/removal of leaf nodes during scale up/down if rebalancing fails.
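For context on the `PodDisruptionBudget` change: `maxUnavailable` stays valid as the replica count changes, so the budget object does not need to be rewritten when the cluster is resized. A generic sketch follows; the name and selector labels are placeholders, not the ones the Operator actually sets.

```yaml
# Generic PodDisruptionBudget sketch; the name and selector are placeholders.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-leaf-pdb
spec:
  maxUnavailable: 1                  # remains correct as the StatefulSet scales up or down
  selector:
    matchLabels:
      app.kubernetes.io/name: memsql-cluster   # placeholder selector
```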
2021-01-15 Version 1.2.4
- Allows annotations to be passed to aggregators and/or leaf nodes which will then be merged into the corresponding StatefulSet’s annotations.
- Allows the Liveness and/or Readiness probe time-related parameters to be customized.
- Adds an option to disable DDL/DML service creation.
- Adds WebSocket connection support to the cluster.
- Adds the option to enforce a secure connection for the admin user (the database user created by the Operator) when connecting to the cluster.
- Exposes DDL, DML, and WebSocket ports (if WebSocket is enabled) in MemsqlCluster’s status.
- Allows global variables to be set separately for aggregator pods and leaf pods, or for all pods in the clusters (pertains to 2019-10-14 Version de65d489 and later). If a global variable is specified at both the cluster-level and in an aggregator or leaf spec, the latter has priority.
- Supports upgrading SingleStore DB from v7.1.x to v7.3.x.
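To illustrate the per-role engine variable precedence described in this release, the sketch below assumes a cluster-level `globalVariables` section and a nested `globalVariables` under `leafSpec`; the apiVersion, nesting, and value formats are assumptions, so consult the deployment guide for the exact schema.

```yaml
# Sketch only: a leaf-level variable overrides the same variable set at the cluster level.
apiVersion: memsql.com/v1alpha1      # verify against your installed CRD
kind: MemsqlCluster
metadata:
  name: example-cluster
spec:
  globalVariables:                   # applies to all pods in the cluster
    default_partitions_per_leaf: "8"
  leafSpec:
    globalVariables:                 # assumed nesting; applies only to leaf pods and takes priority
      default_partitions_per_leaf: "16"
```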
2020-10-27 Version 1.2.3
- If a backup cron job was scheduled, was triggered, and is later removed, the associated batch job and pods will be deleted.
- Allows the user that runs the backup job to be specified over the default `root` user. The default resource pool associated with that user will be used in the backup job.
- When scaling down the redundancy level from level 2 to level 1, the associated ConfigMap for the second availability group (AG) will now be deleted.
- Fixes a performance degradation issue introduced in Operator release 1.2.2 where the master partitions would become imbalanced across both availability groups during an engine upgrade and/or other condition.
2020-10-01 Version 1.2.2
| Docker Hub Image | SHA-256 Digest |
| --- | --- |
| `memsql/operator:1.2.2-93a97e50` | `sha256:80954740aed76d351b0b1ab1e589ab926f70f11182c89c100cbbebcf1702831f` |
- Added support for using ClusterIP addresses as the DDL and DML endpoints when the service type is configured as ClusterIP.
- Added a `rootServiceUser` flag in the CR to control whether to grant `SERVICE_USER` to `root` (see the sketch below).
- Improved Operator and engine performance by avoiding unnecessary database rebalances.
- Fixed a stuck reconciliation loop when a PersistentVolumeClaim allocated more storage than requested.
- Fixed the service update logic to avoid unnecessary service updates.
- Added support to allow the Operator to pass optional startup parameters to the master exporter.
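A minimal sketch of the `rootServiceUser` flag, assuming it is a top-level boolean in the CR spec and that setting it to `true` grants `SERVICE_USER` to `root`; both the placement and semantics are assumptions to verify against the CRD.

```yaml
# Sketch only: field placement and semantics are assumptions; check the CRD.
apiVersion: memsql.com/v1alpha1      # verify against your installed CRD
kind: MemsqlCluster
metadata:
  name: example-cluster
spec:
  rootServiceUser: true              # assumed: grants SERVICE_USER to root when true
```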
2020-08-18 Version 1.2.1
- Adds support for setting the `local_file_system_access_restricted` global variable.
- Adds support for setting `priorityClassName` via an Operator command-line parameter, `priority-class-name`. The `priorityClassName` will be passed to every StatefulSet's PodSpec (see the sketch below).
- Re-adds the Operator command-line parameters `backup-s3-endpoint` and `backup-compatibility-mode` that were inadvertently removed from the Operator 1.2.0 release. Refer to the Backups reference for more information.
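The new parameter is passed on the Operator's command line, typically in `deployment.yaml`. An illustrative fragment of the Operator container spec is shown below; the flag spellings and values are assumptions, so defer to the shipped manifest.

```yaml
# Illustrative fragment of the Operator Deployment's container spec (not a complete manifest).
containers:
  - name: memsql-operator
    image: memsql/operator:1.2.1                        # placeholder tag
    args:
      - "--priority-class-name=high-priority"           # assumed spelling; value must name an existing PriorityClass
      - "--backup-s3-endpoint=https://s3.example.com"   # re-added in this release; value is a placeholder
```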
2020-08-04 Version 1.2.0
- Inadvertently removed the `backup-s3-endpoint` and `backup-compatibility-mode` command-line parameters. Refer to the Backups reference for more information.
- Adds support for setting almost all engine variables via the `globalVariables` section in the CR. Refer to the List of Engine Variables for more information. Adding, removing, or updating engine variables will cause pods to restart. The following variables are explicitly prohibited:
  - `redundancy_level`
  - `sync_permissions`
  - `local_file_system_access_restricted`
- Adds a `phase` field to the CR status. `Running` indicates that the Operator is happy with the state of the cluster and that there are no more changes to be made. `Pending` indicates that the Operator is still working towards a desired state of the cluster.
- Adds support for scaling up volume storage size when a larger number is specified in the `storageGB` field in `aggregatorSpec` and/or `leafSpec` in the CR (see the sketch below).
- Adds support for incremental backups, which allows an admin to specify both full and incremental backup schedules.
- Adds `memsqlPusherSpec` inside a `monitoringSpec` in the CR to allow MemSQL to be configured to push metrics to Kafka.
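As referenced above, increasing `storageGB` requests a volume scale-up; only increases are supported, per the note above. A sketch assuming the field sits directly under `aggregatorSpec` and `leafSpec` (other fields omitted, apiVersion and values are placeholders):

```yaml
# Sketch only: raise storageGB above the previously requested size to expand volumes.
apiVersion: memsql.com/v1alpha1      # verify against your installed CRD
kind: MemsqlCluster
metadata:
  name: example-cluster
spec:
  aggregatorSpec:
    storageGB: 256                   # larger than the previous value, so aggregator volumes grow
  leafSpec:
    storageGB: 1024                  # larger than the previous value, so leaf volumes grow
```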
2020-07-06 Version 1.1.1
- As of SingleStore DB 7.1.4, license checks are now `cgroup`-aware and respect container resource boundaries for containerized deployments.
  - While this does not change how license checks are performed, nor does it change how capacity is allocated, it does change how the resources allocated to the container are checked.
- Changed the label on the backup pod from `app.kubernetes.io/name=memsql-cluster` to `app.kubernetes.io/name=backup`
- Triggers pod restarts when the contents of dependent secrets/configmaps are updated
  - When data in secrets/configmaps is used in a container's environment variables, changes to those values will trigger a pod restart
- Adds support for running backups with compatibility mode
- S3 Region is no longer required when running backups
2020-06-18 Version 1.1.0
- Adds support for Disaster Recovery (DR).
- DR requires that the underlying infrastructure meet at least one of the following two requirements:
- Kubernetes hosts in primary and secondary clusters can reach each other via host IPs across clusters
- Kubernetes pods in primary and secondary clusters can reach each other via pod IPs across clusters
- In addition, the following requirements must be met:
- SingleStore DB 7.1.3 or newer must be deployed on both the primary and secondary clusters
- The primary and secondary clusters’ DDL endpoints are stable
- Adds support for both client-server and intra-cluster secure connections.
- For client-server secure connections:
- Once configured, the server permits, but does not require, a secure connection
- Supports both initial deployments and upgrades from existing deployments that are not already configured for client-server secure connections
- Downgrades are not supported
- For intra-cluster secure connections:
- Once configured, intra-cluster secure connections are required between all nodes. Secure connections are also used between the primary cluster and secondary cluster if DR is configured.
- Supports initial deployments but does not support upgrades from existing deployments that are not already configured with intra-cluster secure connections
- Downgrades are not supported
- Improves readiness probe by checking each SingleStore node’s online status.
- Improves leaf nodes StatefulSet’s update performance by using OnDelete update strategy.
- Fixes the inability to set the number of arenas via the `glibc` tunable by introducing an `envVariables` section in the CR and allowing users to set `MALLOC_ARENA_MAX` (see the sketch below). Default: (node height) * (CPUs per unit)
- Adds support for the `auditlog_level` global variable. The following variables are currently supported:
  - `default_partitions_per_leaf`
  - `columnstore_segment_rows`
  - `columnstore_flush_bytes`
  - `columnstore_window_size`
  - `transaction_buffer`
  - `snapshot_trigger_size`
  - `minimal_disk_space`
  - `pipelines_max_concurrent`
  - `auditlog_level`
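A minimal sketch of the `envVariables` section referenced above, assuming it is a map of environment variable names to string values at the top level of the CR spec; the apiVersion and value are placeholders.

```yaml
# Sketch only: assumes envVariables is a simple name -> value map in the CR spec.
apiVersion: memsql.com/v1alpha1      # verify against your installed CRD
kind: MemsqlCluster
metadata:
  name: example-cluster
spec:
  envVariables:
    MALLOC_ARENA_MAX: "16"           # glibc tunable; default is (node height) * (CPUs per unit)
```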
2020-05-15 Version 1.0.0
- Adds support for custom scheduling in MemsqlCluster spec
  - Supported scheduling constraints include: `nodeSelector`, `affinity`, `anti-affinity`, `toleration`, `schedulerName`, and `nodeName` in PodSpec
- Adds support for automated backup scheduling
- Adds support for setting whitelisted “sync” and “non-sync” variables
  - The following variables are supported: `default_partitions_per_leaf`, `columnstore_segment_rows`, `columnstore_flush_bytes`, `columnstore_window_size`, `transaction_buffer`, `snapshot_trigger_size`, `minimal_disk_space`, `pipelines_max_concurrent`
- Deploys Prometheus exporter as a sidecar to all SingleStore nodes
- Adds support for a custom container image repository
  - If the environment variable `RELATED_IMAGE_NODE` is specified, the Operator will pull and use this image to launch all SingleStore node pods as well as the sidecar container (see the sketch below).
- Updates the liveness probe to use the `ProcessState` property returned from `memsqlctl`
- Adds labels to all cluster components and services
- Adds support for custom `glibc` tunables in the CR
  - Default: `glibc.malloc.arena_max` = 8 * (node height) * (CPUs per unit)
- Updates leaf-node restart mechanism to facilitate faster recovery
- Improves Operator performance by filtering controller events
- Adds support for vertical scaling of child aggregators
- Updates the ConfigMaps mechanism
  - As a consequence, an unused configuration is no longer created when the redundancy level is set to `1`.
- Updates readiness probe to include node role type and database status
- Updates Operator to check leaf-node status before detaching/attaching a pod
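As noted in the custom image repository item above, the Operator reads `RELATED_IMAGE_NODE` from its own environment. An illustrative fragment of the Operator Deployment's container spec follows; the image references are placeholders.

```yaml
# Illustrative fragment of the Operator Deployment's container spec (not a complete manifest).
containers:
  - name: memsql-operator
    image: memsql/operator:1.0.0                          # placeholder tag
    env:
      - name: RELATED_IMAGE_NODE
        value: registry.example.com/singlestore/node:tag  # custom node image used for all node pods and sidecars
```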
2019-10-14 Version de65d489
- Added support for the `--fs-group-id` flag, which allows you to inject an additional group ID into the container running SingleStore DB. This is used to ensure that the process inside the container has the correct permissions to read/write to the `/var/lib/memsql` volume.
- Added support for additional control over the services used to handle DDL and DML queries from client applications. These controls are specified in a new key in the MemsqlCluster spec called `serviceSpec`. The fields and usage of this key are described in the Kubernetes deployment guide.
- The top-level attributes `loadBalancerSourceRanges` and `serviceObjectMetaOverrides` used in the MemsqlCluster spec are now deprecated. Use `serviceSpec` moving forward.
- Some global variables can now be specified through the new attribute `globalVariables`. The supported variables are: `license_visibility`, `default_partitions_per_leaf`, `columnstore_segment_rows`, `columnstore_flush_bytes`, `columnstore_window_size`, `transaction_buffer`, and `snapshot_trigger_size`. For more information, see the Kubernetes deployment guide.
- The attributes `license` and `adminHashedPassword` can now be specified through secrets. To do this, use the alternative attribute `licenseSecret` or `adminHashedPasswordSecret` (see the sketch below).
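A rough sketch of the Secret-based alternatives, assuming `licenseSecret` and `adminHashedPasswordSecret` reference a pre-created Secret by name; whether they accept a plain name or an object reference, and the expected Secret keys, are assumptions to verify against the deployment guide.

```yaml
# Sketch only: field shapes are assumptions; the attribute names come from the release note above.
apiVersion: memsql.com/v1alpha1      # verify against your installed CRD
kind: MemsqlCluster
metadata:
  name: example-cluster
spec:
  licenseSecret:
    name: memsql-license             # hypothetical Secret holding the license
  adminHashedPasswordSecret:
    name: memsql-admin-password      # hypothetical Secret holding the hashed admin password
```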