
AWS EC2 Best Practices

Info

This topic does not apply to MemSQL Helios.

This document summarizes findings from the technical assistance provided by MemSQL engineers to customers operating production MemSQL environments on Amazon EC2. It is presumed that the reader is familiar with MemSQL fundamentals as well as the technical basics, terminology, and economics of AWS operations.

Glossary

EC2 Instance - Available EC2 Instance Types

EBS, Elastic Block Storage - Available EBS Types

VPC (Virtual Private Cloud) - What is Amazon VPC?

VPC Peering - What is VPC Peering?

Regions and AZ (Availability Zones) - What is Amazon Region and AZ?

Placement Group - What is a Placement Group?

MemSQL Aggregator Node - What is a MemSQL Aggregator?

MemSQL Leaf Node - What is a MemSQL Leaf?

MemSQL Cluster Provisioning Considerations

MemSQL is a shared-nothing MPP cluster of servers. In the context of EC2, a MemSQL server is an EC2 instance. As a general rule, a MemSQL cluster should be operated within a single EC2 VPC/Region/Availability Zone and should utilize identically configured instances of the same type. The amount of EBS storage attached to the MemSQL EC2 instances, however, may differ depending on the type of node(s) hosted by the instance: MemSQL aggregators require a minimal amount of EBS storage (aggregators may store reference table data), while EBS storage for leaves must be provisioned according to user data capacity requirements.

Note that if rowstore is being used, the user will only need approximately twice as much disk as memory; for example, a 60GB RAM machine would need around 120-200GB of disk.

The recommended decision-making method is a two-step process:

  1. Select the proper instance type for a MemSQL server. This will be the MemSQL cluster’s “building block”. Then…

  2. Determine the required number of instances to scale out the cluster capacity horizontally to meet storage, response time, concurrency, and service availability requirements (see the sizing sketch after this list).
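
To make step 2 concrete, here is a back-of-the-envelope sizing sketch in Python; the redundancy and headroom factors are illustrative assumptions, not MemSQL-published constants.

```python
# Hypothetical sizing helper (not a MemSQL tool): estimates how many leaf
# instances are needed to hold a rowstore data set, given the guidance that
# rowstore data resides in RAM.
import math

def leaves_needed(data_gb, ram_per_leaf_gb, redundancy=2, headroom=0.7):
    """Assumed model: usable rowstore capacity per leaf is its RAM times a
    headroom factor (leaving room for OS and overhead); redundancy 2 doubles
    the stored copies. Both factors are illustrative assumptions."""
    usable_per_leaf = ram_per_leaf_gb * headroom
    return math.ceil(data_gb * redundancy / usable_per_leaf)

# Example: 500 GB of rowstore data on 64 GB leaves with redundancy 2.
print(leaves_needed(500, 64))  # -> 23
```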

The basic principle when provisioning virtualized resources for a MemSQL server is to allocate CPU, RAM, storage I/O, and networking in a balanced manner, so that no single resource becomes a bottleneck while other resources sit underutilized.

EC2 users should keep in perspective that “Amazon EC2 dedicates some resources of the host computer, such as CPU, memory, and instance storage, to a particular instance. Amazon EC2 shares other resources of the host computer, such as the network and the disk subsystem, among instances.” (source: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html)

Selecting Instance Type (CPU, RAM)

MemSQL recommends provisioning a minimum of 4GB of RAM per physical core or virtual processor. For faster CPUs, 6-8GB per core is commonly selected.

When selecting an instance type as a building block of a MemSQL cluster in a production environment, users should only consider instances with ~4GB or more RAM per vCPU.

Several available instance types meet the above guideline. The best instance type for a particular application can only be selected based on its specific performance and availability requirements.
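
As an illustration, the following sketch, assuming boto3 with configured AWS credentials (not an official MemSQL tool), lists instance types that meet the RAM-per-vCPU guideline:

```python
# Filter EC2 instance types by the ~4 GB RAM per vCPU guideline above.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
paginator = ec2.get_paginator("describe_instance_types")

for page in paginator.paginate():
    for itype in page["InstanceTypes"]:
        vcpus = itype["VCpuInfo"]["DefaultVCpus"]
        ram_gb = itype["MemoryInfo"]["SizeInMiB"] / 1024
        if ram_gb / vcpus >= 4:  # meets the RAM-per-vCPU guideline
            print(itype["InstanceType"], vcpus, f"{ram_gb:.0f} GB")
```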

FAQ: Is it better to have a smaller number of bigger machines or a larger number of smaller machines?

Answer: There is hardly a “one size fits all” answer, yet the smallest number of “medium”-size machines can be a good baseline. There is rarely a good reason to select an instance backed by more than a two-socket underlying server in the EC2 environment (e.g. 32 vCPU instances), while instances with 4 vCPUs or fewer are only suitable for low-concurrency database workloads. An 8 vCPU “medium”-size instance may be a good starting point. We at MemSQL have experimented extensively with r4.2xlarge instances (8 vCPUs, 8 GB × 8 = 64 GB RAM, which meets the MemSQL guidance) and often find it a potent option with reasonable commercial terms.

Networking

MemSQL EC2 customers should be aware of the caveats of operating a cluster, or multiple clusters in a DR configuration, over a shared network.

AWS connectivity scenarios, ordered by increasing bandwidth:

  • Instance to Internet (WAN). Before reaching the WAN, this connection traverses the VPC stack, the EC2 network, and the outer boundary. Performance ranges up to 100 Mb/s, but can be unpredictable, dropping below 10 Mb/s.

  • VPC to VPC across regions. This connection crosses the public internet (WAN), so it behaves largely like the above, perhaps somewhat slower since it implies a VPN.

  • AZ to AZ in the same region. LAN connectivity, yet it traverses many network stacks. Performance ranges up to 1 Gbps, with no guarantees.

  • Instance to Instance in the same AZ. No cost for network traffic if in the same VPC. More reliable latency and throughput, but still no guarantees; e.g. network operations can be impacted by a “noisy neighbor”. MemSQL performance tests have consistently shown the ability to saturate a 1 Gbps network.

  • Instance to Instance in the same Placement Group. Instances are guaranteed to be placed very close together (network-wise, e.g. in the same rack). Performance ranges up to 10 Gbps, guaranteed. However, there is no guarantee on the number of available slots in a placement group.

MemSQL Deployment

As a general rule, all EC2 instances of a MemSQL cluster should be configured on a single subnet. This means all MemSQL nodes will be within a single VPC/Region/Availability Zone.

All AWS customers get a VPC allocated to the account. Each EC2 instance must be assigned to a subnet in a VPC. Customers are expected to bind a subnet to an AZ and then place the instance in the subnet. An instance can only exist in one subnet and therefore one AZ.
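
A minimal sketch of this placement model, assuming boto3 with placeholder VPC and AMI IDs: the subnet is created in a specific AZ, and every instance launched into it lands in that AZ.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",    # placeholder VPC ID
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="us-east-1a",    # all MemSQL nodes in one AZ
)["Subnet"]

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="r4.2xlarge",
    MinCount=1, MaxCount=1,
    SubnetId=subnet["SubnetId"],      # instance inherits the subnet's AZ
)
```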

For DR configurations or geographically distributed data services, customers can provision two or more clusters, each in a dedicated VPC, typically in separate regions, with VPC peering between VPCs (regions).

VPC Peering

The following examples illustrate use cases leveraging VPC peering:

  • A MemSQL environment includes a primary cluster in one region and a secondary cluster in a different geography (region) for DR. Connectivity between the primary and secondary sites is provided by VPC peering.

  • A MemSQL cluster is ingesting data by subscribing to a Kafka feed. A customer would typically set up a Kafka cluster in one VPC and a MemSQL cluster in a different VPC, with VPC peering to connect the MemSQL database to Kafka.

For VPC peering setup, scenarios and configuration guidance see VPC Peering Guide.
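
A minimal sketch, assuming boto3 with placeholder VPC IDs, of requesting and accepting a same-region VPC peering connection:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-11111111",      # requester (e.g. MemSQL cluster VPC)
    PeerVpcId="vpc-22222222",  # accepter (e.g. Kafka cluster VPC)
)["VpcPeeringConnection"]

ec2.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnectionId"]
)
# Route tables in both VPCs must still be updated to route traffic across
# the peering connection (see the VPC Peering Guide).
```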

Placement Groups

At present, MemSQL does not have placement group related guidance due to limited available documentation. We are encouraged by the prospect of 10 Gbps or higher connectivity between instances in the same placement group and are actively researching ways to enhance the performance and robustness of MemSQL operations by leveraging placement groups.
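
For orientation only, given the limited guidance above, the following minimal sketch, assuming boto3 and placeholder names, shows the mechanics of creating a cluster placement group and launching an instance into it:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a "cluster" placement group for low-latency, high-throughput links.
ec2.create_placement_group(GroupName="memsql-pg", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="r4.2xlarge",
    MinCount=1, MaxCount=1,
    Placement={"GroupName": "memsql-pg"},
)
```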

Caution: Aligning AWS Availability Zones with MemSQL Availability Groups

This section proactively answers user inquiries about aligning the AWS AZ and MemSQL AG concepts, perhaps with the aim of adding “AZ-level robustness” to operating environments.

MemSQL operates most efficiently when all nodes of a cluster are within a single subnet. Separate AWS Availability Zones require separate subnets and as such are not optimal for MemSQL performance. The current guidance from MemSQL is to deploy all AWS nodes of a cluster into the same AWS Availability Zone.

EBS Storage

Please note that under-configured EC2 storage is a common root cause of inconsistent MemSQL EC2 cluster performance.

MemSQL is a shared-nothing MPP system, i.e. each MemSQL server (EC2 instance) manages its own “internal” storage.

To ensure permanent MemSQL server storage, users need to provision EBS volumes and attach them to the MemSQL servers (EC2 instances).

EBS is a disk subsystem that is shared among instances, which means that MemSQL EC2 users have to accept the reality of somewhat unpredictable variations in I/O performance across MemSQL servers and even performance variability of the same EBS volume over time. EBS performance characteristics may be affected by activities of co-tenants (a “noisy neighbor” problem), by file replication for availability, by EBS rebalancing, etc.

The elastic nature of EBS means that the system is designed to monitor the utilization of underlying hardware assets and automatically rebalance itself to avoid hotspots. This has both a positive and a negative impact on end-user operations. Users are assured that EBS will reasonably promptly resolve severe contention for I/O. On the other hand, the relocation of files to new storage nodes during rebalancing adversely affects EBS volume performance.

To maximize the consistency and performance characteristics of EBS, MemSQL encourages users to follow the general AWS recommendation to attach multiple EBS volumes to an instance and stripe across the volumes. This technique is widely employed by EC2 users and is characterized in a number of excellent studies. Since AWS charges by EBS volume capacity, there should be no economic penalty for using multiple smaller EBS volumes vs. one large EBS volume of the same total size.

Users can consider attaching 3-4 EBS volumes to each leaf server (instance) of the cluster and presenting this storage to the host as a software RAID0 device.

Studies show that there is an appreciable increase in RAID0 performance up to 4 EBS volumes in a stripe, with flattening after 6 volumes.
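
As an illustration of this technique, the sketch below, assuming boto3 with placeholder instance ID, sizes, and device names, creates and attaches four gp2 volumes; the stripe itself is then built on the host with mdadm per the AWS RAID guide:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"    # placeholder leaf instance
devices = ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"]

for device in devices:
    vol = ec2.create_volume(
        AvailabilityZone="us-east-1a",  # must match the instance's AZ
        Size=500,                       # GiB per volume; size per data needs
        VolumeType="gp2",
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    ec2.attach_volume(VolumeId=vol["VolumeId"],
                      InstanceId=instance_id, Device=device)

# On the instance itself, the RAID0 stripe is then built with mdadm, e.g.:
#   mdadm --create /dev/md0 --level=0 --raid-devices=4 \
#         /dev/sdf /dev/sdg /dev/sdh /dev/sdi
```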

For more information, please see the AWS documentation section “Amazon EBS Volume Performance on Linux Instances”, in particular:

  • RAID Configuration on Linux
  • Benchmark EBS Volumes

EBS Type

MemSQL EC2 customers with extremely demanding database performance requirements may consider provisioning enhanced EBS types such as io1, which delivers very high IOPS rates.

In general, 3-4 gp2 EBS volumes in a RAID0 configuration deliver sufficiently good performance at a reasonable cost.
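
A minimal sketch, assuming boto3, of provisioning an io1 volume with provisioned IOPS; size and IOPS values are illustrative:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,            # GiB
    VolumeType="io1",
    Iops=10000,          # provisioned IOPS (io1 allows up to 50 IOPS per GiB)
)
```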

Storage Capacity

When provisioning EBS volumes’ capacity per application data retention requirements, MemSQL EC2 administrators need to include a “fudge” factor and ensure that no production environment is operated with less than 40% free storage space on the data volume. Free disk space is required for temporary data materializations during database operations and to support continuous data growth. Users should also be aware that EBS performance generally tracks with EBS volume size.
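
A minimal sketch of checking the 40% free-space guideline from a node; the data volume mount path is a placeholder:

```python
import shutil

usage = shutil.disk_usage("/var/lib/memsql")  # placeholder data volume path
free_ratio = usage.free / usage.total
if free_ratio < 0.40:
    print(f"WARNING: only {free_ratio:.0%} free; guideline is >= 40%")
```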

Storage Level Redundancy

As a reminder, MemSQL provides native out-of-the-box fault tolerance.

In MemSQL database environments running on physical hardware, MemSQL recommends supplementing the cluster’s fault tolerance with storage-level redundancy supported by hardware RAID controllers. This is a cost-effective approach that diminishes the impact of a single drive failure on cluster operations.

However, in EC2 environments we see no practical opportunity for storage-level redundancy provisions because:

  • EBS volumes are not statistically independent (they may share the same physical network and storage infrastructure)
  • Studies and customer experience show that the performance of software RAID in a redundant configuration, in particular RAID5 over EBS volumes, is below acceptable levels

For fault tolerance, MemSQL EC2 users can rely on MemSQL cluster-level redundancy and the under-the-cover mirroring of EBS volumes provided by AWS.

Instance (Ephemeral) Storage

EC2 instance types that meet recommendations for a MemSQL server typically come with preconfigured temporary block storage referred to as instance store or ephemeral store. Since ephemeral storage is physically attached to the host computer, it delivers superior I/O performance vs. network attached EBS.

However, due to its ephemeral nature, instance storage is not a suitable medium for persistent data storage in a production environment. The use of instance storage for MemSQL data is limited to scenarios where the database can be reloaded entirely from persistent backups or custom save points: for example, as a “development sandbox”, for one-time data mining/ad hoc analytics, or when data files loaded since the last save point are preserved and can be used to restore the latest content.

Backup and Restore

The recommended practice for backing up and restoring MemSQL databases is to push all backups to a single shared mount point. AWS does not provide a single shared mount point within a standard node configuration. There are two recommended approaches to work around this limitation:

  1. Back up databases to local storage and copy the backup files on each node to a shared S3 location (see the sketch after this list)
  2. Use S3FS to present a shared S3-backed mount point for pushing backups. This approach relies on the S3FS open source utility.
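
A minimal sketch of approach 1, assuming boto3 and placeholder bucket, directory, and node names:

```python
# Upload each node's local backup files to a shared S3 location.
import os
import boto3

s3 = boto3.client("s3")
backup_dir = "/data/backups"   # placeholder local backup directory
bucket = "my-memsql-backups"   # placeholder bucket
node = "leaf-01"               # placeholder node identifier

for name in os.listdir(backup_dir):
    s3.upload_file(os.path.join(backup_dir, name), bucket, f"{node}/{name}")
```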

Load Balancing of Client Connections

Application clients access a MemSQL database cluster by connecting to aggregator nodes. Normally, multiple aggregator nodes are provisioned for fault tolerance and performance reasons. A good practice is to spread client connections evenly across all aggregator nodes of a MemSQL cluster. This can be achieved with any of the following methods, or a combination thereof:

  • Application side connection pool. Sophisticated connection pool implementations offer load balancing, failover and failback, and even multi-pool failover and failback.
  • ELB, Elastic Load Balancing service.
  • Amazon Route 53 service. MemSQL EC2 customers can use Route 53 to implement DNS-based load balancing of client connections, with or without ELBs. A name is created for the MemSQL cluster and is associated with multiple IP records, one for each cluster aggregator node, or ELB if configured. As clients make connection requests, DNS rotates through its list of IP addresses. This implements a simple yet effective balancing of client connections to a MemSQL cluster. A sketch of this approach follows the list.
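
A minimal sketch of the Route 53 approach, assuming boto3 with placeholder hosted zone ID, DNS name, and aggregator IPs:

```python
# One DNS name with an A record per aggregator, so DNS rotates client
# connections across aggregators.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",  # placeholder hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "memsql.example.com",
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [
                    {"Value": "10.0.1.10"},  # aggregator 1
                    {"Value": "10.0.1.11"},  # aggregator 2
                ],
            },
        }]
    },
)
```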

Feedback

The MemSQL AWS team is actively soliciting customer feedback and would appreciate hearing from you. Please send your comments and suggestions to feedback@memsql.com.