CREATE TABLE

Creates a new table.

Syntax

CREATE [REFERENCE | TEMPORARY | GLOBAL TEMPORARY] TABLE [IF NOT EXISTS] tbl_name
    (create_definition,...)
    [table_options]
    [[AS] SELECT ...]

CREATE TABLE [IF NOT EXISTS] tbl_name
    { LIKE old_tbl_name | (LIKE old_tbl_name) }

create_definition:
    col_name { column_definition | AS computed_column_definition }
  | [CONSTRAINT [symbol]] PRIMARY KEY [index_type] (index_col_name,...)
      [index_option] ...
  | { INDEX | KEY } [index_name] [index_type] (index_col_name,...)
      [index_option] ...
  | [CONSTRAINT [symbol]] UNIQUE [INDEX|KEY]
      [index_name] [index_type] (index_col_name,...)
      [index_option] ...
  | [CONSTRAINT [symbol]] SHARD KEY [index_type] (index_col_name,...)
      [index_option] ...
  | FULLTEXT [index_name] (index_col_name,...)

column_definition:
    data_type [NOT NULL | NULL] [DEFAULT default_value] [ON UPDATE update_value]
      [AUTO_INCREMENT] [UNIQUE [KEY] | [PRIMARY] KEY] [SPARSE] [SERIES TIMESTAMP]

computed_column_definition:
    computed_column_expression PERSISTED data_type

data_type:
    BIT[(length)]
  | TINYINT[(length)] [UNSIGNED]
  | SMALLINT[(length)] [UNSIGNED]
  | INT[(length)] [UNSIGNED]
  | INTEGER[(length)] [UNSIGNED]
  | BIGINT[(length)] [UNSIGNED]
  | REAL[(length,decimals)] [UNSIGNED]
  | DOUBLE[(length,decimals)] [UNSIGNED]
  | DECIMAL[(length[,decimals])] [UNSIGNED]
  | NUMERIC[(length[,decimals])] [UNSIGNED]
  | DATETIME
  | DATETIME(6)
  | TIMESTAMP
  | TIMESTAMP(6)
  | DATE
  | TIME
  | CHAR[(length)]
      [CHARACTER SET charset_name] [COLLATE collation_name]
  | VARCHAR(length)
      [CHARACTER SET charset_name] [COLLATE collation_name]
  | TINYBLOB
  | BLOB
  | MEDIUMBLOB
  | LONGBLOB
  | TINYTEXT [BINARY]
  | TEXT [BINARY]
  | MEDIUMTEXT [BINARY]
  | LONGTEXT [BINARY]
  | ENUM(value1,value2,value3,...)
  | SET(value1,value2,value3,...)
  | JSON [COLLATE collation_name]
  | GEOGRAPHY
  | GEOGRAPHYPOINT

index_col_name:
    col_name [(length)] [ASC | DESC]

index_type:
    USING { BTREE | HASH | CLUSTERED COLUMNSTORE }

index_option:
    KEY_BLOCK_SIZE [=] value
  | index_type
  | COMMENT 'string'
  | BUCKET_COUNT [=] value
  | WITH (index_kv_options)
  | UNENFORCED [RELY | NORELY]

index_kv_options:
    index_kv_option [, index_kv_option] ...

index_kv_option:
    RESOLUTION = value
  | COLUMNSTORE_SEGMENT_ROWS = value
  | COLUMNSTORE_FLUSH_BYTES = value

table_options:
    table_option [[,] table_option] ...

table_option:
    AUTO_INCREMENT [=] value
  | COMMENT [=] 'string'
  | AUTOSTATS_ENABLED = { TRUE | FALSE }
  | AUTOSTATS_CARDINALITY_MODE = {INCREMENTAL|PERIODIC|OFF}
  | AUTOSTATS_HISTOGRAM_MODE = {CREATE|UPDATE|OFF}
  | AUTOSTATS_SAMPLING = {ON|OFF}
  | COMPRESSION = SPARSE

Remarks

  • CREATE TABLE LIKE does not support REFERENCE or GLOBAL TEMPORARY tables.
  • For more information about the data types listed above, and for an explanation of UNSIGNED, refer to the Data Types topic.
  • The SET data type restricts the values that can be inserted for a table column. Only the set of strings that are listed for a column at the time of table creation can be inserted.
  • tbl_name is the name of the table to create in the SingleStore DB database.
  • The following note applies when the engine variable table_name_case_sensitivity is set to OFF: After you create a table, you cannot create another table having the same table name with a different case. Refer to the Database Object Case Sensitivity topic for more information.
  • CREATE TABLE is slower in SingleStore DB than in MySQL. See Code Generation for more information.
  • The BTREE index type creates a skip list index in SingleStore DB. This index has very similar characteristics to a BTREE index.
  • The CLUSTERED COLUMNSTORE key type creates a columnstore table. If you do not want to specify a column (or columns) to sort on, or do not care about the sort order for your data, you can specify an empty key (e.g. KEY() USING CLUSTERED COLUMNSTORE).
  • BUCKET_COUNT is specific to the HASH index type. It controls the bucket count of the hash table.
  • The UNENFORCED index option can be used on a UNIQUE constraint to specify that the unique constraint is unenforced. See Unenforced Unique Constraints.
  • RESOLUTION is specific to index on geospatial columns. See Geospatial Guide for more information.
  • COLUMNSTORE_SEGMENT_ROWS and COLUMNSTORE_FLUSH_BYTES control configuration variables specific to columnstore tables. See Advanced Columnstore Configuration Options for more information.
  • The only charset_name supported by SingleStore DB is ‘utf8’.
  • AUTOSTATS_ENABLED controls if automatic statistics should be collected on this table. There are three categories of autostats - AUTOSTATS_CARDINALITY_MODE, AUTOSTATS_HISTOGRAM_MODE, and AUTOSTATS_SAMPLING. SingleStore DB allows you to independently control how each category of statistics is automatically gathered. Multiple autostats settings can be combined in a single CREATE TABLE statement. See Automatic Statistics for more information.
  • This command must be run on the master aggregator node (see Node Requirements for SingleStore DB Commands), with the exception of CREATE TEMPORARY TABLE, which can be run on any aggregator node.
  • computed_column_expression defines the value of a computed column using other columns in the table, constants, built-in functions, operators, and combinations thereof. For more information, see Persistent Computed Columns.
  • Temporary tables, created with the TEMPORARY option, will be deleted when the client session terminates. For ODBC/JDBC, this is when the connection closes. For interactive client sessions, it is when the user terminates the client program.
  • Global temporary tables, created with the GLOBAL TEMPORARY option, exist beyond the duration of a client session. If failover occurs, the global temporary tables lose data and enter an errored state; they need to be dropped and recreated. This command can be run only on the master aggregator. See Global Temporary Tables topic for details.
  • The SERIES TIMESTAMP clause specifies a table column as the default column defining time order for implicit use by time series functions. This setting can be specified only for a single table column. The column can be one of the following data types: DATE, TIME, DATETIME, DATETIME(6), TIMESTAMP, or TIMESTAMP(6). It is recommended to use DATETIME or DATETIME(6) instead of one of the TIMESTAMP types because the automatic update behavior of TIMESTAMP is subject to change. See Timestamp Behavior for details.
  • The SERIES TIMESTAMP clause does not affect the data type of a table column, rather it specifies the behavior of the column in the time-series-specific functions like FIRST(), LAST(), and TIME_BUCKET().
  • Keyless sharding distributes data uniformly at random across partitions, with the limitation that it does not allow single-partition queries or local joins, since rows are not assigned to partitions based on their values. Keyless sharding is the default for tables that have neither a primary key nor an explicit shard key. You can explicitly declare a table as keyless sharded by specifying a shard key with an empty list of columns, SHARD KEY(), in the table definition (see the example following this list).
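
The following sketch ties together several of the options above: a keyless sharded table with a SERIES TIMESTAMP column and autostats enabled. The table and column names are illustrative only.

CREATE TABLE sensor_readings (
    reading_time DATETIME(6) SERIES TIMESTAMP,
    sensor_id BIGINT,
    reading DOUBLE,
    SHARD KEY ()
) AUTOSTATS_ENABLED = TRUE;
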
Info

SingleStore DB supports primary/unique keys only if the key contains all columns in the shard key. For more information, see Shard Keys.
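
For example, a minimal sketch (names illustrative) of a table whose primary key contains the shard key column:

CREATE TABLE orders (
    order_id BIGINT,
    customer_id BIGINT,
    order_total DECIMAL(18, 2),
    PRIMARY KEY (order_id, customer_id),
    SHARD KEY (customer_id)
);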

MySQL Compatibility

SingleStore DB’s syntax differs from MySQL’s mainly in the data types and storage it supports, and in some specific index hints.

  • KEY_BLOCK_SIZE [=] value : value is currently ignored.

DEFAULT Behavior

If DEFAULT default_value is specified in column_definition and no value is inserted into the column, then default_value is placed in the column during an INSERT operation. If the column is of type TIMESTAMP, TIMESTAMP(6), DATETIME, or DATETIME(6), then you can set default_value to one of the following values: CURRENT_TIMESTAMP(), CURRENT_TIMESTAMP(6), NOW(), or NOW(6). For more information, see the Data Types topic.
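
As a brief sketch (table and column names are illustrative), a column that defaults to the current time when a row is inserted:

CREATE TABLE audit_log (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    event VARCHAR(100),
    created_at DATETIME(6) DEFAULT CURRENT_TIMESTAMP(6)
);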

ON UPDATE Behavior

If ON UPDATE update_value is specified in column_definition, and any other column is updated while the specified column is not explicitly updated, then update_value is placed in the column during the UPDATE operation. If the column is of type TIMESTAMP, TIMESTAMP(6), DATETIME, or DATETIME(6), then you can set update_value to one of the following values: CURRENT_TIMESTAMP(), CURRENT_TIMESTAMP(6), NOW(), or NOW(6). For more information, see the Data Types topic.
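
Similarly, a brief sketch (names illustrative) of a column that is refreshed whenever any other column in the row is updated:

CREATE TABLE profiles (
    user_id BIGINT PRIMARY KEY,
    display_name VARCHAR(100),
    updated_at DATETIME(6) DEFAULT NOW(6) ON UPDATE NOW(6)
);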

AUTO_INCREMENT Behavior

AUTO_INCREMENT can be used to automatically generate a unique value for new rows. When you insert a new row, and the AUTO_INCREMENT field is DEFAULT, NULL, or 0, SingleStore DB will automatically assign a value. It’s important to understand that AUTO_INCREMENT only guarantees that automatically-generated values are unique. In general, it does not guarantee that they:

  • are consecutive or sequential
  • are monotonically increasing
  • start from any particular value
  • are distinct from explicitly-set values

If you explicitly set a value in an INSERT or UPDATE statement, it may collide with past or future automatically-generated values.

Applications interacting with SingleStore tables should not assume that AUTO_INCREMENT values will fit in a 32-bit integer.

A table can have only one AUTO_INCREMENT column. The AUTO_INCREMENT column must be included in an index (not necessarily a PRIMARY or UNIQUE key, a regular key is also allowed).
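
For instance, a minimal sketch (names illustrative) of an AUTO_INCREMENT column covered by a regular key rather than a primary or unique key:

CREATE TABLE event_log (
    id BIGINT AUTO_INCREMENT,
    message VARCHAR(255),
    KEY (id)
);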

If syntax such as CREATE TABLE table_1 SELECT * FROM table_2 is used to create table_1, and table_2 has an AUTO_INCREMENT column, that column will be created as a non-auto-increment column in table_1.

See LAST_INSERT_ID for more information on AUTO_INCREMENT behavior.

If the AUTO_INCREMENT behavior described here does not satisfy your requirements, you can create your own sequence generator using LAST_INSERT_ID. See the sequence generator stored procedure example.
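
For illustration only, the following is a minimal sketch of the classic counter-table pattern, assuming the MySQL-style LAST_INSERT_ID(expr) form is supported; the table and column names are hypothetical and this is not the documented stored procedure example:

CREATE REFERENCE TABLE sequences (
    name VARCHAR(64) PRIMARY KEY,
    next_val BIGINT NOT NULL
);
INSERT INTO sequences VALUES ('order_id', 0);

-- Claim the next value and read it back on the same connection.
UPDATE sequences SET next_val = LAST_INSERT_ID(next_val + 1) WHERE name = 'order_id';
SELECT LAST_INSERT_ID();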

Warning

Restarting an aggregator, such as during upgrades or host machine maintenance, will introduce a large gap between any AUTO_INCREMENT values inserted before the restart and any values inserted after. In the case of reference tables, this same behavior might also occur when a child aggregator is promoted to master aggregator. Depending on how often you restart your aggregators, you could see many jumps in values from a specific aggregator.

These jumps are because each aggregator defines and manages its own range of values to start incrementing from to prevent collisions in a table. With each restart, a new batch of values is used. For sharded tables, the range of AUTO_INCREMENT values increases to the next 1,000,000 after each restart (e.g. 2,430,403 before restart -> 3,000,000 after). For reference tables, the batch size jumps to the next 1,000. And as with previous versions of SingleStore DB, these values are also encoded with the aggregator ID, as described in the next section.

AUTO_INCREMENT in Sharded Tables

On a sharded (distributed) table, AUTO_INCREMENT can only be used on a BIGINT column (as they usually use the entire 64 bits). Each aggregator computes and tracks its own AUTO_INCREMENT values and uses those values when new rows are added to a table. AUTO_INCREMENT values in sharded tables are assigned using the high 14 bits to encode the aggregator ID and the bottom 50 bits for a per-aggregator unique value. The values on each aggregator are usually, but not always, sequential; therefore, inserts on an individual aggregator generate values which are unique and usually sequential. And because each aggregator manages its own AUTO_INCREMENT values, the automatically-generated values from inserts across multiple aggregators are only unique, never sequential.

Here is an example to illustrate how AUTO_INCREMENT values are generated across aggregators in a cluster as new rows are inserted into table tb:

SELECT * FROM tb ORDER BY b;
****
+-------------------+------+------------+
| a                 | b    | c          |
+-------------------+------+------------+
|                 1 |    1 | from MA    |
|                 2 |    2 | from MA    |
|                 3 |    3 | from MA    |
|                 4 |    4 | from MA    |
|                 5 |    5 | from MA    |
| 13510798882111489 |    6 | from CA 96 |
| 13510798882111490 |    7 | from CA 96 |
| 13510798882111491 |    8 | from CA 96 |
| 13510798882111492 |    9 | from CA 96 |
| 13510798882111493 |   10 | from CA 96 |
| 14636698788954113 |   11 | from CA 20 |
| 14636698788954114 |   12 | from CA 20 |
| 14636698788954115 |   13 | from CA 20 |
| 14636698788954116 |   14 | from CA 20 |
| 14636698788954117 |   15 | from CA 20 |
|                 6 |   16 | from MA    |
| 15762598695796737 |   17 | from CA 17 |
| 13510798882111494 |   18 | from CA 96 |
|                 7 |   19 | from MA    |
| 14636698788954118 |   20 | from CA 20 |
+-------------------+------+------------+

As shown in the example above, automatically-generated AUTO_INCREMENT values can differ depending on which aggregator you run the inserts on. Of course, if you ran some inserts on one aggregator and some inserts on another aggregator, you would get different automatically generated values. Also note that automatically-generated values and explicitly-set values can collide in sharded tables.
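
As an illustration of the bit layout described above (not an official decoding utility), the generated values from the example can be split into their aggregator and per-aggregator components with bit operators:

SELECT a, a >> 50 AS aggregator_part, a & ((1 << 50) - 1) AS per_aggregator_part
FROM tb
ORDER BY b;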

AUTO_INCREMENT in Reference Tables

The AUTO_INCREMENT value for a reference table is tracked by the master aggregator. It is guaranteed that the next AUTO_INCREMENT value will always be greater than any value previously seen in this column. These generated values are usually sequential, but not always. Contrary to the behavior for sharded tables, explicitly setting a value in an INSERT or UPDATE statement will not create a collision with future automatically generated values.

The next example shows some queries using AUTO_INCREMENT fields on reference tables.

CREATE REFERENCE TABLE t(id INT AUTO_INCREMENT PRIMARY KEY);
INSERT INTO t values();
INSERT INTO t values(5);
INSERT INTO t values();
SELECT id FROM t ORDER BY id;
****
+----+
| id |
+----+
|  1 |
|  5 |
|  6 |
+----+

UPDATE t SET id=9 WHERE id=5;
INSERT INTO t values();
SELECT id FROM t ORDER BY id;
****
+----+
| id |
+----+
|  1 |
|  6 |
|  9 |
| 10 |
+----+

DELETE FROM t;
INSERT INTO t values();
SELECT id FROM t order by id;
****
+----+
| id |
+----+
| 11 |
+----+

Setting AUTO_INCREMENT Starting Values

It is possible to override the starting AUTO_INCREMENT value for reference tables by setting the AUTO_INCREMENT option on a CREATE TABLE statement.

The following example shows how to set the AUTO_INCREMENT start value during table creation:

CREATE REFERENCE TABLE t (id int AUTO_INCREMENT PRIMARY KEY) AUTO_INCREMENT = 7;

INSERT INTO t VALUES (), ();

SELECT * FROM t;
+----+
| id |
+----+
|  7 |
|  8 |
+----+

This syntax has no effect on sharded tables. For compatibility with external tools, it does not return an error; instead, it raises a warning and performs no operation.

AUTO_INCREMENT During Replication

When replicating data between clusters, the secondary cluster has all the replicated AUTO_INCREMENT values from the primary cluster. When you failover to a secondary cluster, SingleStore DB synchronizes the secondary cluster by looking for the maximum value in the range of AUTO_INCREMENT values on every aggregator.

Examples

CREATE TABLE IF NOT EXISTS my_MemSQL_table(id INT PRIMARY KEY AUTO_INCREMENT, v VARCHAR(10) NOT NULL);

CREATE REFERENCE TABLE pages(
page_id INT PRIMARY KEY AUTO_INCREMENT,
page_url VARCHAR(1000)
);

CREATE TABLE … SELECT

CREATE TABLE ... SELECT (often referred to as CREATE TABLE AS SELECT) creates a table from the results of a SELECT query.

The table will include a column for each column of the SELECT query. You can define indexes, additional columns, and other parts of the table definition in the create_definition. Persisted computed columns can also be specified this way. Some examples:

CREATE TABLE t2 (PRIMARY KEY (a, b)) AS SELECT * FROM t1;
CREATE TABLE t2 (KEY (a, b) USING CLUSTERED COLUMNSTORE) AS SELECT * FROM t1;
CREATE TABLE t2 (a int, b int) AS SELECT c, d FROM t1;
CREATE TABLE t2 (b AS a+1 PERSISTED int) AS SELECT a FROM t1;

Example 1

Extract the time column from an events table to build a times table.

CREATE TABLE events(type VARCHAR(256),time TIMESTAMP);

INSERT INTO events VALUES('WRITE', NOW());

CREATE TABLE times(id INT AUTO_INCREMENT KEY)
SELECT time FROM events;

SELECT * FROM times;
+----+---------------------+
| id | time                |
+----+---------------------+
|  1 | 2016-03-25 15:38:12 |
+----+---------------------+

Example 2

SELECT * FROM courses ORDER BY course_code, section_number;
****
+-------------+----------------+-----------------+
| course_code | section_number | number_students |
+-------------+----------------+-----------------+
| CS-101      |              1 |              20 |
| CS-101      |              2 |              16 |
| CS-101      |              3 |              22 |
| CS-101      |              4 |              25 |
| CS-101      |              5 |              22 |
| CS-150      |              1 |              10 |
| CS-150      |              2 |              16 |
| CS-150      |              3 |              11 |
| CS-150      |              4 |              17 |
| CS-150      |              5 |               9 |
| CS-201      |              1 |              14 |
| CS-201      |              2 |              17 |
| CS-301      |              1 |               7 |
| CS-301      |              2 |              10 |
+-------------+----------------+-----------------+
CREATE TABLE IF NOT EXISTS distinct_courses (PRIMARY KEY(course_code))
	AS SELECT DISTINCT(course_code) FROM courses;
SELECT * FROM distinct_courses ORDER by course_code;
****
+-------------+
| course_code |
+-------------+
| CS-101      |
| CS-150      |
| CS-201      |
| CS-301      |
+-------------+

COMPRESSION = SPARSE and SPARSE behavior

SingleStore DB supports sparse data compression for rowstore tables. Nullable structured columns can use sparse data compression. The data types of these columns include numbers, dates, datetimes, timestamps, times, and varchars.

Columns that use sparse data compression only store non-NULL data values. Example 4 discusses an excellent sparse data compression use case, which also includes the query to retrieve actual memory usage of rowstore tables that use sparse data compression.

Sparse compression has the following limitations:

  • The SPARSE clause cannot be used for key columns. However, if a rowstore table uses sparse data compression using the COMPRESSION = SPARSE clause, then the key columns are stored in-row.
  • The SPARSE clause cannot be used for columns where the non-NULL size of the column is greater than 15 bytes. Refer to the Data Types topic for details.

Examples

Example 1: Creating a Rowstore Table Having All Sparse Columns

The following example demonstrates the COMPRESSION = SPARSE clause. This clause indicates that all columns in the table will use sparse data compression.

CREATE TABLE transaction(
    id BIGINT NOT NULL,
    explanation VARCHAR(70),
    shares DECIMAL(18, 2),
    share_price DECIMAL(18, 2),
    total_amount as shares * share_price PERSISTED DECIMAL(18,2),
    transaction_date DATE,
    dividend_exdate DATE,
    misc_expenses DECIMAL(18, 2),
    country_abbreviation CHAR(6),
    correction_date DATE,  
    settlement_date DATE
    ) COMPRESSION = SPARSE;

Example 2: Creating a Rowstore Table Having Selected Sparse Columns

The following example demonstrates the SPARSE clause. This clause is applied to the columns that will use sparse data compression.

CREATE TABLE transaction(
    id BIGINT NOT NULL,
    explanation VARCHAR(70) SPARSE,
    shares DECIMAL(18, 2) SPARSE,
    share_price DECIMAL(18, 2),
    total_amount as shares * share_price PERSISTED DECIMAL(18,2),
    transaction_date DATE,
    dividend_exdate DATE SPARSE,
    misc_expenses DECIMAL(18, 2) SPARSE,
    country_abbreviation CHAR(6),
    correction_date DATE SPARSE,  
    settlement_date DATE SPARSE
    );

Example 3: Listing Whether Columns use Sparse Compression

The following query lists the columns in the transaction table that was created in Example 2. The query indicates, for each column, whether the column uses sparse compression.

SELECT column_name, is_sparse FROM information_schema.columns
WHERE table_name = 'transaction';
****
+----------------------+-----------+
| column_name          | is_sparse |
+----------------------+-----------+
| id                   | NO        |
| explanation          | YES       |
| shares               | YES       |
| share_price          | NO        |
| total_amount         | NO        |
| transaction_date     | NO        |
| dividend_exdate      | YES       |
| misc_expenses        | YES       |
| country_abbreviation | NO        |
| correction_date      | YES       |
| settlement_date      | YES       |
+----------------------+-----------+

Example 4: An Excellent Sparse Compression Use Case

Sparse rowstore compression works best on a wide table where more than half of the values are NULL. The distribution of the NULL values across the table does not affect the amount of memory used.

For example, consider this wide table t having three-hundred columns:

CREATE TABLE t (
  c1 double,
  c2 double,
  …
  c300 double) COMPRESSION = SPARSE;

In MemSQL 7.1, table t was loaded with 1.05 million rows, where two-thirds of the values are NULL. To retrieve the actual memory usage (in GB) of table t, run the following command:

SELECT table_name, SUM(memory_use) memory_usage FROM information_schema.table_statistics WHERE table_name = 't' GROUP BY table_name;
****
+-------------+--------------+
| table_name  | memory_usage |
+-------------+--------------+
| t           |         1.23 |
+-------------+--------------+

For this wide table with two-thirds NULL values, sparse compression lets you store more than twice the data in the same amount of RAM compared to storing the table without compression.

FULLTEXT behavior

SingleStore DB supports full text search across text columns in a columnstore table using the FULLTEXT index type. A full text index can only be added during CREATE TABLE and only on the text types CHAR, VARCHAR, TEXT, and LONGTEXT. Column data size is limited by the text type used. Size limitations for supported data types are documented in Data Types.

Warning

A FULLTEXT index cannot be dropped or altered after the table is created, and if the table is dropped, the index is deleted automatically.

If you query a column c that is part of a multi-column FULLTEXT index, where the query uses a FULLTEXT MATCH on c, the index on c will be applied.

This differs from a multi-column non-FULLTEXT index, where behavior is as follows: if you query column c that is part of index i, where the query uses an equality filter on c, the index on c will only be applied if c is the leftmost column in i.

Any column that is part of a FULLTEXT index can be queried, even if it is not the leftmost. Searches across FULLTEXT columns are done using the SELECT ... MATCH AGAINST syntax. For more information, see MATCH.

Errors

These are the possible errors you may encounter when using FULLTEXT.

  • Invalid type specified for a column. Error string: Invalid type specified for FULLTEXT
  • FULLTEXT keyword specified more than once in a CREATE TABLE statement. Error string: FULLTEXT may only be specified once in a CREATE TABLE statement
  • The same column specified multiple times. Error string: Column may only be specified once in a FULLTEXT definition
  • A column specified that is not defined on the table. Error string: Column not defined
  • FULLTEXT specified on a rowstore table. Error string: Only column store tables may have a FULLTEXT index

Examples

This example creates a FULLTEXT index for both the title column and the body column. Either column could be queried separately using MATCH <column>, and the index on the column would be applied.

CREATE TABLE articles (
    id INT UNSIGNED,
    year int UNSIGNED,
    title VARCHAR(200),
    body TEXT,
    KEY (id) USING CLUSTERED COLUMNSTORE,
    FULLTEXT (title,body));
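
For example, a query against this table might use the full text index as follows (the search term is illustrative):

SELECT id, title FROM articles WHERE MATCH (title) AGAINST ('distributed database');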

USING HASH behavior

The USING HASH clause creates a hash index in a table. If a columnstore table is being created, the following applies:

  • You can create only single-column hash indexes.
  • You can create at most one unique hash index.
  • You cannot create a unique hash index on a FLOAT, REAL, or DOUBLE column.

When you create a unique hash index on a columnstore table, the shard key can contain only one column, and that column must be the same column on which you create the index.

Example

The following example creates a columnstore table with three hash indexes that each have a one-column key.

CREATE TABLE articles (
    id INT UNSIGNED,
    year int UNSIGNED,
    title VARCHAR(200),
    body TEXT,
    SHARD KEY(title),
    KEY (id) USING CLUSTERED COLUMNSTORE,
    KEY (id) USING HASH,     
    UNIQUE KEY (title) USING HASH,
    KEY (year) USING HASH);

The query SELECT * FROM articles WHERE title = 'Interesting title here'; uses title's hash index since the query uses an equality predicate. The query runs faster than if the hash index had not been used.

The query SELECT * FROM articles WHERE year > 2010; does not use the hash index on year since the query does not use an equality predicate.

See the ColumnstoreFilter in the Query Plan Operations topic for an example EXPLAIN plan for a columnstore query that uses a hash index.

See Highly Selective Joins for an example of a columnstore query with a join that uses a hash index.