PROFILE
Provides detailed resource usage metrics about a query.
Syntax
PROFILE select_statement
SHOW PROFILE [JSON]
Remarks
When you run a PROFILE statement, it executes the associated statement and collects resource usage metrics about it. The PROFILE command works with SELECT, UPDATE, INSERT ... SELECT, and DELETE statements, but not INSERT statements.
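For illustration, each of the following statements can be profiled; the table and column names used here are hypothetical:
PROFILE SELECT * FROM t WHERE a > 10;
PROFILE UPDATE t SET a = a + 1 WHERE a > 10;
PROFILE INSERT INTO t_archive SELECT * FROM t WHERE a > 10;
PROFILE DELETE FROM t WHERE a > 10;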
After the query has executed, run the SHOW PROFILE statement to display the collected metrics. Like the EXPLAIN statement, SHOW PROFILE displays query execution operations as a tree, where the operator at the top is the last executed before returning the result to the client, and operators below are executed before their parents; however, SHOW PROFILE additionally displays resource usage metrics for the operators in the execution tree.
To get more accurate resource usage metrics of a query, run the PROFILE statement twice followed by the SHOW PROFILE statement.
To get profile information for a query in JSON (a machine-readable format), run the PROFILE statement followed by the SHOW PROFILE JSON statement. To get more accurate results, run PROFILE a second time prior to running SHOW PROFILE JSON.
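Putting these remarks together, a typical sequence looks like this (select_statement stands for the query being profiled, as in the syntax above):
PROFILE select_statement;
PROFILE select_statement;    -- running PROFILE a second time yields more accurate metrics
SHOW PROFILE;                -- human-readable execution tree with metrics
SHOW PROFILE JSON;           -- the same metrics in machine-readable JSON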
The following table provides a brief explanation of the metrics that are gathered when executing a statement using PROFILE:
Metric | Description |
---|---|
exec_time | Time spent running each operator. For HashJoin, exec_time is the time spent probing the hash table; build_exec_time (specific to the HashJoin operator) shows the time spent building the hash table |
network_time | Wait time spent while flushing data to the network |
memory_usage | Memory used by the operator, in KB |
network_traffic | Data sent over the network, in KB |
actual_rows | Number of rows processed by the operator. For example, for IndexRangeScan, actual_rows is the number of rows scanned |
est_table_rows | Estimated number of rows in a table. This is an attribute of a table, not of an operator |
est_filtered | Estimated number of rows of a table after applying all single-table filters |
start_time | Time difference between the query start and operator execution, as start_time: hh:mm:ss.SSS. Alternatively, in the start_time: [hh:mm:ss.SSS, hh:mm:ss.SSS] format, it shows the minimum and maximum start times over all threads (partitions) |
end_time | Time when the operator finishes execution |
Some of the above metrics are not gathered for queries where execution performance would be hindered by gathering them.
To get information on the progress of a query and to debug queries with long execution times, run the PROFILE statement in one connection and the SHOW PROFILE PROCESS statement in another connection.
For example, run the following statements in one connection:
PROFILE select_statement;
SHOW PROFILE;
Simultaneously, open another connection and run the SHOW PROFILE PROCESS statement while the PROFILE statement is in progress:
SHOW PROFILE PROCESS process_id;
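A minimal sketch, assuming process_id is the ID of the connection executing the PROFILE statement, as reported by SHOW PROCESSLIST (the ID 42 below is hypothetical):
SHOW PROCESSLIST;            -- note the Id of the connection running PROFILE
SHOW PROFILE PROCESS 42;     -- inspect the in-progress profile for that connection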
You are also able to run SHOW PROFILE even if a query or profile exits with an error. In this case, the output of SHOW PROFILE will include profiling information for the operators executed before the point of failure. This may be helpful while troubleshooting the reason for the failure.
If there is a skew between different partitions, then SHOW PROFILE displays additional details for the skewed metric. A skewed partition is displayed in the output of a SHOW PROFILE query in the following format: [memory_usage: x1 | max: x2 at partition_x3, average: x4, std dev: x5], where:
- The total across all partitions is x1.
- The partition with the largest amount is partition x3 with memory use x2.
- The average memory usage per partition is x4.
- The standard deviation is x5.
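For illustration, substituting hypothetical values into this format, a skewed memory_usage annotation might look like:
[memory_usage: 4096 | max: 2048 at partition_3, average: 512, std dev: 580]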
If all of the data is on one partition, then the total actual_rows across all partitions is the same as the max on a single partition. The output of SHOW PROFILE JSON always displays these additional details, even when there is no skew.
Examples
The PROFILE statement is particularly helpful when evaluating distributed query performance, such as a distributed join. Distributed joins that require broadcasts or repartitions are expensive, and the PROFILE statement can help you understand how such queries are executed so that they can be optimized.
Example 1: Using PROFILE with SHOW PROFILE
In the following example, a distributed join is executed to return data about customers and their orders. For demonstration purposes, we loaded simulated statistics for each table.
Suppose we have run the query:
memsql> SELECT COUNT(*)
FROM orders o JOIN customer c
WHERE o.custkey = c.custkey and c.mktsegment = 'BUILDING';
Now we want to diagnose why it is slower than expected.
It’s best to first execute an EXPLAIN statement to understand the execution plan for a query.
memsql> EXPLAIN SELECT COUNT(*)
FROM orders o JOIN customer c
WHERE o.custkey = c.custkey and c.mktsegment = 'BUILDING';
+------------------------------------------------------------------------------------------------------------------------------------------------+
| EXPLAIN |
+------------------------------------------------------------------------------------------------------------------------------------------------+
| Project [CAST(COALESCE($0,0) AS SIGNED) AS `count(*)`] |
| Aggregate [SUM(remote_0.`count(*)`) AS $0] |
| Gather partitions:all est_rows:1 alias:remote_0 |
| Project [`count(*)`] est_rows:1 est_select_cost:87,206,400 |
| Aggregate [COUNT(*) AS `count(*)`] |
| NestedLoopJoin |
| |---IndexRangeScan orders AS o, KEY orders_fk1 (o_custkey) scan:[o_custkey = r1.c_custkey] est_table_rows:150,000,000 est_filtered:150,000,000 |
| TableScan r1 storage:list stream:no est_table_rows:2,725,200 |
| Broadcast [c.c_custkey] AS r1 distribution:direct est_rows:2,725,200 |
| Filter [c.c_mktsegment = ?] |
| TableScan customer AS c, PRIMARY KEY (c_custkey) est_table_rows:150,000,000 est_filtered:2,725,200 |
+------------------------------------------------------------------------------------------------------------------------------------------------+
From this EXPLAIN statement, it’s clear that a broadcast is required, and that a nested loop join will be performed. Now you can run the PROFILE statement with the same SELECT query to gather resource usage metrics.
memsql> PROFILE SELECT COUNT(*)
FROM orders o JOIN customer c
WHERE o.custkey = c.custkey and c.mktsegment = 'BUILDING';
+----------+
| count(*) |
+----------+
| 421 |
+----------+
1 row in set (0.66 sec)
The PROFILE statement will output the same results as the inner SELECT statement, but it has also gathered resource usage metrics. To see the metrics, you must run the SHOW PROFILE statement.
memsql> SHOW PROFILE;
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| PROFILE |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Project [CAST(COALESCE($0,0) AS SIGNED) AS `count(*)`] actual_rows: 1 exec_time: 0ms start_time: 00:00:00.584 network_traffic: 0.004000 KB |
| Aggregate [SUM(remote_0.`count(*)`) AS $0] actual_rows: 3 exec_time: 0ms start_time: 00:00:00.581 |
| Gather partitions:all est_rows:1 alias:remote_0 actual_rows: 3 exec_time: 0ms start_time: 00:00:00.003 end_time: 00:00:00.583 |
| Project [`count(*)`] est_rows:1 est_select_cost:87,206,400 actual_rows: 3 exec_time: 0ms start_time: [00:00:00.581, 00:00:00.582] network_traffic: 0.012000 KB |
| Aggregate [COUNT(*) AS `count(*)`] actual_rows: 421 exec_time: 0ms start_time: [00:00:00.053, 00:00:00.056] |
| NestedLoopJoin actual_rows: 421 exec_time: 0ms |
| |---IndexRangeScan orders AS o, KEY orders_fk1 (o_custkey) scan:[o_custkey = r1.c_custkey] est_table_rows:150,000,000 est_filtered:150,000,000 actual_rows: 421 exec_time: 532ms start_time: [00:00:00.053, 00:00:00.059] |
| TableScan r1 storage:list stream:no est_table_rows:2,725,200 actual_rows: 122,424 exec_time: 29ms start_time: [00:00:00.004, 00:00:00.005] |
| Broadcast [c.c_custkey] AS r1 distribution:direct est_rows:27,252 actual_rows: 40,808 exec_time: 19ms start_time: 00:00:00.002 network_traffic: 334.326996 KB |
| Filter [c.c_mktsegment = ?] actual_rows: 40,808 exec_time: 97ms start_time: 00:00:00.002 |
| TableScan customer AS c, PRIMARY KEY (c_custkey) est_table_rows:150,000 est_filtered:27,252 actual_rows: 200,000 exec_time: 68ms start_time: 00:00:00.002 |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
After the SHOW PROFILE statement has been executed, you can see from the example above that actual rows, network traffic, and execution time have been appended to the end of the execution operator lines.
We can first identify that the most expensive operator is IndexRangeScan orders, which takes 532ms. Then we can see that TableScan tpch.orders has many fewer rows now than are recorded in statistics, which suggests that our statistics are not up-to-date. Similarly, we can identify that the most network-consuming operator is Broadcast [c.custkey], and that there is no memory-consuming operator (every operator uses only a constant amount of memory).
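Since the estimates disagree with the actual row counts, a reasonable next step is to refresh the statistics. A minimal sketch, using the two tables from this example:
ANALYZE TABLE orders;
ANALYZE TABLE customer;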
After running ANALYZE TABLE, we can rerun the PROFILE query:
memsql> PROFILE SELECT COUNT(*)
FROM orders o JOIN customer c
WHERE o.custkey = c.custkey and c.mktsegment = 'BUILDING';
+----------+
| count(*) |
+----------+
| 421 |
+----------+
1 row in set (0.26 sec)
Finally, we can output the profile information for it:
memsql> SHOW PROFILE;
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| PROFILE |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Project [CAST(COALESCE($0,0) AS SIGNED) AS `count(*)`] actual_rows: 1 exec_time: 0ms start_time: 00:00:00.237 network_traffic: 0.004000 KB |
| Aggregate [SUM(remote_0.`count(*)`) AS $0] actual_rows: 3 exec_time: 0ms start_time: 00:00:00.223 |
| Gather partitions:all est_rows:1 alias:remote_0 actual_rows: 3 exec_time: 0ms start_time: 00:00:00.004 end_time: 00:00:00.236 |
| Project [`count(*)`] est_rows:1 est_select_cost:25,006 actual_rows: 3 exec_time: 0ms start_time: [00:00:00.221, 00:00:00.235] network_traffic: 0.012000 KB |
| Aggregate [COUNT(*) AS `count(*)`] actual_rows: 421 exec_time: 0ms start_time: [00:00:00.140, 00:00:00.141] |
| HashJoin [r1.o_custkey = customer.c_custkey] actual_rows: 421 exec_time: 11ms start_time: [00:00:00.140, 00:00:00.141] memory_usage: 655.359985 KB |
| |---Broadcast [orders.o_custkey] AS r1 distribution:direct est_rows:12,503 actual_rows: 12,503 exec_time: 11ms start_time: 00:00:00.002 network_traffic: 78.333000 KB |
| | TableScan tpch.orders, PRIMARY KEY (o_orderkey) est_table_rows:12,503 est_filtered:12,503 actual_rows: 12,503 exec_time: 13ms start_time: 00:00:00.002 |
| Filter [customer.c_mktsegment = ?] actual_rows: 40,808 exec_time: 52ms start_time: [00:00:00.140, 00:00:00.141] |
| TableScan tpch.customer, PRIMARY KEY (c_custkey) est_table_rows:200,000 est_filtered:40,832 actual_rows: 200,000 exec_time: 41ms start_time: [00:00:00.140, 00:00:00.141] |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
The new plan is faster in execution time, but notice that we introduced a memory-consuming operator, HashJoin, which allocates 655.359985 KB of memory at run time in the example. That could be something to watch out for if we add more data, since it might limit the total number of queries that can run concurrently.
Example 2: Using PROFILE with SHOW PROFILE JSON
To view a query’s profile in JSON format, first run the PROFILE statement:
> PROFILE SELECT name FROM company WHERE name LIKE "C%";
+------------------------------+
| name |
+------------------------------+
| Cutera, Inc. |
| CVB Financial Corporation |
| CVD Equipment Corporation |
| Cyanotech Corporation |
| ... |
| Curis, Inc. |
+------------------------------+
348 rows in set (404 ms)
Now run the SHOW PROFILE JSON statement to view the resource usage metrics in JSON format:
> SHOW PROFILE JSON;
+------------------------------+
| PROFILE |
+------------------------------+
| {
"profile":[
{
"executor":"Gather",
"partitions":"all",
"query":"SELECT `company`.`name` AS `name` FROM `trades_0`.`company`
as `company` WHERE (`company`.`name` LIKE 'C%')
OPTION(NO_QUERY_REWRITE=1, INTERPRETER_MODE=LLVM)",
"alias":"remote_0",
"actual_row_count":{ "value":348, "avg":0.000000, "stddev":0.000000,
"max":0, "maxPartition":0 },
"actual_total_time":{ "value":0 },
"start_time":{ "value":0 },
"end_time":{ "value":5 },
...
}
}
1 row in set (290 ms)
If you had run PROFILE again prior to running SHOW PROFILE JSON, the results would have been more accurate.
Related Topics
- EXPLAIN for definitions of each operator.