PROFILE

Provides detailed resource usage metrics about a query.

PROFILE plans can also be visualized via a graphical interface using SingleStore DB Studio Visual Explain.

Syntax

PROFILE select_statement
SHOW PROFILE [JSON] [PLAN plan_id] [INTO OUTFILE file_name]

Remarks

When you run a PROFILE statement, it executes the associated statement and collects resource usage metrics for it. The PROFILE command works with SELECT, UPDATE, INSERT ... SELECT, and DELETE statements, but not with standalone INSERT statements.
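
For example, each of the supported statement types can be profiled by prefixing it with PROFILE. A minimal sketch, using a hypothetical table t (its columns and the copy table t_copy are placeholders, not from this documentation):

PROFILE SELECT * FROM t;
PROFILE UPDATE t SET qty = qty + 1 WHERE id = 1;
PROFILE DELETE FROM t WHERE qty = 0;
PROFILE INSERT INTO t_copy SELECT * FROM t;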

After the query has executed, run the SHOW PROFILE statement to display the collected metrics. Like the EXPLAIN statement, SHOW PROFILE displays query execution operations as a tree: the operator at the top is the last one executed before the result is returned to the client, and operators below are executed before their parents. In addition, SHOW PROFILE displays resource usage metrics for each operator in the execution tree.

To get more accurate resource usage metrics for a query, run the PROFILE statement twice and then run the SHOW PROFILE statement; the first run includes compilation work, so the second run better reflects execution alone.
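
For example, with select_statement standing in for the query being profiled:

PROFILE select_statement;
PROFILE select_statement;
SHOW PROFILE;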

The SHOW PROFILE JSON command returns the profile information of a query in JSON (machine-readable) format. To get this information, run the PROFILE command followed by the SHOW PROFILE JSON command. This information includes detailed statistics about some operators and statistics on compilation time (displayed under compile_time_stats). See Example 1 for details.

Run the SHOW PROFILE PLAN command with a specific plan_id to display profile information (optionally in JSON format) for any plan in the cache that was run in PROFILE mode. For information related to plan_id, refer to the SHOW PLAN topic.

-- To display profile information for a specific plan
SHOW PROFILE PLAN 4;

-- To display the profile information for a specific plan in JSON format
SHOW PROFILE JSON PLAN 4;

To write the result of a profile query to a file, use the INTO OUTFILE file_name clause. It creates a new file at the specified location and writes the query profile details to the file. Enter the absolute path to the file as the file_name parameter. The output file can be a JSON or a text file.

Note: If the file already exists, then MemSQL will throw an error.

To write the profile information of a query in JSON format to a file:

SHOW PROFILE JSON INTO OUTFILE '/tmp/testprof.json';

To write the profile information of a query to a text file:

SHOW PROFILE INTO OUTFILE '/tmp/testprof.txt';

To write the profile statistics of a specific plan ID to a file:

SHOW PROFILE PLAN 7 INTO OUTFILE '/tmp/profile_plan7.txt';

To write the profile statistics of a specific plan ID in JSON format to a file:

SHOW PROFILE JSON PLAN 7 INTO OUTFILE '/tmp/profjson_plan7.json';

The SHOW PLAN [JSON] plan_id command displays the EXPLAIN plan of a query. Refer to the SHOW PLAN topic for details.

-- To display the plan statistics of a plan with ID 4
SHOW PLAN 4;

-- To display the plan statistics of a plan with ID 4 in JSON format
SHOW PLAN JSON 4;

To get information on the progress of a query and to debug queries with long execution times, run the PROFILE statement in one connection and the SHOW PROFILE PROCESS statement in another connection. For example, run the following statements in one connection:

PROFILE select_statement;
SHOW PROFILE;

Simultaneously, open another connection and run the SHOW PROFILE PROCESS statement while the PROFILE statement is in progress:

SHOW PROFILE PROCESS process_id;

You can get the process ID from the SHOW PROCESSLIST command or the CONNECTION_ID() function as follows:

SHOW PROCESSLIST;
SELECT CONNECTION_ID();

If there is skew between different partitions, then SHOW PROFILE displays additional details for the skewed metric. A skewed metric is displayed in the output of a SHOW PROFILE query in the following format: [memory_usage: x1 | max:x2 at partition_x3, average: x4, std dev: x5], where:

  • The total across all partitions is x1.
  • The partition with the largest amount is partition x3 with memory use x2.
  • The average memory usage per partition is x4.
  • The standard deviation is x5.

If all data is on one partition, then actual_rows reports the same count for the total across all partitions as for the max on a single partition. The output of SHOW PROFILE JSON always displays these additional details, even when there is no skew.
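
As a purely hypothetical illustration of the skew format described above (the numbers are invented, not actual output), a line such as

[memory_usage: 8192 | max: 4096 at partition_2, average: 1024, std dev: 1500]

would mean that the operator used 8,192 KB in total across all partitions, that partition 2 used the most memory (4,096 KB), that average memory usage was 1,024 KB per partition, and that the standard deviation was 1,500 KB.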

PROFILE Metrics

The following table provides a brief explanation of the metrics that are gathered when executing a statement using PROFILE.

Metric | Description
exec_time | Time spent running each operator. For HashJoin, exec_time is the time spent probing the hash table; build_exec_time (specific to the HashJoin operator) shows the time spent building the hash table.
network_time | Wait time spent while flushing data to the network.
memory_usage | Memory used by the operator, in KB.
network_traffic | Data sent over the network, in KB.
actual_rows | Number of rows processed by the operator. For example, for IndexRangeScan, actual_rows is the number of rows scanned.
est_table_rows | Estimated number of rows in a table. This is an attribute of a table, not of an operator.
est_filtered | Estimated number of rows in a table after applying all single-table filters.
start_time | Time between the start of the query and the start of the operator's execution, shown as start_time: hh:mm:ss.SSS. Alternatively, in the start_time: [hh:mm:ss.SSS, hh:mm:ss.SSS] format, it shows the minimum and maximum start times over all threads (partitions).
end_time | Time when the operator finishes execution.

Some of these metrics are not gathered for queries where gathering them would noticeably hinder execution performance.

PROFILE Metrics for Columnstore Queries that Use Filters

The following table provides a brief explanation of the columnstore metrics that are gathered when running PROFILE with a columnstore query that uses filters. These metrics appear after running SHOW PROFILE JSON, following PROFILE.

Metric | Description
columnstore_filter_execution_type | The type of filter used (full row scan, hash index, or bloom).
columnstore_filter_total_rows_in | The number of rows that the filter predicate was evaluated on.
columnstore_filter_total_rows_out | The number of rows that passed the filter predicate.
columnstore_filter_encoded_rows_in | The number of rows that the filter predicate was evaluated on using operations on encoded data.
columnstore_filter_encoded_rows_out | The number of rows that passed the filter predicate using operations on encoded data.
columnstore_filter_avg_filters_per_row | The average number of filtering operations performed on each row. MemSQL combines filter operations where possible; outside of those cases, if two columns each had a filter applied to them on every row, columnstore_filter_avg_filters_per_row would be 2.0.
columnstore_filter_avg_index_filters_per_row | The same as columnstore_filter_avg_filters_per_row, but counting only filtering on indices rather than all filters as a group.
columnstore_filter_avg_bloom_filters_per_row | The same as columnstore_filter_avg_filters_per_row, but counting only rows filtered out through the use of a bloom filter rather than all filters as a group.

Whether or not a filter predicate was evaluated using operations on encoded data depends on the filter predicate itself as well as the encodings of the involved columns. If columnstore_filter_encoded_rows_in is lower than columnstore_filter_total_rows_in, either the filter predicate does not support operations on encoded data or there were some segments with encodings that do not support operations on encoded data.

Some filter predicates are children of other filter predicates. This is signaled by indentation levels in the text of the condition attribute. During filter evaluation, either the parent filter predicate is used, or its children are evaluated instead. The columnstore_filter_total_rows_in metric can be inspected to find which method of evaluation is used in what proportion.

Info

Up-to-date PROFILE metrics for columnstore filters are available after the columnstore background flusher has run. The flusher runs automatically on regular intervals. You can also run it manually via OPTIMIZE TABLE … FLUSH.
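
A minimal sketch of running the flusher manually, using the products table from Example 4 as a stand-in for your own table:

OPTIMIZE TABLE products FLUSH;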

See Example 4 for resource usage metrics for a columnstore query that uses filters.

Examples

Example 1

The SHOW PROFILE JSON statement displays the compilation and optimizer statistics. This helps to troubleshoot slow query performance and compare the time spent in running versus compiling the query.

For example, run the following PROFILE query:

PROFILE SELECT * from Emp;
****
+-----------+---------+
| Name      | City    |
+-----------+---------+
| Sam       | Chicago |
| Jack      | Norway  |
| Tom       | Chicago |
| Neil      | Chicago |
+-----------+---------+
4 rows in set (0.29 sec)

Now run the SHOW PROFILE JSON command. The following output is filtered to display only the compilation and optimization details.

SHOW PROFILE JSON;
****
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
"query_info":{
       "query_text":"profile select * from Emp",
       "total_runtime_ms":"291",
       "text_profile":"Gather partitions:all alias:remote_0 actual_rows: 9 exec_time: 0ms start_time: 00:00:00.001 end_time: 00:00:00.143\nProject [Emp.Name, Emp.City] actual_rows: 9 exec_time: 0ms start_time: 00:00:00.139 network_traffic: 0.100000 KB network_time: 1ms\nTableScan trades.Emp actual_rows: 9 exec_time: 0ms start_time: 00:00:00.139\nCompile Total Time: 147ms\n",
       "compile_time_stats":{
           "mbc_emission":"0",
           "create_mbc_context":"4",
           "expanding_views":"0",
           "optimizer_query_rewrites":"1",
           "optimizer_stats_analyze":"0",
           "optimizer_stats_other":"0",
           "optimizer_stats_sampling":"0",
           "optimizer_setting_up_subselect":"0",
           "optimizer_distributed_optimizations":"0",
           "optimizer_enumerate_temporary_tables":"0",
           "optimizer_singlebox_optimizations_agg":"0",
           "optimizer_rewrite_estimate_cost":"0",
           "optimizer_stats_autostats":"0",
           "ppc_check":"0",
           "generating_query_mpl":"1",
           "generating_user_function_mpl":"0",
           "module_cleaning":"0",
           "dump_clean_bitcode":"0",
           "llvm_optimization":"0",
           "dump_optimized_bitcode":"0",
           "machine_code_generation":"0",
           "llvm_printing":"0",
           "asm_printing":"0",
           "symbol_resolution":"0",
           "unknown":"138",
           "total":"147"
       }
   }
} |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-
1 row in set (0.00 sec)

If you had run PROFILE again prior to running SHOW PROFILE JSON, the results would have been more accurate.

Note: The PROFILE command can be used for long-running queries to analyze the query plan and search for unexpectedly expensive operators. Output the result in JSON format using the SHOW PROFILE JSON command to analyze the query plan visually in the SingleStore DB Studio Visual Explain UI.

Example 2

The INTO OUTFILE file_name clause is useful for writing the profile information of a query or plan to a file.

For example, run the following PROFILE query:

PROFILE SELECT * from Emp;

Now, run the following command to write the JSON profile information to the EmpProfile.json file in the /tmp directory.

SHOW PROFILE JSON INTO OUTFILE '/tmp/EmpProfile.json';

To view the contents of the EmpProfile.json file, run the following command at the shell prompt:

cat /tmp/EmpProfile.json

Example 3

The PROFILE statement is particularly helpful when evaluating distributed query performance, such as a distributed join. Distributed joins that require broadcasts or repartitions are expensive, and the PROFILE statement can help you understand how such queries are executed so that they can be optimized.

In the following example, a distributed join is executed to return data about customers and their orders. For demonstration purposes, we loaded simulated statistics for each table.

Suppose we have run the query:

SELECT COUNT(*)
FROM orders o JOIN customer c
WHERE o.custkey = c.custkey and c.mktsegment = 'BUILDING';

Now we want to diagnose why it is slower than expected. It’s best to first execute an EXPLAIN statement to understand the execution plan for a query.

EXPLAIN SELECT COUNT(*) FROM orders o JOIN customer c WHERE o.custkey = c.custkey and c.mktsegment = 'BUILDING';
+------------------------------------------------------------------------------------------------------------------------------------------------+
| EXPLAIN                                                                                                                                        |
+------------------------------------------------------------------------------------------------------------------------------------------------+
| Project [CAST(COALESCE($0,0) AS SIGNED) AS `count(*)`]                                                                                         |
| Aggregate [SUM(remote_0.`count(*)`) AS $0]                                                                                                     |
| Gather partitions:all est_rows:1 alias:remote_0                                                                                                |
| Project [`count(*)`] est_rows:1 est_select_cost:87,206,400                                                                                     |
| Aggregate [COUNT(*) AS `count(*)`]                                                                                                             |
| NestedLoopJoin                                                                                                                                 |
| |---IndexRangeScan orders AS o, KEY orders_fk1 (o_custkey) scan:[o_custkey = r1.c_custkey] est_table_rows:150,000,000 est_filtered:150,000,000 |
| TableScan r1 storage:list stream:no est_table_rows:2,725,200                                                                                   |
| Broadcast [c.c_custkey] AS r1 distribution:direct est_rows:2,725,200                                                                           |
| Filter [c.c_mktsegment = ?]                                                                                                                    |
| TableScan customer AS c, PRIMARY KEY (c_custkey) est_table_rows:150,000,000 est_filtered:2,725,200                                             |
+------------------------------------------------------------------------------------------------------------------------------------------------+

From this EXPLAIN statement, it’s clear that a broadcast is required, and that a nested loop join will be performed. Now you can run the PROFILE statement with the same SELECT query to gather resource usage metrics.

PROFILE SELECT COUNT(*) FROM orders o JOIN customer c WHERE o.custkey = c.custkey and c.mktsegment = 'BUILDING';
+----------+
| count(*) |
+----------+
| 421      |
+----------+
1 row in set (0.66 sec)

The PROFILE statement will output the same results as the inner SELECT statement, but it has also gathered resource usage metrics. To see the metrics, you must run the SHOW PROFILE statement.

SHOW PROFILE;
****
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| PROFILE                                                                                                                                                                                                                   |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Project [CAST(COALESCE($0,0) AS SIGNED) AS `count(*)`] actual_rows: 1 exec_time: 0ms start_time: 00:00:00.584 network_traffic: 0.004000 KB                                                                                |
| Aggregate [SUM(remote_0.`count(*)`) AS $0] actual_rows: 3 exec_time: 0ms start_time: 00:00:00.581                                                                                                                         |
| Gather partitions:all est_rows:1 alias:remote_0 actual_rows: 3 exec_time: 0ms start_time: 00:00:00.003 end_time: 00:00:00.583                                                                                             |
| Project [`count(*)`] est_rows:1 est_select_cost:87,206,400 actual_rows: 3 exec_time: 0ms start_time: [00:00:00.581, 00:00:00.582] network_traffic: 0.012000 KB                                                            |
| Aggregate [COUNT(*) AS `count(*)`] actual_rows: 421 exec_time: 0ms start_time: [00:00:00.053, 00:00:00.056]                                                                                                               |
| NestedLoopJoin actual_rows: 421 exec_time: 0ms                                                                                                                                                                            |
| |---IndexRangeScan orders AS o, KEY orders_fk1 (o_custkey) scan:[o_custkey = r1.c_custkey] est_table_rows:150,000,000 est_filtered:150,000,000 actual_rows: 421 exec_time: 532ms start_time: [00:00:00.053, 00:00:00.059] |
| TableScan r1 storage:list stream:no est_table_rows:2,725,200 actual_rows: 122,424 exec_time: 29ms start_time: [00:00:00.004, 00:00:00.005]                                                                                |
| Broadcast [c.c_custkey] AS r1 distribution:direct est_rows:27,252 actual_rows: 40,808 exec_time: 19ms start_time: 00:00:00.002 network_traffic: 334.326996 KB                                                             |
| Filter [c.c_mktsegment = ?] actual_rows: 40,808 exec_time: 97ms start_time: 00:00:00.002                                                                                                                                  |
| TableScan customer AS c, PRIMARY KEY (c_custkey) est_table_rows:150,000 est_filtered:27,252 actual_rows: 200,000 exec_time: 68ms start_time: 00:00:00.002                                                                 |
| Compile Total Time: 238ms
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

After the SHOW PROFILE statement has been executed, you can see from the example above that actual rows, network traffic, and execution time have been appended to the end of the execution operator lines. We can first identify that the most expensive operator is IndexRangeScan orders, which takes 532ms. Then we can see that TableScan tpch.orders has many fewer rows than are recorded in statistics, which suggests that our statistics are not up to date.

Similarly, we can identify that the operator consuming the most network bandwidth is Broadcast [c.c_custkey], and that there is no memory-consuming operator (every operator uses only a constant amount of memory).
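
Statistics can be refreshed with ANALYZE TABLE before profiling again. A minimal sketch, assuming the orders and customer tables used in this example:

ANALYZE TABLE orders;
ANALYZE TABLE customer;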

After running ANALYZE TABLE, we can rerun the PROFILE query:

PROFILE SELECT COUNT(*) FROM orders o JOIN customer c where o.custkey = c.custkey and c.mktsegment = 'BUILDING';
****
+----------+
| count(*) |
+----------+
| 421      |
+----------+
1 row in set (0.26 sec)

Finally, we can output the profile information for it:

SHOW PROFILE;
****
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| PROFILE                                                                                                                                                                   |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Project [CAST(COALESCE($0,0) AS SIGNED) AS `count(*)`] actual_rows: 1 exec_time: 0ms start_time: 00:00:00.237 network_traffic: 0.004000 KB                                |
| Aggregate [SUM(remote_0.`count(*)`) AS $0] actual_rows: 3 exec_time: 0ms start_time: 00:00:00.223                                                                         |
| Gather partitions:all est_rows:1 alias:remote_0 actual_rows: 3 exec_time: 0ms start_time: 00:00:00.004 end_time: 00:00:00.236                                             |
| Project [`count(*)`] est_rows:1 est_select_cost:25,006 actual_rows: 3 exec_time: 0ms start_time: [00:00:00.221, 00:00:00.235] network_traffic: 0.012000 KB                |
| Aggregate [COUNT(*) AS `count(*)`] actual_rows: 421 exec_time: 0ms start_time: [00:00:00.140, 00:00:00.141]                                                               |
| HashJoin [r1.o_custkey = customer.c_custkey] actual_rows: 421 exec_time: 11ms start_time: [00:00:00.140, 00:00:00.141] memory_usage: 655.359985 KB                        |
| |---Broadcast [orders.o_custkey] AS r1 distribution:direct est_rows:12,503 actual_rows: 12,503 exec_time: 11ms start_time: 00:00:00.002 network_traffic: 78.333000 KB     |
| |   TableScan tpch.orders, PRIMARY KEY (o_orderkey) est_table_rows:12,503 est_filtered:12,503 actual_rows: 12,503 exec_time: 13ms start_time: 00:00:00.002                |
| Filter [customer.c_mktsegment = ?] actual_rows: 40,808 exec_time: 52ms start_time: [00:00:00.140, 00:00:00.141]                                                           |
| TableScan tpch.customer, PRIMARY KEY (c_custkey) est_table_rows:200,000 est_filtered:40,832 actual_rows: 200,000 exec_time: 41ms start_time: [00:00:00.140, 00:00:00.141] |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

The new plan is faster in execution time, but notice that it introduces a memory-consuming operator, HashJoin, which allocates 655.359985 KB of memory at run time in this example. That could be something to watch out for as we add more data, since it might limit the total number of queries that can run concurrently.

As another example, run a PROFILE query on the company table:

PROFILE SELECT name FROM company WHERE name LIKE "C%";
****
+------------------------------+
| name                         |
+------------------------------+
| Cutera, Inc.                 |
| CVB Financial Corporation    |
| CVD Equipment Corporation    |
| Cyanotech Corporation        |
| ...                          |
| Curis, Inc.                  |
+------------------------------+
348 rows in set (404 ms)

Now run the SHOW PROFILE JSON query to view the resource usage metrics in JSON format:

SHOW PROFILE JSON;
****
+------------------------------+
| PROFILE                      |
+------------------------------+
| {
    "profile":[
    {
            "executor":"Gather",
            "partitions":"all",
            "query":"SELECT `company`.`name` AS `name` FROM `trades_0`.`company`
             as `company`  WHERE (`company`.`name` LIKE 'C%')
             OPTION(NO_QUERY_REWRITE=1, INTERPRETER_MODE=LLVM)",
            "alias":"remote_0",
            "actual_row_count":{ "value":348, "avg":0.000000, "stddev":0.000000,
            "max":0, "maxPartition":0 },
            "actual_total_time":{ "value":0 },
            "start_time":{ "value":0 },
            "end_time":{ "value":5 },
            ...
            }
}
1 row in set (290 ms)

The SHOW PLAN command displays the plan information for the specified plan ID:

SHOW PLAN 5;
****
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| PLAN                                                                                                                                                                    |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Gather partitions:all alias:remote_0                                                                                                                                    |
| Project [company.symbol, company.name, company.last_sale, company.market_cap, company.IPO_year, company.sector, company.industry, company.summary_quote, company.extra] |
| TableScan trades.company                                                                                                                                                |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
3 rows in set (0.00 sec)

Next, get the JSON plan information by running the following command:

SHOW PLAN JSON 5;

This command produces a large amount of text as output.

Example 4

The following example provides resource usage metrics for a columnstore query that uses filters.

Run this statement:

PROFILE SELECT * FROM products WHERE Color = 'Blue' and Qty = '5';

Then run SHOW PROFILE JSON.

Selected output:

...
"inputs":[
  {
    "executor":"ColumnStoreFilter",
    "keyId":4294968019,
    "condition":[
      "products.Color = ? AND products.Qty = ?"
    ],
    "columnstore_filter_predicates":[
      {
        "executor":"ColumnStoreFilterPredicate",
        "keyId":4294968020,
        "condition":[
          "products.Color = ? AND products.Qty = ?"
        ],
        "columnstore_filter_callback_ordinal":"2",
        "columnstore_filter_execution_type":"regular",
        "columnstore_filter_total_rows_in":{ "value":0, "avg":0.000000, "stddev":0.000000, "max":0, "maxPartition":0 },
        "columnstore_filter_total_rows_out":{ "value":0, "avg":0.000000, "stddev":0.000000, "max":0, "maxPartition":0 },
        "columnstore_filter_encoded_rows_in":{ "value":0, "avg":0.000000, "stddev":0.000000, "max":0, "maxPartition":0 },
        "columnstore_filter_encoded_rows_out":{ "value":0, "avg":0.000000, "stddev":0.000000, "max":0, "maxPartition":0 },
        "inputs":[]
      },
      {
        "executor":"ColumnStoreFilterPredicate",
        "keyId":4294968020,
        "condition":[
          "        products.Color = ?"
        ],
        "columnstore_filter_callback_ordinal":"3",
        "columnstore_filter_execution_type":"regular",
        "columnstore_filter_total_rows_in":{ "value":6, "avg":0.750000, "stddev":1.984313, "max":6, "maxPartition":2 },
        "columnstore_filter_total_rows_out":{ "value":4, "avg":0.500000, "stddev":1.322876, "max":4, "maxPartition":2 },
        "columnstore_filter_encoded_rows_in":{ "value":0, "avg":0.000000, "stddev":0.000000, "max":0, "maxPartition":0 },
        "columnstore_filter_encoded_rows_out":{ "value":0, "avg":0.000000, "stddev":0.000000, "max":0, "maxPartition":0 },
        "inputs":[]
      },
      {
        "executor":"ColumnStoreFilterPredicate",
        "keyId":4294968020,
        "condition":[
          "    AND products.Qty = ?"
        ],
        "columnstore_filter_callback_ordinal":"5",
        "columnstore_filter_execution_type":"regular",
        "columnstore_filter_total_rows_in":{ "value":4, "avg":0.500000, "stddev":1.322876, "max":4, "maxPartition":2 },
        "columnstore_filter_total_rows_out":{ "value":4, "avg":0.500000, "stddev":1.322876, "max":4, "maxPartition":2 },
        "columnstore_filter_encoded_rows_in":{ "value":4, "avg":0.500000, "stddev":1.322876, "max":4, "maxPartition":2 },
        "columnstore_filter_encoded_rows_out":{ "value":4, "avg":0.500000, "stddev":1.322876, "max":4, "maxPartition":2 },
        "inputs":[]
      }
    ],
    "subselects":[],
    "actual_row_count":{ "value":4, "avg":0.500000, "stddev":1.322876, "max":4, "maxPartition":2 },
    "actual_total_time":{ "value":0, "avg":0.000000, "stddev":0.000000, "max":0, "maxPartition":0 },
    "start_time":{ "value":0, "avg":0.000000, "stddev":0.000000, "max":0, "maxPartition":2 },
    "columnstore_filter_total_rows_in":{ "value":6, "avg":0.750000, "stddev":1.984313, "max":6, "maxPartition":2 },
    "columnstore_filter_avg_filters_per_row":{ "value":0, "avg":1.666667, "stddev":0.000000, "max":0, "maxPartition":0 },
    "columnstore_filter_avg_index_filters_per_row":{ "value":0, "avg":0.000000, "stddev":0.000000, "max":0, "maxPartition":0 },
    "columnstore_filter_avg_bloom_filters_per_row":{ "value":0, "avg":0.000000, "stddev":0.000000, "max":0, "maxPartition":0 },
...
