# SegmentMetadata queries
Apache Druid supports two query languages: Druid SQL and native queries. This document describes a query type that is only available in the native language. However, Druid SQL contains similar functionality in its metadata tables.
Segment metadata queries return per-segment information about:
- Number of rows stored inside the segment
- Interval the segment covers
- Estimated total segment byte size as if it were stored in a 'flat format' (e.g. a CSV file)
- Segment id
- Is the segment rolled up
- Detailed per-column information such as:
  - type
  - cardinality
  - min/max values
  - presence of null values
  - estimated 'flat format' byte size
```json
{
  "queryType": "segmentMetadata",
  "dataSource": "sample_datasource",
  "intervals": ["2013-01-01/2014-01-01"]
}
```
There are several main parts to a segment metadata query:
|property|description|required?|
|---|---|---|
|queryType|This String should always be "segmentMetadata"; this is the first thing Apache Druid looks at to figure out how to interpret the query.|yes|
|dataSource|A String or Object defining the data source to query, very similar to a table in a relational database. See DataSource for more information.|yes|
|intervals|A JSON Object representing ISO-8601 Intervals. This defines the time ranges to run the query over.|no|
|toInclude|A JSON Object representing what columns should be included in the result. Defaults to "all".|no|
|merge|Merge all individual segment metadata results into a single result.|no|
|context|See Context.|no|
|analysisTypes|A list of Strings specifying what column properties (e.g. cardinality, size) should be calculated and returned in the result. Defaults to `["cardinality", "interval", "minmax"]`, but can be overridden using the segment metadata query config. See the analysisTypes section for more details.|no|
|aggregatorMergeStrategy|The strategy Druid uses to merge aggregators across segments. If the aggregators analysis type is enabled, aggregatorMergeStrategy defaults to `strict`. Possible values include `strict`, `lenient`, `earliest`, and `latest`. See aggregatorMergeStrategy for details.|no|
|lenientAggregatorMerge|Deprecated. Use the aggregatorMergeStrategy property instead. If true, and if the aggregators analysis type is enabled, Druid merges aggregators leniently.|no|
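As a sketch of how these properties combine, the following Python snippet assembles a query body that merges per-segment results and spells out the default analysis types. The property names come from the table above; the helper function itself is illustrative and not part of any official Druid client library.

```python
import json

def segment_metadata_query(datasource, intervals=None, merge=False,
                           analysis_types=("cardinality", "interval", "minmax")):
    """Build a segmentMetadata query body as a Python dict.

    Only queryType and dataSource are required; the optional
    properties are added when supplied, mirroring the table above.
    """
    query = {"queryType": "segmentMetadata", "dataSource": datasource}
    if intervals is not None:
        query["intervals"] = list(intervals)
    if merge:
        query["merge"] = True  # collapse all per-segment results into one
    query["analysisTypes"] = list(analysis_types)
    return query

q = segment_metadata_query("sample_datasource",
                           intervals=["2013-01-01/2014-01-01"],
                           merge=True)
print(json.dumps(q, indent=2))
```

The resulting JSON can then be POSTed to a Broker like any other native query.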
The format of the result is:
```json
[ {
  "id" : "some_id",
  "intervals" : [ "2013-05-13T00:00:00.000Z/2013-05-14T00:00:00.000Z" ],
  "columns" : {
    "__time" : { "type" : "LONG", "hasMultipleValues" : false, "hasNulls" : false, "size" : 407240380, "cardinality" : null, "errorMessage" : null },
    "dim1" : { "type" : "STRING", "hasMultipleValues" : false, "hasNulls" : false, "size" : 100000, "cardinality" : 1944, "errorMessage" : null },
    "dim2" : { "type" : "STRING", "hasMultipleValues" : true, "hasNulls" : true, "size" : 100000, "cardinality" : 1504, "errorMessage" : null },
    "metric1" : { "type" : "FLOAT", "hasMultipleValues" : false, "hasNulls" : false, "size" : 100000, "cardinality" : null, "errorMessage" : null }
  },
  "aggregators" : {
    "metric1" : { "type" : "longSum", "name" : "metric1", "fieldName" : "metric1" }
  },
  "queryGranularity" : {
    "type" : "none"
  },
  "size" : 300000,
  "numRows" : 5000000
} ]
```
All columns contain a `typeSignature` that Druid uses to represent the column type information internally. The `typeSignature` is typically the same value used to identify the JSON type information at query or ingest time. One of: `STRING`, `FLOAT`, `DOUBLE`, `LONG`, or `COMPLEX<typeName>`, e.g. `COMPLEX<hyperUnique>`.
Columns also have a legacy `type` name. For some column types, the value may match the `typeSignature` (`STRING`, `FLOAT`, `DOUBLE`, or `LONG`). For `COMPLEX` columns, the `type` only contains the name of the underlying complex type such as `hyperUnique`.

New applications should use `typeSignature`, not `type`.
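The relationship between the two fields can be sketched as follows; the helper is illustrative only and not part of Druid itself:

```python
def legacy_type(type_signature: str) -> str:
    """Derive the legacy `type` name from a `typeSignature`.

    Simple types (STRING, FLOAT, DOUBLE, LONG) map to themselves;
    COMPLEX<typeName> keeps only the underlying complex type name.
    """
    if type_signature.startswith("COMPLEX<") and type_signature.endswith(">"):
        return type_signature[len("COMPLEX<"):-1]
    return type_signature

print(legacy_type("COMPLEX<hyperUnique>"))  # hyperUnique
print(legacy_type("STRING"))                # STRING
```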
If the `errorMessage` field is non-null, you should not trust the other fields in the response. Their contents are undefined.
Only columns that are dictionary encoded (i.e., have type `STRING`) will have a cardinality. The rest of the columns (timestamp and metric columns) report a cardinality of `null`.
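Putting these rules together, a response can be inspected as follows. The sample data mirrors the result format shown earlier (trimmed to the relevant fields), and the traversal logic is a sketch, not an official client:

```python
# A trimmed version of the sample segmentMetadata result from above.
sample_result = [{
    "id": "some_id",
    "columns": {
        "__time": {"type": "LONG", "cardinality": None, "errorMessage": None},
        "dim1": {"type": "STRING", "cardinality": 1944, "errorMessage": None},
        "dim2": {"type": "STRING", "cardinality": 1504, "errorMessage": None},
        "metric1": {"type": "FLOAT", "cardinality": None, "errorMessage": None},
    },
    "numRows": 5000000,
}]

cardinalities = {}
for segment in sample_result:
    for name, info in segment["columns"].items():
        # Skip columns whose analysis failed: the other fields are undefined.
        if info["errorMessage"] is not None:
            continue
        # Only dictionary-encoded STRING columns carry a cardinality;
        # timestamp and metric columns report null (None in Python).
        if info["cardinality"] is not None:
            cardinalities[name] = info["cardinality"]

print(cardinalities)  # {'dim1': 1944, 'dim2': 1504}
```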
## intervals
If an interval is not specified, the query uses a default interval that spans a configurable period before the end time of the most recent segment. The length of this default time period is set in the Broker configuration via `druid.query.segmentMetadata.defaultHistory`.
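For example, a Broker's runtime.properties could set the default lookback like this; `P1W` (one ISO-8601 week) is an illustrative value, not necessarily your deployment's default:

```properties
# Broker runtime.properties (illustrative value; an ISO-8601 period)
druid.query.segmentMetadata.defaultHistory=P1W
```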