Cracking the Code: Understanding High Cardinality in Metrics

May 7, 2024

High Cardinality Metrics


In our increasingly data-driven landscape, metrics serve as invaluable guides, offering intricate insights into the performance of systems and processes. They empower us to make well-informed decisions, driving efficiency and innovation across various domains. However, the true power of metrics emerges when they exhibit high cardinality – offering granular, nuanced data points that paint a comprehensive picture of operations.

Yet, with this richness comes a unique set of challenges. High cardinality metrics, while immensely valuable, can strain storage capacities, complicate querying processes, and present numerous other hurdles. In this article, we'll delve into the complexities of high cardinality metrics, exploring both their benefits and the strategies required to navigate the obstacles they present.

The Concept of High Cardinality

Now that we know why high cardinality metrics matter, the question becomes: what does "cardinality" actually mean? The cardinality of a data attribute is the number of distinct values it can take. For example, a boolean has a cardinality of 2 (True, False).

High cardinality metrics, then, are metrics whose identifying attributes can take many distinct values. For instance, when tracking CPU utilization, a single metric indicating overall usage has lower cardinality than metrics that capture utilization for each core individually. In the latter case, where data is broken down by core, we encounter higher cardinality due to the increased granularity of the information.
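To make this concrete, here is a minimal Python sketch (metric and label names are made up for illustration) showing that cardinality is the number of distinct label combinations, not the number of samples collected:

```python
# Each time series is identified by its metric name plus a unique
# combination of label values. Counting distinct combinations gives
# the cardinality. All names below are illustrative.
samples = [
    {"metric": "cpu_usage", "host": "web-1", "core": "0"},
    {"metric": "cpu_usage", "host": "web-1", "core": "1"},
    {"metric": "cpu_usage", "host": "web-2", "core": "0"},
    {"metric": "cpu_usage", "host": "web-2", "core": "1"},
    {"metric": "cpu_usage", "host": "web-2", "core": "1"},  # repeat: same series
]

# Distinct label combinations = cardinality of the metric.
series = {tuple(sorted(s.items())) for s in samples}
print(len(series))  # 4 distinct series, despite 5 samples
```

Five samples arrived, but only four distinct series exist; adding a per-core label is what multiplied the series count.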

Challenges Posed by High Cardinality

While High Cardinality Metrics offer significant advantages over their Low Cardinality counterparts, they also introduce a unique set of challenges that must be addressed.

  • Metrics Collection: Collecting detailed metrics consumes additional computational resources, which can become a bottleneck for application performance.
  • Metrics Storage: The adage "more data, more storage" holds: storage demands grow combinatorially, since the total number of series is the product of the distinct values of each label on a metric.
  • Querying and Analysis: High cardinality metrics frequently require aggregation and processing to transform raw data into actionable insights.
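The storage growth is easy to underestimate: every label added to a metric multiplies the total series count by that label's number of distinct values. A minimal Python sketch, with hypothetical label counts:

```python
from math import prod

# Hypothetical label dimensions for a single CPU metric.
label_values = {
    "host": 200,   # machines in the fleet
    "core": 16,    # cores per machine
    "mode": 4,     # e.g. user / system / idle / iowait
}

# Worst-case series count is the product of per-label cardinalities.
total_series = prod(label_values.values())
print(total_series)  # 200 * 16 * 4 = 12800 distinct series for ONE metric
```

Adding one more label, even a small one, multiplies that total again, which is why untracked labels are a common cause of storage blow-ups.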

Strategies for Managing High Cardinality

With great power comes great responsibility

When faced with an inundation of detailed system insights, it's crucial to streamline the monitoring process by reaching a consensus on the essential information required. Without this clarity, the very data intended to aid can instead hinder progress. Here are several techniques to adeptly manage High Cardinality Metrics:

Time-windowed aggregation

Time-windowed aggregation entails segmenting data into fixed time intervals, allowing for the reduction of data volume while preserving its contextual significance. The selection of appropriate time frames depends on the nature of the metrics collected and the analytical requirements.
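A minimal Python sketch of time-windowed aggregation (window size and sample data are illustrative), collapsing raw samples into one mean value per fixed window:

```python
from collections import defaultdict

def windowed_mean(points, window_s=60):
    """Aggregate (timestamp, value) points into fixed windows by mean."""
    buckets = defaultdict(list)
    for ts, value in points:
        # Align each timestamp to the start of its window.
        buckets[ts - ts % window_s].append(value)
    return {start: sum(vs) / len(vs) for start, vs in sorted(buckets.items())}

# Raw per-second samples collapse into one point per minute.
raw = [(0, 10.0), (30, 20.0), (61, 40.0), (95, 60.0)]
print(windowed_mean(raw))  # {0: 15.0, 60: 50.0}
```

The mean is only one choice of aggregate; max, percentiles, or counts may be more appropriate depending on what the metric is used for.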


Clustering

Clustering involves grouping similar data points based on shared features. For instance, machines with similar sets of applications deployed on them can be grouped and monitored as a unit, reducing the number of distinct series that must be tracked.
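As a minimal sketch (host and application names are made up), the example below groups machines whose deployed-application sets match exactly; a real deployment might instead run a clustering algorithm such as k-means over richer feature vectors:

```python
from collections import defaultdict

# Hypothetical inventory: host -> applications deployed on it.
deployments = {
    "web-1": {"nginx", "app"},
    "web-2": {"nginx", "app"},
    "db-1":  {"postgres"},
    "db-2":  {"postgres"},
}

# Group hosts whose application sets match; metrics can then be
# reported per cluster instead of per host, cutting cardinality.
clusters = defaultdict(list)
for host, apps in deployments.items():
    clusters[frozenset(apps)].append(host)

for apps, hosts in clusters.items():
    print(sorted(apps), "->", sorted(hosts))
```

Here four hosts collapse into two clusters, so per-cluster metrics carry half the series count of per-host metrics.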

Stack for High Cardinality Handling

Storing and visualizing high cardinality metrics can be a pain point, but the following stack handles it well:

Grafana + InfluxDB


InfluxDB is an open-source time-series database designed for storing, querying, and visualizing time-stamped data. It excels in handling high volumes of time-series data with high ingest rates and fast query performance. InfluxQL is the query language used for interacting with InfluxDB. These features make it the ideal choice for high cardinality metrics.
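As an illustration, an InfluxQL query like the following (the measurement, field, and tag names are assumptions for the example) downsamples per-core CPU data into 5-minute means while keeping the high-cardinality core tag available for grouping:

```sql
-- Illustrative InfluxQL; "cpu", "usage_percent", and "core" are assumed names.
SELECT MEAN("usage_percent")
FROM "cpu"
WHERE time > now() - 1h
GROUP BY time(5m), "core"
```

The GROUP BY time(5m) clause performs the time-windowed aggregation described earlier directly in the database, so only the downsampled points travel to the dashboard.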


Grafana is an open-source dashboarding and visualization tool that allows users to create interactive dashboards and panels to monitor and analyze data. Grafana provides a user-friendly query editor interface that allows users to write queries and retrieve data from data sources. Users can use InfluxQL queries to fetch time-series data from InfluxDB and visualize it in Grafana.

Ready to take control of your observability data?