Observability Is Not Just Three Pillars

May 30, 2025

We’ve all heard it: Observability has three pillars: logs, metrics, and traces.
That’s how you know it was a marketing person.

Somewhere along the way, someone even decided Observability needed a versioning system. And if you’ve been around long enough, you’ve also heard we’re in “Observability 2.0” now.
Because apparently, we needed semantic versioning for debugging practices, too.
(Coming soon: Observability 3.0, now with extra dashboard regret.)

But jokes aside, something has changed. The old model doesn’t scale, not with distributed systems, ballooning data, and dev teams that need answers before lunch.

Observability 1.0: More Data, More Problems

The original observability model focused on the three core data types:

  • Logs to track events and debug issues
  • Metrics to monitor performance and thresholds
  • Traces to understand request flows across systems

Each came with its own backend, its own UI, and usually its own data silo. So you had more data, but not necessarily more clarity. Querying logs could take minutes. Traces might only capture partial context. And metrics often told you something was wrong, but not why.
In reality, developers didn’t need three separate tools; they needed one fast, flexible system that could answer questions regardless of the data type.


Observability 2.0: From Pillars to Purpose

Observability 1.0 focuses too much on what data you collect and not enough on how you use it. In modern, cloud-native environments, we have plenty of data; what’s missing is context. That’s where Observability 2.0 comes in. It’s a mindset shift:

  • From collecting more data → to making sense of it faster
  • From dashboards and silos → to correlated, real-time exploration
  • From rigid schemas → to schema-less, service-aware context

And at CtrlB, that’s exactly the path we’re taking.


CtrlB’s Take: Observability Without the Wait

We built CtrlB to fix the part nobody talks about: the lag between your question and the answer. Traditional tools are slow, siloed, and bloated. You end up memorising dashboard layouts instead of understanding your system.

CtrlB is different by design:

▶ One Query, All Your Signals

Logs, traces, and services live together, not in separate tabs or different tools. Search once, explore in real time, and pivot instantly between layers. No context-switching, no delay.
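To make that concrete, here’s a minimal sketch of what “search once, pivot between layers” means, written as plain Python over hypothetical in-memory data. The function and field names (trace_id, message, name) are illustrative assumptions, not CtrlB’s actual query API:

```python
from collections import defaultdict

def search_all_signals(term, logs, spans):
    """One lookup across logs and spans, grouped by trace_id for easy pivoting."""
    hits = defaultdict(lambda: {"logs": [], "spans": []})
    for record in logs:
        if term in record.get("message", ""):
            hits[record.get("trace_id")]["logs"].append(record)
    for span in spans:
        if term in span.get("name", ""):
            hits[span.get("trace_id")]["spans"].append(span)
    return dict(hits)

# Hypothetical data: one search term surfaces both the log line and the span.
logs = [{"trace_id": "t1", "message": "checkout failed: card declined"}]
spans = [{"trace_id": "t1", "name": "checkout.charge", "duration_ms": 812}]
print(search_all_signals("checkout", logs, spans))
```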

▶ Disk? What Disk?

We don’t write to disk. Not because we’re trying to be edgy, but because it’s slow, expensive, and unnecessary. Your logs stay in object storage (S3), and we query them on demand. That means no bulky indexing pipelines, no inflated infra bills, and no stale data.
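For a feel of what querying object storage on demand looks like, here’s a small sketch using boto3 that scans raw log objects straight from S3 and filters them in memory. The bucket name and the logs/YYYY/MM/DD/HH/*.jsonl key layout are assumptions for illustration; this shows the “no indexing pipeline” idea, not CtrlB’s actual engine:

```python
import json
import boto3

s3 = boto3.client("s3")

def grep_s3_logs(bucket, prefix, needle):
    """Yield parsed log lines under `prefix` whose message contains `needle`."""
    pages = s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix)
    for page in pages:
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"]
            for line in body.iter_lines():
                record = json.loads(line)
                if needle in record.get("message", ""):
                    yield record

# Hypothetical bucket and hour-partitioned prefix; no index, no ETL, just a scan.
for hit in grep_s3_logs("my-log-archive", "logs/2025/05/30/12/", "timeout"):
    print(hit.get("timestamp"), hit.get("message"))
```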

▶ Compute-On-Demand, Not Always-On Waste

CtrlB’s Ingestor spins up only when you run a query. It reads logs and correlates them with trace spans or service metadata, right when you need it. You don’t pay for idle resources. You don’t wait for ETL jobs. You just get answers.

▶ Schema-Less

You don’t need to define your log format before shipping it. CtrlB figures it out at query time: dynamic fields, custom formats, whatever you’ve got. No rigid schemas. No “oops-we-dropped-that-field” surprises. Just full-fidelity data, ready to explore.
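Here’s a toy version of the schema-on-read idea: nothing is defined up front, and whatever fields a line happens to contain are discovered when the query runs. Mixed formats all survive. This is a sketch of the concept, not CtrlB’s parser:

```python
import json

def parse_log_line(line):
    """Return a dict of whatever fields this line turns out to contain."""
    try:
        return json.loads(line)                      # structured JSON logs
    except json.JSONDecodeError:
        pass
    if "=" in line:                                  # loose key=value logs
        pairs = (part.split("=", 1) for part in line.split() if "=" in part)
        return {k: v for k, v in pairs}
    return {"message": line}                         # plain text, kept as-is

lines = [
    '{"level": "error", "service": "billing", "message": "card declined"}',
    'level=warn service=search latency_ms=840',
    'worker restarted after OOM',
]
for line in lines:
    print(parse_log_line(line))
```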

▶ Service-Aware, Not Log-Blind

A pile of raw logs doesn’t help if you don’t know which service they came from or what they were doing. CtrlB stitches logs to the services and operations behind them, giving you a view of the system, not just lines in a file.

▶ Trace-to-Log in One Click

With CtrlB, you click a span and the logs behind it show up instantly: no timestamps to copy, no filters to guess.
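Under the hood, that click boils down to a simple correlation: take the span’s trace ID and time window and pull every log record that falls inside it. The field names below (trace_id, start_ns, end_ns, ts_ns) are illustrative, not CtrlB’s schema:

```python
def logs_for_span(span, logs):
    """Return logs that share the span's trace_id and fall inside its window."""
    return [
        rec for rec in logs
        if rec.get("trace_id") == span["trace_id"]
        and span["start_ns"] <= rec["ts_ns"] <= span["end_ns"]
    ]

span = {"trace_id": "t1", "start_ns": 1_000, "end_ns": 5_000}
logs = [
    {"trace_id": "t1", "ts_ns": 2_500, "message": "retrying upstream call"},
    {"trace_id": "t1", "ts_ns": 9_000, "message": "request finished"},
]
print(logs_for_span(span, logs))   # only the record at ts_ns=2_500 matches
```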


Context Propagation: Your Breadcrumb Trail for Root Cause

In modern systems, a single user action can trigger a storm of background service calls. Somewhere in that storm is the one operation that broke things. That’s why CtrlB treats context propagation as a first-class citizen. Request IDs and metadata travel across services automatically, so you’re never staring at logs wondering where they came from. You get a trace of the request journey: not just where it started and ended, but the messy, interesting middle bits too.
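CtrlB handles this plumbing for you, but for the curious, here is what W3C trace-context propagation looks like with OpenTelemetry (assuming the opentelemetry-api and opentelemetry-sdk packages; the span names and demo setup are purely illustrative). The caller injects a traceparent header into the outgoing request; the callee extracts it and continues the same trace:

```python
from opentelemetry import trace
from opentelemetry.propagate import inject, extract
from opentelemetry.sdk.trace import TracerProvider

# One-time setup so spans get real trace IDs instead of no-op ones.
trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer("propagation-demo")

# Caller side: start a span and inject its context into outgoing headers.
headers = {}
with tracer.start_as_current_span("frontend.checkout"):
    inject(headers)  # adds the W3C "traceparent" header to the dict
    # e.g. requests.post(payment_url, headers=headers)  # hypothetical call

print(headers)  # {'traceparent': '00-<trace_id>-<span_id>-01'}

# Callee side: extract the incoming context and continue the same trace,
# so logs and spans emitted here carry the caller's trace_id.
ctx = extract(headers)
with tracer.start_as_current_span("payments.charge", context=ctx) as span:
    print(hex(span.get_span_context().trace_id))  # same trace_id as the caller
```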

And this is where root cause analysis gets useful. You don’t just get a slow span; you click into it and see the exact logs that happened during that window, across every relevant service. No digging through separate tools, no guesswork. When you’re debugging, you’re not chasing abstract metrics; you’re chasing what actually happened, and CtrlB lays those breadcrumbs out for you, in context.


The Future Isn’t Pillars. It’s Context.

We’ve seen enough “three-pillar” architectures to know how they usually end: with three tools, three bills, and a frustrated team. The point of observability was never to collect telemetry for its own sake; it was to understand systems. To debug fast. To fix things. That future doesn’t come from stacking more tools.
And no, we’re not calling this Observability 3.0 - even though we probably could. But that’s how it starts, right? A couple of version bumps, a new logo, and suddenly the same old problems feel new again.
We’ll skip the rebrand. Let’s just build observability that works.

Not Another Cheaper Datadog

The first wave of observability startups was mostly just cheaper Datadogs: same three pillars, slightly better pricing (until it wasn’t). But now something has shifted.

As Charity Majors has pointed out, this year we’ve started seeing something genuinely new: tools built around context-aware tracing, OTel-native pipelines, and unified querying, rather than yet another discounted clone.

Because the future isn’t “which pillar are you collecting?”
It’s “How fast can you go from signal to root cause?”
And for that, you don’t need more data; you need more clarity, and a system that gives a damn about engineers.

Ready to take control of your observability data?