
Today, we’re proud to share that Observe has closed $156M in Series C funding. It’s a major step forward in our mission to reshape software development for the AI age through Observability. This moment signals more than growth — it’s the start of a new chapter for a stagnant industry ready for change.
The World Has Changed. Observability Hasn’t.
Let’s be honest: observability at scale is a mess. Siloed tools, runaway data volumes, and sky-high bills. You send logs one way, traces another, metrics somewhere else, and then scramble to stitch it all together during an outage, paying premium prices for the privilege.
As an industry, we’re generating 10x more telemetry data than just a few years ago. Legacy vendors expect your budget to grow 10x too.
- Every microservice, function, container, and AI agent is emitting logs, metrics, and traces.
- OpenTelemetry automates instrumentation: great for coverage, but a nightmare for volume.
- LLMs and agents generate orders of magnitude more traces and debugging context.
At any meaningful level of scale, the legacy model breaks. It punishes adoption. It reduces visibility. It turns observability into a cost center. Worst of all, it slows down engineers.
More Data = Better Outcomes. Not Bigger Bills.
At Observe, we’ve embraced a more modern, scalable architecture:
- The O11y Data Lake™ - a streaming data lake that ingests and stores all your telemetry data in open industry-standard formats such as OpenTelemetry and Apache Iceberg.
- The O11y Knowledge Graph™ - a real-time, relationship-aware model of your applications and infrastructure. It connects telemetry to topology to code to deployments to users, all in one queryable graph.
- The O11y AI SRE™ - a closed-loop AI system that doesn’t just chat about incidents but behaves as an always-on SRE: detecting issues, mitigating customer impact, recommending next steps, and suggesting improvements.
Observe was the first observability vendor to build an O11y Data Lake. We’re not simply ingesting logs, metrics, and traces with OpenTelemetry; we’re compressing and storing them in the open Apache Iceberg format. It’s not only cheaper, there is also no lock-in through proprietary formats. For the first time, customers can truly own their telemetry data - not Observe, not anyone.
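To make the “no lock-in” point concrete, here’s a minimal sketch of what owning your telemetry looks like when it lives in Apache Iceberg: any Iceberg-compatible engine can read it directly. The catalog settings, table name, and column names below are hypothetical placeholders for illustration, not Observe’s actual schema.

```python
# Minimal sketch: reading telemetry stored as open Apache Iceberg tables with
# pyiceberg. Catalog settings, table names, and columns are hypothetical --
# they illustrate the open-format idea, not Observe's actual schema.
from pyiceberg.catalog import load_catalog
from pyiceberg.expressions import GreaterThanOrEqual

# Point at whatever Iceberg catalog fronts the data lake (a REST catalog here).
catalog = load_catalog(
    "telemetry",
    **{
        "type": "rest",
        "uri": "https://iceberg-catalog.example.com",  # hypothetical endpoint
        "warehouse": "s3://my-o11y-lake/",             # hypothetical bucket
    },
)

# Load a hypothetical span table laid out in OpenTelemetry-shaped columns.
spans = catalog.load_table("otel.spans")

# Scan just the columns and rows we need -- no proprietary query API required.
recent_spans = spans.scan(
    row_filter=GreaterThanOrEqual("start_time_unix_nano", 1_726_000_000_000_000_000),
    selected_fields=("trace_id", "name", "duration_nano", "status_code"),
).to_pandas()

print(recent_spans.head())
```

The same tables are just as readable from Spark, Trino, or DuckDB; that’s the point of storing telemetry in an open format.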
But that’s just the start of what we do with data. Every company on the planet is racing to embed AI into its products to make them smarter. The truth is that AI is only as good as the context it has access to. That’s why we created the O11y Knowledge Graph, curated from data stored in the open Data Lake: the industry’s first contextual data fabric for observability.
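To make that concrete, here’s a conceptual sketch, not Observe’s actual API, of the kind of relationship-aware context a knowledge graph hands to an AI agent. The entities and edge names are invented for illustration; the pattern is what matters: start from an alert, walk a few hops, and you have the topology, the deployment, the commit behind it, and the users affected.

```python
# Conceptual sketch only: a toy "knowledge graph" linking telemetry entities
# (service, deployment, commit, users) so an AI agent can traverse context
# during an incident. Nodes and relations are invented for illustration; this
# is not Observe's O11y Knowledge Graph API.
import networkx as nx

g = nx.DiGraph()

# Topology and change context (hypothetical entities).
g.add_edge("service:checkout", "deployment:checkout-2025-09-12", relation="deployed_as")
g.add_edge("deployment:checkout-2025-09-12", "commit:ab12cd3", relation="built_from")
g.add_edge("service:checkout", "service:payments", relation="calls")
g.add_edge("service:checkout", "users:eu-west", relation="serves")

# Telemetry attached to the topology.
g.add_edge("alert:checkout-5xx-spike", "service:checkout", relation="fires_on")

def context_for(alert: str, hops: int = 2) -> list[tuple[str, str, str]]:
    """Collect everything within `hops` edges of an alert -- the kind of
    neighborhood an AI SRE would hand to a model as reasoning context."""
    nearby = dict(
        nx.single_source_shortest_path_length(
            g.to_undirected(as_view=True), alert, cutoff=hops
        )
    )
    sub = g.subgraph(nearby)
    return [(u, d["relation"], v) for u, v, d in sub.edges(data=True)]

for triple in context_for("alert:checkout-5xx-spike"):
    print(triple)
```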
Now, when our O11y AI SRE investigates an incident, closes the loop on a failed deployment, or recommends the right bug fix, the O11y Knowledge Graph gives it the context it needs to reason intelligently about the problem. The O11y AI SRE is a closed-loop system we call the “Vibe Loop”: it instruments, investigates, and improves your applications and infrastructure using agentic AI workflows. And we’re not just thinking about our own O11y AI SRE driving the incident management process: earlier this month, we released our MCP server, which allows any partner or competitor AI SRE to interact with our system.
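Here’s roughly what that integration looks like from the client side, sketched with the official MCP Python SDK. The server command and tool name are hypothetical placeholders; the real endpoint and tool catalog come from Observe’s MCP server documentation.

```python
# Minimal sketch of an MCP client talking to an observability MCP server.
# The server command and tool name are hypothetical placeholders; only the
# MCP client primitives (ClientSession, stdio_client) come from the SDK.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Hypothetical: launch (or proxy to) the observability MCP server over stdio.
    server = StdioServerParameters(command="observe-mcp-server", args=["--stdio"])

    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover what the server exposes (incident lookups, queries, etc.).
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Hypothetical tool call: ask for context on an open incident.
            result = await session.call_tool(
                "get_incident_context", arguments={"incident_id": "INC-1234"}
            )
            print(result.content)

asyncio.run(main())
```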
This is the future of reliable software development in the age of AI. This is Observe.
Customer Impact: Our Mission Is Simple But Ambitious
We are on a mission to reshape software development for the AI age through observability.
In the last year, Observe tripled revenue and doubled its enterprise customer base, with an industry-leading net revenue retention (NRR) of 180%. At the same time, monthly active users also tripled as we processed over 150 petabytes of telemetry data, with our largest customer topping 300 terabytes per day.
A major international bank recently signed an 8-figure contract with Observe to replace Splunk. The initial scope of the project was small: take care of almost 30 TiB/day of ‘unloved’ logs for compliance purposes, logs deemed too voluminous, unimportant, and expensive to put in Splunk. Twelve months later, Observe had gone viral inside the bank: teams kept adding use cases and users, driving data volumes up towards 100 TiB/day and the user count over 3,000. Splunk is now deprecated, and the next target is to drive adoption of an OpenTelemetry-native approach to APM, replacing AppDynamics.
A young software company in New York recently signed a 7-figure contract with Observe to replace New Relic, which it had purchased a year earlier to replace Datadog. It turned out that New Relic had miscalculated both what its product was capable of and what it would cost. Tired of switching vendors, the team found that Observe’s approach of setting up an open data lake using OpenTelemetry collectors, with data stored in Apache Iceberg format, resonated strongly. This, combined with the lower TCO of Observe’s modern architecture, sealed the deal.
What’s Next
With this funding, we’re staying hyper-focused on our core mission:
- R&D: More investment in AI capabilities on the front end (O11y Knowledge Graph and O11y AI SRE) and in the Apache Iceberg-based Data Lake on the back end.
- Go-to-Market: Scaling our sales and technical customer success teams to meet demand from companies operating at scale, whether enterprise, AI-native, or cloud-native.
Let’s Go!
Observe was built for this moment — we’ve always believed that Observability is a data problem, and we now live in a world where systems are more complex, data volumes are larger and — with AI — it’s frictionless for users to ask questions. Ultimately, this is a game of analytics and the Observability vendor who can answer the most questions — most accurately — will win.
We’re not here to chase the old guard. They are becoming irrelevant in the era of data lakes and AI. We are here to make customers successful at any scale.
To our customers, investors, and team: thank you for believing in us and our mission. To everyone else: come see what you’ve been missing — we’re just getting started.