What Is The Observability Cloud?
This post is one of many in a series that attempts to explain basic concepts and terminology around Observe, and “The Observability Cloud.” Topics will range from architecture to deeper technical dives into topics like Temporal Algebra, Schema-On-Demand, and more.
If you’ve looked at Observe, you’ve probably seen us use the term “Observability Cloud” and wondered what it really means. The Observability Cloud is your one-stop shop for all things observability. It means having all your data in one place, rather than a mishmash of backends and silos for different data types, and not having to pay for multiple tools to accommodate your various use cases. The only way to eliminate the complexity of troubleshooting distributed applications is to solve the underlying data problem that exists in almost all organizations. To make this possible we created “The Observability Cloud.” It’s based on an entirely new architecture that changes how you ingest, store, analyze, and visualize observability data. It even features a usage-based pricing model, so you only pay for what you use.
The Observability Cloud architecture uses a Data Lake to unify telemetry, a Data Graph to map and link relevant Datasets, and Data Apps to get data into Observe and provide logical starting points for using that data. This all contributes to faster time to value, faster troubleshooting, and better economics for your growing organization.
The value the Observability Cloud delivers is greater than the sum of its parts, but each piece is carefully architected to work in tandem with the others. Let’s look at what constitutes the core components and what that means for end-users.
Data Lake: Observe provides one destination for all of your observability data – logs, metrics, traces, and everything else. Familiar open-source collectors such as Telegraf and Fluent Bit are used to send observability data to Observe. There’s no fixed schema and no vendor lock-in, and unlike legacy tools there aren’t separate siloed data stores for each data type. Because the Data Lake is a single datastore for all your telemetry, it’s easy to create Datasets, understand the context around them, and link them together. Practically speaking, Datasets track the named “things” you actually care about, whether that’s something familiar like container logs or business-level entities like customers and shopping carts.
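As an illustration of the open-source collector approach, here is a minimal Fluent Bit output sketch that forwards tailed logs to an HTTP ingest endpoint. The host, URI path, and authorization header format below are illustrative placeholders, not confirmed values – check Observe’s documentation for the exact endpoint and token format for your account.

```ini
# Minimal Fluent Bit sketch: tail container logs and ship them over HTTPS.
# Endpoint and auth values are placeholders – substitute your own.
[INPUT]
    Name    tail
    Path    /var/log/containers/*.log
    Tag     kube.*

[OUTPUT]
    Name    http
    Match   *
    Host    collect.example-observe-endpoint.com
    Port    443
    TLS     On
    Format  json
    URI     /v1/http/fluentbit
    Header  Authorization Bearer <your-ingest-token>
```

Because there’s no fixed schema, the collector just ships whatever JSON it has; shaping the data into Datasets happens later, inside Observe.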
The Data Lake is also key to our usage-based pricing model, which ensures your observability can scale economically with your organization’s data growth. Once ingested, data is compressed roughly 10x and stored for 13 months (or longer if needed), resulting in inexpensive long-term storage. The Observability Cloud separates storage and compute, storing data in a low-cost Amazon S3-based Data Lake and providing compute via Snowflake warehouses. Storing large volumes of data is incredibly economical: it typically accounts for around 10% of your monthly bill, which means you’re predominantly paying for the compute used to query, accelerate, or monitor your data.
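To make the economics above concrete, here is a back-of-envelope sketch of the storage side. The ingest volume and per-GB storage price are made-up illustrative numbers, not Observe’s actual rates; only the 10x compression and ~13-month (≈395-day) retention come from the description above.

```python
# Illustrative arithmetic only: ingest volume and the $/GB-month storage
# price are assumptions for the example, not actual Observe pricing.

def retained_storage_gb(daily_ingest_gb: float,
                        retention_days: int = 395,
                        compression_ratio: float = 10.0) -> float:
    """Compressed data volume retained at steady state after ~13 months."""
    return daily_ingest_gb * retention_days / compression_ratio

# Example: 1 TB/day of raw telemetry.
storage_gb = retained_storage_gb(daily_ingest_gb=1000)   # -> 39500.0 GB
storage_cost = storage_gb * 0.023                        # assumed $/GB-month
print(f"retained: {storage_gb:,.0f} GB, storage cost: ${storage_cost:,.2f}/mo")
```

Even at a terabyte per day of raw ingest, the compressed footprint stays modest, which is why storage ends up as the small slice of the bill and compute dominates.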
Data Graph: The Data Graph is used to navigate your Datasets and bring you relevant context quickly. Observe accelerates data from the Data Lake into Datasets on an ongoing basis, so it is fast to query. This process turns raw data into the “things” you care about, such as users, pods, or even customer tickets. Defining and linking those Datasets creates the Data Graph, which provides context during investigations to answer questions such as “How were my customers affected by the latest push to production?” or “Do we need to increase our node count after deploying the latest feature?” On the surface, the Data Graph is a visual representation of your Datasets and their relationships; the UI lets you navigate this map and gives you easy visibility across your environment.
Data Apps: Apps are an easy, self-service way to get data into Observe, and they offer a lot more than just integrations. Each App packages integrations, Datasets, dashboards, monitors, and more to get you up and running in minutes with observability data from your favorite services. We provide Apps for AWS, Azure, GCP, Kubernetes, OpenTelemetry, and much more, and custom Apps can be created for customer-specific use cases.
Once you deploy an App you can start ingesting relevant data quickly and begin building out your Data Graph. Because Apps extend the Data Graph, they provide compounding value. For example, if you’re running Kubernetes on AWS, most legacy tools will give you siloed views of those two environments; with Observe Apps you can correlate data between them and answer questions like which EC2 instances your pods or containers are running on, because the AWS App extends the Data Graph that the Kubernetes App initially created. Since Apps also provide out-of-the-box Dashboards and Monitors, you’ll have a logical starting point for setting up alerts and launching investigations.
The Observability Cloud And You
The Observability Cloud provides an architecture that will support your organization’s observability needs in the face of inevitable data growth. Observe is true SaaS, so you don’t have to worry about managing infrastructure, Snowflake, indexes, or anything else. With the Observability Cloud, you’ll have all your data in one place (the Data Lake), a Data Graph to help you quickly find relevant context when investigating incidents, and access to many pre-built Data Apps to get you started in minutes with your favorite environments.
If you’d like to learn more about the Observability Cloud then check out the full series here.
Or, if you’re ready to see how the Observability Cloud will change how you ingest, store, analyze, and visualize your observability data, click here to get access today!