How Observe Uses Snowflake to Deliver the Observability Cloud [Part 3]

By Daniel Odievich, April 22, 2023

Observe’s scheduler combines our deep understanding of Snowflake’s query scheduling with our own query cost estimator to build a powerful scheduling algorithm.

The Observability Cloud takes input from various data sources via open endpoints and agents and stores it in the Data Lake to unify telemetry. From there it builds an interactive, extensible, and connected map of your data called the Data Graph, which lets you explore and monitor the digital connections of your entire organization.

This series looks at how Snowflake is key to The Observability Cloud’s unique architecture and allows it to deliver an order-of-magnitude improvement in both the speed of troubleshooting and the economics of deployment.

This three-part blog series includes:

  • In Part 1, we reviewed how Observe accepts and processes customer data
  • In Part 2, we looked at how we shape that data into a useful form and “link” it to other data
  • In Part 3 (this article), we review how Observe manages resources with a focus on resilience, quality, and cost

Resource Management and Snowflake

The Snowflake platform is essential to just about everything that happens in Observe. Let’s take a moment to talk about how Observe’s Scheduler component (see below) manages Snowflake resources and queries.

[Image: Observe architecture, Scheduler view]


Snowflake Infrastructure Used by Observe

Observe uses various Snowflake accounts worldwide, with a focus on cost efficiency and scale. Observe’s Dataset acceleration is somewhat similar to materialized views, while the workload scheduling described in the rest of this article implements an algorithm similar to the one underpinning multi-cluster warehouses.

Each Observe tenant has its own Snowflake database with multiple schemas. All metadata objects necessary for operation (such as tables, views, and functions) are provisioned into schemas and versioned by an Observe control plane using industry-standard continuous integration/continuous delivery tooling.

All Observe customers within an Observe-managed Snowflake account share a single set of virtual warehouses. Each warehouse is an ephemeral collection of compute instances managed by Snowflake for the duration of need and then returned to a free pool of available instances.

In principle, any warehouse can run any query for any customer in any database, but in practice, Observe has three different workload pools — LOAD, QUERY, and GENERIC — because these types of workloads have very different requirements. Virtual warehouses in different pools have different configuration parameters (such as lock timeout, statement timeout, query concurrency, and others). For example, MAX_CONCURRENCY_LEVEL is raised from its default value to allow more queries to be submitted to and run by the virtual warehouses in the QUERY pool.
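
To make the pool idea concrete, here is a minimal Go sketch (not Observe’s actual provisioning code) that renders per-pool warehouse parameter statements. The per-pool values and the warehouse naming scheme are invented for illustration; MAX_CONCURRENCY_LEVEL, STATEMENT_TIMEOUT_IN_SECONDS, and STATEMENT_QUEUED_TIMEOUT_IN_SECONDS are standard Snowflake warehouse parameters.

```go
package main

import "fmt"

// PoolConfig captures the kind of per-pool warehouse parameters described
// above. The values below are illustrative, not Observe's actual settings.
type PoolConfig struct {
	Name   string
	Params map[string]int // Snowflake warehouse parameter -> value
}

func main() {
	pools := []PoolConfig{
		{Name: "LOAD", Params: map[string]int{
			"MAX_CONCURRENCY_LEVEL":               8,
			"STATEMENT_TIMEOUT_IN_SECONDS":        3600,
			"STATEMENT_QUEUED_TIMEOUT_IN_SECONDS": 600,
		}},
		{Name: "QUERY", Params: map[string]int{
			"MAX_CONCURRENCY_LEVEL":               16, // raised to pack in more concurrent ad-hoc queries
			"STATEMENT_TIMEOUT_IN_SECONDS":        900,
			"STATEMENT_QUEUED_TIMEOUT_IN_SECONDS": 60,
		}},
		{Name: "GENERIC", Params: map[string]int{
			"MAX_CONCURRENCY_LEVEL":               8,
			"STATEMENT_TIMEOUT_IN_SECONDS":        7200,
			"STATEMENT_QUEUED_TIMEOUT_IN_SECONDS": 1800,
		}},
	}
	for _, p := range pools {
		// Emit one ALTER WAREHOUSE statement per parameter that a control
		// plane could apply to a warehouse in the pool. The warehouse name
		// ("<POOL>_XS_01") is hypothetical.
		for param, value := range p.Params {
			fmt.Printf("ALTER WAREHOUSE %s_XS_01 SET %s = %d;\n", p.Name, param, value)
		}
	}
}
```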

Here is a snapshot of the maximum number of virtual warehouses in each of the pools in the main production Snowflake account (early April 2023):

[Image: Maximum number of Observe Snowflake warehouses per pool]

Under normal circumstances, these warehouses are not all active at the same time. There is a lot of headroom here to accommodate spikes with Observe-managed governors to avoid excessive spending. Other regions in the USA, Europe, and Asia-Pacific have fewer warehouses, but they can easily grow to accommodate any necessary workload.

We are currently working towards blurring the lines between these pools for even higher utilization numbers, particularly for the 3XL and 4XL warehouses that receive less traffic than smaller warehouses.


Scheduler Elasticity and Efficiency Monitoring

Whether you are focused on data engineering, data science, or machine learning, or are building a custom analytics application as we did at Observe, using Snowflake efficiently requires:

  1. Choosing the right-size virtual warehouse for each query.
  2. Loading the virtual warehouse with the optimal mix of queries.
  3. Using the virtual warehouse for the correct duration.
  4. Turning the virtual warehouse off when it is not being used.

Snowflake makes it simple to submit an arbitrary query — or a torrent of them — to an arbitrarily sized virtual warehouse. Regardless of the query type or resource demands, Snowflake will make a valiant attempt to produce results using the warehouse size specified by the submitting process — and it’s often successful. As a result, Snowflake customers can run into these workload scheduling and cost management pitfalls:

  • Using a virtual warehouse too large for a trivial SELECT query that returns just a handful of results can waste compute resources and money.
  • Requesting too small a virtual warehouse for a complex query with aggregates and window functions operating on billions of rows. Not only can this take a long time to complete, but it’s also costly.
  • Flooding a single warehouse with too many concurrent queries, which leads to long query queuing, where work can’t start until other queries complete.

Observe’s scheduler combines our deep understanding of Snowflake’s query scheduling with our own query cost estimator to build a powerful scheduling algorithm. The scheduler matches each incoming query (whether it is a data load, a data transformation, or an interactive query) to the right virtual warehouse size for an optimal cost-to-performance balance; a simplified sketch of this sizing decision follows the list below. This algorithm includes:

  • Constant monitoring of currently running workloads, including the progress of queries and their current resource consumption.
  • Constant review of recently completed query history, including comparisons of predicted query costs vs. actually incurred costs (partitions scanned, partitions cached, etc.).
  • Preparation for upcoming workloads by resuming warehouses ahead of time, or not shutting them down if they will be needed again shortly.
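
As a rough illustration of the sizing decision, here is a minimal Go sketch. It is not Observe’s actual algorithm: it simply maps an abstract estimated query cost to the smallest warehouse size expected to handle it, and both the cost units and the thresholds are invented for illustration.

```go
package main

import "fmt"

// Warehouse sizes in ascending order of capacity.
var sizes = []string{"X-Small", "Small", "Medium", "Large", "X-Large", "2X-Large"}

// pickWarehouseSize maps an estimated query cost (an abstract "work units"
// score from a cost estimator) to the smallest size expected to meet the
// latency goal. The thresholds are made up for illustration.
func pickWarehouseSize(estimatedCost float64) string {
	thresholds := []float64{1, 8, 64, 512, 4096, 32768}
	for i, t := range thresholds {
		if estimatedCost <= t {
			return sizes[i]
		}
	}
	// Anything beyond the largest threshold still gets the largest size.
	return sizes[len(sizes)-1]
}

func main() {
	for _, cost := range []float64{0.5, 50, 10000} {
		fmt.Printf("estimated cost %.1f -> %s warehouse\n", cost, pickWarehouseSize(cost))
	}
}
```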

Workload-Aware Query Scheduling

For ad-hoc search queries (generated by users exploring data in Observe’s UI), we want to achieve reasonably fast tail latency (< 5 seconds) at a reasonable cost. To do so, we must find the smallest virtual warehouse that will not thrash or exceed our latency goals.

Let’s say a user opens a dashboard with dozens of widgets. This type of action translates into a spiky workload with lots of small queries for each of the widgets. In this case, Observe’s Scheduler chooses a couple of Small virtual warehouses and piles them full of queries.

[Image: Observe automatically allocates workloads across Snowflake warehouses]

In another example, a user develops a complex query full of filters, aggregations, and extractions, and wants it to process millions of rows across a wide range of time (e.g., a 24-hour interval). Under the hood, Observe may partition this query into several temporal “slices” (like 24 hourly slices), and execute each one separately. For these queries, the best choice is to use a couple of very large (2X-Large) warehouses.
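
Here is a minimal Go sketch of the temporal slicing idea (not Observe’s actual implementation): it splits a 24-hour window into hourly slices, each of which could be compiled and submitted as its own query.

```go
package main

import (
	"fmt"
	"time"
)

// sliceRange splits [start, end) into fixed-width temporal slices, each of
// which could be executed as a separate query against the same dataset.
func sliceRange(start, end time.Time, width time.Duration) [][2]time.Time {
	var slices [][2]time.Time
	for cur := start; cur.Before(end); cur = cur.Add(width) {
		sliceEnd := cur.Add(width)
		if sliceEnd.After(end) {
			sliceEnd = end // clamp the last slice to the requested range
		}
		slices = append(slices, [2]time.Time{cur, sliceEnd})
	}
	return slices
}

func main() {
	end := time.Now().UTC().Truncate(time.Hour)
	start := end.Add(-24 * time.Hour)
	for i, s := range sliceRange(start, end, time.Hour) {
		fmt.Printf("slice %02d: %s .. %s\n", i, s[0].Format(time.RFC3339), s[1].Format(time.RFC3339))
	}
}
```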

[Image: Observe automatically allocates workloads across Snowflake warehouses, large warehouses shown]

Small warehouses generally resume and suspend faster and are typically more cost-efficient, especially for the smaller time-range queries that Observe produces. The Observe scheduler doesn’t keep them idle for long and shuts them off when they are not needed. On the other hand, larger Snowflake warehouses take longer (sometimes > 10 seconds) to start. Once started, Observe keeps them running for longer periods of time instead of releasing them right back into the free pool as soon as queries stop running. And because Observe distributes workloads from many customers across many different warehouses, even the smallest customers can utilize a larger virtual warehouse for a short period of time. This lets Observe offer better compute cost-efficiency for both us and our customers.
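
A simplified sketch of that suspend policy, with invented values: smaller warehouses are released almost immediately, while larger ones are kept warm longer because they are slower and costlier to restart.

```go
package main

import (
	"fmt"
	"time"
)

// idleTimeout returns how long a warehouse of a given size might be kept
// running after its last query before being suspended. The exact values are
// illustrative; only the shape (bigger warehouses linger longer) follows
// the policy described above.
func idleTimeout(size string) time.Duration {
	switch size {
	case "X-Small", "Small":
		return 30 * time.Second
	case "Medium", "Large":
		return 2 * time.Minute
	default: // X-Large and up
		return 10 * time.Minute
	}
}

func main() {
	for _, s := range []string{"Small", "Large", "2X-Large"} {
		fmt.Printf("%-9s idle timeout: %s\n", s, idleTimeout(s))
	}
}
```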

The transformation workloads that generate accelerated Datasets are essential for subsequent usage by interactive queries and scheduled Monitors. For those jobs, Observe typically chooses larger virtual warehouses and tries to run them FLAT OUT at full capacity. 

While at first glance the transformation workloads appear steady and predictable, changes in volumes of incoming data, data dependencies, and the batch-based execution model often introduce oscillation effects resulting in waves of jobs to execute. Finding a transform schedule that produces a smooth workload while meeting the users’ freshness goal is a very deep scheduling problem that we are constantly iterating on. 


Grouping Similar Tasks into Execution Pools

Observe manages three different workload pools — LOAD, QUERY, and GENERIC — and dynamically chooses the right number of virtual warehouses to run based on what is needed.

LOAD Pool for Loading Data

The LOAD pool runs queries associated with loading data into Observe. Load queries have a very distinct profile as they read data from Amazon S3 (see Part 1 for the explanation of how the data arrived there) and put it into Snowflake.
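
For illustration, a load query of this shape could be issued through the standard Go Snowflake driver roughly as follows. The DSN, table, and stage names are hypothetical placeholders, and Observe’s actual ingest pipeline is more involved (see Part 1).

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/snowflakedb/gosnowflake" // registers the "snowflake" driver
)

func main() {
	// DSN, warehouse, table, and stage names are hypothetical placeholders.
	db, err := sql.Open("snowflake", "user:password@my_account/my_db/my_schema?warehouse=LOAD_XS_01")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// A load query in the LOAD pool: pull staged files from S3 (exposed to
	// Snowflake as an external stage) into a table.
	_, err = db.Exec(`
		COPY INTO raw_observations
		FROM @landing_stage/observations/
		FILE_FORMAT = (TYPE = JSON)`)
	if err != nil {
		log.Fatal(err)
	}
}
```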

Here is an example of a randomly selected business day in March 2023 that shows the number of active warehouses in the LOAD pool in one of Observe’s production environments.

[Image: Number of active Snowflake warehouses in Observe's LOAD pool]

You can see steady use of X-Small warehouses throughout the day, with Small, Medium, and Large warehouses used in smaller numbers. You’ll also notice very infrequent spikes for X-Large and 2X-Large warehouses — with the 3X-Large warehouse being used only once that day.

QUERY Pool for Ad-Hoc Queries

The QUERY pool runs all ad-hoc queries driven by actions in the UI and is latency-sensitive, as users expect low response times after clicking an element.

In this example, we see a clear correlation with human activity during business hours (US-based). For most queries, Observe uses Small, Medium, and Large warehouses; X-Large isn’t used at all, while 2X-Large warehouses are used to sustain worksheets with heavy query requirements.

[Image: Number of active Snowflake warehouses in Observe's QUERY pool]

GENERIC Pool for Data Transformation

The GENERIC pool runs the transforms that generate Datasets (see Part 2 for an explanation of what they are and why we create them), runs Monitors, and performs a number of other background tasks essential to platform maintenance — such as periodic table re-clustering or metric name extraction.

Using the same time period as our other examples, we can see very steady use of twenty to forty X-Small warehouses in the GENERIC pool, with one-time spikes of up to 280 of them (the maximum number we limit ourselves to). You can also see steady use of a couple of Small, Medium, and Large warehouses, with very regular heavy lifting by 3X-Large warehouses for just a few minutes every hour.

[Image: Number of active Snowflake warehouses in Observe's GENERIC pool]


Observing Query Execution and Modeling Scheduling Algorithms

Of course, even the most innocent-looking queries can blow up during execution. That is a fundamental property of relational query processing. To account for this, Observe records telemetry of all query execution (both from Observe’s perspective and using Snowflake’s ACCOUNT_USAGE share), builds models of user and Snowflake behavior, and then runs those models over synthetic workloads to experiment with and adjust the scheduling algorithms.
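
As an example of the Snowflake side of that telemetry, the sketch below (not Observe’s actual code) pulls recent scan statistics from the ACCOUNT_USAGE.QUERY_HISTORY view so they can be compared with predicted costs. The DSN and warehouse name are hypothetical.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/snowflakedb/gosnowflake"
)

func main() {
	// Hypothetical DSN; querying ACCOUNT_USAGE requires appropriate privileges.
	db, err := sql.Open("snowflake", "user:password@my_account/my_db?warehouse=QUERY_XS_01")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Actual scan statistics for recent queries, to compare against the costs
	// a scheduler predicted before execution. Note that ACCOUNT_USAGE views
	// are populated with some ingestion latency.
	rows, err := db.Query(`
		SELECT query_id, warehouse_size, partitions_scanned, partitions_total, total_elapsed_time
		FROM snowflake.account_usage.query_history
		WHERE start_time > DATEADD('hour', -1, CURRENT_TIMESTAMP())`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var id string
		var size sql.NullString
		var scanned, total, elapsedMs sql.NullInt64
		if err := rows.Scan(&id, &size, &scanned, &total, &elapsedMs); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s size=%s scanned=%d/%d elapsed=%dms\n",
			id, size.String, scanned.Int64, total.Int64, elapsedMs.Int64)
	}
}
```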

“Observe Programming and Analytics Language”, or OPAL for short, is what powers all transformations and queries in Observe (see Part 2 for more on OPAL). OPAL statements are compiled by Observe into complex temporal SQL: large blocks with lots of Common Table Expressions (CTEs), aggregate and window functions, significant subquery nesting, and the possibility of band joins. These queries are then compiled by Snowflake for subsequent execution — a potentially taxing endeavor.

Observe is also quite prolific with metadata-only queries (such as counts of records added after a certain time watermark) that do not require a virtual warehouse to execute.

[Image: Life of a Snowflake query in Observe]

In Snowflake, query compilation, execution plan preparation, and metadata-only queries are handled by the “Cloud Services” layer. This layer consists of a set of large, compute- and memory-optimized instances run by Snowflake to perform computationally intensive query compilation, schedule queries, check permissions, and provide health monitoring.

Because of our unique workload demands, Observe worked with Snowflake to allocate dedicated Cloud Services clusters consisting of larger-than-usual machines for all our Snowflake deployments.

Once a query is submitted to a virtual warehouse for execution, its progress can be monitored with the information available in the query profile. Observe tracks in-flight queries, reviews query profile data, and builds heuristics that allow it to estimate their progress and optionally terminate them in case of runaway resource consumption.
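
A simplified sketch of the “terminate runaway queries” idea follows. Instead of Observe’s query-profile heuristics, it takes the simpler route of polling the INFORMATION_SCHEMA.QUERY_HISTORY table function for long-running queries and canceling them with SYSTEM$CANCEL_QUERY. The DSN, warehouse name, and time budget are hypothetical.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/snowflakedb/gosnowflake"
)

// cancelRunawayQueries cancels any currently running query whose elapsed
// time exceeds the given budget (in milliseconds).
func cancelRunawayQueries(db *sql.DB, budgetMs int64) error {
	rows, err := db.Query(`
		SELECT query_id, total_elapsed_time
		FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(RESULT_LIMIT => 1000))
		WHERE execution_status = 'RUNNING'`)
	if err != nil {
		return err
	}
	defer rows.Close()

	var runaways []string
	for rows.Next() {
		var id string
		var elapsedMs int64
		if err := rows.Scan(&id, &elapsedMs); err != nil {
			return err
		}
		if elapsedMs > budgetMs {
			runaways = append(runaways, id)
		}
	}
	for _, id := range runaways {
		// Cancel the offending query by its query ID.
		if _, err := db.Exec("SELECT SYSTEM$CANCEL_QUERY(?)", id); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Hypothetical DSN.
	db, err := sql.Open("snowflake", "user:password@my_account/my_db?warehouse=QUERY_XS_01")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := cancelRunawayQueries(db, 5*60*1000); err != nil { // 5-minute budget
		log.Fatal(err)
	}
}
```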


Executing Queries in Snowflake via Our Client Library

To run queries in Snowflake, Observe uses a forked version of the Go Snowflake Driver client library, modified with our own performance and feature improvements. It is frequently rebased to keep up to date with Snowflake development and the Go ecosystem.

The main difference in Observe’s fork is that it does not download and expose the query results in ODBC fashion but instead gives direct access to the cursor object returned by the Snowflake REST API. That object contains links and access credentials to the S3 objects which contain the query results. Observe passes that cursor object across the network, so the process that initiates the Snowflake query (Observe scheduler) does not have to be the same process that downloads and handles the result (Observe UI).

Besides allowing us to write more efficient code to download the S3 objects and turn them into our heavily optimized wire format for returning results to the Observe UI, this prevents the scheduler from becoming a bottleneck. Observe can scale to a very high volume of queries without needing multiple schedulers, which maximizes the multi-tenancy benefits of pooled database connections.
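
The fork’s API is not public, so here is a purely conceptual Go sketch of the cursor-passing idea, using hypothetical types: the scheduler submits the query and hands off only a lightweight cursor, and a separate process downloads the result chunks from S3.

```go
package main

import "fmt"

// ResultCursor is a hypothetical stand-in for the cursor object returned by
// the Snowflake REST API: it points at the S3 objects holding the results,
// plus short-lived credentials to fetch them, rather than the rows themselves.
type ResultCursor struct {
	QueryID     string
	ResultURLs  []string // links to result chunks in S3
	Credentials string   // scoped access token for those chunks
}

// Scheduler side: submit the query and hand back only the cursor.
func submitQuery(sqlText string) ResultCursor {
	// In the real system this would go through Observe's forked Go driver;
	// here we fabricate a cursor purely for illustration.
	return ResultCursor{
		QueryID:    "01ab-hypothetical",
		ResultURLs: []string{"s3://results/chunk-0", "s3://results/chunk-1"},
	}
}

// UI side: a different process receives the cursor and downloads the chunks
// itself, so the scheduler never sits on the data path.
func streamResults(c ResultCursor) {
	for _, u := range c.ResultURLs {
		fmt.Println("fetching", u, "for query", c.QueryID)
		// ...download, decode, and convert to the optimized wire format...
	}
}

func main() {
	streamResults(submitQuery("SELECT 1"))
}
```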


Conclusion

By now you should have a thorough understanding of how Observe leverages Snowflake for ingesting, processing, and managing customer data while ensuring resource optimization and cost efficiency. 

The Observability Cloud, Powered By Snowflake.

In Part 1, we reviewed how Snowflake enables Observe to economically ingest, process, and store terabytes of customer data per day. Part 2 demonstrated how Snowflake allows us to shape that data into meaningful “things” people care about — like pods, shopping carts, and customers — and link them together for context when and where you need it. And in this piece, we explored how Observe intelligently utilizes Snowflake resources with a focus on resilience, quality, and ultimately cost.

But there’s still so much left to learn about how Snowflake powers Observe’s unique capabilities to help organizations gain valuable insights from their data. Check out our other Snowflake-related blog posts here to learn more about why we chose to build on Snowflake, how Snowflake allowed us to rewrite the economics of observability, and more!


Check out our demo to see how The Observability Cloud (Powered by Snowflake) can help you and your team unlock valuable insights from your telemetry at a fraction of the cost of other tools. Or, if you think you’re ready to get started, click here to request trial access!