Drop Filters: Manage the Data That Observe Ingests and Stores

Logs start small but can explode quickly. As microservices, sidecars, and new teams come online, customers tell us their ingest volume can double in a quarter. As the telemetry you generate grows, so does the need to selectively manage the data ingested into Observe.

Enter Drop Filters, now graduating from private preview to General Availability.

Solving the data deluge problem

Teams we work with commonly face issues with the volume of observability data they manage:

  • Chatty services crank out gigabytes of health checks, heartbeat pings, or INFO-level noise.
  • Pipeline changes (think “turn on debug for one pod”) land in production and stay there… for weeks.
  • Cross-team coordination to “fix it at the source” can take engineering sprints you don’t have.

The result is often surprise overage bills and late-night log-scrubbing parties. Drop Filters help relieve the burden of growing data volumes.

Drop Filters: a quick safety valve

Drop Filters let you discard matching log lines before they count toward ingest. Sitting at the dataset boundary, they throw away anything that matches the filter expression; nothing is merely hidden. You can safely trim up to 50% of your average daily ingest volume while you work on the upstream fix, so you can keep using Observe and stay within your ingest quota and budget.
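To put that 50% cap in concrete terms: a tenant averaging 1 TB of ingest per day could discard up to roughly 500 GB/day of matching data while the rest flows in untouched (the figures here are purely illustrative).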

Our customers describe how Drop Filters help in their situations:

“Observe’s pricing lets us eliminate the ‘what-can-we-drop?’ cost-benefit meetings we had on New Relic. Drop Filters are there if you need them—ideally, you won’t.” 

“We’d stopped shipping most logs to Datadog because the cost was brutal. With Observe, we finally land everything and we can flip a Drop Filter when something gets noisy.”

“It used to take weeks of back-and-forth to tune Fluent Bit rules across teams. Now I hit one switch in Observe, stop the bleeding, and give the app squads time to clean up their log levels.”

Point-and-click in the UI

Drop Filters are easy to set up:

  1. Go to Workspace Settings → Drop filters → New drop filter.
  2. Choose the dataset (e.g., Kubernetes Explorer/OpenTelemetry Logs).
  3. Use the builder to create a filter expression (a couple more example expressions follow this list), for example:
     resource_attributes."k8s.container.name" = "image-provider"
  4. Set Drop rate = 100% and Save.
  5. Wait a minute for the pipeline to roll out, and you're done!
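Following the same pattern, a builder expression can target whichever attribute identifies the noisy source; the container and namespace names below are placeholders rather than values from a real environment:

     resource_attributes."k8s.container.name" = "health-checker"
     resource_attributes."k8s.namespace.name" = "dev-sandbox"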

Or codify it with Terraform

If infrastructure as code is your preferred workflow, you can manage Drop Filters just like any other Observe resource:

resource "observe_drop_filter" "example" {
    workspace = data.observe_workspace.default.oid
    name = "test-filter"
    pipeline       = <<PIPE
    filter resource_attributes."k8s.container.name" = "image-provider"
    PIPE
    source_dataset= data.observe_dataset.kubernetes.oid
    drop_rate = 1 
    # e.g., drop_rate = 0.99
    # enabled defaults to true - can optionally specify it
}
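The data source references in that block would be declared along these lines; the exact lookup arguments are an assumption based on the provider's usual by-name data sources, so confirm them against the Observe Terraform Provider docs:

data "observe_workspace" "default" {
  # look the workspace up by name; "Default" is illustrative
  name = "Default"
}

data "observe_dataset" "kubernetes" {
  workspace = data.observe_workspace.default.oid
  # the dataset named in the UI walkthrough above
  name      = "Kubernetes Explorer/OpenTelemetry Logs"
}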

The provider schema requires the following fields:

  • workspace (String): OID of the Observe workspace where the filter lives.
  • name (String): Human-readable filter name.
  • pipeline (String): OPAL expression that matches the log lines to drop.
  • source_dataset (String): Dataset OID the filter applies to.
  • drop_rate (Number): Fraction of matching logs to discard (0.0–1.0).

See the full schema in the Observe Terraform Provider docs. Once it’s in your state file, terraform apply becomes your log-spam kill switch.
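In day-to-day use that is just the standard workflow, run against the example resource above:

terraform plan    # preview the drop filter change before it ships
terraform apply   # roll the filter out; adjust drop_rate and apply again to dial it back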

Once a drop filter is running

A few minutes after you enable a filter, the Drop filters page turns into a real-time scorecard:

  • Source observations: the total raw data coming from the source dataset before any filtering (e.g., 60 MB/hr).
  • Matched observations: the bytes and count that match the filter rule and are discarded (e.g., 2 MB/hr), immediate proof that the kill switch is working.
  • Remaining observations: the data that actually lands in your Observe tenant (e.g., 58 MB/hr). This is the number your monthly invoice is based on.

With one glance you can verify the filter is active, quantify the savings, and decide whether you can safely dial the drop rate back or turn the filter off entirely after upstream fixes are shipped.
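If the filter lives in Terraform, dialing it back is a one-line change (the values below are illustrative):

  # after the upstream fix ships, drop only half of the matching lines
  drop_rate = 0.5

  # or keep the filter defined but switched off
  enabled = false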

Try Drop Filters for yourself

Drop Filters are live in every new workspace. Spin one up in the UI, wire it into Terraform, and watch your ingest volume flatten in minutes. 

Start a free Observe trial today at observeinc.com and put Drop Filters to work. Happy logging — minus the noise!