5 Things You Should Know About Container Monitoring

So you’ve decided to go cloud-native by deploying your apps inside containers – excellent. You’ve implemented a more flexible, scalable, and fault-tolerant application stack that will pay dividends in the long run. 

However, now you need to come up with a way to monitor and manage that containerized application stack on an ongoing basis. That’s no simple task, because monitoring containers – along with the orchestration platforms they run on (like Kubernetes) – requires different processes, tools, and strategies than monitoring monolithic apps. And without the right observability solutions for a containerized environment, you’ll struggle to understand trends and pinpoint root causes within complex, multi-layered stacks.

Here’s an overview of what makes container monitoring unique, along with the best practices to keep in mind when devising a monitoring strategy for containers and Kubernetes.

Containers Mean More Data

The first thing to know about container monitoring is that there is simply more data to collect and monitor. 

In a traditional application stack, you have relatively few data sources to work with for monitoring purposes. You might collect some logs and metrics from your application, as well as the server that hosts it — but that’s usually it.

However, with a container-based application, there are many more components that produce monitoring data. You can track performance metrics like memory and CPU usage for each container, and each container typically also produces its own log files. On top of this, you have metrics and logs for the various parts of your Kubernetes architecture — Nodes, Pods, and so on. Then there are logs and metrics for the underlying physical infrastructure on which everything runs – not to mention distributed traces for your microservices-based application.
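As a rough illustration of that spread, here are a few of the places you might pull data from in even a small cluster. This is only a sketch: the Pod name is a placeholder, and the kubectl top commands assume the metrics-server add-on is installed.

    # Per-container CPU and memory on a Docker host
    docker stats --no-stream
    # Node- and Pod-level resource metrics (requires metrics-server)
    kubectl top nodes
    kubectl top pods --all-namespaces
    # Per-container application logs (my-app-pod is a placeholder name)
    kubectl logs my-app-pod
    # Cluster events emitted by the Kubernetes control plane
    kubectl get events --all-namespaces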

Connecting Data Together

All of this means that, when you monitor containers and Kubernetes, you need to ingest data from a much wider variety of sources. Just as important, you should correlate those various data sources and analyze them comprehensively to gain full context on the health and performance of your complex application stack.

Kubernetes Monitoring Depends on the Context

Although container-based application stacks involve more monitoring data than monoliths, the exact nature of your data sources can vary depending on how your stack is deployed.

For example, if you run Kubernetes on AWS using the EKS managed service, the amount of data you can collect about the underlying host infrastructure — and the level of control you have over that infrastructure – is different than if you run Kubernetes on-prem or on nodes that you manage yourself.

That’s because EKS is an example of a SaaS platform, in which the service provider (AWS in EKS’s case) manages infrastructure and software for you. This is convenient from the user’s perspective, but it doesn’t eliminate the need for SaaS observability. That said, SaaS monitoring and observability require a different approach than the one you’d follow when dealing with self-managed infrastructure. With EKS, you don’t need to worry as much about monitoring the control plane, because AWS manages it for you.

However, you still need to monitor containers, track application traces, and so on, because with EKS, only the control plane is offered as a managed service. Responsibility for the workloads you deploy to that cluster falls to you, the EKS user.
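As a hedged sketch of where that dividing line sits: AWS will publish control plane logs to CloudWatch if you ask it to, but keeping an eye on your own workloads is still up to you. The cluster name below is a placeholder.

    # Ask AWS to publish EKS control plane log types to CloudWatch Logs
    aws eks update-cluster-config \
      --name my-cluster \
      --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'
    # Your workloads remain your responsibility to watch
    kubectl get pods --all-namespaces
    kubectl top pods --all-namespaces   # requires metrics-server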

Container Monitoring Data May Be Short-lived

Another difference with container and Kubernetes monitoring is that some of the data you want to monitor is not persistent, because the containers that host it are not persistent. This is by design, and often desirable, but you may still want to retain that data for scenarios like audits, outage investigations, or troubleshooting.

Container logs are a classic example of non-persistent data that you might want to keep around. They are usually stored inside the file systems of the containers that produce them. However, when a container shuts down, any log data inside its file system is lost permanently. The only way around this is to ship that log data to a separate storage location, and aggregate it there, before the container terminates.

For this reason, it’s critical to create a container monitoring strategy that allows you to collect data in real time across the various parts of your environment. You can’t count on being able to go back later and pull out container log files if you decide you want to look at them, because they may no longer exist at that point.
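Here’s a minimal sketch of the difference between grabbing logs after the fact and collecting them continuously. The Deployment and Pod names are placeholders, and a real setup would typically run a node-level log shipper (such as Fluent Bit or Fluentd deployed as a DaemonSet) rather than ad-hoc snapshots.

    # One-off snapshot: only works while the Pods still exist
    kubectl logs deploy/my-app --all-containers > my-app-$(date +%F).log
    # Logs from the previous container instance survive a single restart...
    kubectl logs my-app-pod --previous
    # ...but once a Pod is deleted, its logs are gone from the cluster entirely,
    # which is why a log shipper should be forwarding them continuously.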

Container Logs Can Live Anywhere

Container monitoring also poses a special challenge because container and Kubernetes logs are not necessarily stored in a consistent or predictable way.

With conventional application stacks, it’s a relatively safe bet that most or all of the logs you want to monitor will end up in a predictable location on your host server, like /var/log. But with containers, your logs could be spread across various locations inside container file systems, as well as node file systems. Plus, if you use a managed service like EKS on AWS, the logs for your control plane may be pushed to a cloud service like CloudWatch.
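For example, here are a few of the places those logs might turn up. Paths vary by container runtime and distribution, and the container ID is a placeholder.

    # On a Kubernetes node, the kubelet and runtime keep per-container log files here
    ls /var/log/containers/
    ls /var/log/pods/
    # Inside a container, an application may write logs wherever it likes
    docker exec <container-id> ls /var/log
    # On EKS, control plane logs (if enabled) land in CloudWatch Logs
    aws logs describe-log-groups --log-group-name-prefix /aws/eks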

To collect and monitor all of this data effectively, you need a highly flexible and extensible monitoring strategy for containers. You can’t simply watch for new log files in /var/log and call it a day.

Container Monitoring Tools Vary Greatly

The native tooling that you get for interacting with monitoring data in a containerized application stack can also vary widely. Instead of having a single tool that you can use to track container metrics or tail logs, you need to know a variety of tools, such as the following (illustrative invocations appear after the list):

  • docker stats: Streams metrics data about running containers.
  • kubectl logs: Displays K8s Pod logs.
  • tail: Views the “tail” of a log file directly from the Linux CLI on a node.
  • journalctl: Queries the systemd journal on a node, letting you view log data from multiple services and units in one place.
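A few illustrative invocations of these tools; the Pod, container, and file names are placeholders you’d swap for your own.

    docker stats --no-stream                      # point-in-time container metrics
    kubectl logs -f my-pod -c my-container        # follow logs from one container in a Pod
    tail -f /var/log/containers/my-app.log        # watch a log file on the node (placeholder path)
    journalctl -u kubelet --since "1 hour ago"    # kubelet entries from the node's systemd journal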

Beyond these standard monitoring tools, you may also need to familiarize yourself with monitoring software that is part of the cloud service where your containers run, like CloudWatch or Azure Monitor.

Of course, you can simplify container monitoring by deploying a tool that automatically collects, analyzes, and correlates logs, metrics, and traces from across your containerized application stack, all without requiring you to manage the data manually using CLI tools. Even then, it’s a good idea to know how to use commands like kubectl logs or tail to access monitoring data on a one-off basis from the CLI.

Wrapping Up

No matter how you spin it, monitoring containers is more complex than conventional monitoring. The key to success is to deploy monitoring tools and processes that are flexible enough to accommodate a variety of data sources and storage locations. At the same time, you want your container monitoring solution to be centralized, so that you can analyze and correlate the data in a consolidated way, no matter how you choose to run containers, Kubernetes, and any other part of your application stack.