Observability Hero: Richard Marcus from AuditBoard

By Knox Lively, February 1, 2022

The economies of scale that come with the commodity pricing we get from Observe mean we can consume as much data as we want, from as many sources as we want, and retain it for as long as we want–all without worrying about the cost of storing that data.

Introduction

At Observe, we take great pleasure in working closely with our customers to get to know them, the challenges they face, and how we can help them along their path to observability. This week I had the good fortune to chat with Richard Marcus, Head of Information Security at AuditBoard, to better understand the unique challenges they face managing a modern cloud-based risk management platform.

If you’ve ever pitied yourself for having to undertake an audit for a compliance standard or regulatory framework, get over it. Richard and his team at AuditBoard have an alphabet soup of standards and frameworks they have to meet. On top of that SecOps workload, Richard and the team are also passionate about streamlining customer support, incident management, and gaining new insights from their observability data. It all adds up to an almost unbearable amount of work.

Check out this interview to see exactly how Richard and his team balance all of these tasks… hint: it involves Observe.


1. Let’s start by telling everyone who you are and what you do at AuditBoard.

I’m Richard Marcus, Head of Information Security and IT at AuditBoard, where I’m going into my third year. I oversee product and enterprise security, as well as all the internal compliance, risk, and audit functions we perform on ourselves–using AuditBoard.

AuditBoard is focused on the idea of a “Connected Risk Platform” that enables teams to work cross-functionally and share data to understand the real-time view of risk in their organizations. I’m interested in applying that same concept to observability by pushing AuditBoard toward a modern security architecture. We do this by leveraging data to optimize our day-to-day operational activities, especially around security and resiliency, but also by collaborating with other teams such as Customer Support, IT, and even Product Development.

2. Tell me about the path to becoming the head of Information Security at AuditBoard.

Prior to joining AuditBoard, I ran security operations at Edgecast Networks—a large content delivery network that was ultimately acquired by Verizon—where I handled typical security operations such as incident management, security monitoring, and vulnerability management. Post-acquisition, my role changed and I began to focus more on driving GRC initiatives such as policy development, risk assessments, and compliance activities across all of Verizon Media. In that role, I was tasked with evaluating the tools available at the time to support GRC professionals, and they were really terrible. After nine months of evaluating those tools, I decided I was better off building my own.

Serendipitously, some friends of mine were working at AuditBoard, which was reinventing the GRC solution space with more modern, connected solutions built from the auditor’s perspective. They were professionals from the industry, so they were familiar with the struggles and the lack of good solutions in the marketplace. It was a match. I got to bring my security leadership skill set to the company, and in exchange, they let me tinker alongside the product development team, helping guide a product that would be useful not just for the CAE or CFO’s office, but also for InfoSec professionals like me.

3. What is your day-to-day at AuditBoard like? – It sounds like you eat your own dog food there.

We call it drinking our own champagne, and we get to drink it often thanks to the internal compliance initiatives inside our company. Our customers expect us to be in compliance with SOC 2, ISO 27001, HIPAA, CMMC, CSA STAR, and the list goes on and on because we have customers in every industry. Those expectations are pretty broad, and we have to do our best to meet them on behalf of our customers.


My team is also responsible for mitigating risks and implementing security in the product and the enterprise. Part of that is making sure we have visibility at every layer of our tech stack and partnering with our infrastructure and IT teams to make sure that we’re monitoring for unusual, suspicious, or anomalous events.

4. Why is observability so important to you as a security professional, and AuditBoard as a company?

At a macro level, our customers place a tremendous amount of trust in us when they use our product. If you think about how the audit process works, you’re essentially taking all the skeletons out of the closet and inventorying them so that you can improve on those issues and show meaningful progress over time. Whether they’re audit issues, risk findings, or regulatory findings, they’re all very sensitive, and we have a duty as a custodian of this data to keep it confidential and protected from unauthorized disclosure. As a managed SaaS product, we have a very active role in monitoring for unauthorized access and helping customers deal with those scenarios, should they arise.

At a deeper level, we have to keep the service up. Uptime, availability, performance, and reliability are all things customers expect from us, so that visibility helps us partner with other teams to make sure we have processes in place to identify events that may jeopardize those objectives. We’ve done a lot of bridge-building with our infrastructure and customer support teams to make sure that visibility is extended to those who need it.

5. What do you think is missing from most observability tools today?

The big challenge that Observe solves for us is the breadth and depth of data you can collect, both of which are common weak spots in other observability tools.

In a prior role, I got to work with one of the largest Splunk deployments in the world. The company hosting it was trying to capture years of observability data from all their data centers so that they could go back and investigate prior security incidents. When you are impacted by security incidents you come to understand the true value of historical data and you tend to want to collect as much as possible moving forward. However, you typically can only achieve that at the expense of millions and millions of dollars and a huge amount of man-hours.

When you’re trying to evaluate the ROI on observability, you end up asking questions like, “What’s it going to cost me to store and retain these logs?”, or “What’s the value in these logs?”. Unless you are in the headspace where you actively need them, you can’t evaluate that tradeoff appropriately. A lot of organizations struggle with that ROI calculation. At AuditBoard, my goal is to collect as many logs from as many sources as I can and retain them as long as I can. In the event of an incident, that visibility is priceless.

The economies of scale that come with the commodity pricing we get from Observe mean we can consume as much data as we want, from as many sources as we want, and retain it for as long as we want–all without worrying about the cost of storing that data. We still have to apply some scrutiny around how we use that data, but now the ROI is much clearer.

6. How does Observe help you meet the objectives you mentioned earlier, like monitoring uptime, performance, and availability?

We collect application logs, metrics from Kubernetes, and infrastructure-related logs from AWS. Having all of that data in one place where it can be aggregated, correlated, and then used to detect common failure patterns is tremendously helpful to our team. It allows them to detect, diagnose, and troubleshoot events that impact security, performance, or availability.


But where the visibility is really powerful is when we combine that data with our customer-facing systems. We’ve started to pull in data from our customer relationship management system and our customer service desk. When we pull that data in and correlate it, we can tell support proactively, “Here are the customers that could be experiencing issues,” and even tell them the type of issues they may be experiencing. The goal is to do that proactively so that we know before the customer knows.
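Conceptually, that kind of correlation is a join on customer ID between recent error events and CRM account records, so support can reach out before a ticket is filed. The sketch below is purely illustrative (not AuditBoard’s or Observe’s actual pipeline), and the field names such as customer_id, error_type, and support_owner are assumptions.

```python
# Hypothetical sketch: surface CRM accounts whose recent error volume
# suggests the customer may be experiencing issues. All field names
# and the threshold are assumptions for illustration only.
from collections import defaultdict

def customers_to_notify(error_events, crm_accounts, threshold=5):
    """Return CRM records for customers whose recent error count meets
    or exceeds `threshold`, along with the error types observed."""
    errors_by_customer = defaultdict(list)
    for event in error_events:                  # e.g. parsed application logs
        errors_by_customer[event["customer_id"]].append(event["error_type"])

    alerts = []
    for account in crm_accounts:                # e.g. rows pulled from the CRM
        errors = errors_by_customer.get(account["customer_id"], [])
        if len(errors) >= threshold:
            alerts.append({
                "account": account["name"],
                "owner": account["support_owner"],
                "error_types": sorted(set(errors)),
            })
    return alerts
```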

7. Tell me more about your future observability goals, and how you hope Observe can help you meet them.

Once we finish up the customer support use cases, I’m interested in exploring more security-related applications.

One that we’re exploring has to do with anomalous data access in our environment. We have file logging that tells us when, and how, a user takes an action on a file: create, read, update, or delete. We want to take these actions and correlate them with the number of files we expect a user to interact with over a given time. Let’s say a customer is being onboarded so we notice a bulk upload of files, or maybe the customer is being off-boarded and we notice a bulk download of files. That’s fine, but if these events occur out of the blue, say “User XYZ downloaded 5,000 files in the last 5 minute(s)”, let’s find out what’s going on there!
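A minimal sketch of that windowed check, assuming file-access logs with hypothetical user, action, and timestamp fields; in practice this would be an aggregation expressed in Observe rather than hand-rolled Python.

```python
# Hypothetical sketch: flag users whose downloads in the trailing time
# window exceed what normal activity would explain. Field names
# ("user", "action", "timestamp" in epoch seconds) are assumptions.
from collections import defaultdict

WINDOW_SECONDS = 5 * 60      # "in the last 5 minutes"
DOWNLOAD_LIMIT = 5000        # expected ceiling for normal activity

def bulk_download_alerts(file_events, now):
    """Return {user: count} for users at or above the download limit."""
    counts = defaultdict(int)
    for event in file_events:
        recent = now - event["timestamp"] <= WINDOW_SECONDS
        if recent and event["action"] == "read":   # or "download", per your schema
            counts[event["user"]] += 1
    return {user: n for user, n in counts.items() if n >= DOWNLOAD_LIMIT}
```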

On a typical day, our support teams work on a bunch of different customer applications. But if we see anyone outside of our support team accessing multiple customer sites in a given time range, that’s something we want to investigate. With Observe, we can parse and filter these events by IP address, username, or customer ID to then tease out these strange and suspicious event patterns.
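That check boils down to counting distinct customer sites per user over a time range and excluding the support team. A rough sketch, with assumed field names and threshold, might look like this:

```python
# Hypothetical sketch: flag non-support users seen on multiple customer
# sites within the events provided. Field names are assumptions.
from collections import defaultdict

def cross_tenant_access_alerts(access_events, support_team, max_sites=2):
    """Return users not on the support team who accessed more than
    `max_sites` distinct customer sites, with the sites and IPs involved."""
    sites_by_user = defaultdict(set)
    ips_by_user = defaultdict(set)
    for event in access_events:                 # e.g. parsed web/app access logs
        sites_by_user[event["username"]].add(event["customer_id"])
        ips_by_user[event["username"]].add(event["ip"])

    return [
        {"user": user, "sites": sorted(sites), "ips": sorted(ips_by_user[user])}
        for user, sites in sites_by_user.items()
        if user not in support_team and len(sites) > max_sites
    ]
```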

Thinking more long-term, we have all of this great threat intel data. There are various threat feeds we subscribe to and lots of security tooling we use that flags suspicious or malicious IPs. So we know who these bad actors are and maybe even some of their TTPs—Tactics, Techniques, and Procedures. I think we can create a more automated process around identifying those bad actors in any, or all, of our other log sets. For example, let’s say we have an IP address blocked at the WAF in our production environment for port scanning, and later that day we see it trying to access one of our enterprise applications. That’s a very powerful capability to have, so we are going to dig into it further this year.
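At its core, that matching is a join between a blocklist of known-bad IPs and every other log source. A rough, hypothetical sketch (log source names and the "ip" field are assumptions; Observe would express this as a query rather than Python):

```python
# Hypothetical sketch: find events in any log source whose source IP is
# already on a threat-intel or WAF blocklist.
def threat_intel_hits(blocked_ips, log_sets):
    """`blocked_ips` is a set of IP strings; `log_sets` maps a log source
    name (e.g. "enterprise_app", "vpn") to an iterable of events with an
    "ip" field. Returns every event whose source IP is on the blocklist."""
    hits = []
    for source, events in log_sets.items():
        for event in events:
            if event.get("ip") in blocked_ips:
                hits.append({"source": source, "event": event})
    return hits
```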


We want to extend a huge thanks to Richard for his time, enthusiasm, and willingness to chat. We also want to thank AuditBoard for their continued support and partnership; we’ve enjoyed working with them at every level along the way.

If you or your company wants to take your observability strategy to the next level, we encourage you to try our demo of Observe in action.