O11y AI
Investigate and resolve incidents faster with O11y Investigator. Use natural language to explore observability data with O11y Copilot, generate Regular Expressions effortlessly with O11y Regex, and obtain precise answers with O11y GPT.
-
O11y Investigator
Agentic AI for Faster, More Accurate Investigations
Precise, low-hallucination agents collaborate with an AI Planner to drive complex investigative workflows—reducing investigation time and accelerating resolutions.
Domain-specific agents learn from previous incidents and runbooks. They are specially designed to detect anomalies in dynamic environments like Kubernetes or cloud platforms and are fine-tuned to identify issues arising from application code commits.
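As a purely illustrative sketch of this planner-and-agents pattern (the class and method names below are hypothetical and not an Observe API), a planner can fan an alert out to domain-specific agents and collect their findings:

```python
# Illustrative sketch only: the classes and methods here are hypothetical and
# show the general planner/agent pattern described above, not the product's
# actual implementation.
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str
    summary: str

class KubernetesAgent:
    domain = "kubernetes"
    def investigate(self, alert: dict) -> Finding:
        # e.g. inspect pod restarts, OOMKills, or node pressure for the alerted service
        return Finding(self.domain, f"Checked pod health for {alert['service']}")

class DeployAgent:
    domain = "code_commits"
    def investigate(self, alert: dict) -> Finding:
        # e.g. correlate the incident window with recent deploys and commits
        return Finding(self.domain, f"Compared recent deploys for {alert['service']}")

class Planner:
    """Decomposes an alert into tasks and routes each one to a domain agent."""
    def __init__(self, agents):
        self.agents = agents

    def run(self, alert: dict) -> list[Finding]:
        return [agent.investigate(alert) for agent in self.agents]

if __name__ == "__main__":
    planner = Planner([KubernetesAgent(), DeployAgent()])
    for finding in planner.run({"service": "checkout", "signal": "latency_spike"}):
        print(finding)
```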
Collaborative Resolution with Investigation Notebook
An AI-generated investigation plan and the AI's chain of thought guide on-call engineers through each troubleshooting step.
Blend AI insights with your team's expertise in a shared workspace, the Investigation Notebook, enhancing visibility and collaboration throughout the investigation.
Streamline the creation of Jira tickets, declare incident severity, or draft incident summaries and postmortem reports—with AI providing the first draft automatically.
Customizable AI Models Tailored to Your Needs
Choose from leading Large Language Models like GPT-4 or integrate models supported by Snowflake Cortex, customizing the AI experience to fit your organization’s specific requirements.
-
O11y Copilot
Accelerate OPAL Querying with Conversational Language
Effortlessly explore observability data—including logs, metrics, and traces—using O11y Copilot. Craft powerful queries in the Observe Processing and Analytics Language (OPAL) simply by conversing in natural language.
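A minimal sketch of this natural-language-to-OPAL flow, assuming a generic prompt-and-model setup rather than Observe's actual implementation (the function, prompt, and OPAL-style example output are illustrative only):

```python
# Purely illustrative: translate_to_opal and its prompt are hypothetical, not
# an Observe API. A copilot like this pairs the user's question with schema
# context and asks a query-tuned LLM for an OPAL query to review and run.
def translate_to_opal(question: str, dataset_schema: list[str], llm) -> str:
    prompt = (
        "You translate questions into OPAL (Observe Processing and "
        "Analytics Language) queries.\n"
        f"Available columns: {', '.join(dataset_schema)}\n"
        f"Question: {question}\n"
        "Return only the OPAL query."
    )
    return llm(prompt)  # `llm` is any callable that returns the model's text

# Example usage with a stand-in model; the returned query is OPAL-style
# pseudocode for illustration, not verified OPAL syntax. A real deployment
# would call the privacy-preserving Observe LLM described below.
fake_llm = lambda prompt: 'filter contains(string(body), "error")\nstatsby count(), group_by(service)'
print(translate_to_opal("How many errors per service in the last hour?",
                        ["timestamp", "body", "service"], fake_llm))
```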
Privacy-First Large Language Model
Observe LLM, trained specifically on OPAL, generates and executes queries without any data leaving your organization’s boundaries, ensuring complete data privacy and compliance.
-
O11y GPT and O11y Regex
Ask any question about using Observe, specific to your use case or workflow, in natural language and get an immediate response from O11y GPT, which is trained on Observe documentation.
Precise Retrieval-Augmented Generation (RAG) and embeddings deliver fast, accurate responses, reducing the risk of model hallucinations and incorrect answers.
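As a rough, self-contained illustration of the RAG pattern described above, and not of Observe's implementation, the sketch below uses a toy bag-of-words embedding to retrieve the most relevant documentation chunks and ground the model's answer in them:

```python
# Minimal RAG sketch: the embedding function and "model" are toy stand-ins
# used only to show the retrieve-then-generate flow described above.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a learned embedding model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    q = embed(question)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def answer(question: str, docs: list[str], llm) -> str:
    context = "\n".join(retrieve(question, docs))
    # Grounding the prompt in retrieved documentation is what reduces hallucinations.
    return llm(f"Answer using only this documentation:\n{context}\n\nQuestion: {question}")

docs = [
    "OPAL is Observe's query language for shaping and analyzing event data.",
    "Datasets in Observe are built from ingested observability data.",
    "Monitors alert you when a condition on a dataset is met.",
]
# Stand-in "model" that just echoes the top retrieved chunk.
print(answer("What is OPAL?", docs, llm=lambda prompt: prompt.splitlines()[1]))
```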
Generate Regular Expressions in seconds with a user-friendly, point-and-click interface to extract and parse desired fields from log messages.
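For example, a generated regular expression for a sample log line (the log format here is illustrative, not a specific Observe schema) might use named groups to pull out each field:

```python
import re

# Sample log line and pattern; named groups make each extracted field easy to reference.
log_line = '2024-05-09T12:34:56Z ERROR payment-service request_id=abc123 "card declined"'
pattern = re.compile(
    r'(?P<timestamp>\S+)\s+'
    r'(?P<level>[A-Z]+)\s+'
    r'(?P<service>\S+)\s+'
    r'request_id=(?P<request_id>\w+)\s+'
    r'"(?P<message>[^"]*)"'
)

match = pattern.match(log_line)
if match:
    print(match.groupdict())
    # {'timestamp': '2024-05-09T12:34:56Z', 'level': 'ERROR',
    #  'service': 'payment-service', 'request_id': 'abc123',
    #  'message': 'card declined'}
```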
-
LLM Observability
Instantly monitor operational metrics like token usage, API error rates, and response latency to optimize costs and performance.
Identify opportunities to reduce expenses by pinpointing the most token-intensive calls within your LLM chain.
Evaluate the effectiveness of LLM responses by analyzing request-response pairs, word embeddings, and user prompts.
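A hedged sketch of the kind of aggregation such a view performs, using assumed record fields rather than a specific Observe schema, could look like this:

```python
# Illustrative only: the call records and their fields are assumptions made
# up for this example, showing how token usage, error rate, latency, and the
# most token-intensive step in a chain can be summarized.
from statistics import mean

calls = [
    {"step": "retrieve", "prompt_tokens": 310,  "completion_tokens": 40,  "latency_ms": 120,  "error": False},
    {"step": "answer",   "prompt_tokens": 1800, "completion_tokens": 260, "latency_ms": 2400, "error": False},
    {"step": "answer",   "prompt_tokens": 1750, "completion_tokens": 0,   "latency_ms": 900,  "error": True},
]

total_tokens = sum(c["prompt_tokens"] + c["completion_tokens"] for c in calls)
error_rate = sum(c["error"] for c in calls) / len(calls)
avg_latency = mean(c["latency_ms"] for c in calls)

# The most token-intensive step in the chain is the obvious place to cut cost.
costliest = max(calls, key=lambda c: c["prompt_tokens"] + c["completion_tokens"])

print(f"tokens={total_tokens} error_rate={error_rate:.0%} avg_latency={avg_latency:.0f}ms")
print(f"most token-intensive step: {costliest['step']}")
```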
Resources
What’s New: OPAL Co-Pilot! (Powered by GPT)
No longer are users divided between those who are “point-and-click” and those who can code. With OPAL Co-Pilot, everyone can code. OPAL, Before AI Back in the early…
April 27, 2023
What We Learned Building O11y GPT: Part I
Upon the release of GPT-3.5, we began inputting our documentation and posing questions, from “What does Observe do?” to “How do I summarize daily errors from last week?”…
May 9, 2023
What We Learned Building O11y GPT: Part II
We’re genuinely excited, as it appears we’re at the very beginning of a new way of helping engineers and other observability users around the world! In our previous…
May 15, 2023
Want to dig deeper?
Integration
Observe can ingest data from almost any source, in any format, with 400+ integrations.
Learn More