What’s New: OpenAI App!
Introducing the OpenAI App
The last couple of blog posts have focused on how we use LLMs within Observe to improve the productivity of both new and seasoned users. There are many barriers to adopting new features and functionality in enterprise software, and LLMs promise to lower those barriers by 20 to 30%.
But what if you want to integrate LLMs into your application to improve the productivity of your users? What is the latency of the API? What's the impact on customer experience? What questions are users posing to your LLM, and what responses are they getting? Are those responses accurate? Using OpenAI APIs is just the beginning. To use them effectively, and to learn where the gaps in your answers are, you need to observe OpenAI itself.
Why can’t you do all this in the OpenAI playground? If you’ve dug around in the OpenAI UI, then you’ll know that it’s not very compelling. You get some basic information about usage and relative costs. But there’s no real insight — there’s no ability to drill into user requests and see exactly what’s going on. In observability terms, there are a few basic metrics you can monitor, but there’s no additional context to show you who is doing what and what the experience is.
Empowering LLM Observability
At Observe, we believe that not only will LLMs become pervasive in almost every application, but as a result, we see the rise of “LLM Ops” functions within organizations to fine-tune and perfect the experience. For this reason, we’ve introduced the new OpenAI App.
Observe Apps are packaged alerts, dashboards, models, and integrations — basically, everything you need to observe a particular use case. Our customers immediately benefit from our knowledge and best practices, in this case around LLMs.
The basic dashboard for the OpenAI App provides visibility into the overall cost and usage of the system — put another way, it’s the overall health of the system.
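As a sketch of where those cost numbers come from: every chat completion response includes a `usage` block with token counts, which can be turned into a per-request cost estimate. The prices below are placeholder assumptions, not official rates — check OpenAI's pricing page for current values.

```python
# Per-1K-token prices in USD. These are ILLUSTRATIVE placeholders,
# not official OpenAI rates.
PRICE_PER_1K = {
    "prompt": 0.0015,
    "completion": 0.002,
}

def estimate_cost(usage: dict) -> float:
    """Estimate the USD cost of one chat completion from its `usage` block."""
    prompt_cost = usage["prompt_tokens"] / 1000 * PRICE_PER_1K["prompt"]
    completion_cost = usage["completion_tokens"] / 1000 * PRICE_PER_1K["completion"]
    return prompt_cost + completion_cost

# Example `usage` block, shaped like the one the API returns with each response
usage = {"prompt_tokens": 200, "completion_tokens": 500, "total_tokens": 700}
print(round(estimate_cost(usage), 6))
```

Summing these estimates per user or per feature is exactly the kind of aggregation the dashboard surfaces.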
But a simple dashboard with a few metrics that we’re keeping an eye on isn’t really observability. For that, we need more context — logs and timing information for each request so we can see exactly what’s going on. With a couple of lines of code wrapping the OpenAI API call, engineers can get this additional context.
# get start time
startTime = datetime.datetime.now()
answer = openai.ChatCompletion.create(
    model=HELP_MODEL_NAME, temperature=temperature, messages=messages)
# get end time
endTime = datetime.datetime.now()
# log additional context so Observe has more info
# (the logger and the timing field shown here are illustrative)
logger.info(json.dumps({
    "event": "question answered", "question": question, "previous": previousQuestions,
    "duration_ms": (endTime - startTime).total_seconds() * 1000,
}))
Now we get visibility into the questions that were sent to OpenAI and the answers that came back! These are perhaps the most important insights, because they show the engineer exactly what is confusing users and driving them to ask questions.
And to complete the observability picture, we can now drill into each user's session and see the performance breakdown of their requests. Note that Observe provides general support for visualizing timing information, so OpenTelemetry span data is not required to produce this chart.
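To make the per-session breakdown concrete, here is a minimal sketch of the kind of aggregation behind such a view: grouping logged request timings by session and computing summary statistics. The event shape and field names are illustrative assumptions, not a fixed schema.

```python
import statistics
from collections import defaultdict

# Hypothetical log events with per-request timing; the "session" and
# "duration_ms" field names are illustrative, not a required schema.
events = [
    {"session": "s1", "duration_ms": 820.0},
    {"session": "s1", "duration_ms": 1140.0},
    {"session": "s2", "duration_ms": 640.0},
    {"session": "s2", "duration_ms": 700.0},
    {"session": "s2", "duration_ms": 910.0},
]

# Group request durations by session
by_session = defaultdict(list)
for e in events:
    by_session[e["session"]].append(e["duration_ms"])

# Per-session request count and mean latency
summary = {
    s: {"requests": len(d), "mean_ms": round(statistics.mean(d), 1)}
    for s, d in by_session.items()
}
print(summary)
```

In practice this grouping happens inside Observe rather than in your application code, but the logged fields are what make it possible.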
GPT-Powered Observability with Observe
As with all of our Apps, I'm sure we've just scratched the surface of what can be done with this new App. We can't wait to see how the OpenAI App can help you build better products — faster. Whether it's understanding how your users are using OpenAI, monitoring its performance, or keeping an eye on costs, the OpenAI App from Observe can help you gain deeper insights and make data-driven decisions for your OpenAI integrations.
Click here to request trial access and get started with the latest GPT-Powered integrations from Observe!