How Observability Platforms Can Ensure Trust with AI

Solutions Review’s Contributed Content Series is a collection of contributed articles written by thought leaders in enterprise software categories. In this feature, Payal Kindiger of Riverbed warns observability teams not to place all of their trust in AI.

Unified observability, a fast-emerging field that goes beyond monitoring to present a painstakingly thorough assessment of system health, stands to gain a lot from recent advancements in artificial intelligence. AI can add speed and precision to automated assessments, quickly reviewing scenarios and recommending actions to correct any problems it finds.

But organizations need to tread carefully when employing AI’s powerful capabilities. AI models can’t be trusted to gather the right information and draw the right conclusions simply because they’re AI. They need to be thoroughly trained for the job they’re performing, given complete information, and designed around the systems an organization is actually running. And even then, their work has to be checked, because AI models have been known to be wrong.

For all of the potential that AI offers in fields such as observability, it’s still best to view it as a talented intern on your team, capable of great things but a little too erratic to be trusted entirely.

Full Trust in AI is a Hallucination

Large language models (LLMs) like ChatGPT have drawn a lot of attention recently for their ability to perform an enormous range of tasks, from brainstorming ideas and translating text to writing songs and troubleshooting programming code. But their successes have also come with some glaring missteps, underscored by AI’s seemingly supreme self-confidence even when it’s completely wrong.

When lawyers in a personal injury case in New York used ChatGPT to generate motions, for example, the model bolstered its argument by citing cases that didn’t exist. During its debut earlier this year, Google’s Bard confidently reported that the James Webb Space Telescope took the first pictures of a planet outside our solar system, despite the fact that the first such picture was taken in 2004, 17 years before the Webb telescope launched.

AI hallucinations, as such mistakes are called, occur when a model’s training is incomplete, or its information is insufficient or biased, and it responds by making things up so it can continue. Hallucinations are common at this stage of AI’s development, with serious potential consequences in fields such as law, medicine, and cybersecurity. IT teams using AI for observability need the right oversight and control to make certain their models deliver trustworthy, accurate data analysis and interpretation.

How Runbooks Can Guide AI Models

The first step is making sure an AI model is trained on trustworthy data sources. AI models are learning models, designed to absorb information on the job and to learn from their mistakes. Ensuring a model’s input data is complete helps minimize mistakes by reducing how much it has to read between the lines, interpreting gaps in the data and drawing its own, sometimes spurious, conclusions.
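
As a rough illustration of what that input validation might look like in practice, the hypothetical Python sketch below filters out incomplete telemetry records before they reach a training set, and flags when too much data is being dropped. The field names and the 5 percent tolerance are assumptions for the example, not drawn from any particular product:

```python
# Hypothetical sketch: filter incomplete telemetry before it reaches a model.
# Field names and the completeness tolerance are illustrative assumptions.

REQUIRED_FIELDS = {"timestamp", "host", "metric", "value"}

def is_complete(record: dict) -> bool:
    """A record is usable only if every required field is present and non-null."""
    return all(record.get(field) is not None for field in REQUIRED_FIELDS)

def filter_telemetry(records: list[dict]) -> list[dict]:
    """Keep complete records, and surface the drop rate so gaps are visible, not silent."""
    complete = [r for r in records if is_complete(r)]
    dropped = len(records) - len(complete)
    if records and dropped / len(records) > 0.05:  # assumed 5% tolerance
        print(f"Warning: {dropped}/{len(records)} records incomplete; training data may be biased")
    return complete
```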

Maintaining proper oversight of a model can be a matter of ensuring it’s following the right processes, because, in this context, AI hallucinations may be caused not so much by incorrect data as by AI’s incorrect interpretation of the data. OpenAI, which created ChatGPT, said recently that it is working to reduce the chatbot’s hallucinations by focusing on “process supervision” rather than “outcome supervision”: that is, providing feedback on each step in a process as opposed to providing feedback only on the outcome.
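
The distinction can be made concrete with a small sketch. The Python below is a loose illustration of process supervision versus outcome supervision as applied to an automated diagnosis; the step names, confidence scores, and validators are invented for the example and are not OpenAI’s actual training method:

```python
# Minimal sketch contrasting outcome supervision with process supervision.
# The diagnostic steps and their validators are illustrative assumptions.

def outcome_supervision(final_answer: str, expected: str) -> bool:
    """Feedback on the result only: a flawed chain of reasoning can still pass."""
    return final_answer == expected

def process_supervision(steps: list[dict], validators: list) -> tuple[bool, str]:
    """Feedback on every step: each intermediate conclusion must check out."""
    for step, validate in zip(steps, validators):
        if not validate(step):
            return False, f"rejected at step '{step['name']}'"
    return True, "every step verified"

# Example: a two-step diagnosis where each step carries a confidence score.
steps = [
    {"name": "collect_metrics", "confidence": 0.97},
    {"name": "identify_root_cause", "confidence": 0.41},  # shaky reasoning
]
validators = [lambda s: s["confidence"] > 0.9] * 2
print(process_supervision(steps, validators))
# (False, "rejected at step 'identify_root_cause'") -- the weak step is caught
# even if the final answer happened to look right.
```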

Unified observability teams can apply that kind of control function to AI by using runbooks. Runbooks are automated workflows that investigate network incidents, respond to various triggers, and mimic an organization’s troubleshooting processes for getting to the root of a problem. They can be built with the input of experts currently on staff while incorporating a wide range of network, infrastructure, and application data, automating the collection of incident details and delivering immediately actionable insights for IT teams to follow up on.
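
In skeletal form, a runbook of that kind is a trigger plus an ordered list of diagnostic steps whose findings are collected for the IT team. The sketch below is a generic, assumed illustration; the step functions, thresholds, and incident fields are hypothetical rather than any vendor’s implementation:

```python
# Generic runbook sketch: a fixed sequence of diagnostic steps runs against
# an incident, and every finding is collected for the IT team to act on.
# All step names, fields, and thresholds here are hypothetical.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Runbook:
    name: str
    steps: list[Callable[[dict], str]] = field(default_factory=list)

    def run(self, incident: dict) -> list[str]:
        """Execute each step in order and record its finding."""
        return [f"{step.__name__}: {step(incident)}" for step in self.steps]

def check_network_path(incident: dict) -> str:
    return "packet loss on uplink" if incident.get("packet_loss", 0) > 0.01 else "path healthy"

def check_app_latency(incident: dict) -> str:
    return "app latency elevated" if incident.get("p95_ms", 0) > 500 else "latency normal"

high_latency_runbook = Runbook("high-latency", [check_network_path, check_app_latency])
print(high_latency_runbook.run({"packet_loss": 0.03, "p95_ms": 750}))
```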

AI, which can sort through scenarios and options faster than humans can, is capable of accelerating the runbook process, shortening the mean time to resolution for the incidents that most affect a company’s operations. But the reliability of its conclusions depends on full-fidelity telemetry in its training and clear processes to follow. Runbooks also provide visibility into the process, something AI on its own lacks. Much of an AI model’s reasoning is obscured from observers, and models are not adept at explaining how they reach a conclusion. The logic engine in a runbook, however, is transparent: it executes a predictable series of steps and lets you see how decisions are being made.
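
That transparency can be sketched as a decision trace: every condition the workflow evaluates gets logged, so an engineer can replay exactly how a conclusion was reached. The thresholds and rule wording below are assumptions for illustration:

```python
# Minimal sketch of a transparent decision trace: every condition a workflow
# evaluates is recorded, so the path to a conclusion can be audited afterward.
# Thresholds and rule names are illustrative assumptions.

def diagnose(incident: dict) -> tuple[str, list[str]]:
    trace = []
    if incident.get("packet_loss", 0) > 0.01:
        trace.append("packet_loss > 1% -> suspect network")
        conclusion = "escalate to network team"
    elif incident.get("p95_ms", 0) > 500:
        trace.append("p95 latency > 500 ms -> suspect application")
        conclusion = "escalate to app team"
    else:
        trace.append("no threshold breached -> monitor")
        conclusion = "keep monitoring"
    return conclusion, trace

conclusion, trace = diagnose({"packet_loss": 0.002, "p95_ms": 820})
print(conclusion)  # escalate to app team
print(trace)       # the auditable decision path that led there
```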

Final Thoughts on AI and Observability Platforms

AI has a lot to offer unified observability platforms, just as it does many other applications and fields. However, as powerful as AI is, it can’t be completely trusted when acting alone. It’s too prone to hallucinations, bias, and mistakes to accept its findings at face value, especially when it’s charged with making mission-critical decisions.

But that does not mean AI can’t be a powerful tool for unified observability platforms. Instead of letting an AI act alone, users should give AI models a foundation of full-fidelity telemetry for training, while using runbooks to guide and check their decision-making. Doing so establishes the trust needed for AI to take on increasingly important roles within unified observability platforms, and anywhere else accuracy is as critical as speed.
