GenAI Articles

Troubleshooting your Agents and Amazon Bedrock AgentCore with Elastic Observability
AWS, AWS Bedrock, GenAI, LLM Observability

Discover how to achieve end-to-end observability for Amazon Bedrock AgentCore: from tracking service health and token costs to debugging complex reasoning loops with distributed tracing.

Daniela Tzvetkova

Agi K Thomas

Ishleen Kaur

Bahubali Shetti

Elastic Observability: Streams Data Quality and Failure Store Insights
Log Analytics, GenAI

Discover how Streams, a new AI-driven Elastic Observability feature, helps manage data quality with a failure store so you can monitor, troubleshoot, and retain high-quality data.

Elena Stoeva

Yngrid Coello

Reconciliation in Elastic Streams: A Robust Architecture Deep Dive
Log Analytics, GenAI

Learn how Elastic's engineering team refactored Streams using a reconciliation model inspired by Kubernetes & React to build a robust, extensible, and debuggable system.

Milton Hultgren

How Streams in Elastic Observability Simplifies Retention Management
Log Analytics, OpenTelemetry, GenAI

Learn how Streams simplifies retention management in Elasticsearch with a unified view to monitor, visualize, and control data lifecycles using DSL or ILM.

Kevin Lacabane

Live logs and prosper: fixing a fundamental flaw in observability
Log Analytics, GenAI, OpenTelemetry

Stop chasing symptoms. Learn how Streams in Elastic Observability fixes the fundamental flaw in observability, using AI to proactively find the 'why' in your logs for faster resolution.

Ken Exner

Automating User Journeys for Synthetic Monitoring with MCP in Elastic
Python, Synthetics, GenAI

This post explores how you can automatically create user journeys for Synthetic Monitoring in Elastic Observability using TypeScript and FastMCP, and walks through the app and its workflow.

Jessica Garson

LLM Observability with Elastic’s Azure AI Foundry Integration
Azure, GenAI, LLM Observability

Gain comprehensive visibility into your generative AI workloads on Azure AI Foundry. Monitor token usage, latency, and cost, while leveraging built-in content filters to ensure safe and compliant application behavior—all with out-of-the-box observability powered by Elastic.

Bahubali Shetti

Muthukumar Paramasivam

Daniela Tzvetkova

Optimizing Spend and Content Moderation on Azure OpenAI with Elastic
Azure, GenAI, LLM Observability, Azure OpenAI

We have added further capabilities to the Azure OpenAI GA package, which now offers content filter monitoring and enhanced billing insights!

Muthukumar Paramasivam

Bahubali Shetti

Daniela Tzvetkova

Transforming Industries and the Critical Role of LLM Observability: How to use Elastic's LLM integrations in real-world scenarios
GenAI, LLM Observability, Cloud Monitoring

This blog explores four industry-specific use cases that employ Large Language Models (LLMs) and highlights how Elastic's LLM observability integrations provide insights into cost, performance, and reliability, as well as the prompt and response exchanges with the LLM.

Ishleen Kaur

Daniela Tzvetkova

LLM Observability for Google Cloud’s Vertex AI platform - understand performance, cost and reliability
GenAI, Google Cloud, LLM Observability

Enhance LLM observability with Elastic's GCP Vertex AI Integration — gain actionable insights into model performance, resource efficiency, and operational reliability.

Ishleen Kaur

Muthukumar Paramasivam

Daniela Tzvetkova

End to end LLM observability with Elastic: seeing into the opaque world of generative AI applications
Azure OpenAI, OpenAI, OpenTelemetry, GenAI, AWS Bedrock, LLM Observability

Elastic’s LLM Observability delivers end-to-end visibility into the performance, reliability, cost, and compliance of LLMs across Amazon Bedrock, Azure OpenAI, Google Vertex AI, and OpenAI, empowering SREs to optimize and troubleshoot AI-powered applications.

Daniela Tzvetkova

Bahubali Shetti

LLM observability: track usage and manage costs with Elastic's OpenAI integration
OpenAI, GenAI, LLM Observability

Elastic's new OpenAI integration for Observability provides comprehensive insights into OpenAI model usage. With our pre-built dashboards and metrics, you can effectively track and monitor OpenAI model usage including GPT-4o and DALL·E.

Subham Sarkar

Daniela Tzvetkova