Connect to Azure OpenAI
This page provides step-by-step instructions for setting up an Azure OpenAI connector for the first time. This connector type enables you to leverage large language models (LLMs) within Kibana. You'll first need to configure Azure, then configure the connector in Kibana.
Configure Azure
Configure a deployment
First, set up an Azure OpenAI deployment:
- Log in to the Azure console and search for Azure OpenAI.
- In Azure AI services, select Create.
- For the Project Details, select your subscription and resource group. If you don’t have a resource group, select Create new to make one.
- For Instance Details, select the desired region and specify a name, such as example-deployment-openai.
- Select the Standard pricing tier, then click Next.
- Configure your network settings, click Next, optionally add tags, then click Next.
- Review your deployment settings, then click Create. When complete, select Go to resource.
Configure keys
Next, create access keys for the deployment:
- From within your Azure OpenAI deployment, select Click here to manage keys.
- Store your keys in a secure location.
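Once you have the keys, avoid hardcoding them in scripts or configuration that might end up in version control. A minimal sketch, assuming you export the key as an environment variable first (the variable name `AZURE_OPENAI_API_KEY` is a convention used here, not an Azure requirement):

```python
import os

def load_api_key(var: str = "AZURE_OPENAI_API_KEY") -> str:
    """Read the Azure OpenAI key from the environment instead of source code."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export your Azure OpenAI key first")
    return key
```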
Configure a model
Now, set up the Azure OpenAI model:
- From within your Azure OpenAI deployment, select Model deployments, then click Manage deployments.
- On the Deployments page, select Create new deployment.
- Under Select a model, choose gpt-4o or gpt-4 turbo.
- Set the model version to "Auto-update to default".
- Under Deployment type, select Standard.
- Name your deployment.
- Slide the Tokens per Minute Rate Limit to the maximum. For example, some regions support 80,000 TPM, while others might support higher limits.
- Click Create.
The models available to you will depend on region availability. For best results, use GPT-4o 2024-05-13 with the maximum Tokens-Per-Minute (TPM) capacity. For more information on how different models perform for different tasks, refer to the LLM performance matrix.
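Before wiring the deployment into Kibana, you can sanity-check it directly. The sketch below assembles the chat-completions URL and request that Azure OpenAI deployments expose; the resource name, deployment name, API version, and key are placeholder values to replace with your own:

```python
import json
import urllib.request

# Placeholder values -- substitute your own resource name, deployment
# name, API version, and key from the Azure portal.
RESOURCE = "example-deployment-openai"
DEPLOYMENT = "gpt-4o"
API_VERSION = "2024-02-01"
API_KEY = "<your-azure-openai-key>"

def build_chat_url(resource: str, deployment: str, api_version: str) -> str:
    """Build the chat-completions endpoint URL for an Azure OpenAI deployment."""
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

def build_request(url: str, api_key: str, prompt: str) -> urllib.request.Request:
    """Assemble a minimal chat-completions request (constructed, not sent here)."""
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json", "api-key": api_key},
        method="POST",
    )

req = build_request(build_chat_url(RESOURCE, DEPLOYMENT, API_VERSION), API_KEY, "Hello")
print(req.full_url)
```

The URL printed here has the same shape as the one you will paste into the connector's URL field in Kibana below.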
Configure Elastic AI Assistant
Finally, configure the connector in Kibana:
- Log in to Kibana.
- Find Connectors in the navigation menu or use the global search field. Then click Create Connector, and select OpenAI.
- Give your connector a name to help you keep track of different models, such as Azure OpenAI (GPT-4 Turbo v. 0125).
- For Select an OpenAI provider, choose Azure OpenAI.
- Update the URL field. We recommend doing the following:
  - Navigate to your deployment in Azure AI Studio and select Open in Playground. The Chat playground screen displays.
  - Select View code, then from the drop-down, change the Sample code to Curl.
  - Highlight and copy the URL without the quotes, then paste it into the URL field in Kibana.
  - (Optional) Alternatively, refer to the API documentation to learn how to create the URL manually.
- Under API key, enter one of your API keys.
- Click Save & test, then click Run.
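The same connector can also be created programmatically through Kibana's Connectors API rather than the UI. This is a sketch of the request body, assuming the `.gen-ai` connector type that Kibana uses for OpenAI-family connectors; the Kibana URL, deployment URL, and key shown are placeholders:

```python
import json

# Placeholder values -- substitute your own Kibana URL, Azure deployment
# URL, and key.
KIBANA_URL = "https://my-kibana.example.com"
AZURE_URL = (
    "https://example-deployment-openai.openai.azure.com/openai/deployments/"
    "gpt-4o/chat/completions?api-version=2024-02-01"
)

connector = {
    "name": "Azure OpenAI (GPT-4o)",
    "connector_type_id": ".gen-ai",
    "config": {"apiProvider": "Azure OpenAI", "apiUrl": AZURE_URL},
    "secrets": {"apiKey": "<your-azure-openai-key>"},
}

# POST this JSON to {KIBANA_URL}/api/actions/connector with the
# "kbn-xsrf: true" header to create the connector.
payload = json.dumps(connector, indent=2)
print(payload)
```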