Configure the Elasticsearch output

The Elasticsearch output sends events directly to Elasticsearch by using the Elasticsearch HTTP API.
Compatibility: This output works with all compatible versions of Elasticsearch. See the Elastic Support Matrix.
This example configures an Elasticsearch output called default in the elastic-agent.yml file:

    outputs:
      default:
        type: elasticsearch
        hosts: [127.0.0.1:9200]
        username: elastic
        password: changeme
This example is similar to the previous one, except that it uses the recommended token-based (API key) authentication:
    outputs:
      default:
        type: elasticsearch
        hosts: [127.0.0.1:9200]
        api_key: "my_api_key"
Token-based authentication is required in an Elastic Cloud Serverless environment.
Elasticsearch output configuration settings

The elasticsearch output type supports the following settings, grouped by category. Many of these settings have sensible defaults that allow you to run Elastic Agent with minimal configuration.
Commonly used settings
`enabled`
(boolean) Enables or disables the output. If set to false, the output is disabled.
Default: true

`hosts`
(list) The list of Elasticsearch nodes to connect to. The events are distributed to these nodes in round robin order. If one node becomes unreachable, the event is automatically sent to another node. Each Elasticsearch node can be defined as a URL or IP:PORT. When a node is defined as an IP:PORT, the scheme and path are taken from the protocol and path settings. For example:

    outputs:
      default:
        type: elasticsearch
        hosts: ["10.45.3.2:9220", "10.45.3.1:9230"]
        protocol: https
        path: /elasticsearch

Note that Elasticsearch nodes in the Elastic Cloud Serverless environment are exposed on port 443.

`protocol`
(string) The name of the protocol Elasticsearch is reachable on. The options are http or https.

`proxy_disable`
(boolean) If set to true, all proxy settings, including the HTTP_PROXY and HTTPS_PROXY environment variables, are ignored.
Default: false

`proxy_headers`
(string) Additional headers to send to proxies during CONNECT requests.

`proxy_url`
(string) The URL of the proxy to use when connecting to the Elasticsearch servers. The value may be either a complete URL or a host:port, in which case the http scheme is assumed. A proxy configuration sketch follows this list.
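For instance, a minimal sketch of routing the output through a proxy (the proxy URL and header shown here are hypothetical placeholders):

    outputs:
      default:
        type: elasticsearch
        hosts: ["https://myEShost:9200"]
        proxy_url: "http://proxy.example.com:3128"   # hypothetical proxy address
        proxy_headers:
          X-My-Proxy-Header: "some-value"            # hypothetical header sent during CONNECT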
Authentication settings
When sending data to a secured cluster through the elasticsearch output, Elastic Agent can use any of the following authentication methods:
Basic authentication credentials
    outputs:
      default:
        type: elasticsearch
        hosts: ["https://myEShost:9200"]
        username: "your-username"
        password: "your-password"
`password`
(string) The basic authentication password for connecting to Elasticsearch.

`username`
(string) The basic authentication username for connecting to Elasticsearch. This user needs the privileges required to publish events to Elasticsearch. Note that in an Elastic Cloud Serverless environment you need to use token-based (API key) authentication.
Token-based (API key) authentication
    outputs:
      default:
        type: elasticsearch
        hosts: ["https://myEShost:9200"]
        api_key: "KnR6yE41RrSowb0kQ0HWoA"
`api_key`
(string) Instead of using a username and password, you can use API keys to secure communication with Elasticsearch. The value must be the ID of the API key and the API key joined by a colon, in the form id:api_key.
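For instance, a minimal sketch using the id:api_key form (both values below are hypothetical placeholders, not real credentials):

    outputs:
      default:
        type: elasticsearch
        hosts: ["https://myEShost:9200"]
        api_key: "TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA"   # format: <id>:<api_key>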
Public Key Infrastructure (PKI) certificates
    outputs:
      default:
        type: elasticsearch
        hosts: ["https://myEShost:9200"]
        ssl.certificate: "/etc/pki/client/cert.pem"
        ssl.key: "/etc/pki/client/cert.key"
For a list of available settings, refer to SSL/TLS, specifically the settings under Table 7, “Common configuration options” and Table 8, “Client configuration options”.
Kerberos
The following encryption types are supported:
- aes128-cts-hmac-sha1-96
- aes128-cts-hmac-sha256-128
- aes256-cts-hmac-sha1-96
- aes256-cts-hmac-sha384-192
- des3-cbc-sha1-kd
- rc4-hmac
Example output config with Kerberos password-based authentication:
    outputs:
      default:
        type: elasticsearch
        hosts: ["http://my-elasticsearch.elastic.co:9200"]
        kerberos.auth_type: password
        kerberos.username: "elastic"
        kerberos.password: "changeme"
        kerberos.config_path: "/etc/krb5.conf"
        kerberos.realm: "ELASTIC.CO"
The service principal name for the Elasticsearch instance is constructed from these options. Based on this configuration, the name would be:
HTTP/my-elasticsearch.elastic.co@ELASTIC.CO
`kerberos.auth_type`
(string) The type of authentication to use with the Kerberos KDC: password (also set kerberos.username and kerberos.password) or keytab (also set kerberos.username and kerberos.keytab).
Default: password

`kerberos.config_path`
(string) Path to the krb5.conf file that Elastic Agent uses to find the Kerberos KDC and retrieve a ticket.

`kerberos.enabled`
(boolean) Enables or disables the Kerberos configuration. Kerberos settings are disabled if either this setting is false or the kerberos section is missing.

`kerberos.enable_krb5_fast`
(boolean) If true, enables Kerberos FAST authentication. This may conflict with some Active Directory installations.
Default: false

`kerberos.keytab`
(string) If keytab is used for auth_type, the path to the keytab file of the selected principal (see the sketch after this list).

`kerberos.password`
(string) If password is used for auth_type, the password for the selected principal.

`kerberos.realm`
(string) Name of the realm where the output resides.

`kerberos.username`
(string) Name of the principal used to connect to the output.
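For keytab-based authentication, a minimal sketch (the principal name and keytab path are hypothetical; adjust them for your environment):

    outputs:
      default:
        type: elasticsearch
        hosts: ["http://my-elasticsearch.elastic.co:9200"]
        kerberos.auth_type: keytab
        kerberos.username: "elastic"
        kerberos.keytab: "/etc/security/keytabs/elastic.keytab"   # hypothetical keytab path
        kerberos.config_path: "/etc/krb5.conf"
        kerberos.realm: "ELASTIC.CO"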
Compatibility setting
`allow_older_versions`
Allow Elastic Agent to connect and send output to an Elasticsearch instance that is running an earlier version than the agent version.

Note that this setting does not affect Elastic Agent’s ability to connect to Fleet Server. Fleet Server will not accept a connection from an agent at a later major or minor version. It will accept a connection from an agent at a later patch version. For example, an Elastic Agent at version 8.14.3 can connect to a Fleet Server on version 8.14.0, but an agent at version 8.15.0 or later is not able to connect.

Default: true
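For example, a minimal sketch that enables this behavior explicitly on the default output (the host value is a placeholder):

    outputs:
      default:
        type: elasticsearch
        hosts: ["https://myEShost:9200"]
        allow_older_versions: true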
Data parsing, filtering, and manipulation settings
Settings used to parse, filter, and transform data.

`escape_html`
(boolean) Configures escaping of HTML in strings. Set to true to enable escaping.
Default: false

`pipeline`
(string) A format string value that specifies the ingest pipeline to write events to.

    outputs:
      default:
        type: elasticsearch
        hosts: ["http://localhost:9200"]
        pipeline: my_pipeline_id

You can set the ingest pipeline dynamically by using a format string to access any event field. For example, this configuration uses a custom field, fields.log_type, to set the pipeline for each event:

    outputs:
      default:
        type: elasticsearch
        hosts: ["http://localhost:9200"]
        pipeline: "%{[fields.log_type]}_pipeline"

With this configuration, each event is written to a pipeline whose name is derived from the log_type field. For example, events with log_type: normal are sent to a pipeline named normal_pipeline, and events with log_type: critical are sent to critical_pipeline. To learn how to add custom fields to events, see the fields setting. See the pipelines setting for more information about conditionally selecting a pipeline.

`pipelines`
An array of pipeline selector rules. Each rule specifies the ingest pipeline to use for events that match the rule. During publishing, Elastic Agent uses the first matching rule in the array. Rules can contain conditionals, format string-based fields, and name mappings. If the pipelines setting is missing or no rule matches, the pipeline setting is used.

Rule settings:

- pipeline: The pipeline format string to use. If this string contains field references, such as %{[fields.name]}, the fields must exist, or the rule fails.
- mappings: A dictionary that takes the value returned by pipeline and maps it to a new name.
- default: The default string value to use if mappings does not find a match.
- when: A condition that must succeed in order to execute the current rule. All the conditions supported by processors are also supported here.

The following example sends events to a specific pipeline based on whether the message field contains a WARN or ERR string:

    outputs:
      default:
        type: elasticsearch
        hosts: ["http://localhost:9200"]
        pipelines:
          - pipeline: "warning_pipeline"
            when.contains:
              message: "WARN"
          - pipeline: "error_pipeline"
            when.contains:
              message: "ERR"

The following example sets the pipeline by taking the name returned by the pipeline format string and mapping it to a new name:

    outputs:
      default:
        type: elasticsearch
        hosts: ["http://localhost:9200"]
        pipelines:
          - pipeline: "%{[fields.log_type]}"
            mappings:
              critical: "sev1_pipeline"
              normal: "sev2_pipeline"
            default: "sev3_pipeline"

With this configuration, all events with log_type: critical are sent to sev1_pipeline, events with log_type: normal are sent to sev2_pipeline, and all other events are sent to sev3_pipeline.
HTTP settings
Settings that modify the HTTP requests sent to Elasticsearch.

`headers`
Custom HTTP headers to add to each request created by the Elasticsearch output. Example:

    outputs:
      default:
        type: elasticsearch
        headers:
          X-My-Header: Header contents

Specify multiple header values for the same header name by separating them with a comma.

`parameters`
Dictionary of HTTP parameters to pass within the URL with index operations (see the example after this list).

`path`
(string) An HTTP path prefix that is prepended to the HTTP API calls. This is useful for the cases where Elasticsearch listens behind an HTTP reverse proxy that exports the API under a custom prefix.
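For instance, a minimal sketch combining a path prefix with an extra URL parameter (the prefix and parameter shown are hypothetical placeholders):

    outputs:
      default:
        type: elasticsearch
        hosts: ["https://myEShost:9200"]
        path: /elasticsearch          # hypothetical reverse-proxy prefix
        parameters:
          my_param: my_value          # hypothetical query parameter added to request URLs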
Memory queue settings
The memory queue keeps all events in memory.
The memory queue waits for the output to acknowledge or drop events. If the queue is full, no new events can be inserted into the memory queue. Only after the signal from the output will the queue free up space for more events to be accepted.
The memory queue is controlled by the parameters flush.min_events and flush.timeout. flush.min_events gives a limit on the number of events that can be included in a single batch, and flush.timeout specifies how long the queue should wait to completely fill an event request. If the output supports a bulk_max_size parameter, the maximum batch size will be the smaller of bulk_max_size and flush.min_events.

flush.min_events is a legacy parameter, and new configurations should prefer to control batch size with bulk_max_size. As of 8.13, there is never a performance advantage to limiting batch size with flush.min_events instead of bulk_max_size.
In synchronous mode, an event request is always filled as soon as events are available, even if there are not enough events to fill the requested batch. This is useful when latency must be minimized. To use synchronous mode, set flush.timeout to 0.
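For example, a minimal sketch of a synchronous-mode queue configuration (the event count is illustrative):

    queue.mem.events: 4096
    queue.mem.flush.timeout: 0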
For backwards compatibility, synchronous mode can also be activated by setting flush.min_events to 0 or 1. In this case, batch size will be capped at 1/2 the queue capacity.
In asynchronous mode, an event request will wait up to the specified timeout to try and fill the requested batch completely. If the timeout expires, the queue returns a partial batch with all available events. To use asynchronous mode, set flush.timeout to a positive duration, for example 5s.
This sample configuration forwards events to the output when there are enough events to fill the output’s request (usually controlled by bulk_max_size, and limited to at most 512 events by flush.min_events), or when events have been waiting for 5 seconds:

    queue.mem.events: 4096
    queue.mem.flush.min_events: 512
    queue.mem.flush.timeout: 5s
`queue.mem.events`
The number of events the queue can store. This value should be evenly divisible by the smaller of queue.mem.flush.min_events or bulk_max_size to avoid sending partial batches to the output.
Default: 3200

`queue.mem.flush.min_events`
(int) The minimum number of events required for publishing. If this value is set to 0 or 1, events are made available to the output immediately. This is a legacy setting; prefer bulk_max_size to control batch size.
Default: 1600

`queue.mem.flush.timeout`
(int) The maximum wait time for queue.mem.flush.min_events to be fulfilled. If set to 0, events are made available to the output immediately.
Default: 10s
Performance tuning settings
Settings that may affect performance when sending data through the Elasticsearch output.

Use the preset option to automatically configure the group of performance tuning settings to optimize for throughput, scale, or latency, or you can select a balanced set of performance specifications.

The performance tuning preset values take precedence over any settings that may be defined separately. If you want to change any setting, set preset to custom and specify the performance tuning settings individually.
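For example, a minimal sketch that optimizes the default output for throughput (the host value is a placeholder):

    outputs:
      default:
        type: elasticsearch
        hosts: ["https://myEShost:9200"]
        preset: throughput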
`backoff.init`
(string) The number of seconds to wait before trying to reconnect to Elasticsearch after a network error. After waiting backoff.init seconds, Elastic Agent tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to backoff.max. After a successful connection, the backoff timer is reset.
Default: 1s

`backoff.max`
(string) The maximum number of seconds to wait before attempting to connect to Elasticsearch after a network error.
Default: 60s

`bulk_max_size`
(int) The maximum number of events to bulk in a single Elasticsearch bulk API index request. Events can be collected into batches. Elastic Agent will split batches larger than bulk_max_size into multiple batches.

Specifying a larger batch size can improve performance by lowering the overhead of sending events. However big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput. Setting bulk_max_size to a value less than or equal to 0 turns off the splitting of batches; in that case the queue determines the number of events contained in a batch.
Default: 1600

`compression_level`
(int) The gzip compression level. Set this value to 0 to disable compression. The compression level must be in the range of 1 (best speed) to 9 (best compression).

Increasing the compression level reduces network usage but increases CPU usage.
Default: 1

`max_retries`
(int) The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped. Set max_retries to a value less than 0 to retry until all events are published.
Default: 3

`preset`
Configures the full group of performance tuning settings to optimize your Elastic Agent performance when sending data to an Elasticsearch output. Refer to Performance tuning settings for a table showing the group of values associated with any preset, and another table showing EPS (events per second) results from testing the different preset options. Performance tuning preset settings: balanced, throughput, scale, latency, and custom. Setting preset to custom lets you define the individual tuning settings yourself (see the example at the end of this section).
Default: balanced

`timeout`
(string) The HTTP request timeout in seconds for the Elasticsearch request.
Default: 90s

`worker`
(int) The number of workers per configured host publishing events. Example: If you have two hosts and three workers, in total six workers are started (three for each host).
Default: 1
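To illustrate how these settings fit together, here is a sketch of a custom-tuned output (the host and all values are illustrative, not recommendations):

    outputs:
      default:
        type: elasticsearch
        hosts: ["https://myEShost:9200"]
        preset: custom
        bulk_max_size: 1600
        worker: 2
        compression_level: 1
        timeout: 90
        max_retries: 3
        backoff.init: 1s
        backoff.max: 60s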