Settings for internal collection
Use the following settings to configure internal collection when you are not using Metricbeat to collect monitoring data. You specify these settings in the X-Pack monitoring section of the filebeat.yml config file:
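For reference, here is a minimal sketch of what that section might look like in filebeat.yml; the host address and credentials are placeholders, not values taken from this page:

monitoring.enabled: true
monitoring.elasticsearch:
  hosts: ["https://localhost:9200"]   # placeholder monitoring cluster address
  username: beats_system              # placeholder user for shipping monitoring data
  password: changeme                  # placeholder password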
monitoring.enabled
The monitoring.enabled config is a boolean setting that enables or disables X-Pack monitoring. If set to true, monitoring is enabled. The default value is false.
monitoring.elasticsearch
The Elasticsearch instances that you want to ship your Filebeat metrics to. This configuration option contains the following fields:
bulk_max_size
The maximum number of metrics to bulk in a single Elasticsearch bulk API index request. The default is 50. For more information, see Elasticsearch.
backoff.init
The number of seconds to wait before trying to reconnect to Elasticsearch after a network error. After waiting backoff.init seconds, Filebeat tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to backoff.max. After a successful connection, the backoff timer is reset. The default is 1s.
backoff.max
The maximum number of seconds to wait before attempting to connect to Elasticsearch after a network error. The default is 60s.
compression_level
The gzip compression level. Setting this value to 0 disables compression. The compression level must be in the range of 1 (best speed) to 9 (best compression). The default value is 0. Increasing the compression level reduces network usage but increases CPU usage.
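The batching, backoff, and compression options can be tuned together. The following sketch uses only options described on this page; the host is a placeholder and the values are illustrative, not recommendations:

monitoring.elasticsearch:
  hosts: ["https://monitoring-es:9200"]   # placeholder host
  bulk_max_size: 100      # metrics per bulk API index request
  backoff.init: 1s        # first reconnect delay after a network error
  backoff.max: 60s        # upper bound for the exponential backoff
  compression_level: 3    # 0 disables gzip; 1 (best speed) to 9 (best compression)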
headers
Custom HTTP headers to add to each request. For more information, see Elasticsearch.
hosts
The list of Elasticsearch nodes to connect to. Monitoring metrics are distributed to these nodes in round robin order. For more information, see Elasticsearch.
max_retries
The number of times to retry sending the monitoring metrics after a failure. After the specified number of retries, the metrics are typically dropped. The default value is 3. For more information, see Elasticsearch.
parameters
A dictionary of HTTP parameters to pass in the URL with index operations.
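A sketch of how headers and parameters might be set; the header name and parameter shown are illustrative placeholders, not values required by Filebeat:

monitoring.elasticsearch:
  headers:
    X-My-Header: "internal-monitoring"   # placeholder custom HTTP header
  parameters:
    my_param: "my_value"                 # placeholder URL query parameter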
password
The password that Filebeat uses to authenticate with the Elasticsearch instances for shipping monitoring data.
metrics.period
The time interval (in seconds) at which metrics are sent to the Elasticsearch cluster. A new snapshot of Filebeat metrics is generated and scheduled for publishing each period. The default value is 10 seconds (10 * time.Second).
state.period
The time interval (in seconds) at which state information is sent to the Elasticsearch cluster. A new snapshot of Filebeat state is generated and scheduled for publishing each period. The default value is 60 seconds (60 * time.Second).
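A sketch of adjusting the reporting intervals, assuming both settings accept duration values as elsewhere in the Beats configuration; the intervals shown are the documented defaults:

monitoring.elasticsearch:
  metrics.period: 10s   # how often a metrics snapshot is published
  state.period: 60s     # how often a state snapshot is published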
protocol
The name of the protocol to use when connecting to the Elasticsearch cluster. The options are: http or https. The default is http. If you specify a URL for hosts, however, the value of protocol is overridden by the scheme you specify in the URL.
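For example, in the following sketch the connection uses HTTPS regardless of the protocol value, because the scheme in the hosts URL takes precedence; the host is a placeholder:

monitoring.elasticsearch:
  hosts: ["https://monitoring-es.example.com:9200"]   # the https:// scheme wins
  protocol: http                                      # overridden by the URL scheme above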
proxy_url
The URL of the proxy to use when connecting to the Elasticsearch cluster. For more information, see Elasticsearch.
timeout
The HTTP request timeout in seconds for the Elasticsearch request. The default is 90.
ssl
Configuration options for Transport Layer Security (TLS) or Secure Sockets Layer (SSL) parameters like the certificate authority (CA) to use for HTTPS-based connections. If the ssl section is missing, the host CAs are used for HTTPS connections to Elasticsearch. For more information, see Specify SSL settings.
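A sketch of an ssl section, assuming the standard Beats SSL options covered under Specify SSL settings; the host and CA path are placeholders:

monitoring.elasticsearch:
  hosts: ["https://monitoring-es:9200"]                 # placeholder host
  ssl:
    certificate_authorities: ["/etc/pki/root/ca.pem"]   # placeholder CA certificate path
    verification_mode: full                             # verify the server certificate and hostname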
username
The user ID that Filebeat uses to authenticate with the Elasticsearch instances for shipping monitoring data.