Secure communication with Elasticsearch
When sending data to a secured cluster through the elasticsearch output, Filebeat can use any of the following authentication methods:
- Basic authentication credentials (username and password).
- Token-based API authentication.
- A client certificate.
Authentication is specified in the Filebeat configuration file:
- To use basic authentication, specify the username and password settings under output.elasticsearch. For example:

  ```yaml
  output.elasticsearch:
    hosts: ["https://myEShost:9200"]
    username: "filebeat_writer"
    password: "YOUR_PASSWORD"
  ```

  This user needs the privileges required to publish events to Elasticsearch. To create a user like this, see Create a publishing user.

  This example shows a hard-coded password, but you should store sensitive values in the secrets keystore.
- To use token-based API key authentication, specify the api_key setting under output.elasticsearch. For example:

  ```yaml
  output.elasticsearch:
    hosts: ["https://myEShost:9200"]
    api_key: "ZCV7VnwBgnX0T19fN8Qe:KnR6yE41RrSowb0kQ0HWoA"
  ```

  This API key must have the privileges required to publish events to Elasticsearch. To create an API key like this, see Grant access using API keys.
- To use Public Key Infrastructure (PKI) certificates to authenticate users, specify the certificate and key settings under output.elasticsearch. For example:

  ```yaml
  output.elasticsearch:
    hosts: ["https://myEShost:9200"]
    ssl.certificate: "/etc/pki/client/cert.pem"
    ssl.key: "/etc/pki/client/cert.key"
  ```

  These settings assume that the distinguished name (DN) in the certificate is mapped to the appropriate roles in the role_mapping.yml file on each node in the Elasticsearch cluster. For more information, see Using role mapping files; a sketch of such a mapping appears after this list.

  By default, Filebeat uses the list of trusted certificate authorities (CA) from the operating system where Filebeat is running. If the certificate authority that signed your node certificates is not in the host system's trusted certificate authorities list, add the path to the .pem file that contains your CA's certificate to the Filebeat configuration. This configures Filebeat to use a specific list of CA certificates instead of the default list from the OS.

  Here is an example configuration:

  ```yaml
  output.elasticsearch:
    hosts: ["https://myEShost:9200"]
    ssl.certificate_authorities:
      - /etc/pki/my_root_ca.pem
      - /etc/pki/my_other_ca.pem
    ssl.certificate: "/etc/pki/client.pem"
    ssl.key: "/etc/pki/key.pem"
  ```

  In this example, ssl.certificate_authorities specifies the paths to the local .pem files that contain your Certificate Authority's certificates; this is needed if you use your own CA to sign your node certificates. ssl.certificate is the path to the certificate for SSL client authentication, and ssl.key is the client certificate key.
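For illustration, a role mapping like the one referenced above could look like the following sketch on an Elasticsearch node. The role name filebeat_publisher and the certificate DN are hypothetical and must match your own roles and client certificate:

```yaml
# role_mapping.yml on each Elasticsearch node (hypothetical role name and DN)
filebeat_publisher:
  - "CN=filebeat-client,OU=example,O=example.com"
```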
For any given connection, the SSL/TLS certificates must have a subject that matches the value specified for hosts, or the SSL handshake fails. For example, if you specify hosts: ["foobar:9200"], the certificate MUST include foobar in the subject (CN=foobar) or as a subject alternative name (SAN). Make sure the hostname resolves to the correct IP address. If no DNS is available, you can associate the IP address with your hostname in /etc/hosts (on Unix) or C:\Windows\System32\drivers\etc\hosts (on Windows).
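For instance, a hosts file entry mapping a placeholder address to the myEShost name used in the examples above could look like this (the IP address is an example only):

```
192.0.2.10   myEShost
```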
Secure communication with the Kibana endpoint

If you've configured the Kibana endpoint, you can also specify credentials for authenticating with Kibana under setup.kibana. If no credentials are specified, Kibana will use the configured authentication method in the Elasticsearch output.
For example, specify a unique username and password to connect to Kibana like this:
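A minimal sketch of that configuration; the host and credential values are placeholders:

```yaml
setup.kibana:
  host: "mykibanahost:5601"     # placeholder Kibana host and port
  username: "my_kibana_user"    # placeholder user with dashboard setup privileges
  password: "YOUR_PASSWORD"
```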
This user needs the privileges required to set up dashboards. To create a user like this, see Create a setup user.

This example shows a hard-coded password, but you should store sensitive values in the secrets keystore.
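As a sketch of that workflow, assuming the standard Filebeat keystore commands (the key name KIBANA_PASSWORD is an example):

```sh
# Create the keystore once, then add a key that holds the password.
filebeat keystore create
filebeat keystore add KIBANA_PASSWORD
```

The configuration can then reference the stored value instead of a literal password, for example password: "${KIBANA_PASSWORD}".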
Learn more about secure communication

More information on sending data to a secured cluster is available in the configuration reference.