Step 3: Configure Filebeat to use Logstash

Prerequisite
To send events to Logstash, you also need to create a Logstash configuration pipeline that listens for incoming Beats connections and indexes the received events into Elasticsearch. For more information, see the section about configuring Logstash in the Elastic Stack getting started tutorial. Also see the documentation for the Beats input and Elasticsearch output plugins.
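As a minimal sketch (the port, Elasticsearch host, and any index settings are placeholders to adapt to your environment), such a pipeline pairs the Beats input plugin with the Elasticsearch output plugin:

input {
  beats {
    # Listen for incoming Beats connections on this port.
    port => 5044
  }
}

output {
  elasticsearch {
    # Placeholder host; point this at your Elasticsearch cluster.
    hosts => ["localhost:9200"]
  }
}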
If you want to use Logstash to perform additional processing on the data collected by Filebeat, you need to configure Filebeat to use Logstash.
To do this, you edit the Filebeat configuration file to disable the Elasticsearch output by commenting it out and enable the Logstash output by uncommenting the logstash section:
#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["127.0.0.1:5044"]
The hosts option specifies the Logstash server and the port (5044) where Logstash is configured to listen for incoming Beats connections.
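If you run more than one Logstash instance, you can list several hosts. The hostnames below are placeholders; loadbalance tells Filebeat to distribute published events across the listed hosts (see Load balance the output hosts):

output.logstash:
  # Placeholder hostnames; replace with your own Logstash endpoints.
  hosts: ["logstash1.example.com:5044", "logstash2.example.com:5044"]
  loadbalance: true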
For this configuration, you must load the index template into Elasticsearch manually because the options for auto loading the template are only available for the Elasticsearch output.
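One way to load the template manually, assuming Elasticsearch is reachable at localhost:9200 (adjust the host to match your cluster), is to run the setup command with the Logstash output temporarily disabled:

./filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'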
To test your configuration file, change to the directory where the Filebeat binary is installed, and run Filebeat in the foreground with the following options specified: ./filebeat test config -e. Make sure your config files are in the path expected by Filebeat (see Directory layout), or use the -c flag to specify the path to the config file.
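If the configuration check passes, you can also confirm that Filebeat can reach the configured Logstash host. The config file path below is only an example:

# Test the config file at an explicit path (path is illustrative).
./filebeat test config -c /etc/filebeat/filebeat.yml -e

# Attempt a connection to the configured output (Logstash in this setup).
./filebeat test output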