WARNING: Version 6.2 of Filebeat has passed its EOL date.
This documentation is no longer being maintained and may be removed. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.
Modules overview
Filebeat modules simplify the collection, parsing, and visualization of common log formats.
A typical module (say, for the Nginx logs) is composed of one or more filesets (in the case of Nginx, access and error). A fileset contains the following:
- Filebeat prospector configurations, which contain the default paths where to look for the log files. These default paths depend on the operating system. The Filebeat configuration is also responsible for stitching together multiline events when needed (see the configuration sketch after this list).
- Elasticsearch Ingest Node pipeline definition, which is used to parse the log lines.
- Field definitions, which are used to configure Elasticsearch with the correct types for each field. They also contain short descriptions for each of the fields.
- Sample Kibana dashboards, which can be used to visualize the log files.
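For example, a module's filesets can be enabled and tuned directly in filebeat.yml. The snippet below is a minimal sketch for the Nginx module with its access and error filesets; the var.paths values are illustrative placeholders for a typical Linux layout and override the OS-specific defaults, so adjust them to your environment:

```yaml
filebeat.modules:
- module: nginx
  # The access fileset; var.paths overrides the OS-specific default paths.
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]   # assumed path, adjust as needed
  # The error fileset.
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log*"]    # assumed path, adjust as needed
```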
Filebeat automatically adjusts these configurations based on your environment and loads them to the respective Elastic stack components.
At the moment, Filebeat modules require using the Elasticsearch Ingest Node. In the future, Filebeat modules will also be able to configure Logstash as a more powerful alternative to Ingest Node. For now, if you want to use Logstash, you can follow the steps described in the section called Working with Filebeat Modules in the Logstash Reference.
Filebeat modules require Elasticsearch 5.2 or later.
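Because the modules' parsing happens in Elasticsearch Ingest Node pipelines, the typical setup sends events directly to Elasticsearch. A minimal sketch of the corresponding output section in filebeat.yml might look like this; the host and credentials are placeholders:

```yaml
output.elasticsearch:
  # Placeholder host; point this at your Elasticsearch cluster (5.2 or later).
  hosts: ["localhost:9200"]
  # Uncomment and set these if your cluster requires authentication.
  #username: "elastic"
  #password: "changeme"
```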
Get started
To learn how to configure and run Filebeat modules:
- Get started by reading Quick start for common log formats.
- Learn about the different ways to enable modules in Specify which modules to run (one common approach is sketched after this list).
- Dive into the documentation for each module.
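As one example of enabling modules, Filebeat ships a modules.d directory of per-module configuration files that it loads when settings like the following (shown here as an assumed default, not a prescription) are present in filebeat.yml:

```yaml
filebeat.config.modules:
  # Glob pattern for the per-module configuration files shipped with Filebeat.
  path: ${path.config}/modules.d/*.yml
  # Set to true to pick up changes to these files without restarting Filebeat.
  reload.enabled: false
```

Individual modules can then be switched on, for example with the `filebeat modules enable nginx` command, or configured directly under filebeat.modules as shown earlier on this page.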