Parse data by using ingest node
When you use Elasticsearch as the output, you can configure Filebeat to use ingest node to pre-process documents before the actual indexing takes place in Elasticsearch. Ingest node is a convenient processing option when you want to do some extra processing on your data, but you do not require the full power of Logstash. For example, you can create an ingest node pipeline in Elasticsearch that consists of one processor that removes a field in a document followed by another processor that renames a field.
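As an illustration of that remove-then-rename example, a pipeline definition might look like the following sketch. The field names temp_field and source_host are hypothetical placeholders, not fields that Filebeat necessarily produces:

{
  "description": "Example pipeline: remove one field, then rename another",
  "processors": [
    {
      "remove": {
        "field": "temp_field"
      }
    },
    {
      "rename": {
        "field": "source_host",
        "target_field": "host.name"
      }
    }
  ]
}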
After defining the pipeline in Elasticsearch, you simply configure Filebeat to use it. To configure Filebeat, you specify the pipeline ID in the pipeline option under output.elasticsearch in the filebeat.yml file:
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: my_pipeline_id
For example, let’s say that you’ve defined the following pipeline in a file named pipeline.json:
{ "description": "Test pipeline", "processors": [ { "lowercase": { "field": "agent.name" } } ] }
To add the pipeline in Elasticsearch, you would run:
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_ingest/pipeline/test-pipeline' -d@pipeline.json
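If you want to confirm that the pipeline was stored, you can retrieve it by ID. This check is not part of the original walkthrough, just a common sanity step:

curl -XGET 'http://localhost:9200/_ingest/pipeline/test-pipeline'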
Then in the filebeat.yml file, you would specify:
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: "test-pipeline"
When you run Filebeat, the value of agent.name is converted to lowercase before indexing.
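If you want to see the effect before indexing any real data, you can exercise the pipeline with the ingest simulate API. The sample document below is hypothetical and only illustrates the lowercase conversion:

curl -H 'Content-Type: application/json' -XPOST 'http://localhost:9200/_ingest/pipeline/test-pipeline/_simulate' -d'
{
  "docs": [
    { "_source": { "agent": { "name": "FILEBEAT-HOST" } } }
  ]
}'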
For more information about defining a pre-processing pipeline, see the Ingest Node documentation.