Journald input

This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
`journald` is a system service that collects and stores logging data. The journald input reads this log data and the metadata associated with it. To read this log data, Filebeat calls `journalctl` to read from the journal, so Filebeat needs permission to execute `journalctl`.

If the `journalctl` process exits unexpectedly, the journald input terminates with an error and Filebeat must be restarted to start reading from the journal again.
The simplest configuration example is one that reads all logs from the default journal.

```yaml
filebeat.inputs:
- type: journald
  id: everything
```
You may wish to have separate inputs for each service. You can use `include_matches` to specify filtering expressions. A good way to list the journald fields that are available for filtering messages is to run `journalctl -o json` to output logs and metadata as JSON. This example collects logs from the `vault.service` systemd unit.

```yaml
filebeat.inputs:
- type: journald
  id: service-vault
  include_matches.match:
    - _SYSTEMD_UNIT=vault.service
```
This example collects kernel logs where the message begins with `iptables`. Note that `include_matches` is more efficient than Beat processors because the expressions are applied before the data is passed to Filebeat, so prefer it where possible.

```yaml
filebeat.inputs:
- type: journald
  id: iptables
  include_matches.match:
    - _TRANSPORT=kernel
  processors:
    - drop_event:
        when.not.regexp.message: '^iptables'
```
Each example adds the `id` for the input to ensure the cursor is persisted to the registry with a unique ID. The ID should be unique among journald inputs. If you don't specify an `id`, one is created for you by hashing the configuration, so modifying the config results in a new ID and a fresh cursor.
Configuration options

The `journald` input supports the following configuration options plus the Common options described later.
id

A unique identifier for the input. By providing a unique `id` you can operate multiple inputs on the same journal. This allows each input's cursor to be persisted independently in the registry file. Each journald input must have a unique ID.

```yaml
filebeat.inputs:
- type: journald
  id: consul.service
  include_matches.match:
    - _SYSTEMD_UNIT=consul.service

- type: journald
  id: vault.service
  include_matches.match:
    - _SYSTEMD_UNIT=vault.service
```
paths

A list of paths that will be crawled and fetched. Each path can be a directory path (to collect events from all journals in a directory) or a file path. If you specify a directory, Filebeat merges all journals under the directory into a single journal and reads them. If no paths are specified, Filebeat reads from the default journal.
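For instance, a minimal sketch that reads a specific journal file (the path here is hypothetical; adjust it to your system):

```yaml
filebeat.inputs:
- type: journald
  id: archived-journal
  # Hypothetical journal file, for example one copied from another host.
  paths:
    - /var/log/journal/remote/remote-host.journal
```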
seek

The position to start reading the journal from. Valid settings are:

- `head`: Starts reading at the beginning of the journal. After a restart, Filebeat resends all log messages in the journal.
- `tail`: Starts reading at the end of the journal. This means that no events will be sent until a new message is written.
- `since`: Use the `since` option to determine where to start reading from.
Regardless of the value of `seek`, if Filebeat has a state (cursor) for this input, the `seek` value is ignored and the current cursor is used. To reset the cursor, change the `id` of the input; this will start from a fresh state.
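For example, a minimal sketch that starts reading at the end of the journal on the first run (the id is illustrative):

```yaml
filebeat.inputs:
- type: journald
  id: tail-journal
  # Only applies on the first run; once a cursor exists, it takes precedence.
  seek: tail
```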
since

A time offset from the current time to start reading from. To use `since`, the `seek` option must be set to `since`.
This example demonstrates how to resume from the persisted cursor when it exists, or otherwise begin reading logs from the last 24 hours.

```yaml
seek: since
since: -24h
```
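Placed in a full input configuration, the same settings might look like this sketch (the id is illustrative):

```yaml
filebeat.inputs:
- type: journald
  id: last-24h
  seek: since
  # Negative offsets read into the past: start 24 hours ago.
  since: -24h
```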
units

Iterate only the entries of the units specified in this option. The iterated entries include messages from the units, messages about the units by authorized daemons, and coredumps. However, it does not match systemd user units.
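For example, a sketch that restricts the input to two hypothetical units:

```yaml
filebeat.inputs:
- type: journald
  id: unit-logs
  # Hypothetical unit names; replace with the units you care about.
  units:
    - nginx.service
    - mysql.service
```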
syslog_identifiers

Read only the entries with the selected syslog identifiers.
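For example, a sketch that reads only entries tagged with the sshd identifier (the identifier choice is illustrative):

```yaml
filebeat.inputs:
- type: journald
  id: sshd-logs
  syslog_identifiers:
    - sshd
```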
transports

Collect the messages using the specified transports. Example: syslog. A configuration sketch follows the list of valid transports below.
Valid transports:
- audit: messages from the kernel audit subsystem
- driver: internally generated messages
- syslog: messages received via the local syslog socket with the syslog protocol
- journal: messages received via the native journal protocol
- stdout: messages from a service’s standard output or error output
- kernel: messages from the kernel
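For example, a sketch that limits the input to messages received over the local syslog socket:

```yaml
filebeat.inputs:
- type: journald
  id: syslog-only
  transports:
    - syslog
```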
facilities

Filter entries by facilities. Facilities must be specified using their numeric code.
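For example, a sketch that keeps only auth-related entries (4 and 10 are the standard syslog codes for the auth and authpriv facilities):

```yaml
filebeat.inputs:
- type: journald
  id: auth-logs
  # 4 = auth, 10 = authpriv (numeric syslog facility codes).
  facilities:
    - 4
    - 10
```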
include_matches

A collection of filter expressions used to match fields. The format of the expression is `field=value`. Filebeat fetches all events that exactly match the expressions. Pattern matching is not supported.
If you configured a filter expression, only entries with this field set will be iterated by the journald reader of Filebeat. If the filter expressions apply to different fields, only entries with all fields set will be iterated. If they apply to the same fields, only entries where the field takes one of the specified values will be iterated.
- `match`: List of filter expressions to match fields.
Please note that these expressions are limited. You can build complex filtering, but full logical expressions are not supported.
The following include matches configuration will ingest entries that contain `journald.process.name: systemd` and `systemd.transport: syslog`.

```yaml
include_matches:
  match:
    - "journald.process.name=systemd"
    - "systemd.transport=syslog"
```
The following include matches configuration will ingest entries that contain `systemd.transport: kernel` or `systemd.transport: syslog`.

```yaml
include_matches:
  match:
    - "systemd.transport=kernel"
    - "systemd.transport=syslog"
```
To reference fields, use one of the following:

- The field name used by the systemd journal. For example, `CONTAINER_TAG=redis`.
- The translated field name used by Filebeat. For example, `container.image.tag=redis`. Filebeat does not translate all fields from the journal. For custom fields, use the name specified in the systemd journal.
Translated field names

You can use translated names in filter expressions to reference journald fields. A translated name maps a journal field name to the field name Filebeat uses; for example, the journal field `_SYSTEMD_UNIT` is translated to `systemd.unit` and `_TRANSPORT` to `systemd.transport`. Translated names for Docker journal fields are also available; for example, `CONTAINER_TAG` is translated to `container.image.tag`.
Common options

The following configuration options are supported by all inputs.
enabled

Use the `enabled` option to enable and disable inputs. By default, enabled is set to true.
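For example, a sketch that keeps an input in the configuration file but switches it off:

```yaml
filebeat.inputs:
- type: journald
  id: paused-input
  enabled: false
```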
tags

A list of tags that Filebeat includes in the `tags` field of each published event. Tags make it easy to select specific events in Kibana or apply conditional filtering in Logstash. These tags will be appended to the list of tags specified in the general configuration.

Example:

```yaml
filebeat.inputs:
- type: journald
  . . .
  tags: ["json"]
```
fields

Optional fields that you can specify to add additional information to the output. For example, you might add fields that you can use for filtering log data. Fields can be scalar values, arrays, dictionaries, or any nested combination of these. By default, the fields that you specify here will be grouped under a `fields` sub-dictionary in the output document. To store the custom fields as top-level fields, set the `fields_under_root` option to true. If a duplicate field is declared in the general configuration, then its value will be overwritten by the value declared here.

```yaml
filebeat.inputs:
- type: journald
  . . .
  fields:
    app_id: query_engine_12
```
fields_under_root

If this option is set to true, the custom fields are stored as top-level fields in the output document instead of being grouped under a `fields` sub-dictionary. If the custom field names conflict with other field names added by Filebeat, then the custom fields overwrite the other fields.
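For example, a sketch that promotes the custom field from the earlier fields example to the top level of the event:

```yaml
filebeat.inputs:
- type: journald
  id: root-fields
  fields:
    app_id: query_engine_12
  # app_id is published at the top level instead of under fields.app_id.
  fields_under_root: true
```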
processors

A list of processors to apply to the input data. See Processors for information about specifying processors in your config.
pipeline

The ingest pipeline ID to set for the events generated by this input.

The pipeline ID can also be configured in the Elasticsearch output, but this option usually results in simpler configuration files. If the pipeline is configured both in the input and output, the option from the input is used.

The `pipeline` is always lowercased. If `pipeline: Foo-Bar`, then the pipeline name in Elasticsearch needs to be defined as `foo-bar`.
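For example, a sketch that routes events from this input through a hypothetical ingest pipeline:

```yaml
filebeat.inputs:
- type: journald
  id: with-pipeline
  # Hypothetical pipeline ID; Filebeat lowercases it, so define it in
  # Elasticsearch as journald-logs.
  pipeline: Journald-Logs
```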
keep_null

If this option is set to true, fields with `null` values will be published in the output document. By default, `keep_null` is set to `false`.
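For example, a sketch that preserves null-valued fields in published events:

```yaml
filebeat.inputs:
- type: journald
  id: keep-nulls
  keep_null: true
```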
index

If present, this formatted string overrides the index for events from this input (for elasticsearch outputs), or sets the `raw_index` field of the event's metadata (for other outputs). This string can only refer to the agent name and version and the event timestamp; for access to dynamic fields, use `output.elasticsearch.index` or a processor.

Example value: `"%{[agent.name]}-myindex-%{+yyyy.MM.dd}"` might expand to `"filebeat-myindex-2019.11.01"`.
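For example, a sketch using the format string from the example value above:

```yaml
filebeat.inputs:
- type: journald
  id: custom-index
  # Might expand to something like filebeat-myindex-2019.11.01.
  index: "%{[agent.name]}-myindex-%{+yyyy.MM.dd}"
```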
publisher_pipeline.disable_host

By default, all events contain `host.name`. This option can be set to `true` to disable the addition of this field to all events. The default value is `false`.
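For example, a sketch that drops `host.name` from events produced by this input:

```yaml
filebeat.inputs:
- type: journald
  id: no-host-name
  publisher_pipeline.disable_host: true
```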