IMPORTANT: No additional bug fixes or documentation updates will be released for this version. For the latest information, see the current release documentation.
Decode JSON fields

The decode_json_fields processor decodes fields containing JSON strings and replaces the strings with valid JSON objects.
processors:
- decode_json_fields:
    fields: ["field1", "field2", ...]
    process_array: false
    max_depth: 1
    target: ""
    overwrite_keys: false
The decode_json_fields processor has the following configuration settings:

fields
  The fields containing JSON strings to decode.

process_array
  (Optional) A boolean that specifies whether to process arrays. The default is false.

max_depth
  (Optional) The maximum parsing depth. The default is 1.

target
  (Optional) The field under which the decoded JSON will be written. By default, the decoded JSON object replaces the string field from which it was read. To merge the decoded JSON fields into the root of the event, specify target with an empty string (target: ""). Note that the null value (target:) is treated as if the field was not set at all.

overwrite_keys
  (Optional) A boolean that specifies whether keys that already exist in the event are overwritten by keys from the decoded JSON object. The default value is false.
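
As a minimal sketch of how these settings interact (using a hypothetical event, not one from this reference), suppose each event carries a message field whose value is the JSON string {"level":"info","user":"alice"}. A configuration that decodes it under a separate json field might look like this:

processors:
- decode_json_fields:
    fields: ["message"]
    target: "json"

With this configuration, the decoded object is written under json rather than replacing message, so the event gains fields such as json.level and json.user. Setting target: "" instead would merge level and user into the root of the event, where overwrite_keys controls whether they may replace existing fields of the same name.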