Parse data using ingest node pipelines
You can configure APM Server to use an ingest node to pre-process documents before indexing them in Elasticsearch.
A pipeline is a definition of a series of processors that operate on your data. For example, a pipeline can define one processor to remove a field, and another to rename a field.
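For illustration, a minimal pipeline definition along those lines could look like the following; the field names (temp_debug_field, old_name, new_name) are placeholders rather than fields APM Server actually produces:

{
  "description": "Example: drop one field, rename another",
  "processors": [
    { "remove": { "field": "temp_debug_field" } },
    { "rename": { "field": "old_name", "target_field": "new_name" } }
  ]
}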
Default ingest pipeline
By default, register.ingest.pipeline.enabled is set to true. This loads the default pipeline definition to Elasticsearch on APM Server startup.

The default pipeline is apm. It adds user agent information to events and processes Geo-IP data, which is especially useful for Elastic’s JavaScript RUM Agent. You can view the pipeline configuration by navigating to the APM Server’s home directory and viewing ingest/pipeline/definition.json.

To disable this, or any other pipeline, set output.elasticsearch.pipeline: _none.
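For example, assuming a typical apm-server.yml with Elasticsearch on localhost, disabling all pipeline processing would look roughly like this:

output.elasticsearch:
  hosts: ["localhost:9200"]
  # Skip ingest pipelines entirely for indexed documents
  pipeline: _none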
Custom pipelines
Using custom pipelines involves two steps:
- First, you need to register a pipeline in Elasticsearch.
- Then, the pipeline needs to be applied during data ingestion.
Register pipelines in Elasticsearch
To register a pipeline in Elasticsearch, you can either configure APM Server to register pipelines on startup, or you can manually upload a pipeline definition.
Register pipelines on APM Server startup
Automatic pipeline registration requires output.elasticsearch to be enabled and configured.

Navigate to APM Server’s home directory and find the default pipeline configuration at ingest/pipeline/definition.json. To add, change, or remove pipelines in Elasticsearch, change the definitions in this file and restart your APM Server or run apm-server setup --pipelines.

By default, pipeline registration is enabled.
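As a sketch, assuming Elasticsearch is reachable on localhost:9200, the relevant pieces of apm-server.yml and the registration command would look something like this (the host is a placeholder for your own deployment):

output.elasticsearch:
  hosts: ["localhost:9200"]

# Enabled by default; shown here only for clarity.
register.ingest.pipeline.enabled: true

After editing ingest/pipeline/definition.json, you can re-register the pipelines without restarting the server:

./apm-server setup --pipelines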
Manually upload pipeline definitions
You can manually upload pipeline definitions by describing them in a file. Consider the following sample pipeline in a file named pipeline.json. This pipeline definition converts the value of beat.name to lowercase before indexing each document.
{ "description": "Test pipeline", "processors": [ { "lowercase": { "field": "beat.name" } } ] }
To register this pipeline, run:
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_ingest/pipeline/test-pipeline' -d @pipeline.json
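To confirm the registration, you can fetch the pipeline back, or run a sample document through it with Elasticsearch’s simulate API. This assumes Elasticsearch is listening on localhost:9200, as in the command above:

curl -XGET 'http://localhost:9200/_ingest/pipeline/test-pipeline'

curl -H 'Content-Type: application/json' -XPOST 'http://localhost:9200/_ingest/pipeline/test-pipeline/_simulate' -d '
{
  "docs": [
    { "_source": { "beat": { "name": "APM-Server" } } }
  ]
}'

In the simulated response, beat.name should come back lowercased to "apm-server".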
Apply pipelines during data ingestion
To specify which pipelines to apply during data ingestion, add the pipeline IDs to the pipelines option under output.elasticsearch in the apm-server.yml file:
output.elasticsearch:
  pipelines:
    - pipeline: "test-pipeline"
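The pipelines option also accepts conditional rules, so different events can be routed to different pipelines. A minimal sketch, where the pipeline names are examples and the processor.event condition is an assumption about your event fields:

output.elasticsearch:
  pipelines:
    # Route transaction documents to a dedicated pipeline
    - pipeline: "rum-pipeline"
      when.contains:
        processor.event: "transaction"
    # Rule without a condition acts as the fallback
    - pipeline: "test-pipeline"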
More information and examples for applying pipelines are available in the Elasticsearch output pipeline documentation.