Parse data using ingest pipelines

Ingest pipelines preprocess and enrich APM documents before indexing them. For example, a pipeline might define one processor that removes a field, one that transforms a field, and another that renames a field.
The default APM pipelines are defined in index templates that Fleet loads into Elasticsearch. Elasticsearch then uses the index pattern in these index templates to match pipelines to APM data streams.
Custom ingest pipelines

The Elastic APM integration supports custom ingest pipelines. A custom pipeline allows you to transform data to better match your specific use case. This can be useful, for example, to ensure data security by removing or obfuscating sensitive information.
Each data stream ships with a default pipeline. This default pipeline calls an initially non-existent and non-versioned "@custom" ingest pipeline. If left uncreated, this pipeline has no effect on your data. If you do create it, however, you can use it for custom data processing: adding fields, sanitizing data, and more.
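As a minimal sketch, a @custom pipeline targeting application traces could remove a potentially sensitive field before indexing (the field shown here is illustrative; choose the fields that matter for your data):

```console
PUT _ingest/pipeline/traces-apm@custom
{
  "description": "Strip a potentially sensitive field from incoming trace documents",
  "processors": [
    {
      "remove": {
        "field": "http.request.body.original",
        "ignore_missing": true
      }
    }
  ]
}
```

Because the default pipeline calls this @custom pipeline automatically, no further wiring is needed once it exists.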
In addition, ingest pipelines can be used to direct application metrics (metrics-apm.app.*) to a data stream with a different dataset, for example to combine metrics for two applications. Sending other APM data to alternate data streams, like traces (traces-apm.*), logs (logs-apm.*), and internal metrics (metrics-apm.internal*), is not currently supported.
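One way to sketch the application-metrics case is with Elasticsearch's reroute processor in the metrics @custom pipeline, sending documents to a shared dataset (the dataset name apm.app.combined is hypothetical):

```console
PUT _ingest/pipeline/metrics-apm.app@custom
{
  "processors": [
    {
      "reroute": {
        "dataset": "apm.app.combined"
      }
    }
  ]
}
```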
@custom ingest pipeline naming convention

@custom pipelines are specific to each data stream and follow a similar naming convention: <type>-<dataset>@custom.
As a reminder, the default APM data streams are:

- Application traces: traces-apm-<namespace>
- RUM and iOS agent application traces: traces-apm.rum-<namespace>
- APM internal metrics: metrics-apm.internal-<namespace>
- APM transaction metrics: metrics-apm.transaction.<metricset.interval>-<namespace>
- APM service destination metrics: metrics-apm.service_destination.<metricset.interval>-<namespace>
- APM service transaction metrics: metrics-apm.service_transaction.<metricset.interval>-<namespace>
- APM service summary metrics: metrics-apm.service_summary.<metricset.interval>-<namespace>
- Application metrics: metrics-apm.app.<service.name>-<namespace>
- APM error/exception logging: logs-apm.error-<namespace>
- APM app logging: logs-apm.app.<service.name>-<namespace>
To match a custom ingest pipeline with a data stream, follow the <type>-<dataset>@custom template, or replace -<namespace> with @custom in the list above. For example, to target application traces, you'd create a pipeline named traces-apm@custom.
The @custom pipeline can directly contain processors, or you can use the pipeline processor to call other pipelines that can be shared across multiple data streams or integrations. The @custom pipeline will persist across all version upgrades.
Create a @custom ingest pipeline

The process for creating a custom ingest pipeline is as follows:

- Create a pipeline with processors specific to your use case
- Add the newly created pipeline to an @custom pipeline that matches an APM data stream
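The two steps above can be sketched as a pair of requests, assuming a hypothetical use-case pipeline named my-use-case-pipeline and application traces as the target data stream (the field and value set below are illustrative):

```console
PUT _ingest/pipeline/my-use-case-pipeline
{
  "description": "Processors specific to your use case",
  "processors": [
    {
      "set": {
        "field": "labels.env",
        "value": "production"
      }
    }
  ]
}

PUT _ingest/pipeline/traces-apm@custom
{
  "processors": [
    {
      "pipeline": {
        "name": "my-use-case-pipeline"
      }
    }
  ]
}
```

Keeping the use-case logic in its own pipeline lets you reuse it from several @custom pipelines if the same processing applies to more than one data stream.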
If you prefer more guidance, see one of these tutorials:

- Create an ingest pipeline filter — Learn how to obfuscate passwords stored in the http.request.body.original field.
- APM data stream rerouting — Learn how to reroute APM data to user-defined APM data streams.
- Transform data with custom ingest pipelines — A basic Elastic integration tutorial where you learn how to add a custom field to incoming data.