Parse data by using ingest node

When you use Elasticsearch for output, you can configure Functionbeat to use ingest node to pre-process documents before the actual indexing takes place in Elasticsearch. For example, you can create an ingest node pipeline in Elasticsearch that consists of one processor that removes a field in a document followed by another processor that renames a field.
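
A pipeline along those lines might look like the following sketch; remove and rename are built-in ingest processors, and the field names temp_field, old_name, and new_name are hypothetical placeholders you would replace with real event fields:

{
    "description": "Drop one field, then rename another",
    "processors": [
        {
            "remove": {
                "field": "temp_field"
            }
        },
        {
            "rename": {
                "field": "old_name",
                "target_field": "new_name"
            }
        }
    ]
}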

After defining the pipeline in Elasticsearch, you simply configure Functionbeat to use it. To do this, specify the pipeline ID in the pipeline setting under output.elasticsearch in the functionbeat.yml file:

output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: my_pipeline_id

For example, let’s say that you’ve defined the following pipeline in a file named pipeline.json:

{
    "description": "Test pipeline",
    "processors": [
        {
            "lowercase": {
                "field": "agent.name"
            }
        }
    ]
}
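
If you want to preview what this pipeline does before registering it, you can run it against a sample document with the Elasticsearch simulate pipeline API. The sketch below assumes Elasticsearch is reachable on localhost:9200 and uses a made-up agent.name value:

curl -H 'Content-Type: application/json' -XPOST 'http://localhost:9200/_ingest/pipeline/_simulate' -d '{
  "pipeline": {
    "description": "Test pipeline",
    "processors": [
      { "lowercase": { "field": "agent.name" } }
    ]
  },
  "docs": [
    { "_source": { "agent": { "name": "Functionbeat-Agent" } } }
  ]
}'

The simulated response shows the transformed document, with agent.name converted to lowercase.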

To add the pipeline in Elasticsearch, you would run:

curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_ingest/pipeline/test-pipeline' -d@pipeline.json
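
To confirm that the pipeline was stored, you can fetch it by ID (again assuming Elasticsearch on localhost:9200):

curl 'http://localhost:9200/_ingest/pipeline/test-pipeline?pretty'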

Then in the functionbeat.yml file, you would specify:

output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: "test-pipeline"

When you run Functionbeat, the value of agent.name is converted to lowercase before indexing.
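
To spot-check the result, you could search the indexed events and inspect agent.name. This sketch assumes the default functionbeat-* index pattern:

curl -H 'Content-Type: application/json' 'http://localhost:9200/functionbeat-*/_search?pretty' -d '{
  "size": 1,
  "_source": ["agent.name"]
}'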

For more information about defining a pre-processing pipeline, see the Ingest Node documentation.