ECS Logging with Pino
This Node.js package provides a formatter for the pino logger, compatible with Elastic Common Schema (ECS) logging. In combination with the Filebeat shipper, you can monitor all your logs in one place in the Elastic Stack. pino 6.x, 7.x, and 8.x versions are supported.
Setup
Step 1: Install

```shell
$ npm install @elastic/ecs-pino-format
```
Step 2: Configure
```javascript
const { ecsFormat } = require('@elastic/ecs-pino-format');
const pino = require('pino');

const log = pino(ecsFormat(/* options */));
log.info('hi');
log.error({ err: new Error('boom') }, 'oops there is a problem');
// ...
```
This configures Pino to format log records as ECS-compatible JSON.
See usage discussion and examples below.
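Since ecsFormat() returns a plain options object for pino, it can be merged with your own pino settings. A minimal sketch, assuming the default export shape shown above (the level value here is purely illustrative):

```javascript
const { ecsFormat } = require('@elastic/ecs-pino-format');
const pino = require('pino');

// ecsFormat() returns a plain pino options object, so it can be
// spread together with your own settings (the level is illustrative).
const log = pino({ ...ecsFormat(), level: 'debug' });
log.debug('debug-level message, ECS formatted');
```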
Step 3: Configure Filebeat
The best way to collect the logs once they are ECS-formatted is with Filebeat:
- Follow the Filebeat quick start.
- Add the following configuration to your filebeat.yaml file.

For Filebeat 7.16+, in filebeat.yaml:
```yaml
filebeat.inputs:
- type: filestream
  paths: /path/to/logs.json
  parsers:
    - ndjson:
        overwrite_keys: true
        add_error_key: true
        expand_keys: true

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
```
- Use the filestream input to read lines from active log files.
- Values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
- Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
- Filebeat will recursively de-dot keys in the decoded JSON, and expand them into a hierarchical object structure.
- Processors enhance your data. See processors to learn more.
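To make the expand_keys behavior concrete, here is a small stand-alone sketch (plain Node.js, not Filebeat's implementation) of how dotted keys in one decoded JSON line are expanded into a nested object structure:

```javascript
// Stand-alone illustration (not Filebeat's code): expand dotted keys
// such as "log.level" into a nested, hierarchical object structure.
function expandKeys(flat) {
  const out = {};
  for (const [key, value] of Object.entries(flat)) {
    const parts = key.split('.');
    let node = out;
    for (let i = 0; i < parts.length - 1; i++) {
      node = node[parts[i]] = node[parts[i]] || {};
    }
    node[parts[parts.length - 1]] = value;
  }
  return out;
}

const line = '{"log.level":"info","ecs.version":"8.10.0","message":"hi"}';
const expanded = expandKeys(JSON.parse(line));
console.log(JSON.stringify(expanded));
// → {"log":{"level":"info"},"ecs":{"version":"8.10.0"},"message":"hi"}
```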
For Filebeat < 7.16, in filebeat.yaml:

```yaml
filebeat.inputs:
- type: log
  paths: /path/to/logs.json
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
  json.expand_keys: true

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
```
With Kubernetes:

- Make sure your application logs to stdout/stderr.
- Follow the Run Filebeat on Kubernetes guide.
- Enable hints-based autodiscover (uncomment the corresponding section in filebeat-kubernetes.yaml).
- Add these annotations to your pods that log using ECS loggers. This will make sure the logs are parsed appropriately.

```yaml
annotations:
  co.elastic.logs/json.overwrite_keys: true
  co.elastic.logs/json.add_error_key: true
  co.elastic.logs/json.expand_keys: true
```
With Docker:

- Make sure your application logs to stdout/stderr.
- Follow the Run Filebeat on Docker guide.
- Enable hints-based autodiscover.
- Add these labels to your containers that log using ECS loggers. This will make sure the logs are parsed appropriately.

In docker-compose.yml:

```yaml
labels:
  co.elastic.logs/json.overwrite_keys: true
  co.elastic.logs/json.add_error_key: true
  co.elastic.logs/json.expand_keys: true
```
For more information, see the Filebeat reference.
Usage

```javascript
const { ecsFormat } = require('@elastic/ecs-pino-format');
const pino = require('pino');

const log = pino(ecsFormat(/* options */));
log.info('Hello world');

const child = log.child({ module: 'foo' });
child.warn('From child');
```
See available options below.
Running this will produce log output similar to the following:
```json
{"log.level":"info","@timestamp":"2023-10-14T02:07:47.901Z","process.pid":56645,"host.hostname":"pink.local","ecs.version":"8.10.0","message":"Hello world"}
{"log.level":"warn","@timestamp":"2023-10-14T02:07:47.901Z","process.pid":56645,"host.hostname":"pink.local","ecs.version":"8.10.0","module":"foo","message":"From child"}
```
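Each output line is a self-contained JSON document, so downstream tooling can consume the stream line by line. A small stand-alone sketch using the first line above:

```javascript
// Parse one ECS-formatted NDJSON line (copied from the output above).
// Note that ECS field names are dotted top-level keys in the raw JSON.
const line = '{"log.level":"info","@timestamp":"2023-10-14T02:07:47.901Z","process.pid":56645,"host.hostname":"pink.local","ecs.version":"8.10.0","message":"Hello world"}';
const record = JSON.parse(line);

console.log(record['log.level']); // → info
console.log(record.message);      // → Hello world
```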
Error Logging

By default, the formatter will convert an err field that is an Error instance to ECS Error fields. For example:
```javascript
const { ecsFormat } = require('@elastic/ecs-pino-format');
const pino = require('pino');

const log = pino(ecsFormat());

const myErr = new Error('boom');
log.info({ err: myErr }, 'oops');
```
will yield (pretty-printed for readability):
```json
% node examples/error.js | jq .
{
  "log.level": "info",
  "@timestamp": "2021-01-26T17:02:23.697Z",
  ...
  "error": {
    "type": "Error",
    "message": "boom",
    "stack_trace": "Error: boom\n    at Object.<anonymous> (..."
  },
  "message": "oops"
}
```
This is analogous to, and overrides, Pino's default err serializer. Special handling of the err field can be disabled via the convertErr: false option:
```javascript
const log = pino(ecsFormat({ convertErr: false }));
```
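For reference, the default conversion roughly corresponds to the following mapping. This is a stand-alone sketch of the output shape shown above, not the package's internal code:

```javascript
// Stand-alone sketch of the err-to-ECS mapping (not the package's
// internal implementation): map an Error to ECS error.* fields.
function errorToEcs(err) {
  return {
    error: {
      type: err.constructor.name, // e.g. "Error", "TypeError"
      message: err.message,
      stack_trace: err.stack,
    },
  };
}

const fields = errorToEcs(new TypeError('boom'));
console.log(fields.error.type);    // → TypeError
console.log(fields.error.message); // → boom
```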
HTTP Request and Response Logging

With the convertReqRes: true option, the formatter will automatically convert Node.js core request and response objects when passed as the req and res fields, respectively. (This option replaces the usage of req and res Pino serializers.)
```javascript
const http = require('http');
const { ecsFormat } = require('@elastic/ecs-pino-format');
const pino = require('pino');

const log = pino(ecsFormat({ convertReqRes: true }));

const server = http.createServer(function handler (req, res) {
  res.setHeader('Foo', 'Bar');
  res.end('ok');
  log.info({ req, res }, 'handled request');
});

server.listen(3000, () => {
  log.info('listening at http://localhost:3000');
});
```
This will produce logs with request and response info using ECS HTTP fields. For example:
```json
% node examples/http.js | jq .  # using jq for pretty printing
...                             # run 'curl http://localhost:3000/'
{
  "log.level": "info",
  "@timestamp": "2023-10-14T02:10:14.477Z",
  "process.pid": 56697,
  "host.hostname": "pink.local",
  "ecs.version": "8.10.0",
  "http": {
    "version": "1.1",
    "request": {
      "method": "GET",
      "headers": {
        "host": "localhost:3000",
        "user-agent": "curl/8.1.2",
        "accept": "*/*"
      }
    },
    "response": {
      "status_code": 200,
      "headers": {
        "foo": "Bar"
      }
    }
  },
  "url": {
    "full": "http://localhost:3000/",
    "path": "/"
  },
  "client": {
    "address": "::ffff:127.0.0.1",
    "ip": "::ffff:127.0.0.1",
    "port": 49504
  },
  "user_agent": {
    "original": "curl/8.1.2"
  },
  "message": "handled request"
}
```
The examples/ directory shows sample programs using request and response logging: with Express, with the pino-http middleware package, etc.
Log Correlation with APM

This ECS log formatter integrates with Elastic APM. If your Node app is using the Node.js Elastic APM Agent, then a number of fields are added to log records to correlate between APM services or traces and logging data:

- Log statements (e.g. logger.info(...)) called when there is a current tracing span will include tracing fields: trace.id, transaction.id, span.id.
- A number of service identifier fields determined by or configured on the APM agent allow cross-linking between services and logs in Kibana: service.name, service.version, service.environment, service.node.name.
- event.dataset enables log rate anomaly detection in the Elastic Observability app.
For example, running examples/http-with-elastic-apm.js and curl -i localhost:3000/ results in a log record with the following:

```json
% node examples/http-with-elastic-apm.js | jq .
...
  "service.name": "http-with-elastic-apm",
  "service.version": "1.4.0",
  "service.environment": "development",
  "event.dataset": "http-with-elastic-apm",
  "trace.id": "9f338eae7211b7993b98929046aed21d",
  "transaction.id": "2afbef5642cc7a3f",
...
```
These IDs match trace data reported by the APM agent.
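Because the same IDs appear in the APM trace and in the log records, correlating the two is a matter of filtering the NDJSON stream on trace.id. A stand-alone sketch (the sample lines are abbreviated):

```javascript
// Stand-alone sketch: select log records belonging to one trace by
// matching the trace.id field in each NDJSON line.
const lines = [
  '{"log.level":"info","trace.id":"9f338eae7211b7993b98929046aed21d","message":"handled request"}',
  '{"log.level":"info","trace.id":"ffffffffffffffffffffffffffffffff","message":"another trace"}',
];

function recordsForTrace(ndjsonLines, traceId) {
  return ndjsonLines
    .map((line) => JSON.parse(line))
    .filter((rec) => rec['trace.id'] === traceId);
}

const matched = recordsForTrace(lines, '9f338eae7211b7993b98929046aed21d');
console.log(matched.length);     // → 1
console.log(matched[0].message); // → handled request
```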
Integration with Elastic APM can be explicitly disabled via the apmIntegration: false option, for example:

```javascript
const log = pino(ecsFormat({ apmIntegration: false }));
```
Limitations and Considerations

The ecs-logging spec suggests that the first three fields in log records must be @timestamp, log.level, and message. Pino does not provide a mechanism to put the message field near the front. Given that ordering of ecs-logging fields is for human readability and does not affect interoperability, this is not considered a significant concern.
The hooks that Pino currently provides do not enable this package to convert fields passed to <logger>.child({ ... }). This means that, even with the convertReqRes option, a call to <logger>.child({ req }) will not convert that req to ECS HTTP fields. This is a slight limitation for users of pino-http, which does this.
Reference

ecsFormat([options])

Create options for pino(...) that configure ECS Logging format output.

- options {type-object}. The following options are supported:
  - convertErr {type-boolean}. Whether to convert a logged err field to ECS error fields. Default: true.
  - convertReqRes {type-boolean}. Whether to convert logged req and res HTTP request and response fields to ECS HTTP, User agent, and URL fields. Default: false.
  - apmIntegration {type-boolean}. Whether to enable APM agent integration. Default: true.
  - serviceName {type-string}. A "service.name" value. If specified this overrides any value from an active APM agent.
  - serviceVersion {type-string}. A "service.version" value. If specified this overrides any value from an active APM agent.
  - serviceEnvironment {type-string}. A "service.environment" value. If specified this overrides any value from an active APM agent.
  - serviceNodeName {type-string}. A "service.node.name" value. If specified this overrides any value from an active APM agent.
  - eventDataset {type-string}. An "event.dataset" value. If specified this overrides the default of using ${serviceName}.
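Putting several of these options together, a sketch of a fully configured formatter (all option values below are hypothetical and every option is optional):

```javascript
const { ecsFormat } = require('@elastic/ecs-pino-format');
const pino = require('pino');

// All values below are illustrative, not required.
const log = pino(ecsFormat({
  convertReqRes: true,              // convert req/res to ECS HTTP fields
  serviceName: 'my-service',        // hypothetical service name
  serviceVersion: '1.2.3',
  serviceEnvironment: 'production',
}));
log.info('configured');
```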