Get started
Step 1: Install
Add the package to your go.mod file:

```
require go.elastic.co/ecszap master
```
Step 2: Configure
Set up a default logger. For example:
```go
encoderConfig := ecszap.NewDefaultEncoderConfig()
core := ecszap.NewCore(encoderConfig, os.Stdout, zap.DebugLevel)
logger := zap.New(core, zap.AddCaller())
```
You can customize your ECS logger. For example:
```go
encoderConfig := ecszap.EncoderConfig{
	EncodeName:     customNameEncoder,
	EncodeLevel:    zapcore.CapitalLevelEncoder,
	EncodeDuration: zapcore.MillisDurationEncoder,
	EncodeCaller:   ecszap.FullCallerEncoder,
}
core := ecszap.NewCore(encoderConfig, os.Stdout, zap.DebugLevel)
logger := zap.New(core, zap.AddCaller())
```
Examples
Use structured logging
```go
// Add fields and a logger name
logger = logger.With(zap.String("custom", "foo"))
logger = logger.Named("mylogger")

// Use strongly typed Field values
logger.Info("some logging info",
	zap.Int("count", 17),
	zap.Error(errors.New("boom")))
```
The example above produces the following log output:
```json
{
  "log.level": "info",
  "@timestamp": "2020-09-13T10:48:03.000Z",
  "log.logger": "mylogger",
  "log.origin": {
    "file.name": "main/main.go",
    "file.line": 265
  },
  "message": "some logging info",
  "ecs.version": "1.6.0",
  "custom": "foo",
  "count": 17,
  "error": {
    "message": "boom"
  }
}
```
Log errors
```go
// pkgerrors refers to github.com/pkg/errors, whose Wrap attaches a stack trace.
err := errors.New("boom")
logger.Error("some error", zap.Error(pkgerrors.Wrap(err, "crash")))
```
The example above produces the following log output:
```json
{
  "log.level": "error",
  "@timestamp": "2020-09-13T10:48:03.000Z",
  "log.logger": "mylogger",
  "log.origin": {
    "file.name": "main/main.go",
    "file.line": 290
  },
  "message": "some error",
  "ecs.version": "1.6.0",
  "custom": "foo",
  "error": {
    "message": "crash: boom",
    "stack_trace": "\nexample.example\n\t/Users/xyz/example/example.go:50\nruntime.example\n\t/Users/xyz/.gvm/versions/go1.13.8.darwin.amd64/src/runtime/proc.go:203\nruntime.goexit\n\t/Users/xyz/.gvm/versions/go1.13.8.darwin.amd64/src/runtime/asm_amd64.s:1357"
  }
}
```
Use sugar logger
```go
sugar := logger.Sugar()
sugar.Infow("some logging info",
	"foo", "bar",
	"count", 17,
)
```
The example above produces the following log output:
```json
{
  "log.level": "info",
  "@timestamp": "2020-09-13T10:48:03.000Z",
  "log.logger": "mylogger",
  "log.origin": {
    "file.name": "main/main.go",
    "file.line": 311
  },
  "message": "some logging info",
  "ecs.version": "1.6.0",
  "custom": "foo",
  "foo": "bar",
  "count": 17
}
```
Wrap a custom underlying zapcore.Core
```go
encoderConfig := ecszap.NewDefaultEncoderConfig()
encoder := zapcore.NewJSONEncoder(encoderConfig.ToZapCoreEncoderConfig())
syslogCore := newSyslogCore(encoder, level) // create your own core
core := ecszap.WrapCore(syslogCore)
logger := zap.New(core, zap.AddCaller())
```
Transition from existing configurations
Depending on your needs, there are different ways to create the logger:
```go
encoderConfig := ecszap.ECSCompatibleEncoderConfig(zap.NewDevelopmentEncoderConfig())
encoder := zapcore.NewJSONEncoder(encoderConfig)
core := zapcore.NewCore(encoder, os.Stdout, zap.DebugLevel)
logger := zap.New(ecszap.WrapCore(core), zap.AddCaller())
```
```go
config := zap.NewProductionConfig()
config.EncoderConfig = ecszap.ECSCompatibleEncoderConfig(config.EncoderConfig)
logger, err := config.Build(ecszap.WrapCoreOption(), zap.AddCaller())
```
Step 3: Configure Filebeat
- Follow the Filebeat quick start.
- Add the following configuration to your filebeat.yaml file.

For Filebeat 7.16+

filebeat.yaml:
```yaml
filebeat.inputs:
- type: filestream
  paths: /path/to/logs.json
  parsers:
    - ndjson:
        overwrite_keys: true
        add_error_key: true
        expand_keys: true
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
```
Notes on this configuration:

- Use the filestream input to read lines from active log files.
- Values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
- Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
- Filebeat recursively de-dots keys in the decoded JSON and expands them into a hierarchical object structure.
- Processors enhance your data. See processors to learn more.
For Filebeat < 7.16

filebeat.yaml:

```yaml
filebeat.inputs:
- type: log
  paths: /path/to/logs.json
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: true
  json.expand_keys: true
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
```
- Make sure your application logs to stdout/stderr.
- Follow the Run Filebeat on Kubernetes guide.
- Enable hints-based autodiscover (uncomment the corresponding section in filebeat-kubernetes.yaml).
- Add these annotations to your pods that log using ECS loggers. This will make sure the logs are parsed appropriately.

```yaml
annotations:
  co.elastic.logs/json.overwrite_keys: true
  co.elastic.logs/json.add_error_key: true
  co.elastic.logs/json.expand_keys: true
```
Notes on this configuration:

- Values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
- Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
- Filebeat recursively de-dots keys in the decoded JSON and expands them into a hierarchical object structure.
- Make sure your application logs to stdout/stderr.
- Follow the Run Filebeat on Docker guide.
- Enable hints-based autodiscover.
- Add these labels to your containers that log using ECS loggers. This will make sure the logs are parsed appropriately.

docker-compose.yml:

```yaml
labels:
  co.elastic.logs/json.overwrite_keys: true
  co.elastic.logs/json.add_error_key: true
  co.elastic.logs/json.expand_keys: true
```
Notes on this configuration:

- Values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
- Filebeat adds an "error.message" and "error.type: json" key in case of JSON unmarshalling errors.
- Filebeat recursively de-dots keys in the decoded JSON and expands them into a hierarchical object structure.
For more information, see the Filebeat reference.