Ty Bekiares

3 models for logging with OpenTelemetry and Elastic

While OpenTelemetry increases usage of tracing and metrics among developers, logging continues to provide flexible, application-specific, event-driven data. Explore the current state of OpenTelemetry logging and get guidance on the available approaches.

Arguably, OpenTelemetry exists to (greatly) increase usage of tracing and metrics among developers. That said, logging will continue to play a critical role in providing flexible, application-specific, event-driven data. Further, OpenTelemetry has the potential to bring added value to existing application logging flows:

  1. Common metadata across tracing, metrics, and logging to facilitate contextual correlation, including metadata passed between services as part of REST or RPC APIs; this is a critical element of service observability in the age of distributed, horizontally scaled systems

  2. An optional unified data path for tracing, metrics, and logging to facilitate common tooling and signal routing to your observability backend

Adoption of metrics and tracing among developers to date has been relatively small. Further, the number of proprietary vendors and APIs (relative to that adoption rate) is relatively large. As such, OpenTelemetry took a greenfield approach to developing new, vendor-agnostic APIs for tracing and metrics. In contrast, most developers have nearly 100% log coverage across their services. Moreover, logging is largely supported by a small number of vendor-agnostic, open-source logging libraries and associated APIs (e.g., Logback and ILogger). As such, OpenTelemetry’s approach to logging meets developers where they already are, using hooks into existing, popular logging frameworks. In this way, developers can add OpenTelemetry as a log signal output without otherwise altering their code or their investment in logging as an observability signal.

Notably, logging is the least mature of the OTel-supported observability signals. Depending on your service’s language, and your appetite for adventure, there exist several options for exporting logs from your services and applications and marrying them together in your observability backend.

The intent of this article is to explore the current state of the art of OpenTelemetry logging and to provide guidance on the available approaches with the following tenets in mind:

  • Correlation of service logs with OTel-generated tracing where applicable
  • Proper capture of exceptions
  • Common context across tracing, metrics, and logging
  • Support for slf4j key-value pairs (“structured logging”)
  • Automatic attachment of metadata carried between services via OTel baggage
  • Use of an Elastic® Observability backend
  • Consistent data fidelity in Elastic regardless of the approach taken

OpenTelemetry logging models

Three models currently exist for getting your application or service logs to Elastic with correlation to OTel tracing and baggage:

  1. Output logs from your service (alongside traces and metrics) using an embedded OpenTelemetry Instrumentation library to Elastic via the OTLP protocol

  2. Write logs from your service to a file scraped by the OpenTelemetry Collector, which then forwards to Elastic via the OTLP protocol

  3. Write logs from your service to a file scraped by Elastic Agent (or Filebeat), which then forwards to Elastic via an Elastic-defined protocol

Note that (1), in contrast to (2) and (3), does not involve writing service logs to a file prior to ingestion into Elastic.

Logging vs. span events

It is worth noting that most APM systems, including OpenTelemetry, include provisions for span events. Like log statements, span events contain arbitrary textual data. Additionally, span events automatically carry any custom attributes (e.g., a “user ID”) applied to the parent span, which can help with correlation and context. In this regard, it may be advantageous to translate some existing log statements (inside spans) to span events. As the name implies, of course, span events can only be emitted from within a span and thus are not intended to be a general-purpose replacement for logging.
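For example, a log-like message emitted from inside an active span could instead be recorded as a span event using the OpenTelemetry Java API. A minimal sketch (the event name and attribute keys here are illustrative, not prescribed):

import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.trace.Span;

public class SpanEventExample {
  // Call from code already executing within a span (e.g., an instrumented request handler);
  // the event automatically inherits the surrounding span's trace context and attributes.
  static void recordOrderValidated(String userId, long itemCount) {
    Span.current().addEvent(
        "order.validated",
        Attributes.of(
            AttributeKey.stringKey("user.id"), userId,
            AttributeKey.longKey("order.items"), itemCount));
  }
}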

Unlike logging, span events do not pass through existing logging frameworks and therefore cannot (practically) be written to a log file. Further, span events are technically emitted as part of trace data and follow the same data path and signal routing as other trace data.

Polyfill appender

Some of the demos make use of a custom Logback “Polyfill appender” (inspired by OTel’s Logback MDC), which provides support for attaching slf4j key-value pairs to log messages for models (2) and (3).
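The Polyfill appender itself lives in the demo repository; conceptually, the interesting part is reading the slf4j key-value pairs off each Logback event so they can be re-emitted in a form the downstream encoder understands. A rough, hypothetical sketch of that extraction step (class and method names are mine, not the demo’s):

import ch.qos.logback.classic.spi.ILoggingEvent;
import org.slf4j.event.KeyValuePair;

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical helper: pull slf4j key-value pairs off a Logback event so an appender or
// encoder can re-emit them as Logback structured arguments (model 2) or MDC entries (model 3).
final class KeyValuePairExtractor {
  private KeyValuePairExtractor() {}

  static Map<String, Object> extract(ILoggingEvent event) {
    Map<String, Object> pairs = new LinkedHashMap<>();
    List<KeyValuePair> kvs = event.getKeyValuePairs(); // requires Logback 1.3+ / slf4j 2.x
    if (kvs != null) {
      for (KeyValuePair kv : kvs) {
        pairs.put(kv.key, kv.value); // model 3 (MDC) would additionally force values to strings
      }
    }
    return pairs;
  }
}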

Elastic Common Schema

For log messages to exhibit full fidelity within Elastic, they eventually need to be formatted in accordance with the Elastic Common Schema (ECS). In models (1) and (2), log messages remain formatted in OTel log semantics until ingested by the Elastic APM Server. The Elastic APM Server then translates OTel log semantics to ECS. In model (3), ECS is applied at the source.

Notably, OpenTelemetry recently adopted the Elastic Common Schema as its standard for semantic conventions going forward! As such, it is anticipated that current OTel log semantics will be updated to align with ECS.

Getting started

The included demos center around a “POJO” (no assumed framework) Java project. Java is arguably the most mature of the OTel-supported languages, particularly with respect to logging options. Notably, this single Java project was designed to support all three models of logging discussed here. In practice, you would only implement one of these models (and its corresponding project dependencies).

The demos assume you have a working Docker environment and an Elastic Cloud instance.

  1. git clone https://github.com/ty-elastic/otel-logging

  2. Create an .env file at the root of otel-logging with the following (appropriately filled-in) environment variables:

# the service name
OTEL_SERVICE_NAME=app4

# Filebeat vars
ELASTIC_CLOUD_ID=(see https://www.elastic.co/guide/en/beats/metricbeat/current/configure-cloud-id.html)
ELASTIC_CLOUD_AUTH=(see https://www.elastic.co/guide/en/beats/metricbeat/current/configure-cloud-id.html)

# apm vars
ELASTIC_APM_SERVER_ENDPOINT=(address of your Elastic Cloud APM server, e.g., https://xyz123.apm.us-central1.gcp.cloud.es.io:443)
ELASTIC_APM_SERVER_SECRET=(see https://www.elastic.co/guide/en/apm/guide/current/secret-token.html)

  3. Start up the demo with the desired model:
  • If you want to demo logging via OTel APM Agent, run MODE=apm docker-compose up
  • If you want to demo logging via OTel filelogreceiver, run MODE=filelogreceiver docker-compose up
  • If you want to demo logging via Elastic filebeat, run MODE=filebeat docker-compose up

  4. Validate incoming span and correlated log data in your Elastic Cloud instance

Model 1: Logging via OpenTelemetry instrumentation

This model aligns with the long-term goals of OpenTelemetry: integrated tracing, metrics, and logging (with common attributes) from your services via the OpenTelemetry Instrumentation libraries, without dependency on log files and scrapers.

In this model, your service generates log statements as it always has, using popular logging libraries (e.g., Logback for Java). OTel provides a “Southbound hook” to Logback via the OTel Logback Appender, which injects ServiceName, SpanID, TraceID, slf4j key-value pairs, and OTel baggage into log records and passes the composed records to the co-resident OpenTelemetry Instrumentation library. We further employ a custom LogRecordProcessor to add baggage to the log record as attributes.
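For illustration, such a LogRecordProcessor might look roughly like the sketch below; the “baggage.” attribute prefix is an assumption of mine, not something prescribed by the OTel spec or necessarily what the demo uses.

import io.opentelemetry.api.baggage.Baggage;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.context.Context;
import io.opentelemetry.sdk.logs.LogRecordProcessor;
import io.opentelemetry.sdk.logs.ReadWriteLogRecord;

// Copies any OTel baggage present in the current context onto each emitted log record as
// attributes, so the baggage travels with the log line to the backend.
public class BaggageLogRecordProcessor implements LogRecordProcessor {
  @Override
  public void onEmit(Context context, ReadWriteLogRecord logRecord) {
    Baggage.fromContext(context)
        .forEach((key, entry) ->
            logRecord.setAttribute(AttributeKey.stringKey("baggage." + key), entry.getValue()));
  }
}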

The OTel instrumentation library then formats the log statements per the OTel logging spec and ships them via OTLP to either an OTel Collector for further routing and enrichment or directly to Elastic.
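When not relying on the auto-configured Java agent, wiring this up by hand might look roughly as follows. The endpoint and Authorization header are assumptions based on the demo’s environment variables and Elastic APM’s secret-token authentication; treat this as a sketch rather than the demo’s exact bootstrap code.

import io.opentelemetry.exporter.otlp.logs.OtlpGrpcLogRecordExporter;
import io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.logs.SdkLoggerProvider;
import io.opentelemetry.sdk.logs.export.BatchLogRecordProcessor;

public class OtelLoggingBootstrap {
  public static OpenTelemetrySdk initOpenTelemetry() {
    // Export log records over OTLP/gRPC, batched, to Elastic (or to an OTel Collector).
    OtlpGrpcLogRecordExporter logExporter =
        OtlpGrpcLogRecordExporter.builder()
            .setEndpoint(System.getenv("ELASTIC_APM_SERVER_ENDPOINT"))
            .addHeader("Authorization", "Bearer " + System.getenv("ELASTIC_APM_SERVER_SECRET"))
            .build();

    SdkLoggerProvider loggerProvider =
        SdkLoggerProvider.builder()
            .addLogRecordProcessor(new BaggageLogRecordProcessor()) // sketch from above
            .addLogRecordProcessor(BatchLogRecordProcessor.builder(logExporter).build())
            .build();

    OpenTelemetrySdk sdk = OpenTelemetrySdk.builder().setLoggerProvider(loggerProvider).build();

    // Point the Logback OpenTelemetryAppender at this SDK instance so Logback records flow into it.
    OpenTelemetryAppender.install(sdk);
    return sdk;
  }
}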

Notably, as language support improves, this model can and will be supported by runtime agent binding with auto-instrumentation where available (e.g., no code changes required for runtime languages).

One distinguishing advantage of this model, beyond the simplicity it affords, is the ability to more easily tie together attributes and tracing metadata directly with log statements. This inherently makes logging more useful in the context of other OTel-supported observability signals.

Architecture

Although not explicitly pictured, an OpenTelemetry Collector can be inserted in between the service and Elastic to facilitate additional enrichment and/or signal routing or duplication across observability backends.

Pros

  • Simplified signal architecture and fewer “moving parts” (no files, disk utilization, or file rotation concerns)
  • Aligns with long-term OTel vision
  • Log statements can be (easily) decorated with OTel metadata
  • No polyfill adapter required to support structured logging with slf4j
  • No additional collectors/agents required
  • Conversion to ECS happens within Elastic, keeping log data vendor-agnostic until ingestion
  • Common wireline protocol (OTLP) across tracing, metrics, and logs

Cons

  • Not available (yet) in many OTel-supported languages
  • No intermediate log file for ad-hoc, on-node debugging
  • Immature (alpha/experimental)
  • Unknown “glare” conditions, which could result in loss of log data if the service exits prematurely or if the backend is unable to accept log data for an extended period of time

Demo

MODE=apm docker-compose up

Model 2: Logging via the OpenTelemetry Collector

Given the cons of Model 1, it may be advantageous to consider a model that continues to leverage an actual log file intermediary between your services and your observability backend. Such a model is possible using an OpenTelemetry Collector collocated with your services (e.g., on the same host), running the filelogreceiver to scrape service log files.

In this model, your service generates log statements as it always has, using popular logging libraries (e.g., Logback for Java). OTel provides a MDC Appender for Logback (Logback MDC), which adds SpanID, TraceID, and Baggage to the Logback MDC context.

Notably, no log record structure is assumed by the OTel filelogreceiver. In the example provided, we employ the logstash-logback-encoder to JSON-encode log messages. The logstash-logback-encoder will read the OTel SpanID, TraceID, and Baggage off the MDC context and encode them into the JSON structure. However, logstash-logback-encoder doesn’t explicitly support slf4j key-value pairs. It does support Logback structured arguments, and thus I use the Polyfill Appender to convert slf4j key-value pairs to Logback structured arguments.
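To make the difference concrete, the sketch below shows both call styles: the slf4j 2.x fluent key-value form (which logstash-logback-encoder ignores on its own) and the structured-argument form it does serialize, which is roughly the conversion the Polyfill Appender performs. Logger names and keys are illustrative.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import static net.logstash.logback.argument.StructuredArguments.kv;

public class StructuredLoggingExample {
  private static final Logger logger = LoggerFactory.getLogger(StructuredLoggingExample.class);

  static void logOrder(String orderId, double total) {
    // slf4j 2.x key-value pairs: convenient at the call site, but logstash-logback-encoder
    // does not serialize these into the JSON output on its own.
    logger.atInfo()
        .addKeyValue("order.id", orderId)
        .addKeyValue("order.total", total)
        .log("order placed");

    // Logback structured arguments: logstash-logback-encoder writes these as JSON fields.
    logger.info("order placed", kv("order.id", orderId), kv("order.total", total));
  }
}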

From there, we write the log lines to a log file. If you are using Kubernetes or other container orchestration in your environment, you would more typically write to stdout (console) and let the orchestration log driver write to and manage log files.

We then configure the OTel Collector to scrape this log file (using the filelogreceiver). Because no assumptions are made about the format of the log lines, you need to explicitly map fields from your log schema to the OTel log schema.

From there, the OTel Collector batches and ships the formatted log lines via OTLP to Elastic.

Pros

  • Easy to debug (you can manually read the intermediate log file)
  • Inherent file-based FIFO buffer
  • Less susceptible to “glare” conditions when the service exits prematurely
  • Conversion to ECS happens within Elastic, keeping log data vendor-agnostic until ingestion
  • Common wireline protocol (OTLP) across tracing, metrics, and logs

Cons

  • All the headaches of file-based logging (rotation, disk overflow)
  • Beta quality and not yet proven in the field
  • No native support for slf4j key-value pairs (hence the Polyfill appender workaround)

Demo

MODE=filelogreceiver docker-compose up

Model 3: Logging via Elastic Agent (or Filebeat)

Although the second model described affords some resilience as a function of the backing file, the OTel Collector filelogreceiver module is still decidedly “beta” in quality. Because of the importance of logs as a debugging tool, today I generally recommend that customers continue to import logs into Elastic using the field-proven Elastic Agent or Filebeat scrapers. Elastic Agent and Filebeat have many years of field maturity under their collective belt. Further, it is often advantageous to deploy Elastic Agent anyway to capture the multitude of signals outside the purview of OpenTelemetry (e.g., deep Kubernetes and host metrics, security, etc.).

In this model, your service generates log statements as it always has, using popular logging libraries (e.g., Logback for Java). As with model 2, we employ OTel’s Logback MDC to add SpanID, TraceID, and Baggage to the Logback MDC context.

From there, we employ the Elastic ECS Encoder to encode log statements compliant with the Elastic Common Schema. The Elastic ECS Encoder will read the OTel SpanID, TraceID, and Baggage off the MDC context and encode them into the JSON structure. Similar to model 2, the Elastic ECS Encoder doesn’t support slf4j key-value pairs. Curiously, the Elastic ECS Encoder also doesn’t appear to support Logback structured arguments. Thus, within the Polyfill Appender, I add slf4j key-value pairs as MDC context. This is less than ideal, however, since MDC forces all values to be strings.
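The net effect is roughly equivalent to placing each pair on the MDC (as a string) before the log call, as in the hypothetical sketch below; names and keys are illustrative.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcLoggingExample {
  private static final Logger logger = LoggerFactory.getLogger(MdcLoggingExample.class);

  static void logOrder(String orderId, double total) {
    // MDC values are Strings, so numeric types lose their type information in the encoded output.
    try (MDC.MDCCloseable id = MDC.putCloseable("order.id", orderId);
         MDC.MDCCloseable amount = MDC.putCloseable("order.total", String.valueOf(total))) {
      logger.info("order placed");
    }
  }
}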

From there, we write the log lines to a log file. If you are using Kubernetes or other container orchestration in your environment, you would more typically write to stdout (console) and let the orchestration log driver write to and manage log files.

We then configure Elastic Agent or Filebeat to scrape the log file. Notably, the Elastic ECS Encoder does not currently translate the incoming OTel SpanID and TraceID variables on the MDC. Thus, we need to manually translate these variables in the Filebeat (or Elastic Agent) configuration to map them to their ECS equivalents.

Pros

  • Robust and field-proven
  • Easy to debug (you can manually read the intermediate log file)
  • Inherent file-based FIFO buffer
  • Less susceptible to “glare” conditions when the service exits prematurely
  • Native ECS format for easy manipulation in Elastic
  • Fleet-managed via Elastic Agent

Cons

  • All the headaches of file-based logging (rotation, disk overflow)
  • No native support for slf4j key-value pairs or Logback structured arguments (worked around via MDC, which forces string values)
  • Requires translation of OTel SpanID and TraceID in Filebeat config
  • Disparate data paths for logs versus tracing and metrics
  • Vendor-specific logging format

Demo

MODE=filebeat docker-compose up

Recommendations

For most customers, I currently recommend Model 3 — namely, write logs in ECS format (with OTel SpanID, TraceID, and Baggage metadata) and collect them with an Elastic Agent installed on the node hosting the application or service. Elastic Agent (or Filebeat) today provides the most field-proven and robust means of capturing log files from applications and services with OpenTelemetry context.

Further, you can leverage this same Elastic Agent instance (ideally running in your Kubernetes daemonset) to collect rich and robust metrics and logs from Kubernetes and many other supported services via Elastic Integrations. Finally, Elastic Agent facilitates remote management via Fleet, avoiding bespoke configuration files.

Alternatively, for customers who either wish to keep their nodes vendor-neutral or use a consolidated signal routing system, I recommend Model 2, wherein an OpenTelemetry collector is used to scrape service log files. While workable and practiced by some early adopters in the field today, this model inherently carries some risk given the current beta nature of the OpenTelemetry filelogreceiver.

I generally do not recommend Model 1 given its limited language support, experimental/alpha status (the API could change), and current potential for data loss. That said, in time, with more language support and more thought to resilient designs, it has clear advantages both with regard to simplicity and richness of metadata.

Extracting more value from your logs

In contrast to tracing and metrics, most organizations have nearly 100% log coverage over their applications and services. This is an ideal beachhead upon which to build an application observability system. On the other hand, logs are notoriously noisy and unstructured; this is only amplified with the scale enabled by the hyperscalers and Kubernetes. Collecting log lines reliably is the easy part; making them useful at today’s scale is hard.

Given that logs are arguably the most challenging observability signal from which to extract value at scale, one should ideally give thoughtful consideration to a vendor’s support for logging in the context of other observability signals. Can they handle surges in log rates because of unexpected scale or an error or test scenario? Do they have the machine learning tool set to automatically recognize patterns in log lines, sort them into categories, and identify true anomalies? Can they provide cost-effective online searchability of logs over months or years without manual rehydration? Do they provide the tools to extract and analyze business KPIs buried in logs?

As an ardent and early supporter of OpenTelemetry, Elastic, of course, natively ingests OTel traces, metrics, and logs. And just like all logs coming into our system, logs coming from OTel-equipped sources avail themselves of our mature tooling and next-gen AIOps technologies to enable you to extract their full value.

Interested? Reach out to our pre-sales team to get started building with Elastic!

The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.
