Configure the internal queue

Functionbeat uses an internal queue to store events before publishing them. The queue is responsible for buffering and combining events into batches that can be consumed by the outputs. The outputs will use bulk operations to send a batch of events in one transaction.

You can configure the type and behavior of the internal queue by setting options in the queue section of the functionbeat.yml config file. Only one queue type can be configured.

This sample configuration sets the memory queue to buffer up to 4096 events:

queue.mem:
  events: 4096

Configure the memory queue

The memory queue keeps all events in memory.

The memory queue waits for the output to acknowledge or drop events. If the queue is full, no new events can be inserted into it. The queue frees up space for more events only after a signal from the output has been received.

The memory queue is controlled by the parameters flush.min_events and flush.timeout. If flush.timeout is 0s, or flush.min_events is 0 or 1, events can be sent by the output as soon as they are available. If the output supports a bulk_max_size parameter, it controls the maximum batch size that can be sent.

If flush.min_events is greater than 1 and flush.timeout is greater than 0s, events will only be sent to the output when the queue contains at least flush.min_events events or the flush.timeout period has expired. In this mode the maximum batch size that can be sent by the output is flush.min_events. If the output supports a bulk_max_size parameter, values of bulk_max_size greater than flush.min_events have no effect. The value of flush.min_events should be evenly divisible by bulk_max_size to avoid sending partial batches to the output.

This sample configuration forwards events to the output if 512 events are available or the oldest available event has been waiting for 5s in the queue:

queue.mem:
  events: 4096
  flush.min_events: 512
  flush.timeout: 5s
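
To illustrate the divisibility guidance above, here is one possible pairing of the memory queue with an output's bulk_max_size. The output section and all values shown are illustrative, not recommendations:

```yaml
queue.mem:
  events: 4096             # 4 * flush.min_events
  flush.min_events: 1024   # evenly divisible by bulk_max_size below
  flush.timeout: 5s

output.elasticsearch:
  bulk_max_size: 512       # each flush yields two full batches, no partial batches
```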

Configuration options

You can specify the following options in the queue.mem section of the functionbeat.yml config file:

events

Number of events the queue can store. This value should be evenly divisible by flush.min_events to avoid sending partial batches to the output.

The default value is 4096 events.
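
As a sketch, a queue sized as an exact multiple of flush.min_events. The values here are illustrative:

```yaml
queue.mem:
  events: 8192             # 8192 = 16 * 512, evenly divisible by flush.min_events
  flush.min_events: 512
```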

flush.min_events

Minimum number of events required for publishing. If this value is set to 0 or 1, events are available to the output immediately. If this value is greater than 1, the output must wait for the queue to accumulate this minimum number of events or for flush.timeout to expire before publishing. When greater than 1, this value also defines the maximum possible batch size that can be sent by the output.

The default value is 2048.

flush.timeout

Maximum wait time for flush.min_events to be fulfilled. If set to 0s, events are available to the output immediately.

The default value is 1s.
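
For example, a low-latency setup that disables time-based batching might look like this (illustrative only):

```yaml
queue.mem:
  events: 4096
  flush.timeout: 0s   # events are available to the output as soon as they arrive
```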

Configure the disk queue

The disk queue stores pending events on the disk rather than main memory. This allows Beats to queue a larger number of events than is possible with the memory queue, and to save events when a Beat or device is restarted. This increased reliability comes with a performance tradeoff, as every incoming event must be written to and read from the device’s disk. However, for setups where the disk is not the main bottleneck, the disk queue gives a simple and relatively low-overhead way to add a layer of robustness to incoming event data.

To enable the disk queue with default settings, specify a maximum size:

queue.disk:
  max_size: 10GB

The queue will use up to the specified maximum size on disk. It will only use as much space as required. For example, if the queue is only storing 1GB of events, then it will only occupy 1GB on disk no matter how high the maximum is. Queue data is deleted from disk after it has been successfully sent to the output.

Configuration options

You can specify the following options in the queue.disk section of the functionbeat.yml config file:

path

The path to the directory where the disk queue should store its data files. The directory is created on startup if it doesn’t exist.

The default value is "${path.data}/diskqueue".
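
A sketch of overriding the data directory; the path shown is hypothetical:

```yaml
queue.disk:
  max_size: 10GB
  path: "/var/lib/functionbeat/diskqueue"   # hypothetical custom directory
```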

max_size (required)

The maximum size the queue should use on disk. Events that exceed this maximum will either pause their input or be discarded, depending on the input’s configuration.

A value of 0 means that no maximum size is enforced, and the queue can grow up to the amount of free space on the disk. This value should be used with caution, as completely filling a system’s main disk can make it inoperable. It is best to use this setting only with a dedicated data or backup partition that will not interfere with Functionbeat or the rest of the host system.

The default value is 10GB.
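
As a cautionary sketch only, an unbounded queue confined to a dedicated data partition (the mount point is hypothetical):

```yaml
queue.disk:
  path: "/data/diskqueue"   # hypothetical dedicated partition, not the system disk
  max_size: 0               # no limit; the queue can grow until the partition is full
```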

segment_size

Data added to the queue is stored in segment files. Each segment contains some number of events waiting to be sent to the outputs, and is deleted when all its events are sent. By default, segment size is limited to 1/10 of the maximum queue size. Using a smaller size means that the queue will use more data files, but they will be deleted more quickly after use. Using a larger size means some data will take longer to delete, but the queue will use fewer auxiliary files. It is usually fine to leave this value unchanged.

The default value is max_size / 10.
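
For illustration, shrinking segments below the default so freed disk space is reclaimed sooner:

```yaml
queue.disk:
  max_size: 10GB
  segment_size: 500MB   # the default here would be 1GB (max_size / 10)
```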

read_ahead

The number of events that should be read from disk into memory while waiting for an output to request them. If you find outputs are slowing down because they can’t read as many events at a time, adjusting this setting upward may help, at the cost of higher memory usage.

The default value is 512.

write_ahead

The number of events the queue should accept and store in memory while waiting for them to be written to disk. If you find the queue’s memory use is too high because events are waiting too long to be written to disk, adjusting this setting downward may help, at the cost of reduced event throughput. On the other hand, if inputs are waiting or discarding events because they are being produced faster than the disk can handle, adjusting this setting upward may help, at the cost of higher memory usage.

The default value is 2048.
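
A sketch that raises both buffers for a fast output and bursty inputs, at the cost of higher memory usage. The values are illustrative:

```yaml
queue.disk:
  max_size: 10GB
  read_ahead: 1024    # more events pre-read from disk for the output (default 512)
  write_ahead: 4096   # more pending events held in memory before writing (default 2048)
```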

retry_interval

Some disk errors may block operation of the queue, for example a permission error writing to the data directory, or a disk full error while writing an event. In this case, the queue reports the error and retries after pausing for the time specified in retry_interval.

The default value is 1s (one second).

max_retry_interval

When there are multiple consecutive errors writing to the disk, the queue increases the retry interval by factors of 2 up to a maximum of max_retry_interval. Increase this value if you are concerned about logging too many errors or overloading the host system if the target disk becomes unavailable for an extended time.

The default value is 30s (thirty seconds).
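
For illustration, a retry policy that starts slower and backs off further than the defaults:

```yaml
queue.disk:
  max_size: 10GB
  retry_interval: 5s        # first retry after 5 seconds (default 1s)
  max_retry_interval: 120s  # doubling backoff caps at two minutes (default 30s)
```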

Configure the file spool queue

This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.

The file spool queue is a deprecated feature offered as-is for backwards compatibility. The supported way to queue events in persistent storage is the disk queue.

The file spool queue stores all events in an on-disk ring buffer. The spool has a write buffer that new events are written to. Events written to the spool are forwarded to the outputs only after the write buffer has been flushed successfully.

The spool waits for the output to acknowledge or drop events. If the spool is full, no new events can be inserted and the spool blocks. Space is freed only after a signal from the output has been received.

On disk, the spool divides a file into pages. The file.page_size setting configures the file’s page size at file creation time. The optimal page size depends on the effective block size used by the underlying file system.

This sample configuration enables the spool with all default settings (See Configuration options for defaults) and the default file path:

queue.spool: ~

This sample configuration creates a spool of 512MiB, with 16KiB pages. The write buffer is flushed if 10MiB of contents or 1024 events have been written. If the oldest available event has been waiting for 5s in the write buffer, the buffer is flushed as well:

queue.spool:
  file:
    path: "${path.data}/spool.dat"
    size: 512MiB
    page_size: 16KiB
  write:
    buffer_size: 10MiB
    flush.timeout: 5s
    flush.events: 1024

Configuration options

You can specify the following options in the queue.spool section of the functionbeat.yml config file:

file.path

The spool file path. The file is created on startup if it does not exist.

The default value is "${path.data}/spool.dat".

file.permissions

The file permissions. The permissions are applied when the file is created. If the file already exists, its permissions are compared with file.permissions. The spool file is not opened if the actual file permissions are more permissive than configured.

The default value is 0600.

file.size

Spool file size.

The default value is 100 MiB.

The size should be much larger than the expected event sizes and the write buffer size. Otherwise the queue will block because it does not have enough space.

The file size cannot be changed once the file has been generated. This limitation will be removed in the future.

file.page_size

The file’s page size.

The spool file is split into pages of page_size. All I/O operations operate on complete pages.

The default value is 4096 (4KiB).

This setting should match the file system’s minimum block size. If the page_size is not a multiple of the file system’s block size, the file system might create additional read operations on writes.

The page size is only set at file creation time. It cannot be changed afterwards.

file.prealloc

If prealloc is set to true, truncate is used to reserve the space up to file.size. This setting is only used when the file is created.

If prealloc is set to false, the file grows dynamically. In this case the spool blocks if the system runs out of disk space.

The default value is true.

write.buffer_size

The write buffer size. The write buffer is flushed once the buffer size is exceeded.

Events are allowed to be bigger than the configured buffer size. In that case the write buffer is flushed right after the event has been serialized.

The default value is 1MiB.

write.codec

The event encoding used for serialized events. Valid values are json and cbor.

The default value is cbor.
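
For example, switching the spool to JSON encoding, which is human-readable but typically larger on disk than the default cbor:

```yaml
queue.spool:
  write:
    codec: json
```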

write.flush.timeout

Maximum wait time of the oldest event in the write buffer. If set to 0s, the write buffer is only flushed once write.flush.events or write.buffer_size is fulfilled.

The default value is 1s.

write.flush.events

Number of buffered events. The write buffer is flushed once the limit is reached.

The default value is 16384.

read.flush.timeout

The spool reader tries to read up to the output’s bulk_max_size events at once.

If read.flush.timeout is set to 0s, all available events are forwarded immediately to the output.

If read.flush.timeout is set to a value greater than 0s, the spool will wait for more events to be flushed. Events are forwarded to the output once bulk_max_size events have been read or the oldest read event has been waiting for the configured duration.

The default value is 0s.
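
As a sketch, allowing the reader to wait briefly so the output receives fuller batches (the value is illustrative):

```yaml
queue.spool:
  read:
    flush.timeout: 2s   # wait up to 2s for bulk_max_size events before forwarding
```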