• Version: 3.2.0
  • Released on: 2016-09-12
  • Changelog
  • Compatible: 5.1.1.1, 5.0.0, 2.4.1, 2.4.0, 2.3.4

Description

This plugin batches and uploads logstash events into Amazon Simple Storage Service (Amazon S3).

Requirements:

  • Amazon S3 Bucket and S3 Access Permissions (Typically access_key_id and secret_access_key)
  • S3 PutObject permission
  • Run Logstash as superuser to establish a connection

S3 outputs create temporary files in "/opt/logstash/S3_temp/". If you want, you can change this path at the start of the register method.

S3 output files have the following format:

ls.s3.ip-10-228-27-95.2013-04-18T10.00.tag_hello.part0.txt

  • ls.s3 : indicates the logstash S3 plugin.
  • ip-10-228-27-95 : the IP address of the machine running Logstash.
  • 2013-04-18T10.00 : the timestamp, written whenever you specify time_file.
  • tag_hello : the event’s tag.
  • part0 : if you specify size_file, additional parts are generated whenever the current file grows beyond size_file.

When a file is full it is pushed to the bucket and then deleted from the temporary directory. Empty files are simply deleted and are never pushed.

Crash Recovery

This plugin will recover and upload temporary log files after a crash or abnormal termination.

Both the time_file and size_file settings can trigger a log "file rotation". A rotation pushes the current log "part" to S3 and deletes it from local temporary storage.

Usage

This is an example of logstash config:

output {
   s3 {
     access_key_id => "crazy_key"             (required)
     secret_access_key => "monkey_access_key" (required)
     region => "eu-west-1"                    (optional, default = "us-east-1")
     bucket => "boss_please_open_your_bucket" (required)
     size_file => 2048                        (optional) - Bytes
     time_file => 5                           (optional) - Minutes
     format => "plain"                        (optional)
     canned_acl => "private"                  (optional. Options are "private", "public_read", "public_read_write", "authenticated_read", "bucket_owner_full_control". Defaults to "private" )
   }
}

Synopsis

This plugin supports the following configuration options:

Required configuration options:

s3 {
}

Available configuration options:

Setting | Input type | Required | Default value
access_key_id | string | No |
aws_credentials_file | string | No |
bucket | string | No |
canned_acl | string, one of ["private", "public_read", "public_read_write", "authenticated_read", "bucket_owner_full_control"] | No | "private"
codec | codec | No | "plain"
enable_metric | boolean | No | true
encoding | string, one of ["none", "gzip"] | No | "none"
id | string | No |
prefix | string | No | ""
proxy_uri | string | No |
region | string, one of ["us-east-1", "us-west-1", "us-west-2", "eu-central-1", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "ap-northeast-2", "sa-east-1", "us-gov-west-1", "cn-north-1", "ap-south-1"] | No | "us-east-1"
restore | boolean | No | false
secret_access_key | string | No |
server_side_encryption | boolean | No | false
session_token | string | No |
signature_version | string, one of ["v2", "v4"] | No |
size_file | number | No | 0
tags | array | No | []
temporary_directory | string | No | "/var/folders/_9/x4bq65rs6vd0rrjthct3zxjw0000gn/T/logstash"
time_file | number | No | 0
upload_workers_count | number | No | 1
use_ssl | boolean | No | true
workers | string | No | 1

Details

access_key_id

  • Value type is string
  • There is no default value for this setting.

This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order:

  1. Static configuration, using access_key_id and secret_access_key params in logstash plugin config
  2. External credentials file specified by aws_credentials_file
  3. Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
  4. Environment variables AMAZON_ACCESS_KEY_ID and AMAZON_SECRET_ACCESS_KEY
  5. IAM Instance Profile (available when running inside EC2)
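
For example, here is a minimal sketch (the bucket name is hypothetical) that omits the static keys entirely and relies on environment variables or an IAM instance profile:

output {
  s3 {
    # No access_key_id / secret_access_key set here, so the AWS SDK falls
    # back to AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY, and finally to the
    # IAM instance profile when running inside EC2.
    bucket => "my-logs-bucket"
    region => "us-east-1"
  }
}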

aws_credentials_file

  • Value type is string
  • There is no default value for this setting.

Path to YAML file containing a hash of AWS credentials. This file will only be loaded if access_key_id and secret_access_key aren’t set. The contents of the file should look like this:

    :access_key_id: "12345"
    :secret_access_key: "54321"
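
To load it, point the plugin at the file (a sketch; the path is hypothetical):

output {
  s3 {
    # Loaded only because access_key_id / secret_access_key are not set.
    aws_credentials_file => "/etc/logstash/aws_credentials.yml"
    bucket => "my-logs-bucket"
  }
}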

bucket

  • Value type is string
  • There is no default value for this setting.

S3 bucket

canned_acl

  • Value can be any of: private, public_read, public_read_write, authenticated_read, bucket_owner_full_control
  • Default value is "private"

The S3 canned ACL to use when putting the file. Defaults to "private".
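
For instance (a sketch; the bucket name is hypothetical), when delivering logs into a bucket owned by a different AWS account, granting the bucket owner full control is a common choice so the owner can read the delivered objects:

output {
  s3 {
    bucket => "central-logging-bucket"
    canned_acl => "bucket_owner_full_control"
  }
}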

codec

  • Value type is codec
  • Default value is "plain"

The codec used for output data. Output codecs are a convenient method for encoding your data before it leaves the output, without needing a separate filter in your Logstash pipeline.
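
For example, to serialize each event as a line of JSON rather than plain text (a sketch using the standard json_lines codec; the bucket name is hypothetical):

output {
  s3 {
    bucket => "my-logs-bucket"
    codec => "json_lines"
  }
}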

enable_metric

  • Value type is boolean
  • Default value is true

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

encoding

  • Value can be any of: none, gzip
  • Default value is "none"

Specify the content encoding. Only "gzip" is supported. Defaults to "none".
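
A sketch (hypothetical bucket name) that gzip-compresses files before they are uploaded:

output {
  s3 {
    bucket => "my-logs-bucket"
    encoding => "gzip"
  }
}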

id

  • Value type is string
  • There is no default value for this setting.

Add a unique ID to the plugin instance. This ID is used for tracking information for a specific configuration of the plugin.

output {
 stdout {
   id => "ABC"
 }
}

If you don’t explicitly set this variable, Logstash will generate a unique name.

prefix

  • Value type is string
  • Default value is ""

Specify a prefix for the uploaded filename; this can simulate directories on S3. The prefix does not require a leading slash.
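
For example (bucket name and prefix are hypothetical), objects uploaded with this configuration appear under a logs/production/ pseudo-directory in the bucket:

output {
  s3 {
    bucket => "my-logs-bucket"
    prefix => "logs/production/"
  }
}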

proxy_uri

  • Value type is string
  • There is no default value for this setting.

URI of the proxy server, if required.
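
A sketch (the proxy address is hypothetical) routing AWS API calls through an HTTP proxy:

output {
  s3 {
    bucket => "my-logs-bucket"
    proxy_uri => "http://proxy.example.com:8080"
  }
}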

region

  • Value can be any of: us-east-1, us-west-1, us-west-2, eu-central-1, eu-west-1, ap-southeast-1, ap-southeast-2, ap-northeast-1, ap-northeast-2, sa-east-1, us-gov-west-1, cn-north-1, ap-south-1
  • Default value is "us-east-1"

The AWS Region

restore

  • Value type is boolean
  • Default value is false

secret_access_key

  • Value type is string
  • There is no default value for this setting.

The AWS Secret Access Key

server_side_encryption

  • Value type is boolean
  • Default value is false

Specifies whether or not to use S3’s AES256 server-side encryption. Defaults to false.

session_token

  • Value type is string
  • There is no default value for this setting.

The AWS session token for temporary credentials.

signature_version

  • Value can be any of: v2, v4
  • There is no default value for this setting.

The version of the S3 signature hash to use. Normally this uses the internal client default, but it can be explicitly specified here.

size_file

  • Value type is number
  • Default value is 0

Set the file size in bytes. When the file being written grows beyond size_file, it is rotated and the output is stored across two or more files. If you use tags, a dedicated file is generated for every set of tags.

tags

  • Value type is array
  • Default value is []

Define tags to be appended to the file on the S3 bucket.

Example: tags => ["elasticsearch", "logstash", "kibana"]

Will generate this file: "ls.s3.logstash.local.2015-01-01T00.00.tag_elasticsearch.logstash.kibana.part0.txt"

temporary_directory

  • Value type is string
  • Default value is "/var/folders/_9/x4bq65rs6vd0rrjthct3zxjw0000gn/T/logstash"

Set the directory where Logstash stores temporary files before sending them to S3. Defaults to the current OS temporary directory; on Linux this is /tmp/logstash.

time_file

  • Value type is number
  • Default value is 0

Set the time, in minutes, to close the current sub_time_section of the bucket. If you also define size_file, you get multiple files per time section and tag. A value of 0 keeps the file on the listener forever; beware of specifying both time_file => 0 and size_file => 0, because the file will never be pushed to the bucket. For now, the only thing the plugin can do in that case is push the file when Logstash restarts.
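
For example (illustrative values; the bucket name is hypothetical), this rotates a part every 15 minutes or once it reaches about 5 MB, whichever happens first:

output {
  s3 {
    bucket => "my-logs-bucket"
    time_file => 15          # minutes
    size_file => 5242880     # bytes (about 5 MB)
  }
}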

upload_workers_count

  • Value type is number
  • Default value is 1

Specify how many workers to use to upload the files to S3.

use_ssl

  • Value type is boolean
  • Default value is true

Should we require (true) or disable (false) SSL for communicating with the AWS API? The AWS SDK for Ruby defaults to SSL, so we preserve that default.

workers

  • Value type is string
  • Default value is 1

TODO: remove this in Logstash 6.0, when we no longer support the :legacy type. This is hacky, but it can only be here.