Sqs input plugin
- Plugin version: v3.1.1
- Released on: 2018-04-06
- Changelog
Getting Help
For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in GitHub. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.
Description
Pull events from an Amazon Web Services Simple Queue Service (SQS) queue.
SQS is a simple, scalable queue system that is part of the Amazon Web Services suite of tools.
Although SQS is similar to other queuing systems like AMQP, it uses a custom API and requires that you have an AWS account. See http://aws.amazon.com/sqs/ for more details on how SQS works, what the pricing schedule looks like, and how to set up a queue.
To use this plugin, you must:
- Have an AWS account
- Set up an SQS queue
- Create an identity that has access to consume messages from the queue.
The "consumer" identity must have the following permissions on the queue:
- sqs:ChangeMessageVisibility
- sqs:ChangeMessageVisibilityBatch
- sqs:DeleteMessage
- sqs:DeleteMessageBatch
- sqs:GetQueueAttributes
- sqs:GetQueueUrl
- sqs:ListQueues
- sqs:ReceiveMessage
Typically, you should set up an IAM policy, create a user, and apply the IAM policy to the user. A sample policy is as follows:
{ "Statement": [ { "Action": [ "sqs:ChangeMessageVisibility", "sqs:ChangeMessageVisibilityBatch", "sqs:DeleteMessage", "sqs:DeleteMessageBatch", "sqs:GetQueueAttributes", "sqs:GetQueueUrl", "sqs:ListQueues", "sqs:ReceiveMessage" ], "Effect": "Allow", "Resource": [ "arn:aws:sqs:us-east-1:123456789012:Logstash" ] } ] }
See http://aws.amazon.com/iam/ for more details on setting up AWS identities.
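With the queue and permissions in place, a minimal pipeline looks like the following sketch; the queue name, region, and stdout output are placeholders for illustration:

input {
  sqs {
    queue  => "Logstash"      # queue name only, not the URL or ARN
    region => "us-east-1"
  }
}
output {
  stdout { codec => rubydebug }   # print decoded events while testing
}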
Sqs Input Configuration Options
This plugin supports the following configuration options plus the Common Options described later.
Setting | Input type | Required |
---|---|---|
access_key_id | string | No |
aws_credentials_file | string | No |
endpoint | string | No |
id_field | string | No |
md5_field | string | No |
polling_frequency | number | No |
proxy_uri | string | No |
queue | string | Yes |
region | string | No |
role_arn | string | No |
role_session_name | string | No |
secret_access_key | string | No |
sent_timestamp_field | string | No |
session_token | string | No |
threads | number | No |
Also see Common Options for a list of options supported by all input plugins.
access_key_id
- Value type is string
- There is no default value for this setting.
This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order:
1. Static configuration, using access_key_id and secret_access_key params in the Logstash plugin config
2. External credentials file specified by aws_credentials_file
3. Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
4. Environment variables AMAZON_ACCESS_KEY_ID and AMAZON_SECRET_ACCESS_KEY
5. IAM Instance Profile (available when running inside EC2)
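For example, a sketch of the first option, static configuration (the key and secret shown are AWS's documented example values, not real credentials):

input {
  sqs {
    queue             => "Logstash"
    region            => "us-east-1"
    access_key_id     => "AKIAIOSFODNN7EXAMPLE"                       # placeholder key
    secret_access_key => "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"   # placeholder secret
  }
}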
aws_credentials_file
- Value type is string
- There is no default value for this setting.
Path to a YAML file containing a hash of AWS credentials. This file will only be loaded if access_key_id and secret_access_key aren't set. The contents of the file should look like this:
:access_key_id: "12345"
:secret_access_key: "54321"
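A sketch of pointing the plugin at such a file; the path is a placeholder:

input {
  sqs {
    queue                => "Logstash"
    region               => "us-east-1"
    aws_credentials_file => "/etc/logstash/aws_credentials.yml"   # placeholder path
  }
}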
endpoint
- Value type is string
- There is no default value for this setting.
The endpoint to connect to. By default it is constructed using the value of region. This is useful when connecting to SQS-compatible services, but beware that these aren't guaranteed to work correctly with the AWS SDK.
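For example, a sketch that targets a locally hosted SQS-compatible service; the URL is a placeholder and such services may not behave identically to AWS:

input {
  sqs {
    queue    => "Logstash"
    region   => "us-east-1"
    endpoint => "http://localhost:9324"   # placeholder URL for a local SQS-compatible endpoint
  }
}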
id_field
- Value type is string
- There is no default value for this setting.
Name of the event field in which to store the SQS message ID.
md5_field
- Value type is string
- There is no default value for this setting.
Name of the event field in which to store the SQS message MD5 checksum.
polling_frequency
- Value type is number
- Default value is 20
Polling frequency, in seconds.
proxy_uri
- Value type is string
- There is no default value for this setting.
URI of the proxy server, if required.
queue
- This is a required setting.
- Value type is string
- There is no default value for this setting.
Name of the SQS queue to pull messages from. Note that this is just the name of the queue, not the URL or ARN.
role_arn
- Value type is string
- There is no default value for this setting.
The AWS IAM Role to assume, if any. This is used to generate temporary credentials, typically for cross-account access. See the AssumeRole API documentation for more information.
role_session_name
- Value type is string
- Default value is "logstash"
Session name to use when assuming an IAM role.
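A sketch of assuming a role for cross-account access; the role ARN and session name are placeholders:

input {
  sqs {
    queue             => "Logstash"
    region            => "us-east-1"
    role_arn          => "arn:aws:iam::123456789012:role/logstash-sqs-reader"   # placeholder ARN
    role_session_name => "logstash-sqs"
  }
}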
secret_access_key
- Value type is string
- There is no default value for this setting.
The AWS Secret Access Key
sent_timestamp_field
- Value type is string
- There is no default value for this setting.
Name of the event field in which to store the SQS message Sent Timestamp.
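To keep SQS message metadata on each event, a sketch like the following stores the message ID, MD5 checksum, and sent timestamp; the field names are placeholders:

input {
  sqs {
    queue                => "Logstash"
    region               => "us-east-1"
    id_field             => "sqs_message_id"   # placeholder field name
    md5_field            => "sqs_md5"          # placeholder field name
    sent_timestamp_field => "sqs_sent_at"      # placeholder field name
  }
}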
Common Options
The following configuration options are supported by all input plugins:
Details
codec
- Value type is codec
- Default value is "json"
The codec used for input data. Input codecs are a convenient method for decoding your data before it enters the input, without needing a separate filter in your Logstash pipeline.
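If the queue carries plain text rather than JSON, a sketch like this overrides the default codec:

input {
  sqs {
    queue  => "Logstash"
    region => "us-east-1"
    codec  => "plain"   # treat each message body as a raw string instead of parsing it as JSON
  }
}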
enable_metric
- Value type is boolean
- Default value is true
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.
id
- Value type is string
- There is no default value for this setting.
Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 sqs inputs. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
input {
  sqs {
    id => "my_plugin_id"
  }
}
tags
- Value type is array
- There is no default value for this setting.
Add any number of arbitrary tags to your event.
This can help with processing later.
type
- Value type is string
- There is no default value for this setting.
Add a type field to all events handled by this input.
Types are used mainly for filter activation.
The type is stored as part of the event itself, so you can also use the type to search for it in Kibana.
If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server.
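As an illustration (the type value and the filter are placeholders, not part of the plugin itself), a type set on the input can be used to activate a filter for only those events:

input {
  sqs {
    queue  => "Logstash"
    region => "us-east-1"
    type   => "sqs"
  }
}
filter {
  if [type] == "sqs" {
    mutate { add_tag => ["from_sqs"] }   # runs only for events produced by this input
  }
}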