Configure AWS functions

Functionbeat runs as a function in your serverless environment.
Before deploying Functionbeat, you need to configure one or more functions and specify details about the services that will trigger the functions.
You configure the functions in the functionbeat.yml configuration file.
When you’re done, you can deploy the functions
to your serverless environment.
The aws functions require AWS credentials configuration in order to make AWS API calls. You can either use AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and/or AWS_SESSION_TOKEN, or use a shared AWS credentials file. Please see AWS credentials options for more details.
The following example configures two functions: cloudwatch and sqs. The cloudwatch function collects events from CloudWatch Logs. The sqs function collects messages from Amazon Simple Queue Service (SQS). Both functions forward the events to Elasticsearch.
functionbeat.provider.aws.endpoint: "s3.amazonaws.com"
functionbeat.provider.aws.deploy_bucket: "functionbeat-deploy"
functionbeat.provider.aws.functions:
  - name: cloudwatch
    enabled: true
    type: cloudwatch_logs
    description: "lambda function for cloudwatch logs"
    triggers:
      - log_group_name: /aws/lambda/my-lambda-function
        #filter_pattern: mylog_
  - name: sqs
    enabled: true
    type: sqs
    description: "lambda function for SQS events"
    triggers:
      - event_source_arn: arn:aws:sqs:us-east-1:123456789012:myevents

cloud.id: "MyESDeployment:SomeLongString=="
cloud.auth: "elastic:SomeLongString"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
Configuration options
You can specify the following options to configure the functions that you want to deploy.
If you change the configuration after deploying the function, use the update command to update your deployment.
provider.aws.deploy_bucket
A unique name for the S3 bucket that the Lambda artifact will be uploaded to.
name
A unique name for the Lambda function. This is the name of the function as it will appear in the Lambda console on AWS.
type
The type of service to monitor. For this release, the supported types are:

- cloudwatch_logs: Collects events from CloudWatch Logs.
- sqs: Collects data from Amazon Simple Queue Service (SQS).
- kinesis: Collects data from a Kinesis stream.
description
A description of the function. This description is useful when you are running multiple functions and need more context about how each function is used.
triggers
A list of triggers that will cause the function to execute. The list of valid triggers depends on the type:

- For cloudwatch_logs, specify a list of log groups (see the sketch after this list). Because the AWS limit is one subscription filter per CloudWatch log group, the log groups specified here must have no other subscription filters, or deployment will fail. For more information, see Deployment to AWS fails with "resource limit exceeded".
- For sqs or kinesis, specify a list of Amazon Resource Names (ARNs).
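For example, a sketch of two functions showing a list of log groups for cloudwatch_logs and a queue ARN for sqs (the log group names and ARN are hypothetical):

functionbeat.provider.aws.functions:
  - name: cloudwatch
    type: cloudwatch_logs
    triggers:
      # Each log group may carry only one subscription filter (AWS limit).
      - log_group_name: /aws/lambda/my-first-function
      - log_group_name: /aws/lambda/my-second-function
  - name: sqs
    type: sqs
    triggers:
      - event_source_arn: arn:aws:sqs:us-east-1:123456789012:myevents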
filter_pattern
A regular expression that matches the events you want to collect. Setting this option may reduce execution costs because the function only executes if there is data that matches the pattern.
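For example, the triggers section of a cloudwatch_logs function might restrict collection to lines containing ERROR (the log group name is hypothetical):

triggers:
  - log_group_name: /aws/lambda/my-lambda-function
    # Only collect events whose message matches this pattern.
    filter_pattern: ERROR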
concurrency
The reserved number of instances for the function. Setting this option may reduce execution costs by limiting the number of functions that can execute in your serverless environment. The default is unreserved.
memory_size
The maximum amount of memory to allocate for this function. Specify a value that is a factor of 64. There is a hard limit of 3008 MiB for each function. The default is 128 MiB.
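A minimal sketch showing concurrency and memory_size on a function definition. The values are arbitrary, and the MiB suffix is an assumption based on the size format used elsewhere in Beats configuration:

functionbeat.provider.aws.functions:
  - name: cloudwatch
    type: cloudwatch_logs
    # Reserve at most five concurrent instances of this function.
    concurrency: 5
    # Allocate 256 MiB of memory for this function.
    memory_size: 256MiB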
role
The custom execution role to use for the deployed function. For example:
role: arn:aws:iam::123456789012:role/MyFunction
Make sure the custom role has the permissions required to run the function. For more information, see IAM permissions required for deployment.
If role
is not specified, the function uses the default role and policy
created during deployment.
virtual_private_cloud
Specifies additional settings required to connect to private resources in an Amazon Virtual Private Cloud (VPC). For example:
virtual_private_cloud:
  security_group_ids:
    - mySecurityGroup
    - anotherSecurityGroup
  subnet_ids:
    - myUniqueID
dead_letter_config.target_arn
The dead letter queue to use for messages that can’t be processed successfully. Set this option to an ARN that points to an SQS queue.
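For example, with a hypothetical SQS queue ARN:

functionbeat.provider.aws.functions:
  - name: cloudwatch
    type: cloudwatch_logs
    # Events that cannot be processed are sent to this queue instead of being dropped.
    dead_letter_config.target_arn: arn:aws:sqs:us-east-1:123456789012:my-dead-letter-queue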
batch_size
The number of events to read from a Kinesis stream. The minimum value is 100 and the maximum is 10000. The default is 100.
starting_position
The starting position to read from a Kinesis stream. Valid values are trim_horizon and latest. The default is trim_horizon.
parallelization_factor
The number of batches to process concurrently from each shard. The minimum value is 1 and the maximum is 10. The default is 1.
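A sketch of a kinesis function using the three Kinesis-specific options above. The stream ARN is hypothetical, and nesting the options under the trigger is an assumption based on the layout of the reference configuration; adjust as needed for your setup:

functionbeat.provider.aws.functions:
  - name: kinesis
    type: kinesis
    triggers:
      - event_source_arn: arn:aws:kinesis:us-east-1:123456789012:stream/myevents
        # Read up to 500 events per batch (100-10000).
        batch_size: 500
        # Start reading from the oldest available records.
        starting_position: trim_horizon
        # Process up to two batches per shard concurrently (1-10).
        parallelization_factor: 2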
keep_null
If this option is set to true, fields with null values will be published in the output document. By default, keep_null is set to false.
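For example, set it on a function definition alongside the other options:

functionbeat.provider.aws.functions:
  - name: cloudwatch
    type: cloudwatch_logs
    # Publish fields with null values in the output documents.
    keep_null: true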
index
If present, this formatted string overrides the index for events from this function (for elasticsearch outputs), or sets the raw_index field of the event’s metadata (for other outputs). This string can only refer to the agent name and version and the event timestamp; for access to dynamic fields, use output.elasticsearch.index or a processor.

Example value: "%{[agent.name]}-myindex-%{+yyyy.MM.dd}" might expand to "functionbeat-myindex-2019.12.13".
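A minimal sketch setting a custom index on a function (the index pattern is the example value above; the function name is arbitrary):

functionbeat.provider.aws.functions:
  - name: cloudwatch
    type: cloudwatch_logs
    # Events from this function are written to a dated custom index.
    index: "%{[agent.name]}-myindex-%{+yyyy.MM.dd}"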
AWS Credentials Configuration
To configure AWS credentials, either put the credentials into the Functionbeat configuration, or use a shared credentials file, as shown in the following examples.
Configuration parameters
- access_key_id: first part of access key.
- secret_access_key: second part of access key.
- session_token: required when using temporary security credentials.
- credential_profile_name: profile name in shared credentials file.
- shared_credential_file: directory of the shared credentials file.
- role_arn: AWS IAM Role to assume.
- endpoint: URL of the entry point for an AWS web service.
Most AWS services offer a regional endpoint that can be used to make requests. The general syntax of a regional endpoint is protocol://service-code.region-code.endpoint-code. Some services, such as IAM, do not support regions, and the endpoints for these services do not include a region. In the aws module, the endpoint config sets the endpoint-code part, such as amazonaws.com, amazonaws.com.cn, c2s.ic.gov, or sc2s.sgov.gov.
Supported Formats
- Use access_key_id, secret_access_key, and/or session_token

Users can either put the credentials into the metricbeat module configuration or use the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and/or AWS_SESSION_TOKEN instead.
If running on Docker, these environment variables should be added as a part of the docker command. For example, with Metricbeat:
$ docker run -e AWS_ACCESS_KEY_ID=abcd -e AWS_SECRET_ACCESS_KEY=abcd -d --name=metricbeat --user=root --volume="$(pwd)/metricbeat.aws.yml:/usr/share/metricbeat/metricbeat.yml:ro" docker.elastic.co/beats/metricbeat:7.11.1 metricbeat -e -E cloud.auth=elastic:1234 -E cloud.id=test-aws:1234
Sample metricbeat.aws.yml
looks like:
metricbeat.modules:
  - module: aws
    period: 5m
    access_key_id: ${AWS_ACCESS_KEY_ID}
    secret_access_key: ${AWS_SECRET_ACCESS_KEY}
    session_token: ${AWS_SESSION_TOKEN}
    metricsets:
      - ec2
Environment variables can also be added through a file. For example:
$ cat env.list
AWS_ACCESS_KEY_ID=abcd
AWS_SECRET_ACCESS_KEY=abcd

$ docker run --env-file env.list -d --name=metricbeat --user=root --volume="$(pwd)/metricbeat.aws.yml:/usr/share/metricbeat/metricbeat.yml:ro" docker.elastic.co/beats/metricbeat:7.11.1 metricbeat -e -E cloud.auth=elastic:1234 -E cloud.id=test-aws:1234
- Use role_arn

If access_key_id and secret_access_key are not given, then functionbeat will check for role_arn. role_arn is used to specify which AWS IAM role to assume for generating temporary credentials.
- Use credential_profile_name and/or shared_credential_file

If access_key_id, secret_access_key, and role_arn are all not given, then functionbeat will check for credential_profile_name. If you use different credentials for different tools or applications, you can use profiles to configure multiple access keys in the same configuration file. If there is no credential_profile_name given, the default profile will be used.

shared_credential_file is optional and specifies the directory of your shared credentials file. If it’s empty, the default directory will be used. On Windows, the shared credentials file is at C:\Users\<yourUserName>\.aws\credentials. On Linux, macOS, or Unix, the file is located at ~/.aws/credentials. When running as a service, the home path depends on the user that manages the service, so the shared_credential_file parameter can be used to avoid ambiguity. Please see Create Shared Credentials File for more details.
If running on Docker, the credential file needs to be provided via a volume mount. For example, with Metricbeat:
docker run -d --name=metricbeat --user=root --volume="$(pwd)/metricbeat.aws.yml:/usr/share/metricbeat/metricbeat.yml:ro" --volume="/Users/foo/.aws/credentials:/usr/share/metricbeat/credentials:ro" docker.elastic.co/beats/metricbeat:7.11.1 metricbeat -e -E cloud.auth=elastic:1234 -E cloud.id=test-aws:1234
Sample metricbeat.aws.yml
looks like:
metricbeat.modules:
  - module: aws
    period: 5m
    credential_profile_name: elastic-beats
    shared_credential_file: /usr/share/metricbeat/credentials
    metricsets:
      - ec2
- Use AWS credentials in Filebeat configuration
filebeat.inputs:
  - type: aws-s3
    queue_url: https://sqs.us-east-1.amazonaws.com/123/test-queue
    access_key_id: '<access_key_id>'
    secret_access_key: '<secret_access_key>'
    session_token: '<session_token>'
or
filebeat.inputs:
  - type: aws-s3
    queue_url: https://sqs.us-east-1.amazonaws.com/123/test-queue
    access_key_id: '${AWS_ACCESS_KEY_ID:""}'
    secret_access_key: '${AWS_SECRET_ACCESS_KEY:""}'
    session_token: '${AWS_SESSION_TOKEN:""}'
- Use IAM role ARN
filebeat.inputs:
  - type: aws-s3
    queue_url: https://sqs.us-east-1.amazonaws.com/123/test-queue
    role_arn: arn:aws:iam::123456789012:role/test-mb
- Use shared AWS credentials file
filebeat.inputs:
  - type: aws-s3
    queue_url: https://sqs.us-east-1.amazonaws.com/123/test-queue
    credential_profile_name: test-fb
AWS Credentials Types
There are two different types of AWS credentials that can be used: access keys and temporary security credentials.
- Access keys
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are the two parts of access keys. They are long-term credentials for an IAM user or the AWS account root user. Please see AWS Access Keys and Secret Access Keys for more details.
- IAM role ARN
An IAM role is an IAM identity that you can create in your account that has specific permissions that determine what the identity can and cannot do in AWS. A role does not have standard long-term credentials such as a password or access keys associated with it. Instead, when you assume a role, it provides you with temporary security credentials for your role session. IAM role Amazon Resource Name (ARN) can be used to specify which AWS IAM role to assume to generate temporary credentials. Please see AssumeRole API documentation for more details.
- Temporary security credentials
Temporary security credentials have a limited lifetime and consist of an access key ID, a secret access key, and a security token, which is typically returned from GetSessionToken. MFA-enabled IAM users need to submit an MFA code while calling GetSessionToken. default_region identifies the AWS Region whose servers you want to send your first API request to by default. This is typically the Region closest to you, but it can be any Region. Please see Temporary Security Credentials for more details.
The sts get-session-token AWS CLI command can be used to generate temporary credentials. For example, with MFA enabled:
aws> sts get-session-token --serial-number arn:aws:iam::1234:mfa/[email protected] --token-code 456789 --duration-seconds 129600
Because temporary security credentials are short term, after they expire the user needs to generate new ones and modify the aws.yml config file with the new credentials. Unless the live reloading feature is enabled for Metricbeat, the user needs to manually restart Metricbeat after updating the config file in order to continue collecting CloudWatch metrics. This will cause data loss if the config file is not updated with new credentials before the old ones expire. For Metricbeat, we recommend using access keys in the config file so that the aws module can make AWS API calls without having to generate new temporary credentials and update the config frequently.
An IAM policy is an entity that defines permissions for an object within your AWS environment. Specific permissions need to be added to the IAM user’s policy to authorize Metricbeat to collect AWS monitoring metrics. Please see the documentation under each metricset for the required permissions.