Monitor Amazon Web Services (AWS) with Amazon Data Firehose

Amazon Data Firehose is a popular service that lets you send your service logs and monitoring metrics to Elastic in minutes, without writing a single line of code and without building or managing your own data ingestion and delivery infrastructure.
What you'll learn

In this tutorial, you'll learn how to:
- Install the AWS integration in Kibana
- Create a delivery stream in Amazon Data Firehose
- Specify the destination settings for your Firehose stream
- Send data to the Firehose delivery stream
Before you begin

Create a deployment in an AWS region (including GovCloud regions) using the hosted Elasticsearch Service on Elastic Cloud. The deployment includes an Elasticsearch cluster for storing and searching your data, and Kibana for visualizing and managing your data.
Step 1: Install the AWS integration in Kibana

- Install the AWS integration to load index templates, ingest pipelines, and dashboards into Kibana. Find Integrations in the main menu or use the global search field, then browse the catalog for the AWS integration.
- Navigate to the Settings tab and click Install AWS assets. Confirm by clicking Install AWS in the popup.
- Install the Amazon Data Firehose integration assets in Kibana the same way. (A scripted alternative is sketched after this list.)
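If you prefer to script this step, Kibana's Fleet API can install integration packages programmatically. The following is a minimal sketch, not the documented setup flow: the package versions are placeholders, and the request shape assumes Fleet's package install endpoint (`POST /api/fleet/epm/packages/<name>/<version>`).

```python
import requests

# Assumptions: your Kibana endpoint and an API key with Fleet privileges.
KIBANA_URL = "https://<deployment_name>.kb.<region>.<csp>.elastic-cloud.com"
API_KEY = "<encoded_api_key>"

headers = {
    "Authorization": f"ApiKey {API_KEY}",
    "kbn-xsrf": "true",  # Kibana requires this header on write requests
    "Content-Type": "application/json",
}

# Install the AWS and Amazon Data Firehose integration packages.
# Replace the version numbers with the versions shown in the Integrations catalog.
for package in ("aws/2.22.0", "awsfirehose/1.1.0"):
    resp = requests.post(
        f"{KIBANA_URL}/api/fleet/epm/packages/{package}",
        headers=headers,
        json={},
    )
    resp.raise_for_status()
```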
Step 2: Create a delivery stream in Amazon Data Firehose

- Go to the AWS console and navigate to Amazon Data Firehose.
- Click Create Firehose stream and choose the source and destination of your Firehose stream. Unless you are streaming data from Kinesis Data Streams, set the source to Direct PUT and the destination to Elastic.
- Provide a meaningful Firehose stream name that will allow you to identify this delivery stream later.

For advanced use cases, source records can be transformed by invoking a custom Lambda function. This should not be required when using Elastic integrations.
Step 3: Specify the destination settings for your Firehose stream

From the Destination settings panel, specify the following settings. (A scripted sketch covering Steps 2 and 3 follows this list.)

- Elastic endpoint URL: Enter the Elasticsearch endpoint. To find it, go to the Elastic Cloud console, select your deployment, and use Copy endpoint next to Elasticsearch. Example: https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com.
- API key: Enter the encoded Elastic API key. You can create one in Kibana by following the instructions under API Keys. If you are using an API key with Restricted privileges, review the Indices privileges to grant at least "auto_configure" and "write" permissions for the indices you will be using with this delivery stream.
- Content encoding: To reduce data transfer costs, use GZIP encoding.
- Retry duration: Determines how long Firehose keeps retrying a request after an error. A duration between 60 and 300 seconds should suit most use cases.
- Parameters:
  - es_datastream_name: Optional. Sets the data stream where documents are stored. If not specified, logs are stored in the logs-awsfirehose-default data stream and metrics are stored in the metrics-aws.cloudwatch-default data stream.
  - include_cw_extracted_fields: Optional. Can be set when using a CloudWatch Logs subscription filter as the Firehose data source. When set to true, the extracted fields generated by the filter pattern in the subscription filter are collected. Setting this parameter can add many fields to each record and may significantly increase data volume in Elasticsearch, so use it only when the extracted fields are required for specific filtering or aggregation.
  - set_es_document_id: Optional. Controls whether Elasticsearch uses a calculated unique ID for each document or assigns each document a random ID. Defaults to true. When set to false, a random ID is used for each document, which helps indexing performance.
- Backup settings: Configuring S3 backup for failed records is recommended. These backups can be used to recover data lost to unforeseen service outages.
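If you'd rather script Steps 2 and 3, here is a minimal sketch using boto3. It assumes the console's Elastic destination corresponds to Firehose's HTTP endpoint destination type, and that the IAM role and S3 backup bucket already exist; all ARNs, names, and keys below are placeholders.

```python
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

firehose.create_delivery_stream(
    DeliveryStreamName="elastic-firehose-logs",  # the stream name from Step 2
    DeliveryStreamType="DirectPut",              # source: Direct PUT
    HttpEndpointDestinationConfiguration={
        "EndpointConfiguration": {
            "Url": "https://<deployment_name>.es.<region>.<csp>.elastic-cloud.com",
            "Name": "Elastic",
            "AccessKey": "<encoded_elastic_api_key>",  # the API key from Step 3
        },
        "RequestConfiguration": {
            "ContentEncoding": "GZIP",  # reduce data transfer costs
            "CommonAttributes": [
                # Optional Firehose parameters described in the list above:
                {"AttributeName": "es_datastream_name",
                 "AttributeValue": "logs-awsfirehose-default"},
            ],
        },
        "RetryOptions": {"DurationInSeconds": 300},
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-backup-role",  # placeholder
        "S3BackupMode": "FailedDataOnly",  # back up failed records only
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-backup-role",
            "BucketARN": "arn:aws:s3:::my-firehose-backup-bucket",  # placeholder
        },
    },
)
```

With this configuration, Firehose sends gzip-compressed batches to the Elastic endpoint and backs up only failed records to S3.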
Step 4: Send data to the Firehose delivery stream

You can configure a variety of log sources to send data to Firehose streams directly, for example VPC flow logs. Some services, such as CloudTrail and Lambda, don't support publishing logs directly to Firehose but do support publishing them to CloudWatch Logs. Refer to the AWS documentation for more information.
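For a quick end-to-end test of a Direct PUT stream, you can also push a record yourself. A minimal sketch, reusing the placeholder stream name from the earlier sketch:

```python
import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# Send one newline-delimited JSON record to the Direct PUT stream.
firehose.put_record(
    DeliveryStreamName="elastic-firehose-logs",  # placeholder stream name
    Record={"Data": (json.dumps({"message": "hello from firehose"}) + "\n").encode("utf-8")},
)
```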
For example, a typical workflow for sending CloudTrail logs to Firehose would be the following:
- Publish CloudTrail logs to a CloudWatch log group. Refer to the AWS documentation about publishing CloudTrail logs.
- Create a subscription filter in the CloudWatch log group to the Firehose stream (see the sketch after this list). Refer to the AWS documentation about using subscription filters.
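The second step of this workflow can be scripted with boto3's put_subscription_filter. In this sketch the log group name and both ARNs are placeholders, and the IAM role must allow CloudWatch Logs to write to the Firehose stream:

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Subscribe a CloudWatch log group to the Firehose delivery stream.
logs.put_subscription_filter(
    logGroupName="/aws/cloudtrail/my-trail",  # placeholder log group
    filterName="firehose-to-elastic",
    filterPattern="",  # an empty pattern forwards all log events
    destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/elastic-firehose-logs",
    roleArn="arn:aws:iam::123456789012:role/cwl-to-firehose-role",  # placeholder IAM role
)
```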
We also support sending CloudWatch monitoring metrics to Elastic using Firehose. For example, you can configure metrics ingestion by creating a metric stream through CloudWatch and selecting your existing Firehose stream with the Custom setup with Firehose option. For more information, refer to the AWS documentation about the custom setup with Firehose.
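A minimal sketch of the metric stream setup with boto3, assuming the Firehose stream already exists and the IAM role allows CloudWatch to write to it; the JSON output format and the namespace filter are assumptions, and all ARNs are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Stream CloudWatch metrics to the existing Firehose delivery stream.
cloudwatch.put_metric_stream(
    Name="elastic-metric-stream",
    FirehoseArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/elastic-firehose-metrics",
    RoleArn="arn:aws:iam::123456789012:role/metric-stream-to-firehose-role",
    OutputFormat="json",  # assumption: JSON-formatted metrics for the Elastic integration
    IncludeFilters=[{"Namespace": "AWS/EC2"}],  # optional: limit to specific namespaces
)
```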
For more information on Amazon Data Firehose, you can also check the Amazon Data Firehose Integrations documentation.