Functionbeat quick start: installation and configuration

This guide describes how to get started quickly monitoring data from your cloud services. You’ll learn how to:

  • download the Functionbeat distribution
  • configure details about the cloud functions you want to deploy, including the services to monitor and triggers
  • deploy the cloud functions to your serverless environment
  • collect data from cloud services and ship it to the Elastic Stack
  • visualize the data in Kibana

Step 1: Download Functionbeat

The Functionbeat distribution contains the command line tools, configuration file, and binary code required to run Functionbeat in your serverless environment.

To download and extract the package, use the commands that work with your system.

curl -L -O https://artifacts.elastic.co/downloads/beats/functionbeat/functionbeat-7.9.3-darwin-x86_64.tar.gz
tar xzvf functionbeat-7.9.3-darwin-x86_64.tar.gz
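
These commands fetch the macOS (darwin) package shown in this guide. If you're installing on Linux, for example, substitute the linux-x86_64 package (adjust the version to match the release you're using):

curl -L -O https://artifacts.elastic.co/downloads/beats/functionbeat/functionbeat-7.9.3-linux-x86_64.tar.gz
tar xzvf functionbeat-7.9.3-linux-x86_64.tar.gz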

Step 2: Connect to the Elastic Stack

Connections to Elasticsearch and Kibana are required to set up Functionbeat.

Set the connection information in functionbeat.yml. To locate this configuration file, see Directory layout.

Specify the cloud.id of your Elasticsearch Service, and set cloud.auth to a user who is authorized to set up Functionbeat. For example:

cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw=="
cloud.auth: "functionbeat_setup:YOUR_PASSWORD" 

This example shows a hard-coded password, but you should store sensitive values in environment variables.
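
For example, you might export the password in your shell:

export FUNCTIONBEAT_SETUP_PASSWORD=YOUR_PASSWORD

and then reference it in functionbeat.yml using environment-variable expansion (FUNCTIONBEAT_SETUP_PASSWORD is just an illustrative name):

cloud.auth: "functionbeat_setup:${FUNCTIONBEAT_SETUP_PASSWORD}"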

To learn more about required roles and privileges, see Grant users access to secured resources.

You can send data to other outputs, such as Logstash, but that requires additional configuration and setup.
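
As a minimal sketch, shipping events to a Logstash instance listening on localhost:5044 would look something like this in functionbeat.yml (remove or comment out the cloud.id, cloud.auth, and any Elasticsearch output settings first, because only one output can be enabled at a time):

output.logstash:
  hosts: ["localhost:5044"]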

Step 3: Configure cloud functions

Before deploying Functionbeat to your cloud provider, you need to specify details about the cloud functions that you want to deploy, including the function name and type, and the triggers that will cause the function to execute.

  1. In functionbeat.yml, configure the functions that you want to deploy. The configuration settings vary depending on the type of function and cloud provider you’re using. This section provides a couple of example configurations.

    • AWS example: This example configures a function called cloudwatch that collects events from CloudWatch Logs. When a message is sent to the specified log group, the cloud function executes and sends message events to the configured output:

      functionbeat.provider.aws.endpoint: "s3.amazonaws.com"
      functionbeat.provider.aws.deploy_bucket: "functionbeat-deploy" 
      functionbeat.provider.aws.functions:
        - name: cloudwatch 
          enabled: true
          type: cloudwatch_logs
          description: "lambda function for cloudwatch logs"
          triggers:
            - log_group_name: /aws/lambda/my-lambda-function

      deploy_bucket: A unique name for the S3 bucket to which the functions will be uploaded.

      functions: Details about the function you want to deploy, including the name of the function, the type of service to monitor, and the log groups that trigger the function.

      See AWS functions for more examples. For another SQS-based illustration, see the sketch after this list.

    • Google Cloud example: This example configures a function called storage that collects log events from Google Cloud Storage. When the specified event type occurs on the Cloud Storage bucket, the cloud function executes and sends events to the configured output:

      functionbeat.provider.gcp.location_id: "europe-west2"
      functionbeat.provider.gcp.project_id: "my-project-123456"
      functionbeat.provider.gcp.storage_name: "functionbeat-deploy" 
      functionbeat.provider.gcp.functions:
        - name: storage 
          enabled: true
          type: storage
          description: "Google Cloud Function for Cloud Storage"
          trigger:
            resource: "projects/my-project/buckets/my-storage"
            event_type: "google.storage.object.finalize"

      storage_name: The name of the GCP storage bucket where the function artifacts will be deployed.

      functions: Details about the function you want to deploy, including the name of the function, the type of resource to monitor, and the resource event that triggers the function.

      See Google functions for more examples.
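
As another AWS illustration (a minimal sketch only; the queue ARN and function name are placeholders), a function that reads messages from an Amazon SQS queue could be configured like this:

functionbeat.provider.aws.functions:
  - name: sqs
    enabled: true
    type: sqs
    description: "lambda function for SQS events"
    triggers:
      - event_source_arn: arn:aws:sqs:us-east-1:123456789012:my-events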

To test your configuration file, change to the directory where the Functionbeat binary is installed, and run Functionbeat in the foreground with the following options specified: ./functionbeat test config -e. Make sure your config files are in the path expected by Functionbeat (see Directory layout), or use the -c flag to specify the path to the config file.
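
For example, if functionbeat.yml lives somewhere other than the default directory, the invocation might look like this (the path is a placeholder):

./functionbeat test config -c /path/to/functionbeat.yml -e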

For more information about configuring Functionbeat, see Configure Functionbeat and the annotated reference file, functionbeat.reference.yml, that ships with the distribution.

Step 4: Set up assets

Functionbeat comes with predefined assets for parsing, indexing, and visualizing your data. To load these assets:

  1. Make sure the user specified in functionbeat.yml is authorized to set up Functionbeat.
  2. From the installation directory, run:

    ./functionbeat setup -e

    -e is optional and sends output to standard error instead of the configured log output.

This step loads the recommended index template for writing to Elasticsearch.

A connection to Elasticsearch (or Elasticsearch Service) is required to set up the initial environment. If you’re using a different output, such as Logstash, see Load the index template manually.
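
If you do ship to Logstash, the template is not loaded automatically. One common approach (a sketch, assuming Elasticsearch is reachable at localhost:9200) is to run a one-off setup command that temporarily points Functionbeat at Elasticsearch:

./functionbeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'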

Step 5: Deploy Functionbeat

To deploy Functionbeat functions to your cloud provider, either use the Functionbeat manager, as described here, or use your own deployment infrastructure.

If you change the configuration after deploying the function, use the update command to update your deployment.
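
For example, after editing the configuration for the cloudwatch function shown earlier, you might redeploy it with something like:

./functionbeat -v -e -d "*" update cloudwatch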

Deploy to AWS

  1. Make sure you have the credentials required to authenticate with AWS. You can set environment variables that contain your credentials:

    export AWS_ACCESS_KEY_ID=ABCDEFGHIJKLMNOPUSER
    export AWS_SECRET_ACCESS_KEY=EXAMPLE567890devgHIJKMLOPNQRSTUVZ1234KEY
    export AWS_DEFAULT_REGION=us-east-1

    Set AWS_DEFAULT_REGION to the region where your services are running.

  2. Make sure the user has the permissions required to deploy and run the function. For more information, see IAM permissions required for deployment.
  3. Deploy the cloud functions.

    For example, the following command deploys a function called cloudwatch:

    ./functionbeat -v -e -d "*" deploy cloudwatch

    The function is deployed to AWS and ready to send log events to the configured output.

    If deployment fails, see Common problems for help troubleshooting.

Deploy to Google Cloud Platform

This functionality is in beta and is subject to change. The design and code are less mature than official GA features and are provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.

  1. In Google Cloud, create a service account that has these required roles:

    • Cloud Functions Developer
    • Cloud Functions Service Agent
    • Service Account User
    • Storage Admin
    • Storage Object Admin

    See the Google Cloud documentation for more information about creating a service account.

  2. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to the JSON file that contains your service account key. For example:

    export GOOGLE_APPLICATION_CREDENTIALS="/path/to/myproject-5a90ee91d102.json"

  3. Deploy the cloud functions.

    For example, the following command deploys a function called storage:

    ./functionbeat -v -e -d "*" deploy storage

    The function is deployed to Google Cloud Platform and ready to send events to the configured output.

    If deployment fails, see Common problems for help troubleshooting.

Step 6: View your data in Kibana

There are currently no example dashboards available for Functionbeat.

To learn how to view and explore your data, see the Kibana User Guide.

What’s next?

Now that you have your cloud data streaming into Elasticsearch, learn how to unify your logs, metrics, uptime, and application performance data.

  1. Ingest data from other sources by installing and configuring other Elastic Beats:

    Elastic Beats    To capture

    Metricbeat       Infrastructure metrics
    Filebeat         Logs
    Winlogbeat       Windows event logs
    Heartbeat        Uptime information
    APM              Application performance metrics
    Auditbeat        Audit events

  2. Use the Observability apps in Kibana to search across all your data:

    Elastic apps     Use to

    Metrics app      Explore metrics about systems and services across your ecosystem
    Logs app         Tail related log data in real time
    Uptime app       Monitor availability issues across your apps and services
    APM app          Monitor application performance
    SIEM app         Analyze security events