Ingest logs with Filebeat

If you haven’t already, you need to install Elasticsearch for storing and searching your data, and Kibana for visualizing and managing it. For more information, see Spin up the Elastic Stack.

Install and configure Filebeat on your servers to collect log events. Filebeat allows you to ship log data from sources that come in the form of files. It monitors the log files or locations that you specify, collects log events, and forwards them to Elasticsearch. To simplify collecting and parsing log formats for common applications such as Apache, MySQL, and Kafka, a number of modules are available.

Step 1: Install Filebeat

Install Filebeat on all the servers you want to monitor.

To download and install Filebeat, use the commands that work with your system:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.7.1-amd64.deb
sudo dpkg -i filebeat-8.7.1-amd64.deb
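
If your system uses RPM packages instead, such as Red Hat or CentOS, a similar pair of commands works (a sketch that mirrors the DEB example above, with the version pinned to match):

```shell
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.7.1-x86_64.rpm
sudo rpm -vi filebeat-8.7.1-x86_64.rpm
```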
Step 2: Connect to Elasticsearch and Kibana

Connections to Elasticsearch and Kibana are required to set up Filebeat.

Set the connection information in filebeat.yml. To locate this configuration file, see Directory layout.

Specify the cloud.id of your Elasticsearch Service, and set cloud.auth to a user who is authorized to set up Filebeat. For example:

cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw=="
cloud.auth: "filebeat_setup:YOUR_PASSWORD" 

This example shows a hard-coded password, but you should store sensitive values in the secrets keystore.
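
For example, after creating a keystore with filebeat keystore create and adding a secret named ES_PWD (a name chosen here for illustration) with filebeat keystore add ES_PWD, you can reference it in filebeat.yml instead of the literal password:

```yaml
cloud.auth: "filebeat_setup:${ES_PWD}"
```

Filebeat resolves the ${ES_PWD} reference from the keystore at runtime, so the password never appears in the configuration file.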

To learn more about required roles and privileges, see Grant users access to secured resources.

You can send data to other outputs, such as Logstash, but that requires additional configuration and setup.
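
As a minimal sketch, a Logstash output in filebeat.yml looks like this (the host and port are assumptions for a local Logstash instance listening on the conventional Beats port; remove or comment out the cloud.id and cloud.auth settings if you use a non-default output):

```yaml
output.logstash:
  hosts: ["localhost:5044"]
```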

Step 3: Enable and configure modules

Filebeat uses modules to collect and parse log data.

  1. Identify the modules you need to enable. To see a list of available modules, run:

    filebeat modules list

    Can’t find a module for your file type? Skip this section and configure the input manually.

  2. From the installation directory, enable one or more modules. For example, the following command enables the nginx module config:

    filebeat modules enable nginx
  3. In the module config under modules.d, change the module settings to match your environment. You must enable at least one fileset in the module. Filesets are disabled by default.

    For example, log locations are set based on the OS. If your logs aren’t in default locations, set the paths variable:

    - module: nginx
      access:
        enabled: true
        var.paths: ["/var/log/nginx/access.log*"] 

To see the full list of variables for a module, see the documentation under Modules.
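
If no module matches your file type, you can instead define an input directly in filebeat.yml. A minimal sketch using the filestream input type (the id and path are placeholders for your environment):

```yaml
filebeat.inputs:
  - type: filestream
    id: my-app-logs              # unique ID, required for filestream inputs
    paths:
      - /var/log/myapp/*.log    # adjust to the location of your log files
```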

To test your configuration file, change to the directory where the Filebeat binary is installed, and run Filebeat in the foreground with the following options specified: ./filebeat test config -e. Make sure your config files are in the path expected by Filebeat (see Directory layout), or use the -c flag to specify the path to the config file.
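
For example, assuming the default config location used by the DEB and RPM packages (an assumption; adjust the path for other install methods):

```shell
./filebeat test config -e -c /etc/filebeat/filebeat.yml
```

You can also verify that Filebeat can reach the configured output by running ./filebeat test output.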

For more information about configuring Filebeat, see the configuration documentation.

Step 4: Set up assets

Filebeat comes with predefined assets for parsing, indexing, and visualizing your data. To load these assets:

  1. Make sure the user specified in filebeat.yml is authorized to set up Filebeat.
  2. From the installation directory, run:

    filebeat setup -e

    -e is optional and sends output to standard error instead of the configured log output.

This step loads the recommended index template for writing to Elasticsearch and deploys the sample dashboards for visualizing the data in Kibana.

This step does not load the ingest pipelines used to parse log lines. By default, ingest pipelines are set up automatically the first time you run the module and connect to Elasticsearch.

A connection to Elasticsearch (or Elasticsearch Service) is required to set up the initial environment. If you’re using a different output, such as Logstash, see the documentation on manually loading the index template and dashboards.

Step 5: Start Filebeat

Before starting Filebeat, modify the user credentials in filebeat.yml and specify a user who is authorized to publish events.

To start Filebeat, run:

sudo service filebeat start

If you use an init.d script to start Filebeat, you can’t specify command line flags (see Command reference). To specify flags, start Filebeat in the foreground.

Also see Filebeat and systemd.
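
On systemd-based systems, a sketch of the equivalent commands (assuming the filebeat unit name installed by the DEB and RPM packages):

```shell
sudo systemctl enable filebeat   # start automatically at boot
sudo systemctl start filebeat
sudo systemctl status filebeat   # verify the service is running
```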

Filebeat should begin streaming events to Elasticsearch.

Step 6: Confirm logs are streaming

Let’s confirm your data is correctly streaming to your cloud instance.

  1. Launch Kibana:

    1. Log in to your Elastic Cloud account.
    2. Navigate to the Kibana endpoint in your deployment.
  2. Open the main menu, then click Discover.
  3. Select filebeat-* as your data view.

    Each document in the index that matches the filebeat-* data view is displayed. By default, Discover shows data for the last 15 minutes. If you have a time-based index, and no data displays, you might need to increase the time range.

    You can now search your log messages, filter your search results, add or remove fields, examine the document contents in either table or JSON format, and view a document in context.

Now let’s have a look at the Logs app.