Run Filebeat on Cloud Foundry
You can use Filebeat on Cloud Foundry to retrieve and ship logs.
Create Cloud Foundry credentials
To connect to loggregator and receive the logs, Filebeat requires credentials created with UAA. The uaac command creates the required credentials for connecting to loggregator.
uaac client add filebeat --name filebeat --secret changeme --authorized_grant_types client_credentials,refresh_token --authorities doppler.firehose,cloud_controller.admin_read_only
Use a unique secret: The uaac command shown here is an example. Remember to replace changeme with your secret, and update the filebeat.yml file to use your chosen secret.
Download Cloud Foundry deploy manifests
You deploy Filebeat as an application with no route. Cloud Foundry requires that three files exist inside a directory before Filebeat can be pushed. The commands below provide the basic steps for getting it up and running.
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.1.3-linux-x86_64.tar.gz
tar xzvf filebeat-8.1.3-linux-x86_64.tar.gz
cd filebeat-8.1.3-linux-x86_64
curl -L -O https://raw.githubusercontent.com/elastic/beats/8.1/deploy/cloudfoundry/filebeat/filebeat.yml
curl -L -O https://raw.githubusercontent.com/elastic/beats/8.1/deploy/cloudfoundry/filebeat/manifest.yml
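For orientation, a manifest.yml for running Filebeat as an application with no route generally looks something like the following sketch. The memory, buildpack, and command values here are assumptions; the downloaded manifest.yml is the authoritative version.
applications:
  - name: filebeat
    memory: 512M                  # the memory limit also determines the CPU share Cloud Foundry assigns
    instances: 1
    no-route: true                # Filebeat serves no HTTP traffic, so no route is needed
    health-check-type: process
    buildpacks:
      - binary_buildpack          # assumed buildpack; check the downloaded manifest
    command: ./filebeat -e -c filebeat.yml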
You need to modify the filebeat.yml file to set the api_address, client_id, and client_secret.
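A minimal sketch of the relevant part of filebeat.yml, where the API endpoint and credentials below are placeholders for your own values:
filebeat.inputs:
- type: cloudfoundry
  api_address: https://api.system.example.com   # placeholder Cloud Foundry API endpoint
  client_id: filebeat                           # UAA client created with uaac above
  client_secret: changeme                       # replace with your chosen secret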
Load Kibana dashboards
Filebeat comes packaged with various pre-built Kibana dashboards that you can use to visualize data in Kibana. If these dashboards are not already loaded into Kibana, you must run the Filebeat setup command. To learn how, see Load Kibana dashboards.
The setup command does not load the ingest pipelines used to parse log lines. By default, ingest pipelines are set up automatically the first time you run Filebeat and connect to Elasticsearch. If you are using an output other than Elasticsearch, such as Logstash, you need to load the index template, Kibana dashboards, and ingest pipelines manually.
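For example, assuming the nginx and system modules are enabled (placeholders for whichever modules you use) and Filebeat can reach Elasticsearch while setup runs, the manual steps would look roughly like this:
./filebeat setup --index-management                    # index template and ILM policy
./filebeat setup --dashboards                          # Kibana dashboards
./filebeat setup --pipelines --modules nginx,system    # ingest pipelines for the enabled modules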
Deploy Filebeat
To deploy Filebeat to Cloud Foundry, run:
cf push
To check the status, run:
$ cf apps
name       requested state   instances   memory   disk   urls
filebeat   started           1/1         512M     1G
Log events should start flowing to Elasticsearch. The events are annotated with metadata added by the add_cloudfoundry_metadata processor.
Scale Filebeat
A single instance of Filebeat can ship more than a hundred thousand events per minute. If your Cloud Foundry deployment is producing more events than Filebeat can collect and ship, the Firehose will start dropping events, and it will mark Filebeat as a slow consumer. If the problems persist, Filebeat may be disconnected from the Firehose. In such cases, you will need to scale Filebeat to avoid losing events.
The main settings you need to take into account are:
- The shard_id specified in the cloudfoundry input configuration. The Firehose will divide the events amongst all the Filebeat instances with the same value for this setting. All the instances with the same shard_id should have the same configuration (see the configuration sketch after this list).
- Number of Filebeat instances. When Filebeat is deployed as a Cloud Foundry application, it can be scaled up and down like any other application, with cf scale or by specifying the number of instances in the manifest.
- Output configuration. In some cases, you can fine-tune the output configuration to improve event throughput. Some outputs support multiple workers; the number of workers can be changed to take better advantage of the available resources.
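For example, the shard_id and output worker settings described above might be combined as in this sketch, where the shard name, Elasticsearch host, and worker count are illustrative only:
filebeat.inputs:
- type: cloudfoundry
  shard_id: filebeat              # all instances sharing this value split the Firehose event stream

output.elasticsearch:
  hosts: ["https://elasticsearch.example.com:9200"]
  worker: 2                       # some outputs support multiple workers per configured host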
Some basic recommendations for adjusting these settings when Filebeat is not able to collect all events:
- If Filebeat is hitting its CPU limits, you will need to increase the number of Filebeat instances deployed with the same shard_id (see the example after this list).
- If Filebeat has some spare CPU, there may be some backpressure from the output. Try increasing the number of workers in the output. If this doesn't help, the bottleneck may be in the network or in the service receiving the events sent by Filebeat.
- If you need to modify the memory limit of Filebeat, remember that the CPU shares assigned to Cloud Foundry applications depend on the configured memory limit. You may need to revisit the other recommendations after changing it.
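For instance, to scale out when Filebeat is CPU-bound, or to raise the memory limit (and with it the CPU share), you could run commands like the following; the application name and values are placeholders:
cf scale filebeat -i 3        # run three instances with the same shard_id
cf scale filebeat -m 1G       # raise the memory limit, which also raises the CPU share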