CEF module
This is a module for receiving Common Event Format (CEF) data over Syslog. When messages are received over the syslog protocol, the syslog input parses the header and sets the timestamp value. Then the decode_cef processor is applied to parse the CEF-encoded data. The decoded data is written into a cef object field. Lastly, any Elastic Common Schema (ECS) fields that can be populated with the CEF data are populated.
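The module wires this pipeline up for you. Purely as an illustrative sketch (not the module's shipped configuration), a roughly equivalent manual setup using the syslog input and the decode_cef processor could look like this:

filebeat.inputs:
- type: syslog
  protocol.udp:
    host: "localhost:9003"    # same interface and port as the module defaults
  processors:
    - decode_cef:
        field: message        # parse the CEF payload; decoded data lands in the cef object field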
Set up and run the module
Before doing these steps, verify that Elasticsearch and Kibana are running and that Elasticsearch is ready to receive data from Filebeat.
If you’re running our hosted Elasticsearch Service on Elastic Cloud, or you’ve enabled security in Elasticsearch and Kibana, you need to specify additional connection information before setting up and running the module. See Quick start: modules for common log formats for the complete setup.
To set up and run the module:
-
Enable the module:
deb and rpm:
filebeat modules enable cef
mac:
./filebeat modules enable cef
brew:
filebeat modules enable cef
linux:
./filebeat modules enable cef
win:
PS > .\filebeat.exe modules enable cef
This command enables the module config defined in the modules.d directory. See Specify which modules to run for other ways to enable modules. To see a list of enabled and disabled modules, run:
deb and rpm:
filebeat modules list
mac:
./filebeat modules list
brew:
filebeat modules list
linux:
./filebeat modules list
win:
PS > .\filebeat.exe modules list
-
Set up the initial environment:
deb and rpm:
filebeat setup -e
mac:
./filebeat setup -e
linux:
./filebeat setup -e
brew:
filebeat setup -e
win:
PS > .\filebeat.exe setup -e
The setup command loads the recommended index template for writing to Elasticsearch and deploys the sample dashboards (if available) for visualizing the data in Kibana. This is a one-time setup step. The -e flag is optional and sends output to standard error instead of syslog. The ingest pipelines used to parse log lines are set up automatically the first time you run the module, assuming the Elasticsearch output is enabled. If you’re sending events to Logstash, or plan to use Beats central management, also see Load ingest pipelines manually; a command sketch follows these steps.
-
Run Filebeat.
If your logs aren’t in the default location, see Configure the module, then run Filebeat after you’ve set the paths variable.
deb and rpm:
service filebeat start
mac:
./filebeat -e
brew:
filebeat -e
linux:
./filebeat -e
win:
PS > Start-Service filebeat
If the module is configured correctly, you’ll see INFO Harvester started messages for each file specified in the config. Depending on how you’ve installed Filebeat, you might see errors related to file ownership or permissions when you try to run Filebeat modules. See Config File Ownership and Permissions in the Beats Platform Reference for more information.
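As mentioned in the setup step, if you need to load the ingest pipelines manually (for example, because events are routed through Logstash), a command along these lines is typically used; treat it as a sketch and confirm the flags against the linked reference for your version:

./filebeat setup --pipelines --modules cef

On Windows, use .\filebeat.exe instead of ./filebeat; on deb and rpm installs, use filebeat.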
Configure the module
You can further refine the behavior of the cef module by specifying variable settings in the modules.d/cef.yml file, or overriding settings at the command line.
Variable settings
Each fileset has separate variable settings for configuring the behavior of the module. If you don’t specify variable settings, the cef module uses the defaults.
For more information, see Specify variable settings. Also see Override input settings.
When you specify a setting at the command line, remember to prefix the setting with the module name, for example, cef.log.var.paths instead of log.var.paths.
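For instance, a variable can be overridden at run time with Filebeat's -M flag. This is a sketch only; the port value shown is arbitrary:

./filebeat -e --modules cef -M "cef.log.var.syslog_port=9004"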
log fileset settings
-
var.syslog_host
The interface to listen on for UDP-based syslog traffic. Defaults to localhost. Set to 0.0.0.0 to bind to all available interfaces.
-
var.syslog_port
The UDP port to listen on for syslog traffic. Defaults to 9003. Ports below 1024 require Filebeat to run as root.
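Putting the two settings together, a minimal modules.d/cef.yml could look like the following sketch (the values are examples; adjust them for your environment):

- module: cef
  log:
    enabled: true
    var.syslog_host: 0.0.0.0    # listen on all interfaces
    var.syslog_port: 9003       # default UDP port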
Fields
For a description of each field in the module, see the exported fields section.