Create rule
This API supports Token-based authentication only.
Creates a new detection rule.
Console supports only Elasticsearch APIs; it doesn't allow interactions with Kibana APIs such as this one. Use curl or another HTTP tool instead. For more information, refer to Run Elasticsearch API requests.
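For reference, a minimal curl sketch of a create-rule request might look like the following; the host, port, and API key are placeholders, and the kbn-xsrf header is required by Kibana for non-GET requests:

```
# Placeholder host, port, and API key; rule.json contains a request body as
# described below. The kbn-xsrf header is required for Kibana API write requests.
curl -X POST "https://<kibana host>:<port>/api/detection_engine/rules" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -H "Authorization: ApiKey <base64-encoded API key>" \
  -d @rule.json
```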
You can create the following types of rules:
- Custom query: Searches the defined indices and creates an alert when a document matches the rule's KQL query.
- Event correlation: Searches the defined indices and creates an alert when results match an Event Query Language (EQL) query.
- Threshold: Searches the defined indices and creates an alert when the number of times the specified field's value appears meets the threshold during a single execution. When multiple values meet the threshold, an alert is generated for each value. For example, if the threshold field is source.ip and its value is 10, an alert is generated for every source IP address that appears in at least 10 of the rule's search results. For more information, see Terms Aggregation.
- Indicator match: Creates an alert when event field values match values defined in the specified Elasticsearch index. For example, you can create an index of known-bad IP addresses and use it to generate an alert whenever an event's destination.ip equals a value in that index. The index's field mappings should be ECS-compliant.
- Machine learning: Creates an alert when a machine learning job discovers an anomaly above the defined threshold (see Anomaly Detection with Machine Learning).
To create machine learning rules, you must have the appropriate license or use a cloud deployment. Additionally, for the machine learning rule to function correctly, the associated machine learning job must be running.
To retrieve machine learning job IDs, which are required to create machine learning rules, call the Elasticsearch Get jobs API. Machine learning jobs that contain siem in the groups field can be used to create rules:
... "job_id": "linux_anomalous_network_activity_ecs", "job_type": "anomaly_detector", "job_version": "7.7.0", "groups": [ "auditbeat", "process", "siem" ], ...
Additionally, you can set up notifications for when rules create alerts. The notifications use the Kibana Alerting and Actions framework. Each action type requires a connector. Connectors store the information required to send notifications via external systems. The following connector types are supported for rule notifications:
- Slack
- PagerDuty
- Webhook
- Microsoft Teams
- IBM Resilient
- Jira
- ServiceNow ITSM
For more information on PagerDuty fields, see Send a v2 Event.
To retrieve connector IDs, which are required to configure rule notifications, call the Kibana Find objects API with "type": "action" in the request payload.
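For example, a minimal sketch of that lookup with curl (host, port, and API key are placeholders) passes the type as a query parameter and returns the connectors' id values:

```
# Finds saved objects of type "action" (connectors); note each returned object's "id".
curl -X GET "https://<kibana host>:<port>/api/saved_objects/_find?type=action" \
  -H "kbn-xsrf: true" \
  -H "Authorization: ApiKey <base64-encoded API key>"
```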
For detailed information on Kibana actions and alerting, and additional API calls, see the Kibana Alerting and Actions documentation.
Request URL
POST <kibana host>:<port>/api/detection_engine/rules
Request body
A JSON object that defines the rule's values:
- Required fields for all rule types
- Required field for query, indicator match, and threshold rules
- Required field for threshold rules
- Required field for saved query rules
- Required field for event correlation rules
- Required fields for machine learning rules
- Required fields for indicator match rules
- Optional fields for all rule types
- Optional fields for query, indicator match, and event correlation rules
- Optional fields for indicator match rules
- Optional fields for query, indicator match, and threshold rules
- Optional fields for event correlation, query, and threshold rules
Required fields for all rule types
| Name | Type | Description |
|---|---|---|
| description | String | The rule's description. |
| name | String | The rule's name. |
| risk_score | Integer | A numerical representation of the alert's severity from 0 to 100, where 0-21 is low, 22-47 is medium, 48-73 is high, and 74-100 is critical. |
| severity | String | Severity level of alerts produced by the rule, which must be one of: low, medium, high, or critical. |
| type | String | The rule type, which must be one of: eql (event correlation), machine_learning, query, saved_query, threshold, or threat_match (indicator match). |
Required field for query, indicator match, and threshold rules
| Name | Type | Description |
|---|---|---|
| query | String | Query used by the rule to create alerts. For indicator match rules, only the query's results are used to determine whether an alert is generated. |
Required field for threshold rules
| Name | Type | Description |
|---|---|---|
| threshold | Object | Defines the field and threshold value for when alerts are generated, where field (String) is the source event field on which results are aggregated and value (Integer) is the number of matching results required before an alert is generated. |
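For instance, the threshold object for the source.ip example mentioned earlier (at least 10 results per source IP address) looks like this:

```
"threshold": {
  "field": "source.ip",
  "value": 10
}
```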
Required field for saved query rules
| Name | Type | Description |
|---|---|---|
| saved_id | String | Kibana saved search used by the rule to create alerts. |
Required field for event correlation rules
| Name | Type | Description |
|---|---|---|
| language | String | Must be eql. |
Required fields for machine learning rules
| Name | Type | Description |
|---|---|---|
| anomaly_threshold | Integer | Anomaly score threshold above which the rule creates an alert. Valid values are from 0 to 100. |
| machine_learning_job_id | String[] | Machine learning job ID(s) the rule monitors for anomaly scores. |
Required fields for indicator match rules
| Name | Type | Description |
|---|---|---|
| threat_index | String[] | Elasticsearch indices used to check which field values generate alerts. |
| threat_query | String | Query used to determine which fields in the Elasticsearch index are used for generating alerts. |
| threat_mapping | Object[] | Array of objects that map event fields to fields in the threat index, where each object contains an entries array of field, type, and value properties. You can use Boolean and and or logic: entries within a single object are joined with and, while sibling threat_mapping objects are joined with or. |
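As a sketch of that Boolean logic (the same mappings used in Example 5 below), both entries in the first object must match (and), while the second object is an alternative condition (or):

```
"threat_mapping": [
  {
    "entries": [
      { "field": "destination.ip", "type": "mapping", "value": "destination.ip" },
      { "field": "destination.port", "type": "mapping", "value": "destination.port" }
    ]
  },
  {
    "entries": [
      { "field": "source.ip", "type": "mapping", "value": "host.ip" }
    ]
  }
]
```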
Optional fields for all rule types
| Name | Type | Description |
|---|---|---|
| actions | Object[] | Array defining the automated actions (notifications) taken when alerts are generated (see the actions schema below). Defaults to an empty array. |
| author | String[] | The rule's author. |
| building_block_type | String | Determines if the rule acts as a building block. By default, building-block alerts are not displayed in the UI. These rules are used as a foundation for other rules that do generate alerts. Its value must be default. |
| enabled | Boolean | Determines whether the rule is enabled. Defaults to true. |
| false_positives | String[] | String array used to describe common reasons why the rule may issue false-positive alerts. Defaults to an empty array. |
| from | String | Time from which data is analyzed each time the rule executes, using a date math range. For example, now-70m means the rule analyzes data from 70 minutes before its start time. Defaults to now-6m. |
| interval | String | Frequency of rule execution, using a date math range. For example, "1h" means the rule runs every hour. Defaults to 5m. |
| license | String | The rule's license. |
| max_signals | Integer | Maximum number of alerts the rule can create during a single execution. Defaults to 100. |
| meta | Object | Placeholder for metadata about the rule. |
| note | String | Notes to help investigate alerts produced by the rule. |
| output_index | String | Index to which alerts created by the rule are saved. If unspecified, alerts are saved to the .siem-signals-<space name> index (for example, .siem-signals-default). |
| references | String[] | Array containing notes about or references to relevant information about the rule. Defaults to an empty array. |
| rule_id | String | Unique ID used to identify the rule across systems, for example, when a rule is converted from a third-party security solution. Created automatically when it is not provided. |
| tags | String[] | String array containing words and phrases to help categorize, filter, and search rules. Defaults to an empty array. |
| threat | Object[] | Object array containing attack information about the type of threat the rule monitors (see ECS threat fields and the threat schema below). Defaults to an empty array. |
| throttle | String | Determines how often actions are taken: no_actions (never), rule (every time new alerts are generated), or a time interval such as 1h or 1d (at most once per interval). Required when actions are used to send notifications. |
| version | Integer | The rule's version number. Defaults to 1. |
Optional fields for query, indicator match, and event correlation rules
| Name | Type | Description |
|---|---|---|
| exceptions_list | Object[] | Array of exception containers, which define exceptions that prevent the rule from generating alerts even when its other criteria are met. Each object identifies an exception container by its id, its namespace_type (single or agnostic, depending on whether the container is shared across Kibana spaces), and its type (for example, detection), as shown in Example 2 below. |
Optional fields for indicator match rules
| Name | Type | Description |
|---|---|---|
| threat_filters | Object[] | Query and filter context array used to filter documents from the Elasticsearch index containing the threat values. |
| threat_indicator_path | String | Defines where the threat indicator is located within the indicator documents, much like the field path of an ingest processor. |
Optional fields for query, indicator match, and threshold rules
| Name | Type | Description |
|---|---|---|
| language | String | Determines the query language, which must be kuery or lucene. Defaults to kuery. |
Optional fields for event correlation, query, and threshold rules
| Name | Type | Description |
|---|---|---|
| filters | Object[] | The query and filter context array used to define the conditions for when alerts are created from events. Defaults to an empty array. |
| index | String[] | Indices on which the rule functions. Defaults to the Security Solution indices defined on the Kibana Advanced Settings page (Kibana → Stack Management → Advanced Settings → securitySolution:defaultIndex). Event correlation rules have a limitation that prevents them from querying multiple indices from different clusters (local and remote) without additional configuration. |
| risk_score_mapping | Object[] | Overrides generated alerts' risk_score with a value from the source event. Each mapping object defines the source field, operator, and value used for the override. Defaults to an empty array. |
| rule_name_override | String | Sets which field in the source event is used to populate the alert's rule name in the UI. |
| severity_mapping | Object[] | Overrides generated alerts' severity with values mapped from the source event. Each mapping object defines the source field, operator, value, and the severity level to assign when they match (see Example 2 below). Defaults to an empty array. |
| timestamp_override | String | Sets the time field used to query indices. When unspecified, rules query the @timestamp field. |
actions schema
All fields are required:
| Name | Type | Description |
|---|---|---|
| action_type_id | String | The connector type used for sending notifications: .slack, .pagerduty, .webhook, .teams, .resilient, .jira, or .servicenow. |
| group | String | Optionally groups actions by use cases. Use default for alert notifications. |
| id | String | The connector ID. |
| params | Object | Object containing the allowed connector fields, which vary according to the connector type. For Slack connectors, for example, the field is message (see Example 3 below). |
All text fields (such as message fields) can contain placeholders for rule and alert details:
- {{state.signals_count}}: Number of alerts detected
- {{context.alerts}}: Array of detected alerts
- {{{context.results_link}}}: URL to the alerts in Kibana
- {{context.rule.anomaly_threshold}}: Anomaly threshold score above which alerts are generated (machine learning rules only)
- {{context.rule.description}}: Rule description
- {{context.rule.false_positives}}: Rule false positives
- {{context.rule.filters}}: Rule filters (query rules only)
- {{context.rule.id}}: Unique rule ID returned after creating the rule
- {{context.rule.index}}: Indices the rule runs on (query rules only)
- {{context.rule.language}}: Rule query language (query rules only)
- {{context.rule.machine_learning_job_id}}: ID of the associated machine learning job (machine learning rules only)
- {{context.rule.max_signals}}: Maximum allowed number of alerts per rule execution
- {{context.rule.name}}: Rule name
- {{context.rule.output_index}}: Index to which alerts are written
- {{context.rule.query}}: Rule query (query rules only)
- {{context.rule.references}}: Rule references
- {{context.rule.risk_score}}: Rule risk score
- {{context.rule.rule_id}}: Generated or user-defined rule ID that can be used as an identifier across systems
- {{context.rule.saved_id}}: Saved search ID
- {{context.rule.severity}}: Rule severity
- {{context.rule.threat}}: Rule threat framework
- {{context.rule.threshold}}: Rule threshold values (threshold rules only)
- {{context.rule.timeline_id}}: Associated timeline ID
- {{context.rule.timeline_title}}: Associated timeline name
- {{context.rule.type}}: Rule type
- {{context.rule.version}}: Rule version
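For example, a Slack action that uses some of these placeholders in its message might look like the following sketch (the connector id is a placeholder):

```
"actions": [
  {
    "action_type_id": ".slack",
    "group": "default",
    "id": "<connector id>",
    "params": {
      "message": "{{context.rule.name}} created {{state.signals_count}} alerts: {{{context.results_link}}}"
    }
  }
]
```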
threat schema
All fields are required:
| Name | Type | Description |
|---|---|---|
| framework | String | Relevant attack framework, for example MITRE ATT&CK. |
| tactic | Object | Object containing information on the attack type: its id, name, and reference URL. |
| technique | Array | Array containing information on the attack techniques (optional): each with an id, name, and reference URL. |
| subtechnique | Array | Array containing more specific information on the attack technique: each with an id, name, and reference URL. |
Only threats described using the MITRE ATT&CK™ framework are displayed in the UI (Detections → Manage detection rules → <rule name>).
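As a sketch (the tactic and technique entries below are illustrative MITRE ATT&CK examples, not values from this page), a threat object could look like:

```
"threat": [
  {
    "framework": "MITRE ATT&CK",
    "tactic": {
      "id": "TA0006",
      "name": "Credential Access",
      "reference": "https://attack.mitre.org/tactics/TA0006/"
    },
    "technique": [
      {
        "id": "T1110",
        "name": "Brute Force",
        "reference": "https://attack.mitre.org/techniques/T1110/",
        "subtechnique": [
          {
            "id": "T1110.001",
            "name": "Password Guessing",
            "reference": "https://attack.mitre.org/techniques/T1110/001/"
          }
        ]
      }
    ]
  }
]
```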
Example requests
Example 1
Query rule that searches for processes started by MS Office:
```
POST api/detection_engine/rules
{
  "rule_id": "process_started_by_ms_office_program",
  "risk_score": 50,
  "description": "Process started by MS Office program - possible payload",
  "interval": "1h",
  "name": "MS Office child process",
  "severity": "low",
  "tags": [
    "child process",
    "ms office"
  ],
  "type": "query",
  "from": "now-70m",
  "query": "process.parent.name:EXCEL.EXE or process.parent.name:MSPUB.EXE or process.parent.name:OUTLOOK.EXE or process.parent.name:POWERPNT.EXE or process.parent.name:VISIO.EXE or process.parent.name:WINWORD.EXE",
  "language": "kuery",
  "filters": [
    {
      "query": {
        "match": {
          "event.action": {
            "query": "Process Create (rule: ProcessCreate)",
            "type": "phrase"
          }
        }
      }
    }
  ],
  "enabled": false
}
```
- interval: "1h" means the rule runs every hour.
- from: "now-70m" means that each time the rule runs, it analyzes data from 70 minutes before its start time. If the rule starts to run at 15:00, it analyzes data from 13:50 until 15:00. When it runs next, at 16:00, it analyzes data from 14:50 until 16:00.
Example 2
Threshold rule that detects multiple failed login attempts to a Windows host from the same external source IP address, and maps custom source event field values to alert severity levels:
```
POST api/detection_engine/rules
{
  "description": "Detects when there are 20 or more failed login attempts from the same IP address with a 2 minute time frame.",
  "enabled": true,
  "exceptions_list": [
    {
      "id": "int-ips",
      "namespace_type": "single",
      "type": "detection"
    }
  ],
  "from": "now-180s",
  "index": [
    "winlogbeat-*"
  ],
  "interval": "2m",
  "name": "Liverpool Windows server prml-19",
  "query": "host.name:prml-19 and event.category:authentication and event.outcome:failure",
  "risk_score": 30,
  "rule_id": "liv-win-ser-logins",
  "severity": "low",
  "severity_mapping": [
    {
      "field": "source.geo.city_name",
      "operator": "equals",
      "severity": "low",
      "value": "Manchester"
    },
    {
      "field": "source.geo.city_name",
      "operator": "equals",
      "severity": "medium",
      "value": "London"
    },
    {
      "field": "source.geo.city_name",
      "operator": "equals",
      "severity": "high",
      "value": "Birmingham"
    },
    {
      "field": "source.geo.city_name",
      "operator": "equals",
      "severity": "critical",
      "value": "Wallingford"
    }
  ],
  "tags": [
    "Brute force"
  ],
  "threshold": {
    "field": "source.ip",
    "value": 20
  },
  "type": "threshold"
}
```
- exceptions_list: Exception list container used to exclude internal IP addresses.
- severity_mapping: Alert severity levels are mapped according to the defined field values.
- threshold: Alerts are generated when the same source IP address is discovered in at least 20 results.
Example 3
Machine learning rule that creates alerts, and sends Slack notifications, when the linux_anomalous_network_activity_ecs machine learning job discovers anomalies with a score of 70 or above:
```
POST api/detection_engine/rules
{
  "anomaly_threshold": 70,
  "rule_id": "ml_linux_network_high_threshold",
  "risk_score": 70,
  "machine_learning_job_id": "linux_anomalous_network_activity_ecs",
  "description": "Generates alerts when the job discovers anomalies over 70",
  "interval": "5m",
  "name": "Anomalous Linux network activity",
  "note": "Shut down the internet.",
  "severity": "high",
  "tags": [
    "machine learning",
    "Linux"
  ],
  "type": "machine_learning",
  "from": "now-6m",
  "enabled": true,
  "throttle": "rule",
  "actions": [
    {
      "action_type_id": ".slack",
      "group": "default",
      "id": "5ad22cd5-5e6e-4c6c-a81a-54b626a4cec5",
      "params": {
        "message": "Urgent: {{context.rule.description}}"
      }
    }
  ]
}
```
Example 4
Event correlation rule that creates alerts when the Windows rundll32.exe process makes unusual network connections:
```
POST api/detection_engine/rules
{
  "rule_id": "eql-outbound-rundll32-connections",
  "risk_score": 21,
  "description": "Unusual rundll32.exe network connection",
  "name": "rundll32.exe network connection",
  "severity": "low",
  "tags": [
    "EQL",
    "Windows",
    "rundll32.exe"
  ],
  "type": "eql",
  "language": "eql",
  "query": "sequence by process.entity_id with maxspan=2h [process where event.type in (\"start\", \"process_started\") and (process.name == \"rundll32.exe\" or process.pe.original_file_name == \"rundll32.exe\") and ((process.args == \"rundll32.exe\" and process.args_count == 1) or (process.args != \"rundll32.exe\" and process.args_count == 0))] [network where event.type == \"connection\" and (process.name == \"rundll32.exe\" or process.pe.original_file_name == \"rundll32.exe\")]"
}
```
Example 5
Indicator match rule that creates an alert when one of the following is true:
- The event's destination IP address and port number match destination IP and port values in the threat_index index.
- The event's source IP address matches a host IP address value in the threat_index index.
```
POST api/detection_engine/rules
{
  "type": "threat_match",
  "index": [
    "packetbeat-*"
  ],
  "query": "destination.ip:* or host.ip:*",
  "threat_index": [
    "ip-threat-list"
  ],
  "threat_query": "*:*",
  "threat_mapping": [
    {
      "entries": [
        {
          "field": "destination.ip",
          "type": "mapping",
          "value": "destination.ip"
        },
        {
          "field": "destination.port",
          "type": "mapping",
          "value": "destination.port"
        }
      ]
    },
    {
      "entries": [
        {
          "field": "source.ip",
          "type": "mapping",
          "value": "host.ip"
        }
      ]
    }
  ],
  "risk_score": 50,
  "severity": "medium",
  "name": "Bad IP threat match",
  "description": "Checks for bad IP addresses listed in the ip-threat-list index"
}
```
- threat_index: The Elasticsearch index used for matching threat values.
- threat_query: Query defining which threat index fields are used for matching values. In this example, all values from the ip-threat-list index are used (*:*).
- Multiple entries in a single threat_mapping object are joined with AND logic: an alert requires both the destination IP address and the destination port to match.
- Sibling threat_mapping objects are joined with OR logic: an alert is also generated when only the event's source IP address matches a host IP value in the threat index.
Response code
- 200: Indicates a successful call.
Response payload
A JSON object that includes a unique ID, the time the rule was created, and its version number. If the request payload did not include a rule_id field, a unique rule ID is also generated.
Example response for a query rule:
{ "created_at": "2020-04-07T14:51:09.755Z", "updated_at": "2020-04-07T14:51:09.970Z", "created_by": "LiverpoolFC", "description": "Process started by MS Office program - possible payload", "enabled": false, "false_positives": [], "from": "now-70m", "id": "6541b99a-dee9-4f6d-a86d-dbd1869d73b1", "immutable": false, "interval": "1h", "rule_id": "process_started_by_ms_office_program", "output_index": ".siem-signals-default", "max_signals": 100, "risk_score": 50, "name": "MS Office child process", "references": [], "severity": "low", "updated_by": "LiverpoolFC", "tags": [ "child process", "ms office" ], "to": "now", "type": "query", "threat": [], "version": 1, "actions": [], "filters": [ { "query": { "match": { "event.action": { "query": "Process Create (rule: ProcessCreate)", "type": "phrase" } } } } ], "throttle": "no_actions", "query": "process.parent.name:EXCEL.EXE or process.parent.name:MSPUB.EXE or process.parent.name:OUTLOOK.EXE or process.parent.name:POWERPNT.EXE or process.parent.name:VISIO.EXE or process.parent.name:WINWORD.EXE", "language": "kuery" }
Example response for a machine learning job rule:
{ "created_at": "2020-04-07T14:45:15.679Z", "updated_at": "2020-04-07T14:45:15.892Z", "created_by": "LiverpoolFC", "description": "Generates alerts when the job discovers anomalies over 70", "enabled": true, "false_positives": [], "from": "now-6m", "id": "83876f66-3a57-4a99-bf37-416494c80f3b", "immutable": false, "interval": "5m", "rule_id": "ml_linux_network_high_threshold", "output_index": ".siem-signals-default", "max_signals": 100, "risk_score": 70, "name": "Anomalous Linux network activity", "references": [], "severity": "high", "updated_by": "LiverpoolFC", "tags": [ "machine learning", "Linux" ], "to": "now", "type": "machine_learning", "threat": [], "version": 1, "actions": [ { "action_type_id": ".slack", "group": "default", "id": "5ad22cd5-5e6e-4c6c-a81a-54b626a4cec5", "params": { "message": "Urgent: {{context.rule.description}}" } } ], "throttle": "rule", "note": "Shut down the internet.", "status": "going to run", "status_date": "2020-04-07T14:45:21.685Z", "anomaly_threshold": 70, "machine_learning_job_id": "linux_anomalous_network_activity_ecs" }
Example response for a threshold rule:
{ "author": [], "created_at": "2020-07-22T10:27:23.486Z", "updated_at": "2020-07-22T10:27:23.673Z", "created_by": "LiverpoolFC", "description": "Detects when there are 20 or more failed login attempts from the same IP address with a 2 minute time frame.", "enabled": true, "false_positives": [], "from": "now-180s", "id": "15dbde26-b627-4d74-bb1f-a5e0ed9e4993", "immutable": false, "interval": "2m", "rule_id": "liv-win-ser-logins", "output_index": ".siem-signals-default", "max_signals": 100, "risk_score": 30, "risk_score_mapping": [], "name": "Liverpool Windows server prml-19", "references": [], "severity": "low", "severity_mapping": [ { "field": "source.geo.city_name", "operator": "equals", "severity": "low", "value": "Manchester" }, { "field": "source.geo.city_name", "operator": "equals", "severity": "medium", "value": "London" }, { "field": "source.geo.city_name", "operator": "equals", "severity": "high", "value": "Birmingham" }, { "field": "source.geo.city_name", "operator": "equals", "severity": "critical", "value": "Wallingford" } ], "updated_by": "LiverpoolFC", "tags": [ "Brute force" ], "to": "now", "type": "threshold", "threat": [], "version": 1, "exceptions_list": [ { "id": "int-ips", "namespace_type": "single", "type": "detection" } ], "actions": [], "index": [ "winlogbeat-*" ], "throttle": "no_actions", "query": "host.name:prml-19 and event.category:authentication and event.outcome:failure", "language": "kuery", "threshold": { "field": "source.ip", "value": 20 } }
Example response for an EQL rule:
{ "author": [], "created_at": "2020-10-05T09:06:16.392Z", "updated_at": "2020-10-05T09:06:16.403Z", "created_by": "Liverpool", "description": "Unusual rundll32.exe network connection", "enabled": true, "false_positives": [], "from": "now-6m", "id": "93808cae-b05b-4dc9-8479-73574b50f8b1", "immutable": false, "interval": "5m", "rule_id": "eql-outbound-rundll32-connections", "output_index": ".siem-signals-default", "max_signals": 100, "risk_score": 21, "risk_score_mapping": [], "name": "rundll32.exe network connection", "references": [], "severity": "low", "severity_mapping": [], "updated_by": "Liverpool", "tags": [ "EQL", "Windows", "rundll32.exe" ], "to": "now", "type": "eql", "threat": [], "version": 1, "exceptions_list": [], "actions": [], "throttle": "no_actions", "query": "sequence by process.entity_id with maxspan=2h [process where event.type in (\"start\", \"process_started\") and (process.name == \"rundll32.exe\" or process.pe.original_file_name == \"rundll32.exe\") and ((process.args == \"rundll32.exe\" and process.args_count == 1) or (process.args != \"rundll32.exe\" and process.args_count == 0))] [network where event.type == \"connection\" and (process.name == \"rundll32.exe\" or process.pe.original_file_name == \"rundll32.exe\")]", "language": "eql" }
Example response for an indicator match rule:
{ "author": [], "created_at": "2020-10-06T07:07:58.227Z", "updated_at": "2020-10-06T07:07:58.237Z", "created_by": "Liverpool", "description": "Checks for bad IP addresses listed in the ip-threat-list index", "enabled": true, "false_positives": [], "from": "now-6m", "id": "d5daa13f-81fb-4b13-be2f-31011e1d9ae1", "immutable": false, "interval": "5m", "rule_id": "608501e4-c768-4f64-9326-cec55b5d439b", "output_index": ".siem-signals-default", "max_signals": 100, "risk_score": 50, "risk_score_mapping": [], "name": "Bad IP threat match", "references": [], "severity": "medium", "severity_mapping": [], "updated_by": "Liverpool", "tags": [], "to": "now", "type": "threat_match", "threat": [], "version": 1, "exceptions_list": [], "actions": [], "index": [ "packetbeat-*" ], "throttle": "no_actions", "query": "destination.ip:* or host.ip:*", "language": "kuery", "threat_query": "*:*", "threat_index": [ "ip-threat-list" ], "threat_mapping": [ { "entries": [ { "field": "destination.ip", "type": "mapping", "value": "destination.ip" }, { "field": "destination.port", "type": "mapping", "value": "destination.port" } ] }, { "entries": [ { "field": "source.ip", "type": "mapping", "value": "host.ip" } ] } ] }