Delete By Query API
The simplest usage of _delete_by_query
just performs a deletion on every
document that matches a query. Here is the API, using an illustrative
twitter index and match query:
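POST twitter/_delete_by_query
{
  "query": {
    "match": {
      "message": "some message"
    }
  }
}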
The query must be passed as a value to the query key, in the same way as the
Search API.
That will return something like this:
{ "took" : 147, "timed_out": false, "deleted": 119, "batches": 1, "version_conflicts": 0, "noops": 0, "retries": { "bulk": 0, "search": 0 }, "throttled_millis": 0, "requests_per_second": -1.0, "throttled_until_millis": 0, "total": 119, "failures" : [ ] }
_delete_by_query
gets a snapshot of the index when it starts and deletes what
it finds using internal
versioning. That means that you’ll get a version
conflict if the document changes between the time when the snapshot was taken
and when the delete request is processed. When the versions match, the document
is deleted.
Since internal
versioning does not support the value 0 as a valid
version number, documents with version equal to zero cannot be deleted using
_delete_by_query
and will fail the request.
During the _delete_by_query
execution, multiple search requests are sequentially
executed in order to find all the matching documents to delete. Every time a batch
of documents is found, a corresponding bulk request is executed to delete all
these documents. If a search or bulk request is rejected, _delete_by_query
relies on a default policy to retry rejected requests (up to 10 times, with
exponential back off). Reaching the maximum retries limit causes the _delete_by_query
to abort, and all failures are returned in the failures field
of the response.
Deletions that have already been performed still stick. In other words, the process
is not rolled back, only aborted. While the first failure causes the abort, all
failures returned by the failing bulk request are collected in the failures
element, so it’s possible for there to be quite a few failed entities.
If you’d like to count version conflicts rather than cause them to abort, then
set conflicts=proceed
on the URL or "conflicts": "proceed"
in the request body.
Back to the API format, this will delete tweets from the twitter
index:
POST twitter/_doc/_delete_by_query?conflicts=proceed
{
  "query": {
    "match_all": {}
  }
}
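The same option can also be supplied in the request body instead of the URL; a
minimal sketch:
POST twitter/_delete_by_query
{
  "conflicts": "proceed",
  "query": {
    "match_all": {}
  }
}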
It’s also possible to delete documents of multiple indexes and multiple types at once, just like the search API:
POST twitter,blog/_docs,post/_delete_by_query
{
  "query": {
    "match_all": {}
  }
}
If you provide routing
then the routing is copied to the scroll query,
limiting the process to the shards that match that routing value:
POST twitter/_delete_by_query?routing=1
{
  "query": {
    "range" : {
      "age" : {
        "gte" : 10
      }
    }
  }
}
By default _delete_by_query
uses scroll batches of 1000. You can change the
batch size with the scroll_size
URL parameter:
POST twitter/_delete_by_query?scroll_size=5000
{
  "query": {
    "term": {
      "user": "kimchy"
    }
  }
}
URL Parameters
In addition to the standard parameters like pretty, the delete by query API
also supports refresh, wait_for_completion, wait_for_active_shards, timeout,
and scroll.
Sending the refresh parameter will refresh all shards involved in the delete by query
once the request completes. This is different from the delete API’s refresh
parameter, which causes just the shard that received the delete request
to be refreshed. Also, unlike the delete API, it does not support wait_for.
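For example, to refresh all involved shards once the request completes (the
match_all query is illustrative):
POST twitter/_delete_by_query?refresh
{
  "query": {
    "match_all": {}
  }
}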
If the request contains wait_for_completion=false
then Elasticsearch will
perform some preflight checks, launch the request, and then return a task
which can be used with Tasks APIs
to cancel or get the status of the task. Elasticsearch will also create a
record of this task as a document at .tasks/task/${taskId}
. This is yours
to keep or remove as you see fit. When you are done with it, delete it so
Elasticsearch can reclaim the space it uses.
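As a sketch, you might launch the request as a task and later remove its
record; ${taskId} below is a placeholder for the task id returned by the first
call:
POST twitter/_delete_by_query?wait_for_completion=false
{
  "query": {
    "match_all": {}
  }
}

DELETE .tasks/task/${taskId}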
wait_for_active_shards
controls how many copies of a shard must be active
before proceeding with the request. See here
for details. timeout
controls how long each write request waits for unavailable
shards to become available. Both work exactly how they work in the
Bulk API. As _delete_by_query
uses scroll search, you can also specify
the scroll
parameter to control how long it keeps the "search context" alive,
e.g. ?scroll=10m
. By default it’s 5 minutes.
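A single request can combine these settings; the values below are illustrative:
POST twitter/_delete_by_query?wait_for_active_shards=2&timeout=5m&scroll=10m
{
  "query": {
    "match_all": {}
  }
}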
requests_per_second can be set to any positive decimal number (1.4, 6, 1000,
etc.) and throttles the rate at which delete by query issues batches of
delete operations by padding each batch with a wait time. The throttling can be
disabled by setting requests_per_second to -1.
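For example, to throttle batches to roughly 500 delete operations per second
(the query is illustrative):
POST twitter/_delete_by_query?requests_per_second=500
{
  "query": {
    "match_all": {}
  }
}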
The throttling is done by waiting between batches so that the scroll that
_delete_by_query uses internally can be given a timeout that takes the
padding into account. The padding time is the difference between the batch size
divided by the requests_per_second and the time spent writing. By default the
batch size is 1000, so if requests_per_second is set to 500:
target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds
Since the batch is issued as a single _bulk
request, large batch sizes will
cause Elasticsearch to create many requests and then wait for a while before
starting the next set. This is "bursty" instead of "smooth". The default is -1
.
Response body
The JSON response looks like this:
{ "took" : 147, "timed_out": false, "total": 119, "deleted": 119, "batches": 1, "version_conflicts": 0, "noops": 0, "retries": { "bulk": 0, "search": 0 }, "throttled_millis": 0, "requests_per_second": -1.0, "throttled_until_millis": 0, "failures" : [ ] }
- took: The number of milliseconds from start to end of the whole operation.
- timed_out: This flag is set to true if any of the requests executed during
  the delete by query execution timed out.
- total: The number of documents that were successfully processed.
- deleted: The number of documents that were successfully deleted.
- batches: The number of scroll responses pulled back by the delete by query.
- version_conflicts: The number of version conflicts that the delete by query
  hit.
- noops: This field is always equal to zero for delete by query. It only
  exists so that delete by query, update by query, and reindex APIs return
  responses with the same structure.
- retries: The number of retries attempted by delete by query. bulk is the
  number of bulk actions retried, and search is the number of search actions
  retried.
- throttled_millis: Number of milliseconds the request slept to conform to
  requests_per_second.
- requests_per_second: The number of requests per second effectively executed
  during the delete by query.
- throttled_until_millis: This field should always be equal to zero in a
  _delete_by_query response. It only has meaning when using the Task API,
  where it indicates the next time (in milliseconds since epoch) a throttled
  request will be executed again in order to conform to requests_per_second.
- failures: Array of failures if there were any unrecoverable errors during
  the process. If this is non-empty then the request aborted because of those
  failures. Delete by query is implemented using batches, and any failure
  causes the entire process to abort, but all failures in the current batch
  are collected into the array. You can use the conflicts option to prevent
  delete by query from aborting on version conflicts.
Works with the Task API
You can fetch the status of any running delete by query requests with the Task API:
GET _tasks?detailed=true&actions=*/delete/byquery
The response looks like:
{ "nodes" : { "r1A2WoRbTwKZ516z6NEs5A" : { "name" : "r1A2WoR", "transport_address" : "127.0.0.1:9300", "host" : "127.0.0.1", "ip" : "127.0.0.1:9300", "attributes" : { "testattr" : "test", "portsfile" : "true" }, "tasks" : { "r1A2WoRbTwKZ516z6NEs5A:36619" : { "node" : "r1A2WoRbTwKZ516z6NEs5A", "id" : 36619, "type" : "transport", "action" : "indices:data/write/delete/byquery", "status" : { "total" : 6154, "updated" : 0, "created" : 0, "deleted" : 3500, "batches" : 36, "version_conflicts" : 0, "noops" : 0, "retries": 0, "throttled_millis": 0 }, "description" : "" } } } } }
This object contains the actual status. It is just like the response JSON
with the important addition of the total field. total is the total number of
operations that the delete by query expects to perform. You can estimate the
progress by adding the updated, created, and deleted fields. The request will
finish when their sum is equal to the total field.
With the task id you can look up the task directly:
GET /_tasks/r1A2WoRbTwKZ516z6NEs5A:36619
The advantage of this API is that it integrates with wait_for_completion=false
to transparently return the status of completed tasks. If the task is completed
and wait_for_completion=false
was set on it then it’ll come back with
results
or an error
field. The cost of this feature is the document that
wait_for_completion=false
creates at .tasks/task/${taskId}
. It is up to
you to delete that document.
Works with the Cancel Task API
Any delete by query can be canceled using the task cancel API:
POST _tasks/r1A2WoRbTwKZ516z6NEs5A:36619/_cancel
The task ID can be found using the tasks API.
Cancellation should happen quickly but might take a few seconds. The task status API above will continue to list the delete by query task until this task checks that it has been cancelled and terminates itself.
Rethrottling
The value of requests_per_second
can be changed on a running delete by query
using the _rethrottle
API:
POST _delete_by_query/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1
The task ID can be found using the tasks API.
Just like when setting it on the delete by query API, requests_per_second
can be either -1
to disable throttling or any decimal number
like 1.7
or 12
to throttle to that level. Rethrottling that speeds up the
query takes effect immediately, but rethrottling that slows down the query will
take effect after completing the current batch. This prevents scroll
timeouts.
Slicing
Delete by query supports sliced scroll to parallelize the deleting process. This parallelization can improve efficiency and provide a convenient way to break the request down into smaller parts.
Manual slicing
Slice a delete by query manually by providing a slice id and total number of slices to each request:
POST twitter/_delete_by_query { "slice": { "id": 0, "max": 2 }, "query": { "range": { "likes": { "lt": 10 } } } } POST twitter/_delete_by_query { "slice": { "id": 1, "max": 2 }, "query": { "range": { "likes": { "lt": 10 } } } }
Which you can verify works with:
GET _refresh

POST twitter/_search?size=0&filter_path=hits.total
{
  "query": {
    "range": {
      "likes": {
        "lt": 10
      }
    }
  }
}
Which results in a sensible total
like this one:
{ "hits": { "total": 0 } }
Automatic slicing
You can also let delete by query automatically parallelize using
sliced scroll to slice on _uid. Use slices to specify the number of
slices to use:
POST twitter/_delete_by_query?refresh&slices=5
{
  "query": {
    "range": {
      "likes": {
        "lt": 10
      }
    }
  }
}
Which you also can verify works with:
POST twitter/_search?size=0&filter_path=hits.total
{
  "query": {
    "range": {
      "likes": {
        "lt": 10
      }
    }
  }
}
Which results in a sensible total
like this one:
{ "hits": { "total": 0 } }
Setting slices
to auto
will let Elasticsearch choose the number of slices
to use. This setting will use one slice per shard, up to a certain limit. If
there are multiple source indices, it will choose the number of slices based
on the index with the smallest number of shards.
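For example, to let Elasticsearch pick the slice count (a sketch mirroring the
request above):
POST twitter/_delete_by_query?refresh&slices=auto
{
  "query": {
    "range": {
      "likes": {
        "lt": 10
      }
    }
  }
}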
Adding slices
to _delete_by_query
just automates the manual process used in
the section above, creating sub-requests which means it has some quirks:
- You can see these requests in the Tasks APIs. These sub-requests are "child"
  tasks of the task for the request with slices.
- Fetching the status of the task for the request with slices only contains
  the status of completed slices.
- These sub-requests are individually addressable for things like cancellation
  and rethrottling.
- Rethrottling the request with slices will rethrottle the unfinished
  sub-requests proportionally.
- Canceling the request with slices will cancel each sub-request.
- Due to the nature of slices each sub-request won’t get a perfectly even
  portion of the documents. All documents will be addressed, but some slices
  may be larger than others. Expect larger slices to have a more even
  distribution.
- Parameters like requests_per_second and size on a request with slices are
  distributed proportionally to each sub-request. Combine that with the point
  above about distribution being uneven and you should conclude that using
  size with slices might not result in exactly size documents being deleted.
- Each sub-request gets a slightly different snapshot of the source index,
  though these are all taken at approximately the same time.
Picking the number of slices
If slicing automatically, setting slices
to auto
will choose a reasonable
number for most indices. If you’re slicing manually or otherwise tuning
automatic slicing, use these guidelines.
Query performance is most efficient when the number of slices
is equal to the
number of shards in the index. If that number is large (for example,
500), choose a lower number as too many slices
will hurt performance. Setting
slices
higher than the number of shards generally does not improve efficiency
and adds overhead.
Delete performance scales linearly across available resources with the number of slices.
Whether query or delete performance dominates the runtime depends on the documents being deleted and cluster resources.