Clear trained model deployment cache API

Clears the inference cache on all nodes where the deployment is assigned.

Request

POST _ml/trained_models/<deployment_id>/deployment/cache/_clear

Prerequisites

Requires the manage_ml cluster privilege. This privilege is included in the machine_learning_admin built-in role.
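If you prefer a dedicated role over the built-in machine_learning_admin role, a role that carries the required privilege can be created along the following lines. This is only an illustrative sketch using the Ruby client; the role name ml_cache_admin is a made-up example, not something defined in this reference:

response = client.security.put_role(
  name: 'ml_cache_admin',          # illustrative role name
  body: {
    cluster: ['manage_ml']         # cluster privilege required by this API
  }
)
puts response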

Description

A trained model deployment may have an inference cache enabled. As requests are handled by each allocated node, their responses may be cached on that individual node. Calling this API clears the caches without restarting the deployment.
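As a rough sketch of how this fits together with the Ruby client: the inference cache is sized when the deployment is started and can then be cleared in place at any time. The cache_size value below is an illustrative example and is not part of this reference:

response = client.ml.start_trained_model_deployment(
  model_id: 'elastic__distilbert-base-uncased-finetuned-conll03-english',
  cache_size: '100mb'   # per-node inference cache; example value
)

# ... the deployment serves inference requests; each allocated node may cache its responses ...

# Clear the per-node caches in place; the deployment keeps running.
response = client.ml.clear_trained_model_deployment_cache(
  model_id: 'elastic__distilbert-base-uncased-finetuned-conll03-english'
)
puts response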

Path parameters

deployment_id
(Required, string) A unique identifier for the deployment of the model.

Examples

The following example clears the inference cache for the deployment of the elastic__distilbert-base-uncased-finetuned-conll03-english trained model:

response = client.ml.clear_trained_model_deployment_cache(
  model_id: 'elastic__distilbert-base-uncased-finetuned-conll03-english'
)
puts response

The equivalent console request is:

POST _ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/deployment/cache/_clear

The API returns the following results:

{
   "cleared": true
}