Get trained model statistics API

Retrieves usage information for trained models.

This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

Request

GET _ml/inference/_stats

GET _ml/inference/_all/_stats

GET _ml/inference/<model_id>/_stats

GET _ml/inference/<model_id>,<model_id_2>/_stats

GET _ml/inference/<model_id_pattern*>,<model_id_2>/_stats

Prerequisites

Required privileges, which should be added to a custom role:

  • cluster: monitor_ml

For more information, see Security privileges and Built-in roles.

Description

You can get usage information for multiple trained models in a single API request by using a comma-separated list of model IDs or a wildcard expression.
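As a rough illustration of how these request paths are assembled, the sketch below builds the `_stats` endpoint path for zero or more model IDs or wildcard expressions. The helper name `trained_model_stats_path` is hypothetical, not part of any Elasticsearch client library.

```python
# Hypothetical helper: builds the _stats endpoint path for one or more
# model IDs or wildcard expressions (comma-separated), mirroring the
# request formats listed above.
def trained_model_stats_path(*model_ids):
    if not model_ids:
        # With no IDs, request stats for all trained models.
        return "_ml/inference/_stats"
    return "_ml/inference/{}/_stats".format(",".join(model_ids))

print(trained_model_stats_path())
# _ml/inference/_stats
print(trained_model_stats_path("flight-delay-*", "regression-job-one-1574775307356"))
# _ml/inference/flight-delay-*,regression-job-one-1574775307356/_stats
```

The joined path would then be appended to the cluster URL in an HTTP GET request.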

Path parameters

<model_id>
(Optional, string) The unique identifier of the trained inference model.

Query parameters

allow_no_match

(Optional, boolean) Specifies what to do when the request:

  • Contains wildcard expressions and there are no models that match.
  • Contains the _all string or no identifiers and there are no matches.
  • Contains wildcard expressions and there are only partial matches.

The default value is true, which returns an empty trained_model_stats array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.

from
(Optional, integer) Skips the specified number of models. The default value is 0.
size
(Optional, integer) Specifies the maximum number of models to obtain. The default value is 100.
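The from and size parameters page through the model list. The sketch below is a local approximation of that behavior, assuming the server orders models by ID and returns the requested slice; only the parameter names come from the API.

```python
# Local approximation of from/size pagination over model IDs:
# sort by ID, skip `from_` entries, return at most `size` entries.
def paginate(model_ids, from_=0, size=100):
    return sorted(model_ids)[from_:from_ + size]

ids = ["model-c", "model-a", "model-b"]
print(paginate(ids, from_=1, size=1))  # ['model-b']
```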

Response codes

404 (Missing resources)
If allow_no_match is false, this code indicates that there are no resources that match the request or only partial matches for the request.

Examples

The following example gets usage information for all the trained models:

GET _ml/inference/_stats

The API returns the following results:

{
  "count": 2,
  "trained_model_stats": [
    {
      "model_id": "flight-delay-prediction-1574775339910",
      "pipeline_count": 0
    },
    {
      "model_id": "regression-job-one-1574775307356",
      "pipeline_count": 1,
      "ingest": {
        "total": {
          "count": 178,
          "time_in_millis": 8,
          "current": 0,
          "failed": 0
        },
        "pipelines": {
          "flight-delay": {
            "count": 178,
            "time_in_millis": 8,
            "current": 0,
            "failed": 0,
            "processors": [
              {
                "inference": {
                  "type": "inference",
                  "stats": {
                    "count": 178,
                    "time_in_millis": 7,
                    "current": 0,
                    "failed": 0
                  }
                }
              }
            ]
          }
        }
      }
    }
  ]
}
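A response like the one above can be processed client-side. As a sketch, the snippet below totals the documents ingested across all models from a trimmed copy of the example response; the field names match the response shown, but the traversal itself is just one possible reader-side approach.

```python
# Trimmed copy of the example response, keeping only the fields needed
# to total ingested document counts across models.
stats = {
    "count": 2,
    "trained_model_stats": [
        {"model_id": "flight-delay-prediction-1574775339910", "pipeline_count": 0},
        {
            "model_id": "regression-job-one-1574775307356",
            "pipeline_count": 1,
            "ingest": {"total": {"count": 178, "time_in_millis": 8}},
        },
    ],
}

# Models with pipeline_count 0 have no "ingest" section, so default to 0.
total_ingested = sum(
    m.get("ingest", {}).get("total", {}).get("count", 0)
    for m in stats["trained_model_stats"]
)
print(total_ingested)  # 178
```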