ELSER inference service
Creates an inference endpoint to perform an inference task with the elser
service.
The API request will automatically download and deploy the ELSER model if it isn’t already downloaded.
Request
PUT /_inference/<task_type>/<inference_id>
Path parameters
<inference_id>
    (Required, string) The unique identifier of the inference endpoint.

<task_type>
    (Required, string) The type of the inference task that the model will perform.
    Available task types: sparse_embedding.
Request body
service
    (Required, string) The type of service supported for the specified task type. In this case, elser.

service_settings
    (Required, object) Settings used to install the inference model. These settings are specific to the elser service.

    num_allocations
        (Required, integer) The total number of allocations this model is assigned across machine learning nodes. Increasing this value generally increases the throughput.

    num_threads
        (Required, integer) Sets the number of threads used by each model allocation during inference. Increasing this value generally increases the speed per inference request. The inference process is compute-bound, so num_threads must not exceed the number of available allocated processors per node. Must be a power of 2. Max allowed value is 32.
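For instance, a throughput-oriented deployment might combine several allocations with a small power-of-two thread count. The sketch below uses the same Python client call shown in the example that follows; the endpoint name and the 4 x 2 figures are illustrative assumptions, not recommendations.

resp = client.inference.put(
    task_type="sparse_embedding",
    inference_id="my-tuned-elser-model",  # hypothetical endpoint name
    inference_config={
        "service": "elser",
        "service_settings": {
            "num_allocations": 4,  # four parallel allocations for throughput
            "num_threads": 2,      # power of 2, within the node's allocated processors
        },
    },
)
print(resp)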
ELSER service example
The following example shows how to create an inference endpoint called my-elser-model to perform a sparse_embedding task type.
Refer to the ELSER model documentation for more info.
The request below will automatically download the ELSER model if it isn’t already downloaded and then deploy the model.
resp = client.inference.put(
    task_type="sparse_embedding",
    inference_id="my-elser-model",
    inference_config={
        "service": "elser",
        "service_settings": {
            "num_allocations": 1,
            "num_threads": 1
        }
    },
)
print(resp)
const response = await client.inference.put({
  task_type: "sparse_embedding",
  inference_id: "my-elser-model",
  inference_config: {
    service: "elser",
    service_settings: {
      num_allocations: 1,
      num_threads: 1,
    },
  },
});
console.log(response);
PUT _inference/sparse_embedding/my-elser-model
{
  "service": "elser",
  "service_settings": {
    "num_allocations": 1,
    "num_threads": 1
  }
}
Example response:
{ "inference_id": "my-elser-model", "task_type": "sparse_embedding", "service": "elser", "service_settings": { "num_allocations": 1, "num_threads": 1 }, "task_settings": {} }
You might see a 502 bad gateway error in the response when using the Kibana Console.
This error usually just indicates a timeout; the model continues to download in the background.
You can check the download progress in the Machine Learning UI.
If you are using the Python client, you can set the timeout parameter to a higher value.
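For example (a minimal sketch, assuming elasticsearch-py 8.x, where per-request settings are applied via client.options; the 600-second value is an illustrative choice, not a recommendation):

# Sketch: allow up to 10 minutes for the model download and deployment.
# client.options(...) returns a copy of the client with per-request settings.
resp = client.options(request_timeout=600).inference.put(
    task_type="sparse_embedding",
    inference_id="my-elser-model",
    inference_config={
        "service": "elser",
        "service_settings": {"num_allocations": 1, "num_threads": 1},
    },
)
print(resp)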