Mistral inference integration
Creates an inference endpoint to perform an inference task with the mistral service.
Request
PUT /_inference/<task_type>/<inference_id>
Path parameters
- <inference_id> - (Required, string) The unique identifier of the inference endpoint.
- <task_type> - (Required, string) The type of the inference task that the model will perform.
  Available task types:
  - text_embedding
Request body
- chunking_settings - (Optional, object) Chunking configuration object. Refer to Configuring chunking to learn more about chunking. A sketch combining these options follows this list.
  - max_chunk_size - (Optional, integer) Specifies the maximum size of a chunk in words. Defaults to 250. This value cannot be higher than 300 or lower than 20 (for the sentence strategy) or 10 (for the word strategy).
  - overlap - (Optional, integer) Only for the word chunking strategy. Specifies the number of overlapping words for chunks. Defaults to 100. This value cannot be higher than half of max_chunk_size.
  - sentence_overlap - (Optional, integer) Only for the sentence chunking strategy. Specifies the number of overlapping sentences for chunks. It can be either 1 or 0. Defaults to 1.
  - strategy - (Optional, string) Specifies the chunking strategy. It can be either sentence or word.
- service - (Required, string) The type of service supported for the specified task type. In this case, mistral.
- service_settings - (Required, object) Settings used to install the inference model. These settings are specific to the mistral service.
  - api_key - (Required, string) A valid API key for your Mistral account. You can find your Mistral API keys, or create a new one, on the API Keys page.
    You need to provide the API key only once, during the inference model creation. The Get inference API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key (a sketch of this flow appears at the end of this section).
  - model - (Required, string) The name of the model to use for the inference task. Refer to the Mistral models documentation for the list of available text embedding models.
  - max_input_tokens - (Optional, integer) Allows you to specify the maximum number of tokens per input before chunking occurs.
  - rate_limit - (Optional, object) By default, the mistral service sets the number of requests allowed per minute to 240. This helps to minimize the number of rate limit errors returned from the Mistral API. To modify this, set the requests_per_minute setting of this object in your service settings (a fuller sketch follows this list):
    "rate_limit": {
        "requests_per_minute": <<number_of_requests>>
    }
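For illustration, here is one way the chunking options above might be combined when creating an endpoint. This is a minimal sketch using the Python client shown later on this page; the endpoint name mistral-embeddings-chunked and the chunk sizes are placeholders, not recommendations.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # adjust to your deployment

resp = client.inference.put(
    task_type="text_embedding",
    inference_id="mistral-embeddings-chunked",  # hypothetical endpoint name
    inference_config={
        "service": "mistral",
        "service_settings": {
            "api_key": "<api_key>",
            "model": "mistral-embed",
        },
        # word strategy: max_chunk_size may be 10-300 words and overlap
        # at most half of max_chunk_size (values here are illustrative).
        "chunking_settings": {
            "strategy": "word",
            "max_chunk_size": 150,
            "overlap": 50,
        },
    },
)
print(resp)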
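Similarly, a custom rate limit can be set inside service_settings. A sketch reusing the client from above; 100 requests per minute is an illustrative value, not a recommendation.

resp = client.inference.put(
    task_type="text_embedding",
    inference_id="mistral-embeddings-limited",  # hypothetical endpoint name
    inference_config={
        "service": "mistral",
        "service_settings": {
            "api_key": "<api_key>",
            "model": "mistral-embed",
            # Override the default of 240 requests per minute.
            "rate_limit": {"requests_per_minute": 100},
        },
    },
)
print(resp)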
Mistral service example
The following example shows how to create an inference endpoint called mistral-embeddings-test to perform a text_embedding task type.
Python:

resp = client.inference.put(
task_type="text_embedding",
inference_id="mistral-embeddings-test",
inference_config={
"service": "mistral",
"service_settings": {
"api_key": "<api_key>",
"model": "mistral-embed"
}
},
)
print(resp)
JavaScript:

const response = await client.inference.put({
task_type: "text_embedding",
inference_id: "mistral-embeddings-test",
inference_config: {
service: "mistral",
service_settings: {
api_key: "<api_key>",
model: "mistral-embed",
},
},
});
console.log(response);
Console:

PUT _inference/text_embedding/mistral-embeddings-test
{
"service": "mistral",
"service_settings": {
"api_key": "<api_key>",
"model": "mistral-embed"
}
}
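Once the endpoint exists, it can be exercised with the Perform inference API. A minimal sketch with a recent Python client, assuming the mistral-embeddings-test endpoint created above; the input string is a placeholder.

resp = client.inference.inference(
    task_type="text_embedding",
    inference_id="mistral-embeddings-test",
    input=["The quick brown fox jumped over the lazy dog"],
)
print(resp)  # the response carries one embedding per input string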
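As noted in the api_key description, the key associated with an endpoint cannot be changed in place. A sketch of the delete-and-recreate flow with a recent Python client, reusing the endpoint name from the example above; <new_api_key> stands in for the replacement key.

# Remove the endpoint that holds the old key.
client.inference.delete(
    task_type="text_embedding",
    inference_id="mistral-embeddings-test",
)

# Recreate it under the same name with the updated key.
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="mistral-embeddings-test",
    inference_config={
        "service": "mistral",
        "service_settings": {
            "api_key": "<new_api_key>",
            "model": "mistral-embed",
        },
    },
)
print(resp)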