Inference
This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
Inference is a machine learning feature that enables you to use supervised machine learning processes – like regression or classification – not only as a batch analysis but in a continuous fashion. In other words, inference makes it possible to apply trained machine learning models to incoming data.
For instance, suppose you have an online service and you would like to predict whether a customer is likely to churn. You have an index with historical data – information on customer behavior over the years in your business – and a classification model that is trained on this data. The new information comes into the destination index of a continuous transform. With inference, you can perform the classification analysis against the new data with the same input fields that you trained the model on, and get a prediction.
Let’s take a closer look at the machinery behind inference.
Trained machine learning models as functions
When you create a data frame analytics job that executes a supervised process, you need to train a machine learning model on a training dataset to be able to make predictions on data points that the model has never seen. The models that are created by data frame analytics are stored as Elasticsearch documents in internal indices. In other words, the characteristics of your trained models are saved and ready to be used as functions.
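For example, once a data frame analytics job has created a model, you can retrieve its stored configuration through the get trained models API in recent versions. The model ID churn_model used here is hypothetical:

GET _ml/trained_models/churn_model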
Alternatively, you can use a pre-trained language identification model to determine the language of text. Language identification supports 109 languages. For more information and configuration details, check the Language identification page.
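As a sketch, you can try the built-in language identification model, lang_ident_model_1, with the simulate pipeline API. The field_map below maps a hypothetical contents field to the text field that the model expects:

POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "inference": {
          "model_id": "lang_ident_model_1",
          "inference_config": {
            "classification": {
              "num_top_classes": 3
            }
          },
          "field_map": {
            "contents": "text"
          },
          "target_field": "_ml.lang_ident"
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "contents": "Das Wetter ist schön heute."
      }
    }
  ]
}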
Inference processor
Inference can be used as a processor specified in an ingest pipeline. It uses a stored data frame analytics model to infer against the data that is being ingested in the pipeline. The model is used on the ingest node. Inference pre-processes the data by using the model and provides a prediction. After the process, the pipeline continues executing (if there are any other processors in it), and finally the new data, together with the results, is indexed into the destination index.
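For example, a minimal sketch of such a pipeline – the pipeline name, model ID, and field names are hypothetical – looks like this:

PUT _ingest/pipeline/churn_predictions
{
  "description": "Adds a churn prediction to incoming customer documents",
  "processors": [
    {
      "inference": {
        "model_id": "churn_model",
        "target_field": "ml.inference.churn",
        "inference_config": {
          "classification": {
            "num_top_classes": 2
          }
        }
      }
    }
  ]
}

Documents indexed through this pipeline – for example, by setting it as the pipeline of the destination index of a continuous transform – receive the model's prediction in the target_field.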
Check the inference processor and the machine learning data frame analytics API documentation to learn more about the feature.
Inference aggregation
Inference can also be used as a pipeline aggregation. You can reference a pre-trained data frame analytics model in the aggregation to infer on the result field of the parent bucket aggregation. The inference aggregation uses the model on the results to provide a prediction. This aggregation enables you to run classification or regression analysis at search time. If you want to perform the analysis on a small set of data, this aggregation enables you to generate predictions without the need to set up a processor in the ingest pipeline.
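As a sketch (the index, model ID, and field names below are hypothetical), the inference aggregation is declared inside a parent bucket aggregation, alongside the metric aggregations whose results feed the model; buckets_path maps each model input field to the corresponding sibling aggregation:

GET customers/_search
{
  "size": 0,
  "aggs": {
    "per_customer": {
      "composite": {
        "sources": [
          { "customer_id": { "terms": { "field": "customer_id" } } }
        ]
      },
      "aggs": {
        "avg_session_length": {
          "avg": { "field": "session_length" }
        },
        "churn_prediction": {
          "inference": {
            "model_id": "churn_model",
            "buckets_path": {
              "avg_session_length": "avg_session_length"
            }
          }
        }
      }
    }
  }
}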
Check the inference bucket aggregation and the machine learning data frame analytics API documentation to learn more about the feature.