Put data frame analytics jobs API

This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

Creates a new data frame analytics job. The API accepts a PutDataFrameAnalyticsRequest object as a request and returns a PutDataFrameAnalyticsResponse.
Put data frame analytics jobs request

A PutDataFrameAnalyticsRequest requires the following argument, as in the sketch below.
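A request-construction sketch, assuming the request is built directly from the configuration object assembled in the next section:

PutDataFrameAnalyticsRequest request =
    new PutDataFrameAnalyticsRequest(config);    // the configuration of the data frame analytics job to create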
Data frame analytics configuration

The DataFrameAnalyticsConfig object contains all the details about the data frame analytics job configuration and accepts the following arguments:

DataFrameAnalyticsConfig config = DataFrameAnalyticsConfig.builder()
    .setId("my-analytics-config")
    .setSource(sourceConfig)
    .setDest(destConfig)
    .setAnalysis(outlierDetection)
    .setAnalyzedFields(analyzedFields)
    .setModelMemoryLimit(new ByteSizeValue(5, ByteSizeUnit.MB))
    .setDescription("this is an example description")
    .setMaxNumThreads(1)
    .build();

1. The data frame analytics job ID
2. The source index and query from which to gather data
3. The destination index
4. The analysis to be performed
5. The fields to be included in / excluded from the analysis
6. The memory limit for the model created as part of the analysis process
7. Optionally, a human-readable description
8. The maximum number of threads to be used by the analysis. Defaults to 1.
SourceConfig

The index and the query from which to collect data:

DataFrameAnalyticsSource sourceConfig = DataFrameAnalyticsSource.builder()
    .setIndex("put-test-source-index")
    .setQueryConfig(queryConfig)
    .setSourceFiltering(new FetchSourceContext(true,
        new String[] { "included_field_1", "included_field_2" },
        new String[] { "excluded_field" }))
    .build();

1. Constructing a new DataFrameAnalyticsSource
2. The source index
3. The query from which to gather the data. If query is not set, a match_all query is used by default.
4. Source filtering to select which fields will exist in the destination index.
QueryConfig

The query with which to select data from the source:

QueryConfig queryConfig = new QueryConfig(new MatchAllQueryBuilder());
DestinationConfig

The index to which data should be written by the data frame analytics job, as in the sketch below.
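A construction sketch, assuming the client's DataFrameAnalyticsDest builder; the index name is illustrative:

DataFrameAnalyticsDest destConfig = DataFrameAnalyticsDest.builder()
    .setIndex("put-test-dest-index")    // the destination index to write results to
    .build();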
Analysis

The analysis to be performed. Currently, the supported analyses include: OutlierDetection, Classification, Regression.
Outlier detection

OutlierDetection analysis can be created in one of two ways: with default parameters (see the sketch below) or fully customized.
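A minimal default-parameters sketch; the createDefault() factory is assumed from the client's OutlierDetection class:

// Create an OutlierDetection analysis with default parameters
// (createDefault() is an assumed factory method).
DataFrameAnalysis outlierDetection = OutlierDetection.createDefault();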
The fully customized variant:
DataFrameAnalysis outlierDetectionCustomized = OutlierDetection.builder()
    .setMethod(OutlierDetection.Method.DISTANCE_KNN)
    .setNNeighbors(5)
    .setFeatureInfluenceThreshold(0.1)
    .setComputeFeatureInfluence(true)
    .setOutlierFraction(0.05)
    .setStandardizationEnabled(true)
    .build();

1. Constructing a new OutlierDetection object
2. The method used to perform the analysis
3. The number of neighbors taken into account during analysis
4. The minimum outlier score that a document needs to have in order to calculate its feature influence score
5. Whether to compute feature influence
6. The proportion of the data set that is assumed to be outlying prior to outlier detection
7. Whether to apply standardization to feature values
Classification

Classification analysis requires setting which field is the dependent_variable and has a number of other optional parameters:

DataFrameAnalysis classification = Classification.builder("my_dependent_variable")
    .setLambda(1.0)
    .setGamma(5.5)
    .setEta(5.5)
    .setMaxTrees(50)
    .setFeatureBagFraction(0.4)
    .setNumTopFeatureImportanceValues(3)
    .setPredictionFieldName("my_prediction_field_name")
    .setTrainingPercent(50.0)
    .setRandomizeSeed(1234L)
    .setClassAssignmentObjective(Classification.ClassAssignmentObjective.MAXIMIZE_ACCURACY)
    .setNumTopClasses(1)
    .build();

1. Constructing a new Classification builder object with the required dependent variable
2. The lambda regularization parameter. A non-negative double.
3. The gamma regularization parameter. A non-negative double.
4. The applied shrinkage. A double in [0.001, 1].
5. The maximum number of trees the forest is allowed to contain. An integer in [1, 2000].
6. The fraction of features used when selecting a random bag for each candidate split. A double in (0, 1].
7. If set, feature importance for the top most important features will be computed.
8. The name of the prediction field in the results object.
9. The percentage of training-eligible rows to be used in training. Defaults to 100%.
10. The seed to be used by the random generator that picks which rows are used in training.
11. The optimization objective to target when assigning class labels. Defaults to maximize_minimum_recall.
12. The number of top classes to be reported in the results. Defaults to 2.
Regression

Regression analysis requires setting which field is the dependent_variable and has a number of other optional parameters:

DataFrameAnalysis regression = org.elasticsearch.client.ml.dataframe.Regression.builder("my_dependent_variable")
    .setLambda(1.0)
    .setGamma(5.5)
    .setEta(5.5)
    .setMaxTrees(50)
    .setFeatureBagFraction(0.4)
    .setNumTopFeatureImportanceValues(3)
    .setPredictionFieldName("my_prediction_field_name")
    .setTrainingPercent(50.0)
    .setRandomizeSeed(1234L)
    .setLossFunction(Regression.LossFunction.MSE)
    .setLossFunctionParameter(1.0)
    .build();

1. Constructing a new Regression builder object with the required dependent variable
2. The lambda regularization parameter. A non-negative double.
3. The gamma regularization parameter. A non-negative double.
4. The applied shrinkage. A double in [0.001, 1].
5. The maximum number of trees the forest is allowed to contain. An integer in [1, 2000].
6. The fraction of features used when selecting a random bag for each candidate split. A double in (0, 1].
7. If set, feature importance for the top most important features will be computed.
8. The name of the prediction field in the results object.
9. The percentage of training-eligible rows to be used in training. Defaults to 100%.
10. The seed to be used by the random generator that picks which rows are used in training.
11. The loss function used for regression. Defaults to mse.
12. An optional parameter to the loss function.
Analyzed fields

A FetchSourceContext object containing the fields to be included in or excluded from the analysis:

FetchSourceContext analyzedFields = new FetchSourceContext(
    true,
    new String[] { "included_field_1", "included_field_2" },
    new String[] { "excluded_field" });
Synchronous execution

When executing a PutDataFrameAnalyticsRequest in the following manner, the client waits for the PutDataFrameAnalyticsResponse to be returned before continuing with code execution:

PutDataFrameAnalyticsResponse response =
    client.machineLearning().putDataFrameAnalytics(request, RequestOptions.DEFAULT);

Synchronous calls may throw an IOException when the high-level REST client fails to parse the REST response, the request times out, or in similar cases where no response comes back from the server.

In cases where the server returns a 4xx or 5xx error code, the high-level client tries to parse the response body error details instead, then throws a generic ElasticsearchException and adds the original ResponseException to it as a suppressed exception.
Asynchronous execution

Executing a PutDataFrameAnalyticsRequest can also be done in an asynchronous fashion so that the client can return directly. Users need to specify how the response or potential failures will be handled by passing the request and a listener to the asynchronous put-data-frame-analytics method, as in the sketch below.
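A sketch of the asynchronous call, assuming a putDataFrameAnalyticsAsync method on the machine learning client; the listener is defined further below:

// The PutDataFrameAnalyticsRequest to execute and the ActionListener
// to use when the execution completes (method name assumed).
client.machineLearning()
    .putDataFrameAnalyticsAsync(request, RequestOptions.DEFAULT, listener);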
The asynchronous method does not block and returns immediately. Once it is completed, the ActionListener is called back using the onResponse method if the execution successfully completed, or using the onFailure method if it failed. Failure scenarios and expected exceptions are the same as in the synchronous execution case.
A typical listener for put-data-frame-analytics looks like the sketch below.
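A minimal listener sketch; the callback bodies are left to the caller:

ActionListener<PutDataFrameAnalyticsResponse> listener =
    new ActionListener<PutDataFrameAnalyticsResponse>() {
        @Override
        public void onResponse(PutDataFrameAnalyticsResponse response) {
            // Called when the execution is successfully completed.
        }

        @Override
        public void onFailure(Exception e) {
            // Called when the whole PutDataFrameAnalyticsRequest fails.
        }
    };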
Response

The returned PutDataFrameAnalyticsResponse contains the newly created data frame analytics job:

DataFrameAnalyticsConfig createdConfig = response.getConfig();