+ tag::dfa-deploy-model[]
+ . To deploy a {dfanalytics} model in a pipeline, navigate to **Machine Learning** >
+ **Model Management** > **Trained models** in {kib}.
+
+ . Find the model you want to deploy in the list and click **Deploy model** in
+ the **Actions** menu.
+ +
+ --
+ [role="screenshot"]
+ image::images/ml-dfa-trained-models-ui.png["The trained models UI in {kib}"]
+ --
+
+ . Create an {infer} pipeline so that you can use the model against new data.
+ Add a name and a description, or use the default values.
+ +
+ --
+ [role="screenshot"]
+ image::images/ml-dfa-inference-pipeline.png["Creating an inference pipeline"]
+ --
+
+ . Configure the pipeline processors, or use the default settings.
+ +
+ --
+ [role="screenshot"]
+ image::images/ml-dfa-inference-processor.png["Configuring an inference processor"]
+ --
+ . Configure how to handle ingest failures, or use the default settings.
+
+ . (Optional) Test your pipeline by running a simulation to confirm that it
+ produces the anticipated results.
+
+ . Review the settings and click **Create pipeline**.
+
+ The model is deployed and ready to use through the {infer} pipeline.
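The steps above can also be sketched outside the wizard: the wizard ultimately produces an ingest pipeline with an `inference` processor, which can be expressed as a plain request body for the ingest pipeline API. The sketch below is illustrative only; the pipeline description, model ID, target field, and failure index are hypothetical.

```python
# Illustrative sketch of the pipeline body the wizard produces.
# All IDs and field names here are hypothetical examples.
pipeline = {
    "description": "Adds model predictions to incoming documents",
    "processors": [
        {
            "inference": {
                "model_id": "churn-prediction-model",  # hypothetical model ID
                "target_field": "ml.inference",        # where predictions land
            }
        }
    ],
    # Corresponds to the "handle ingest failures" step: route failed
    # documents to a separate index instead of dropping them.
    "on_failure": [
        {"set": {"field": "_index", "value": "failed-predictions"}}
    ],
}

# A client would then create it with a request such as:
# PUT _ingest/pipeline/churn-predictions
```

Documents indexed through such a pipeline arrive with the prediction written under the configured target field.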
+ end::dfa-deploy-model[]
+
+
tag::dfa-evaluation-intro[]
Using the {dfanalytics} features to gain insights from a data set is an
iterative process. After you have defined the problem you want to solve and chosen
@@ -18,10 +55,8 @@ the ground truth. The {evaluatedf-api} evaluates the performance of the
end::dfa-evaluation-intro[]

tag::dfa-inference[]
- {infer-cap} is a {ml} feature that enables you to use supervised {ml} processes
- – like {regression} or {classification} – not only as a batch analysis but in a
- continuous fashion. This means that {infer} makes it possible to use
- <<ml-trained-models,trained {ml} models>> against incoming data.
+ {infer-cap} enables you to use <<ml-trained-models,trained {ml} models>> against
+ incoming data in a continuous fashion.

For instance, suppose you have an online service and you would like to predict
whether a customer is likely to churn. You have an index with historical data –
@@ -43,7 +78,7 @@ are indexed into the destination index.

Check the {ref}/inference-processor.html[{infer} processor] and
{ref}/ml-df-analytics-apis.html[the {ml} {dfanalytics} API documentation] to
- learn more about the feature.
+ learn more.
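A pipeline containing an {infer} processor can be tried out with the pipeline simulate API (`POST _ingest/pipeline/_simulate`) before any real data is indexed. A minimal sketch of such a request body follows; the model ID and document fields are hypothetical.

```python
# Sketch of a request body for POST _ingest/pipeline/_simulate,
# which runs sample documents through a pipeline without indexing them.
# Model ID and document fields are hypothetical.
simulate_request = {
    "pipeline": {
        "processors": [
            {"inference": {"model_id": "churn-prediction-model"}}
        ]
    },
    "docs": [
        # sample documents to run through the pipeline
        {"_source": {"customer_id": "c-1001", "tenure_months": 3}},
        {"_source": {"customer_id": "c-1002", "tenure_months": 48}},
    ],
}
```

The response shows each document as it would look after the processor ran, which makes it easy to check field mappings before wiring the pipeline into an index.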
end::dfa-inference-processor[]

tag::dfa-inference-aggregation[]
@@ -58,7 +93,7 @@ to set up a processor in the ingest pipeline.
Check the
{ref}/search-aggregations-pipeline-inference-bucket-aggregation.html[{infer} bucket aggregation]
and {ref}/ml-df-analytics-apis.html[the {ml} {dfanalytics} API documentation] to
- learn more about the feature.
+ learn more.
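As a rough sketch of the shape of such a search, the {infer} bucket aggregation sits under a parent bucketing aggregation and receives per-bucket metrics through `buckets_path`. All model IDs, field names, and aggregation names below are hypothetical.

```python
# Sketch of a search body using an inference bucket aggregation:
# a composite aggregation builds one bucket per customer, a metric
# sub-aggregation computes a feature, and the inference aggregation
# feeds that feature to a trained model. Names are hypothetical.
search_body = {
    "size": 0,
    "aggs": {
        "per_customer": {
            "composite": {
                "sources": [
                    {"customer": {"terms": {"field": "customer_id"}}}
                ]
            },
            "aggs": {
                "avg_session_time": {"avg": {"field": "session_time"}},
                "churn_prediction": {
                    "inference": {
                        "model_id": "churn-prediction-model",
                        # map the model's expected input to the sibling
                        # aggregation that produces it
                        "buckets_path": {
                            "avg_session_time": "avg_session_time"
                        },
                    }
                },
            },
        }
    },
}
```

Each bucket in the response then carries the model's prediction alongside the metric values it was computed from.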

NOTE: If you use trained model aliases to reference your trained model in an
{infer} processor or {infer} aggregation, you can replace your trained model
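A sketch of how such an alias swap might look, using the trained model aliases API (`PUT _ml/trained_models/<model_id>/model_aliases/<alias>`); the model IDs and alias name below are hypothetical.

```python
# Sketch: repoint an alias from an old model version to a new one,
# so pipelines and aggregations referencing the alias need no edits.
# Model IDs and alias name are hypothetical.
new_model_id = "churn-prediction-model-v2"
alias = "churn-model"

# reassign=true moves an alias that currently points at another model.
method = "PUT"
path = f"_ml/trained_models/{new_model_id}/model_aliases/{alias}?reassign=true"
```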
0 commit comments