docs/en/observability/observability-ai-assistant.asciidoc
11 additions & 7 deletions
@@ -173,7 +173,10 @@ For example, if you create a {ref}/es-connectors-github.html[GitHub connector] y
 +
 Learn more about configuring and {ref}/es-connectors-usage.html[using connectors] in the Elasticsearch documentation.
 
-After creating your connector, create the embeddings needed by the AI Assistant. You can do this using either <<obs-ai-search-connectors-ml-embeddings, a machine learning (ML) pipeline>>, which requires the ELSER model, or <<obs-ai-search-connectors-semantic-text, a `semantic_text` field type>>, which can use any available model (ELSER, E5, or a custom model).
+After creating your connector, create the embeddings needed by the AI Assistant. You can do this using either:
+
+* <<obs-ai-search-connectors-ml-embeddings, a machine learning (ML) pipeline>>: requires the ELSER ML model.
+* <<obs-ai-search-connectors-semantic-text, a `semantic_text` field type>>: can use any available ML model (ELSER, E5, or a custom model).
 
 [discrete]
 [[obs-ai-search-connectors-ml-embeddings]]
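The two options introduced in this hunk produce differently shaped index mappings. A minimal sketch of the contrast, assuming current Elasticsearch mapping types (`sparse_vector`, `semantic_text`); the field names and inference endpoint ID are illustrative assumptions, not taken from these docs:

```python
def ml_pipeline_mapping() -> dict:
    # Option 1: an ML inference pipeline writes ELSER weights/tokens
    # into a sparse_vector field (shown here as `predicted_value`;
    # the field name is an assumption for illustration).
    return {
        "mappings": {
            "properties": {
                "body": {"type": "text"},
                "predicted_value": {"type": "sparse_vector"},
            }
        }
    }


def semantic_text_mapping(inference_id: str) -> dict:
    # Option 2: a semantic_text field delegates chunking and embedding
    # to whichever inference endpoint you pick (ELSER, E5, or custom).
    return {
        "mappings": {
            "properties": {
                "body": {"type": "semantic_text", "inference_id": inference_id},
            }
        }
    }


print(ml_pipeline_mapping()["mappings"]["properties"]["predicted_value"]["type"])
print(semantic_text_mapping("my-elser-endpoint")["mappings"]["properties"]["body"]["type"])
```

Option 1 gives you full control over the pipeline and the embeddings field; option 2 trades that control for a much shorter setup.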
@@ -182,9 +185,9 @@ After creating your connector, create the embeddings needed by the AI Assistant.
 To create the embeddings needed by the AI Assistant (weights and tokens into a sparse vector field) using an *ML Inference Pipeline*:
 
 . Open the previously created connector, and select the *Pipelines* tab.
-. Select *Copy and customize* button at the `Unlock your custom pipelines` box.
-. Select *Add Inference Pipeline* button at the `Machine Learning Inference Pipelines` box.
-. Select *ELSER (Elastic Learned Sparse EncodeR)* ML model to add the necessary embeddings to the data.
+. Select *Copy and customize* under `Unlock your custom pipelines`.
+. Select *Add Inference Pipeline* under `Machine Learning Inference Pipelines`.
+. Select the *ELSER (Elastic Learned Sparse EncodeR)* ML model to add the necessary embeddings to the data.
 . Select the fields that need to be evaluated as part of the inference pipeline.
 . Test and save the inference pipeline and the overall pipeline.
 
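Behind the UI steps in this hunk, the connector tooling generates an ingest pipeline containing an ELSER inference processor. A rough sketch of such a pipeline body, assuming the 8.x `inference` processor with `input_output` mappings; the model ID, source field, and output field names are illustrative, not what the UI necessarily generates:

```python
def elser_inference_pipeline(model_id: str, source_field: str) -> dict:
    # Builds a body for PUT _ingest/pipeline/<name>: an inference
    # processor that runs the ELSER model over `source_field` and
    # writes the resulting weights/tokens to `predicted_value`.
    return {
        "processors": [
            {
                "inference": {
                    "model_id": model_id,
                    "input_output": [
                        {
                            "input_field": source_field,
                            "output_field": "predicted_value",
                        }
                    ],
                }
            }
        ]
    }


body = elser_inference_pipeline(".elser_model_2", "body_content")
print(body["processors"][0]["inference"]["model_id"])
```

Testing the pipeline before saving (the last step above) matters because inference failures otherwise surface only during the sync.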
@@ -194,8 +197,8 @@ After creating the pipeline, complete the following steps:
 +
 Once the pipeline is set up, perform a *Full Content Sync* of the connector. The inference pipeline will process the data as follows:
 +
-* As data comes in, ELSER is applied to the data, and embeddings (weights and tokens into a sparse vector field) are added to capture semantic meaning and context of the data.
-* When you look at the documents that are ingested, you can see how the weights and token are added to the `predicted_value` field in the documents.
+* As data comes in, ELSER is applied to the data, and embeddings (weights and tokens into a {ref}/query-dsl-sparse-vector-query.html[sparse vector field]) are added to capture semantic meaning and context of the data.
+* When you look at the ingested documents, you can see the embeddings are added to the `predicted_value` field in the documents.
 . Check if AI Assistant can use the index (optional).
 +
 Ask something to the AI Assistant related with the indexed data.
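To illustrate the bullets in this hunk: after a Full Content Sync, each ingested document carries a token-to-weight map in `predicted_value`, which a `sparse_vector` query can then match against. The document, weights, and endpoint ID below are invented for illustration; the query shape assumes the 8.15+ `sparse_vector` query DSL:

```python
# Invented example of a document's shape after ELSER inference runs:
doc = {
    "title": "Runbook: restarting the payments service",
    "predicted_value": {"restart": 1.92, "payments": 1.57, "service": 0.84},
}


def sparse_vector_query(field: str, inference_id: str, query: str) -> dict:
    # Search body matching against the embeddings field; the inference
    # endpoint re-embeds the query text at search time so query tokens
    # can be scored against the stored token weights.
    return {
        "query": {
            "sparse_vector": {
                "field": field,
                "inference_id": inference_id,
                "query": query,
            }
        }
    }


q = sparse_vector_query("predicted_value", ".elser_model_2", "how do I restart payments?")
print(q["query"]["sparse_vector"]["field"])
```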
@@ -214,7 +217,8 @@ To create the embeddings needed by the AI Assistant using a {ref}/semantic-text.
 . Add the field to your mapping by selecting *Add field*.
 . Sync the data by selecting *Full Content* from the *Sync* menu.
 
-The AI Assistant will now query the connector you've set up using the model you've selected. Check if the AI Assistant is using the index by asking it something related to the indexed data.
+The AI Assistant will now query the connector you've set up using the model you've selected.
+Check that the AI Assistant is using the index by asking it something related to the indexed data.
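For the `semantic_text` route in this hunk, one way to verify the index is searchable with the chosen model is a `semantic` query against the field, alongside asking the Assistant. A sketch assuming the `semantic` query type for semantic_text fields; the field name and question are assumptions:

```python
def semantic_query(field: str, question: str) -> dict:
    # Builds a search body using the `semantic` query type, which
    # targets a semantic_text field and embeds the question with the
    # inference endpoint configured on that field.
    return {"query": {"semantic": {"field": field, "query": question}}}


print(semantic_query("body", "How do I restart the payments service?")["query"]["semantic"]["field"])
```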