
Commit 0f9b557

Adds size parameter to the reindex commands in the NLP examples (#2435) (#2437)

* Adds size parameter to the reindex commands in the NLP examples.
* Reduce size value in NLP inference page.

(cherry picked from commit 56fa097)
Co-authored-by: István Zoltán Szabó <[email protected]>
1 parent ccb0877 commit 0f9b557

File tree: 3 files changed, +14 −6 lines changed


docs/en/stack/ml/nlp/ml-nlp-inference.asciidoc (4 additions, 4 deletions)

@@ -232,7 +232,7 @@ POST _reindex
 {
   "source": {
     "index": "kibana_sample_data_logs",
-    "size": 500
+    "size": 50
   },
   "dest": {
     "index": "lang-test",
@@ -245,9 +245,9 @@ POST _reindex
 However, those web log messages are unlikely to contain enough words for the
 model to accurately identify the language.
 
-TIP: Set the reindex `size` option to a value
-smaller than the `queue_capacity` for the trained model deployment. Otherwise, requests might be rejected
-with a "too many requests" 429 error code.
+TIP: Set the reindex `size` option to a value smaller than the `queue_capacity`
+for the trained model deployment. Otherwise, requests might be rejected with a
+"too many requests" 429 error code.
 
 [discrete]
 [[ml-nlp-inference-discover]]
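The TIP in the hunk above can be turned into a guard in client code: pick a reindex `size` below the trained model deployment's `queue_capacity` before issuing the request. A minimal sketch in Python; the helper name, the default `queue_capacity` of 1024, and the validation approach are illustrative assumptions, not part of the commit.

```python
def build_reindex_body(source_index, dest_index, pipeline=None,
                       size=50, queue_capacity=1024):
    """Build a _reindex request body, keeping the batch `size` below the
    trained model deployment's `queue_capacity` (assumed value here) so
    inference requests are not rejected with 429 "too many requests"."""
    if size >= queue_capacity:
        raise ValueError(
            f"reindex size ({size}) must be smaller than "
            f"queue_capacity ({queue_capacity})"
        )
    body = {
        "source": {"index": source_index, "size": size},
        "dest": {"index": dest_index},
    }
    if pipeline:
        body["dest"]["pipeline"] = pipeline
    return body

# Mirrors the lang-test reindex from the diff above.
body = build_reindex_body("kibana_sample_data_logs", "lang-test", size=50)
```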

docs/en/stack/ml/nlp/ml-nlp-ner-example.asciidoc (5 additions, 1 deletion)

@@ -216,14 +216,18 @@ you created:
 POST _reindex
 {
   "source": {
-    "index": "les-miserables"
+    "index": "les-miserables",
+    "size": 50 <1>
   },
   "dest": {
     "index": "les-miserables-infer",
     "pipeline": "ner"
   }
 }
 --------------------------------------------------
+<1> The default batch size for reindexing is 1000. Reducing `size` to a smaller
+number makes the update of the reindexing process quicker which enables you to
+follow the progress closely and detect errors early.
 
 Take a random paragraph from the source document as an example:
 
docs/en/stack/ml/nlp/ml-nlp-text-emb-vector-search-example.asciidoc (5 additions, 1 deletion)

@@ -227,14 +227,18 @@ ingest processor inserts the embedding vector into each document.
 POST _reindex?wait_for_completion=false
 {
   "source": {
-    "index": "collection"
+    "index": "collection",
+    "size": 50 <1>
   },
   "dest": {
     "index": "collection-with-embeddings",
     "pipeline": "text-embeddings"
   }
 }
 --------------------------------------------------
+<1> The default batch size for reindexing is 1000. Reducing `size` to a smaller
+number makes the update of the reindexing process quicker which enables you to
+follow the progress closely and detect errors early.
 
 The API call returns a task ID that can be used to monitor the progress:
 