This page contains information about the most important configuration options of
the Python {es} client.

[discrete]
[[tls-and-ssl]]
=== TLS/SSL

The options in this section are only necessary when connecting to Elasticsearch Serverless through a proxy that is not managed by Elastic and uses its own certificates.

[discrete]
==== Verifying certificates

The typical route to verify a certificate is via a "CA bundle", which can be specified via the `ca_certs` parameter. If no options are given and the https://github.com/certifi/python-certifi[certifi package] is installed then certifi's CA bundle is used by default.

If you have your own CA bundle to use you can configure it via the `ca_certs` parameter:

[source,python]
------------------------------------
es = Elasticsearch(
    cloud_id='project-name:ABCD...',
    ca_certs="/path/to/certs.pem"
)
------------------------------------

If using a generated certificate or a certificate with a known fingerprint you can use the `ssl_assert_fingerprint` parameter to specify the fingerprint, which is matched against the server's leaf certificate during the TLS handshake. If a matching certificate is found the connection is verified, otherwise a `TlsError` is raised.

In Python 3.9 and earlier only the leaf certificate will be verified but in Python 3.10+ private APIs are used to verify any certificate in the certificate chain.
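
For example, a minimal sketch (the fingerprint shown is only a placeholder for the SHA-256 fingerprint of your certificate):

[source,python]
------------------------------------
es = Elasticsearch(
    cloud_id='project-name:ABCD...',
    # Placeholder value; replace with your certificate's SHA-256 fingerprint
    ssl_assert_fingerprint="00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff"
)
------------------------------------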

To disable certificate verification use the `verify_certs=False` parameter. This option should be avoided in production; instead, use the other options to verify the certificate.

[source,python]
------------------------------------
es = Elasticsearch(
    cloud_id='project-name:ABCD...',
    verify_certs=False
)
------------------------------------

Configuring the minimum TLS version to connect to is done via the `ssl_version` parameter:

[source,python]
------------------------------------
import ssl

es = Elasticsearch(
    ...,
    ssl_version=ssl.TLSVersion.TLSv1_2
)
------------------------------------

For advanced users an `ssl.SSLContext` object can be used for configuring TLS via the `ssl_context` parameter:

[source,python]
------------------------------------
import ssl

# Create and configure an SSLContext
ctx = ssl.create_default_context()
ctx.load_verify_locations(...)

es = Elasticsearch(
    ...,
    ssl_context=ctx
)
------------------------------------

[discrete]
[[compression]]
=== HTTP compression

Compression of HTTP request and response bodies can be enabled with the `http_compress` parameter.
If enabled then HTTP request bodies will be compressed with `gzip` and requests will include
the `Accept-Encoding: gzip` HTTP header so that responses can be compressed as well. HTTP compression is recommended for all Serverless requests and is enabled by default.

To disable it:

[source,python]
------------------------------------
es = Elasticsearch(
    ...,
    http_compress=False
)
------------------------------------
[discrete]
[[timeouts]]
=== Request timeouts

Requests can be configured to time out if they take too long to be serviced. The `request_timeout` parameter can be passed via the client constructor or the client `.options()` method. When a request times out the client will raise a `ConnectionTimeout` exception, which can trigger retries.

Setting `request_timeout` to `None` will disable timeouts.
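
A minimal sketch of both styles (the timeout values and search arguments are illustrative):

[source,python]
------------------------------------
es = Elasticsearch(
    cloud_id='project-name:ABCD...',
    request_timeout=10  # Applies to every request made with this client
)

# Override the timeout for a single request:
es.options(request_timeout=5).search(index="my-index", query={"match_all": {}})
------------------------------------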
[discrete]
[[retries]]
=== Retries

Requests can be retried if they don't return a successful response. This provides a way for requests to be resilient against transient failures.

The maximum number of retries per request can be configured via the `max_retries` parameter. Setting this parameter to 0 disables retries. This parameter can be set in the client constructor or per-request via the client `.options()` method.
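
A minimal sketch of both styles (the values and the request shown are illustrative):

[source,python]
------------------------------------
es = Elasticsearch(
    cloud_id='project-name:ABCD...',
    max_retries=5
)

# Disable retries for this request only:
es.options(max_retries=0).index(index="my-index", document={"field": "value"})
------------------------------------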
When using the `ignore_status` parameter the error response will be returned serialized just like a non-error response. In these cases it can be useful to inspect the HTTP status of the response, which is available as `resp.meta.status`.
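
A minimal sketch (the index name is illustrative; without `ignore_status` the error below would be raised as an exception):

[source,python]
------------------------------------
# Returns the error body instead of raising, e.g. if the index already exists (HTTP 400)
resp = es.options(ignore_status=[400]).indices.create(index="my-index")
resp.meta.status  # Can be either '2XX' or '400'
------------------------------------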
[discrete]
[[serializer]]
=== Serializers

You can define custom serializers via the `serializers` parameter:

[source,python]
------------------------------------
from elasticsearch_serverless import Elasticsearch, JsonSerializer

class JsonSetSerializer(JsonSerializer):
    """Custom JSON serializer that handles Python sets"""
    def default(self, data):
        if isinstance(data, set):
            return list(data)
        return super().default(data)
------------------------------------

For all of the built-in HTTP node implementations like `urllib3`, `requests`, and `aiohttp` you can specify which implementation to use via the `node_class` parameter:

[source,python]
------------------------------------
from elasticsearch_serverless import Elasticsearch

es = Elasticsearch(
    ...,
    node_class="requests"
)
------------------------------------

You can also specify a custom node implementation via the `node_class` parameter:

[source,python]
------------------------------------
from elasticsearch_serverless import Elasticsearch
from elastic_transport import Urllib3HttpNode

class CustomHttpNode(Urllib3HttpNode):
    ...

es = Elasticsearch(
    ...,
    node_class=CustomHttpNode
)
------------------------------------

[discrete]
==== HTTP connections

The client maintains a pool of HTTP connections to the Elasticsearch Serverless project to allow for concurrent requests. This value is configurable via the `connections` parameter.
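
A minimal sketch, assuming the `connections` constructor parameter named above (the pool size shown is illustrative):

[source,python]
------------------------------------
es = Elasticsearch(
    cloud_id='project-name:ABCD...',
    connections=10  # Size of the HTTP connection pool, per the parameter described above
)
------------------------------------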