Clean up HTTP feature support #9

Merged: 16 commits, Sep 6, 2023
docs/guide/configuration.asciidoc (157 changes: 20 additions & 137 deletions)
This page contains information about the most important configuration options of
the Python {es} client.


[discrete]
[[tls-and-ssl]]
=== TLS/SSL

The options in this section are only necessary when connecting to Elasticsearch Serverless through a proxy that is not managed by Elastic and presents its own certificates.

[discrete]
==== Verifying certificates

The typical route to verify a certificate is via a "CA bundle", which can be specified via the `ca_certs` parameter. If no options are given and the https://github.com/certifi/python-certifi[certifi package] is installed, then certifi's CA bundle is used by default.

If you have your own CA bundle to use, you can configure it via the `ca_certs` parameter:

[source,python]
------------------------------------
es = Elasticsearch(
    cloud_id='project-name:ABCD...',
    ca_certs="/path/to/certs.pem"
)
------------------------------------

If using a generated certificate, or a certificate with a known fingerprint, you can use the `ssl_assert_fingerprint` parameter to specify the fingerprint, which is matched against the server's leaf certificate during the TLS handshake. If a matching certificate is found the connection is verified; otherwise a `TlsError` is raised.

In Python 3.9 and earlier only the leaf certificate is verified, but in Python 3.10+ private APIs are used to verify any certificate in the certificate chain.

[source,python]
------------------------------------
es = Elasticsearch(
    cloud_id='project-name:ABCD...',
    ssl_assert_fingerprint=(
        "315f5bdb76d078c43b8ac0064e4a0164612b1fce77c869345bfc94c75894edd3"
    )
)
------------------------------------

To disable certificate verification use the `verify_certs=False` parameter. This option should be avoided in production; instead, use the other options to verify the certificate.

[source,python]
------------------------------------
es = Elasticsearch(
    cloud_id='project-name:ABCD...',
    verify_certs=False
)
------------------------------------

Configuring the minimum TLS version to connect to is done via the `ssl_version` parameter:
[source,python]
------------------------------------
import ssl

es = Elasticsearch(
    ...,
    ssl_version=ssl.TLSVersion.TLSv1_2
)
------------------------------------

For advanced users an `ssl.SSLContext` object can be used for configuring TLS via the `ssl_context` parameter:
[source,python]
------------------------------------
import ssl

# Create and configure an SSLContext
ctx = ssl.create_default_context()
ctx.load_verify_locations(...)

es = Elasticsearch(
    ...,
    ssl_context=ctx
)
------------------------------------


[discrete]
[[compression]]
=== HTTP compression

Compression of HTTP request and response bodies is controlled with the `http_compress` parameter.
When enabled, HTTP request bodies are compressed with `gzip` and HTTP responses include
the `Accept-Encoding: gzip` HTTP header. HTTP compression is recommended for all Serverless requests and is enabled by default.

To disable:

[source,python]
------------------------------------
es = Elasticsearch(
    ...,
    http_compress=False
)
------------------------------------



[discrete]
[[timeouts]]
=== Request timeouts

Requests can be configured to time out if they take too long to be serviced. The `request_timeout` parameter can be passed via the client constructor or the client `.options()` method. When a request times out the client raises a `ConnectionTimeout` exception, which can trigger retries.

Setting `request_timeout` to `None` will disable timeouts.

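A minimal sketch of both styles (the timeout value is illustrative):

[source,python]
------------------------------------
es = Elasticsearch(
    ...,
    request_timeout=10  # seconds; use None to disable timeouts
)

# Or per-request:
es.options(request_timeout=10).info()
------------------------------------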


[discrete]
[[retries]]
=== Retries

Requests can be retried if they don't return a successful response. This provides a way for requests to be resilient against transient failures.

The maximum number of retries per request can be configured via the `max_retries` parameter. Setting this parameter to 0 disables retries. This parameter can be set in the client constructor or per-request via the client `.options()` method.

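A minimal sketch of both styles (the retry count is illustrative):

[source,python]
------------------------------------
es = Elasticsearch(
    ...,
    max_retries=5  # 0 would disable retries entirely
)

# Or per-request:
es.options(max_retries=5).info()
------------------------------------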

When using the `ignore_status` parameter the error response is returned serialized just like a non-error response. In these cases it can be useful to inspect the HTTP status of the response; to do this you can inspect `resp.meta.status`.
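
As a minimal sketch (the index name is made up for illustration):

[source,python]
------------------------------------
resp = es.options(ignore_status=[400]).indices.create(index="my-index")

# No exception is raised even on a 400; inspect the status instead.
resp.meta.status  # e.g. 200 on success, 400 if the index already exists
------------------------------------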

[discrete]
[[sniffing]]
=== Sniffing for new nodes

Additional nodes can be discovered by a process called "sniffing", where the client queries the cluster for more nodes that can handle requests.

Sniffing can happen at three different times: on client instantiation, before requests, and on a node failure. These three behaviors can be enabled and disabled with the `sniff_on_start`, `sniff_before_requests`, and `sniff_on_node_failure` parameters.
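
A minimal sketch enabling all three behaviors:

[source,python]
------------------------------------
es = Elasticsearch(
    ...,
    sniff_on_start=True,          # sniff once when the client is created
    sniff_before_requests=True,   # sniff before sending requests
    sniff_on_node_failure=True    # sniff after a node fails
)
------------------------------------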

IMPORTANT: When using an HTTP load balancer or proxy you cannot use the sniffing functionality, as the cluster would supply the client with IP addresses for connecting to nodes directly, circumventing the load balancer. Depending on your configuration this might be something you don't want, or it might break completely.

[discrete]
==== Waiting between sniffing attempts

To avoid needlessly sniffing too often there is a delay between attempts to discover new nodes. This value can be controlled via the `min_delay_between_sniffing` parameter.
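
A minimal sketch (the delay is illustrative and given in seconds):

[source,python]
------------------------------------
es = Elasticsearch(
    ...,
    sniff_before_requests=True,
    min_delay_between_sniffing=60  # wait at least 60 seconds between sniffs
)
------------------------------------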

[discrete]
==== Filtering nodes which are sniffed

By default, nodes which are marked with only a `master` role will not be used. To change this behavior, use the `sniffed_node_callback` parameter. To exclude a sniffed node from the node pool, return `None` from the `sniffed_node_callback`; otherwise return a `NodeConfig` instance.

[source,python]
------------------------------------
from typing import Optional, Dict, Any
from elastic_transport import NodeConfig
from elasticsearch import Elasticsearch

def filter_master_eligible_nodes(
    node_info: Dict[str, Any],
    node_config: NodeConfig
) -> Optional[NodeConfig]:
    # This callback ignores all nodes that are master eligible
    # instead of master-only nodes (default behavior)
    if "master" in node_info.get("roles", ()):
        return None
    return node_config

client = Elasticsearch(
    "https://localhost:9200",
    sniffed_node_callback=filter_master_eligible_nodes
)
------------------------------------

The `node_info` parameter is part of the response from the `nodes.info()` API; below is an example
of what that object looks like:

[source,json]
------------------------------------
{
    "name": "SRZpKFZ",
    "transport_address": "127.0.0.1:9300",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "version": "5.0.0",
    "build_hash": "253032b",
    "roles": ["master", "data", "ingest"],
    "http": {
        "bound_address": ["[fe80::1]:9200", "[::1]:9200", "127.0.0.1:9200"],
        "publish_address": "1.1.1.1:123",
        "max_content_length_in_bytes": 104857600
    }
}
------------------------------------


[discrete]
[[node-pool]]
=== Node Pool

[discrete]
==== Selecting a node from the pool

You can specify a node selector pattern via the `node_selector_class` parameter. The supported values are `round_robin` and `random`. The default is `round_robin`.

[source,python]
------------------------------------
es = Elasticsearch(
    ...,
    node_selector_class="round_robin"
)
------------------------------------

Custom selectors are also supported:

[source,python]
------------------------------------
from elastic_transport import NodeSelector

class CustomSelector(NodeSelector):
    def select(self, nodes): ...

es = Elasticsearch(
    ...,
    node_selector_class=CustomSelector
)
------------------------------------

[discrete]
==== Marking nodes dead and alive

Individual nodes of Elasticsearch may have transient connectivity or load issues which may make them unable to service requests. To combat this the pool of nodes will detect when a node isn't able to service requests due to transport or API errors.

After its timeout has elapsed a node is moved back into the set of "alive" nodes, but only after it returns a successful response is the node considered "alive" again in terms of consecutive errors.

The `dead_node_backoff_factor` and `max_dead_node_backoff` parameters control how long the node pool keeps a node in timeout after each consecutive failure. Both parameters are in seconds.

The calculation is equal to `min(dead_node_backoff_factor * (2 ** (consecutive_failures - 1)), max_dead_node_backoff)`.
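
A short sketch of how the backoff grows, assuming `dead_node_backoff_factor=1` and `max_dead_node_backoff=600` (both values are illustrative):

[source,python]
------------------------------------
dead_node_backoff_factor = 1.0  # seconds
max_dead_node_backoff = 600.0   # seconds

def backoff(consecutive_failures: int) -> float:
    return min(
        dead_node_backoff_factor * (2 ** (consecutive_failures - 1)),
        max_dead_node_backoff,
    )

# 1 failure -> 1.0s, 2 -> 2.0s, 3 -> 4.0s, ..., capped at 600.0s
print([backoff(n) for n in (1, 2, 3, 10, 11)])  # [1.0, 2.0, 4.0, 512.0, 600.0]
------------------------------------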


[discrete]
[[serializer]]
=== Serializers

You can define custom serializers via the `serializers` parameter:

[source,python]
------------------------------------
from elasticsearch_serverless import Elasticsearch, JsonSerializer

class JsonSetSerializer(JsonSerializer):
    """Custom JSON serializer that handles Python sets"""

    def default(self, data):
        # Serialize sets as lists; defer everything else to the base class
        if isinstance(data, set):
            return list(data)
        return super().default(data)

es = Elasticsearch(
    ...,
    # Register the serializer for the JSON mimetype
    serializers={"application/json": JsonSetSerializer()}
)
------------------------------------

For all of the built-in HTTP node implementations like `urllib3`, `requests`, and `aiohttp` you can pass the implementation name as a string to the `node_class` parameter:

[source,python]
------------------------------------
from elasticsearch_serverless import Elasticsearch

es = Elasticsearch(
    ...,
    node_class="requests"  # or "urllib3" / "aiohttp"
)
------------------------------------

You can also specify a custom node implementation via the `node_class` parameter:

[source,python]
------------------------------------
from elasticsearch_serverless import Elasticsearch
from elastic_transport import Urllib3HttpNode

class CustomHttpNode(Urllib3HttpNode):
    ...

es = Elasticsearch(
    ...,
    node_class=CustomHttpNode
)
------------------------------------

[discrete]
==== HTTP connections

The client maintains a pool of HTTP connections to the Elasticsearch Serverless project to allow for concurrent requests. The pool size is configurable via the `connections` parameter:

[source,python]
------------------------------------
es = Elasticsearch(
    ...,
    connections=5
)
------------------------------------