
Commit 3424b9a

Merge branch 'master' into grippy/issue-2224
2 parents 66e7b16 + 63cf7ec commit 3424b9a

File tree

17 files changed: +571 −34 lines changed


CHANGES

Lines changed: 4 additions & 1 deletion

@@ -1,4 +1,5 @@
+* Compare commands case-insensitively in the asyncio command parser
 * Allow negative `retries` for `Retry` class to retry forever
 * Add `items` parameter to `hset` signature
 * Create codeql-analysis.yml (#1988). Thanks @chayim
@@ -10,9 +11,11 @@
 * Fix broken connection writer lock-up for asyncio (#2065)
 * Fix auth bug when provided with no username (#2086)
 * Fix missing ClusterPipeline._lock (#2189)
+* Add dynamic_startup_nodes configuration to RedisCluster
+* Fix reusing the old nodes' connections when cluster topology refresh is being done
+* Fix RedisCluster to immediately raise AuthenticationError without a retry
 * ClusterPipeline Doesn't Handle ConnectionError for Dead Hosts (#2225)
-
 * 4.1.3 (Feb 8, 2022)
 * Fix flushdb and flushall (#1926)
 * Add redis5 and redis4 dockers (#1871)
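The new `items` parameter noted in the changelog accepts a flat `[field1, value1, field2, value2, ...]` list. A minimal sketch of the pairing semantics it implies (`pairs_from_items` is a hypothetical helper for illustration, not redis-py code; no server required):

```python
def pairs_from_items(items):
    """Pair up a flat [field1, value1, field2, value2, ...] list, as
    accepted by hset(name, items=...). Sketch only, not redis-py's code."""
    if len(items) % 2 != 0:
        raise ValueError("items must contain an even number of elements")
    return list(zip(items[::2], items[1::2]))

# The flat list below pairs field "name" with "Ada" and "lang" with "python".
print(pairs_from_items(["name", "Ada", "lang", "python"]))

# Hypothetical usage against a live server:
# r.hset("user:1", items=["name", "Ada", "lang", "python"])
```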

README.md

Lines changed: 50 additions & 0 deletions

@@ -1006,6 +1006,7 @@ a slots cache which maps each of the 16384 slots to the node/s handling them,
 a nodes cache that contains ClusterNode objects (name, host, port, redis connection)
 for all of the cluster's nodes, and a commands cache contains all the server
 supported commands that were retrieved using the Redis 'COMMAND' output.
+See *RedisCluster specific options* below for more.
 
 RedisCluster instance can be directly used to execute Redis commands. When a
 command is being executed through the cluster instance, the target node(s) will
@@ -1245,6 +1246,55 @@ The following commands are not supported:
 
 Using scripting within pipelines in cluster mode is **not supported**.
 
+**RedisCluster specific options**
+
+require_full_coverage: (default=False)
+
+    When set to False (the default), the client does not require full
+    coverage of the slots. However, if not all slots are covered and at
+    least one node has 'cluster-require-full-coverage' set to 'yes', the
+    server will throw a ClusterDownError for some key-based commands. See
+    https://redis.io/topics/cluster-tutorial#redis-cluster-configuration-parameters
+    When set to True, all slots must be covered to construct the cluster
+    client; if not all slots are covered, a RedisClusterException will be
+    thrown.
+
+read_from_replicas: (default=False)
+
+    Enable reading from replicas in READONLY mode; you may read possibly
+    stale data. When set to True, read commands are distributed between
+    the primary and its replicas in a round-robin manner.
+
+dynamic_startup_nodes: (default=False)
+
+    Set the RedisCluster's startup nodes to all of the discovered nodes.
+    If True, the cluster's discovered nodes are used to determine the
+    cluster's nodes-slots mapping in the next topology refresh. This
+    removes the initially passed startup nodes if their endpoints are not
+    listed in the CLUSTER SLOTS output. If you use dynamic DNS endpoints
+    for startup nodes but CLUSTER SLOTS lists specific IP addresses, keep
+    it set to False.
+
+cluster_error_retry_attempts: (default=3)
+
+    The number of times to retry command execution when a ClusterDownError
+    or ConnectionError is encountered.
+
+reinitialize_steps: (default=10)
+
+    The number of MOVED errors that must occur before the whole cluster
+    topology is reinitialized. If a MOVED error occurs and the cluster
+    does not need to be reinitialized for that error, only the MOVED slot
+    is patched with the redirected node. To reinitialize the cluster on
+    every MOVED error, set reinitialize_steps to 1. To avoid
+    reinitializing the cluster on MOVED errors, set reinitialize_steps
+    to 0.
 
### Author
 
redis-py is developed and maintained by [Redis Inc](https://redis.com). It can be found [here](
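As a sketch, the options described above can be gathered in one place when constructing the client (this assumes a redis-py version that accepts these keyword arguments; the host and port are placeholders):

```python
# RedisCluster options gathered in one dict; values mirror the text above.
cluster_opts = {
    "require_full_coverage": False,     # don't require all 16384 slots to be covered
    "read_from_replicas": True,         # round-robin reads across primary and replicas
    "dynamic_startup_nodes": False,     # keep the originally passed startup nodes
    "cluster_error_retry_attempts": 3,  # retries on ClusterDownError / ConnectionError
    "reinitialize_steps": 10,           # MOVED errors before a full topology refresh
}

# Hypothetical usage (requires a running Redis Cluster):
# from redis.cluster import RedisCluster
# rc = RedisCluster(host="localhost", port=6379, **cluster_opts)
```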

docs/examples.rst

Lines changed: 2 additions & 1 deletion

@@ -10,4 +10,5 @@ Examples
    examples/asyncio_examples
    examples/search_json_examples
    examples/set_and_get_examples
-   examples/search_vector_similarity_examples
+   examples/search_vector_similarity_examples
+   examples/pipeline_examples

docs/examples/pipeline_examples.ipynb

Lines changed: 308 additions & 0 deletions
@@ -0,0 +1,308 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Pipeline examples\n",
    "\n",
    "This example gives a quick overview of how to use pipelines in `redis-py`."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Checking that Redis is running"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import redis\n",
    "\n",
    "r = redis.Redis(decode_responses=True)\n",
    "r.ping()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Simple example"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Creating a pipeline instance"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "pipe = r.pipeline()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Adding commands to the pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Pipeline<ConnectionPool<Connection<host=localhost,port=6379,db=0>>>"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pipe.set(\"a\", \"a value\")\n",
    "pipe.set(\"b\", \"b value\")\n",
    "\n",
    "pipe.get(\"a\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Executing the pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[True, True, 'a value']"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pipe.execute()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The responses of the three commands are stored in a list. In the example above, the first two booleans indicate that both `set` commands were successful, and the last element of the list is the result of the `get(\"a\")` command."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Chained call\n",
    "\n",
    "The same result as above can be obtained in one line of code by chaining the operations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[True, True, 'a value']"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pipe = r.pipeline()\n",
    "pipe.set(\"a\", \"a value\").set(\"b\", \"b value\").get(\"a\").execute()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Performance comparison\n",
    "\n",
    "Using pipelines can improve performance; for more information, see the [Redis documentation about pipelining](https://redis.io/docs/manual/pipelining/). Here is a simple performance comparison between basic and pipelined commands: we increment a value and measure the time taken by each method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "from datetime import datetime\n",
    "\n",
    "incr_value = 100000"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Without pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "r.set(\"incr_key\", \"0\")\n",
    "\n",
    "start = datetime.now()\n",
    "\n",
    "for _ in range(incr_value):\n",
    "    r.incr(\"incr_key\")\n",
    "res_without_pipeline = r.get(\"incr_key\")\n",
    "\n",
    "time_without_pipeline = (datetime.now() - start).total_seconds()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Without pipeline\n",
      "================\n",
      "Time taken: 21.759733\n",
      "Increment value: 100000\n"
     ]
    }
   ],
   "source": [
    "print(\"Without pipeline\")\n",
    "print(\"================\")\n",
    "print(\"Time taken: \", time_without_pipeline)\n",
    "print(\"Increment value: \", res_without_pipeline)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### With pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "r.set(\"incr_key\", \"0\")\n",
    "\n",
    "start = datetime.now()\n",
    "\n",
    "pipe = r.pipeline()\n",
    "for _ in range(incr_value):\n",
    "    pipe.incr(\"incr_key\")\n",
    "pipe.get(\"incr_key\")\n",
    "res_with_pipeline = pipe.execute()[-1]\n",
    "\n",
    "time_with_pipeline = (datetime.now() - start).total_seconds()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "With pipeline\n",
      "=============\n",
      "Time taken: 2.357863\n",
      "Increment value: 100000\n"
     ]
    }
   ],
   "source": [
    "print(\"With pipeline\")\n",
    "print(\"=============\")\n",
    "print(\"Time taken: \", time_with_pipeline)\n",
    "print(\"Increment value: \", res_with_pipeline)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Using pipelines provides the same result in much less time."
   ]
  }
 ],
 "metadata": {
  "interpreter": {
   "hash": "84048e2f8e89effc8610b2fb270e4858ef00e9403d223856d62b05266db287ca"
  },
  "kernelspec": {
   "display_name": "Python 3.9.2 ('.venv': venv)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.2"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
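The speedup measured in the notebook comes from collapsing one network round trip per command into a single round trip for the whole batch. A toy timing model (illustrative assumptions only: 1 ms round-trip time, 10 µs of per-command server work; no Redis required) shows the same shape:

```python
# Toy model: without a pipeline every command pays a full round trip;
# with a pipeline the whole batch shares one round trip.
RTT = 0.001          # assumed network round-trip time, seconds
SERVER_OP = 0.00001  # assumed per-command server cost, seconds
n = 100_000          # number of INCR commands, as in the notebook

time_without_pipeline = n * (RTT + SERVER_OP)  # one round trip per command
time_with_pipeline = RTT + n * SERVER_OP       # one round trip for the batch

print(time_without_pipeline / time_with_pipeline)  # roughly 100x under these assumptions
```

Real speedups are smaller (the notebook measures about 9x) because large pipelines are split across several writes and the server buffers replies, but the round-trip saving dominates either way.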
