BlockingConnectionPool simplification: use counter instead of connection list #2518

Closed
Fogapod wants to merge 4 commits into master from redis-pool

Conversation

@Fogapod commented Dec 18, 2022

Pull Request check-list

Please make sure to review and check all of these items:

  • Does $ tox pass with this change (including linting)?
    I do not have Docker on my machine and I couldn't find a way to make it work with Podman. I hope the repo CI can run the tests and catch bugs.
    There are a lot of linting errors/warnings about docstring formatting in cluster.py, a file I haven't touched.
  • Do the CI tests pass with this change (enable it first in your forked repo and wait for the github action build to finish)?
    The action doesn't start for my branch; I assume it only runs for PRs.
  • Is the new or changed code fully tested?
  • Is a documentation update included (if this change modifies existing APIs, or introduces new ones)?
  • Is there an example added to the examples folder (if applicable)?
  • Was the change added to CHANGES file?

NOTE: these things are not required to open a PR and can be done
afterwards / while the PR is open.

Description of change

The existing code juggled a queue pre-filled with Nones and kept a separate list of connections, which looked convoluted, so I simplified it to a plain counter that tracks how many connections have been allocated.
I also added __slots__ to the pool classes.
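
To illustrate the idea, here is a minimal sketch (a hypothetical CounterPool, not the actual redis-py code): connections are created only while the counter is below max_connections, and a LifoQueue holds released connections instead of a queue pre-filled with Nones.

import asyncio


class CounterPool:
    """Minimal sketch of the counter-based approach; names are hypothetical."""

    __slots__ = ("max_connections", "_created", "_idle")

    def __init__(self, max_connections: int = 50) -> None:
        self.max_connections = max_connections
        self._created = 0  # how many connections have been allocated so far
        self._idle: asyncio.LifoQueue = asyncio.LifoQueue()  # released connections

    async def get_connection(self) -> object:
        # Reuse a released connection if one is waiting.
        if not self._idle.empty():
            return self._idle.get_nowait()
        # Below the cap: allocate a new connection instead of parking Nones.
        if self._created < self.max_connections:
            self._created += 1
            return object()  # stand-in for a real Connection
        # At the cap: block until another caller releases a connection.
        return await self._idle.get()

    def release(self, connection: object) -> None:
        self._idle.put_nowait(connection)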

I tested my change with the following code using different max_connections values:

import asyncio

from redis import asyncio as redis


async def main() -> None:
    # Cap the pool below the number of concurrent commands so callers
    # must block and wait for a free connection.
    pool = redis.BlockingConnectionPool(max_connections=50)
    async with redis.Redis(connection_pool=pool) as rd:
        # 200k concurrent SETs exercise the acquire/release path heavily.
        await asyncio.gather(*[rd.set(str(i), i) for i in range(200_000)])


asyncio.run(main())

It also performs a few percent better than the existing solution, probably because of __slots__.
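
A quick way to sanity-check the __slots__ effect (a rough sketch, not the benchmark used above) is to time attribute access on a slotted class versus a regular one:

import timeit


class WithDict:
    def __init__(self) -> None:
        self.x = 0


class WithSlots:
    __slots__ = ("x",)

    def __init__(self) -> None:
        self.x = 0


# __slots__ removes the per-instance __dict__, which typically makes
# attribute access slightly faster and instances smaller.
for cls in (WithDict, WithSlots):
    obj = cls()
    print(cls.__name__, timeit.timeit("obj.x", globals={"obj": obj}))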

@codecov-commenter commented Dec 18, 2022

Codecov Report

Base: 92.28% // Head: 92.27% // Decreases project coverage by 0.01% ⚠️

Coverage data is based on head (c5e6af4) compared to base (428d609).
Patch coverage: 100.00% of modified lines in pull request are covered.


Additional details and impacted files
@@            Coverage Diff             @@
##           master    #2518      +/-   ##
==========================================
- Coverage   92.28%   92.27%   -0.01%     
==========================================
  Files         115      115              
  Lines       29660    29660              
==========================================
- Hits        27371    27369       -2     
- Misses       2289     2291       +2     
Impacted Files Coverage Δ
redis/asyncio/connection.py 87.62% <100.00%> (+0.12%) ⬆️
tests/test_asyncio/test_connection.py 97.39% <100.00%> (ø)
tests/test_asyncio/test_search.py 98.29% <0.00%> (-0.35%) ⬇️
tests/test_asyncio/test_pubsub.py 99.37% <0.00%> (-0.16%) ⬇️
tests/test_cluster.py 96.86% <0.00%> (ø)


@dvora-h added the maintenance label Dec 19, 2022
@@ -1646,7 +1652,10 @@ async def disconnect(self, inuse_connections: bool = True):
self._checkpid()
async with self._lock:
resp = await asyncio.gather(
*(connection.disconnect() for connection in self._connections),
Contributor

This disconnects only those connections in the pool, not all connections as before.
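
A version that disconnects checked-out connections as well might look roughly like this (a sketch with stand-in classes; _in_use_connections is a hypothetical set of checked-out connections):

import asyncio
from typing import List, Set


class Conn:
    async def disconnect(self) -> None:
        pass  # stand-in for a real Connection.disconnect()


class PoolSketch:
    def __init__(self) -> None:
        self._connections: List[Conn] = []  # idle, parked in the pool
        self._in_use_connections: Set[Conn] = set()  # checked out by callers

    async def disconnect(self, inuse_connections: bool = True) -> None:
        conns = list(self._connections)
        if inuse_connections:
            # Without this, connections held by callers stay connected.
            conns.extend(self._in_use_connections)
        await asyncio.gather(*(c.disconnect() for c in conns))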

def __init__(
self,
max_connections: int = 50,
timeout: Optional[int] = 20,
connection_class: Type[Connection] = Connection,
queue_class: Type[asyncio.Queue] = asyncio.LifoQueue,
Contributor

The LifoQueue ensures that hot connections are the ones re-used first, which is probably beneficial.

Author

There was no difference in my simple tests; maybe I did something wrong, but the LifoQueue had zero impact.

Contributor

Well, I'm not sure there will be a difference, or that this was the original intent of the LifoQueue. But LIFO is often used in caching to keep hot information around. There is a degenerate case that can happen with FIFO: if you always pull the oldest connection, you can end up with all of the connections having timed out or been disconnected due to idleness. But maybe that doesn't matter at all; maybe the only intention of the LifoQueue was to keep those None objects at the end.
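
A tiny demonstration of the reuse order in question (strings stand in for connections):

import asyncio


async def main() -> None:
    q: asyncio.LifoQueue = asyncio.LifoQueue()
    for conn in ("oldest", "middle", "newest"):
        q.put_nowait(conn)
    # LIFO hands back the most recently released ("hottest") item first;
    # FIFO would return "oldest", the one most likely to have idled out.
    assert await q.get() == "newest"


asyncio.run(main())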

@Fogapod commented Feb 2, 2023

I added an _in_use_connections set, similar to the non-blocking pool, so that in-use connections are not leaked. I am unsure what it means for a pool to "own" a connection, so I assumed it comes down to whether a different pool created it. This is why I call self._in_use_connections.remove(connection) only if the pool owns the connection.
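
Roughly what I mean, as methods on the hypothetical pool sketched earlier (_acquire and owns_connection are stand-ins, not necessarily the real API):

    async def get_connection(self) -> object:
        connection = await self._acquire()  # counter/queue logic as before
        self._in_use_connections.add(connection)  # remember it is checked out
        return connection

    async def release(self, connection: object) -> None:
        # Only untrack connections this pool owns; a foreign connection was
        # never added to the set, so .remove() would raise KeyError.
        if self.owns_connection(connection):
            self._in_use_connections.remove(connection)
            self._idle.put_nowait(connection)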

@Fogapod commented Feb 2, 2023

A test is failing because of __slots__; I'm not sure how to fix the mock yet: AttributeError: 'ConnectionPool' object attribute 'get_connection' is read-only
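
For context, with __slots__ an instance can no longer shadow a class attribute, which is what mock relies on when patching an instance; patching the class attribute instead avoids the error (a sketch with a toy class, not the actual test):

from unittest import mock


class Pool:
    __slots__ = ()

    def get_connection(self) -> str:
        return "real"


pool = Pool()
# pool.get_connection = mock.Mock()  # AttributeError: ... is read-only
with mock.patch.object(Pool, "get_connection", return_value="fake"):
    assert pool.get_connection() == "fake"  # patch the class, not the instance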

@kristjanvalur
Contributor

Btw, this is probably obsolete now that #2911 has been merged.

@Fogapod commented Sep 14, 2023

It is

@Fogapod Fogapod closed this Sep 14, 2023
@Fogapod Fogapod deleted the redis-pool branch September 14, 2023 13:09