Commit 07d6063
io_uring/kbuf: prune deferred locked cache when tearing down
We used to just use our page list for final teardown, which would ensure that we got all the buffers, even the ones that were not on the normal cached list. But in moving to slab for the io_buffers, we now only prune this list, not the deferred locked list that we have. This can leak memory if the workload ends up using the intermediate locked list.

Fix this by always pruning both lists when tearing down.

Fixes: b3a4dbc ("io_uring/kbuf: Use slab for struct io_buffer objects")
Signed-off-by: Jens Axboe <[email protected]>
1 parent b10b73c commit 07d6063

File tree

1 file changed: +8 -0 lines changed
io_uring/kbuf.c

Lines changed: 8 additions & 0 deletions
@@ -306,6 +306,14 @@ void io_destroy_buffers(struct io_ring_ctx *ctx)
 		kfree(bl);
 	}
 
+	/*
+	 * Move deferred locked entries to cache before pruning
+	 */
+	spin_lock(&ctx->completion_lock);
+	if (!list_empty(&ctx->io_buffers_comp))
+		list_splice_init(&ctx->io_buffers_comp, &ctx->io_buffers_cache);
+	spin_unlock(&ctx->completion_lock);
+
 	list_for_each_safe(item, tmp, &ctx->io_buffers_cache) {
 		buf = list_entry(item, struct io_buffer, list);
 		kmem_cache_free(io_buf_cachep, buf);
