This repository was archived by the owner on Nov 8, 2023. It is now read-only.

Commit f4ce3b5

axboe authored and gregkh committed
io_uring: check if we need to reschedule during overflow flush
[ Upstream commit eac2ca2 ]

In terms of normal application usage, this list will always be empty. And if an application does overflow a bit, it'll have a few entries. However, nothing obviously prevents syzbot from running a test case that generates a ton of overflow entries, and then flushing them can take quite a while.

Check for needing to reschedule while flushing, and drop our locks and do so if necessary. There's no state to maintain here as overflows always prune from head-of-list, hence it's fine to drop and reacquire the locks at the end of the loop.

Link: https://lore.kernel.org/io-uring/[email protected]/
Reported-by: [email protected]
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
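For context, CQ overflow is straightforward to reproduce from userspace. The following is a minimal, hypothetical liburing sketch (not part of this commit) that submits far more no-op requests than the completion ring can hold without reaping any of them, pushing CQEs onto the kernel-side overflow list that the flush path below prunes. It assumes liburing is installed and a kernel with IORING_FEAT_NODROP (5.5+), where overflowed CQEs are queued rather than dropped; error handling is trimmed for brevity.

/* sketch.c -- build with: gcc sketch.c -luring */
#include <liburing.h>
#include <stdio.h>

int main(void)
{
	struct io_uring ring;
	int ret, i;

	/* Tiny ring: 4 SQ entries; the CQ ring defaults to twice that. */
	ret = io_uring_queue_init(4, &ring, 0);
	if (ret < 0) {
		fprintf(stderr, "queue_init: %d\n", ret);
		return 1;
	}

	/*
	 * Submit many no-ops without reaping completions. Once the CQ
	 * ring fills, further completions overflow; depending on kernel
	 * version, submit may then fail with -EBUSY until the CQ is
	 * drained, so we stop on any error.
	 */
	for (i = 0; i < 64; i++) {
		struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

		if (!sqe)
			break;
		io_uring_prep_nop(sqe);
		ret = io_uring_submit(&ring);
		if (ret < 0)
			break;
	}

	io_uring_queue_exit(&ring);
	return 0;
}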
1 parent 3088483 commit f4ce3b5

1 file changed, +15 -0 lines changed

io_uring/io_uring.c

Lines changed: 15 additions & 0 deletions
@@ -701,6 +701,21 @@ static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx)
 		memcpy(cqe, &ocqe->cqe, cqe_size);
 		list_del(&ocqe->list);
 		kfree(ocqe);
+
+		/*
+		 * For silly syzbot cases that deliberately overflow by huge
+		 * amounts, check if we need to resched and drop and
+		 * reacquire the locks if so. Nothing real would ever hit this.
+		 * Ideally we'd have a non-posting unlock for this, but hard
+		 * to care for a non-real case.
+		 */
+		if (need_resched()) {
+			io_cq_unlock_post(ctx);
+			mutex_unlock(&ctx->uring_lock);
+			cond_resched();
+			mutex_lock(&ctx->uring_lock);
+			io_cq_lock(ctx);
+		}
 	}
 
 	if (list_empty(&ctx->cq_overflow_list)) {
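The safety argument in the commit message (overflows always prune from head-of-list, so no iterator state survives the unlock) generalizes beyond the kernel. Below is a minimal userspace analogue of the same pattern, with hypothetical names, a pthread mutex standing in for the ctx locks, and sched_yield() standing in for cond_resched(); it illustrates the technique and is not the kernel code itself.

#include <pthread.h>
#include <sched.h>
#include <stdlib.h>

struct node {
	struct node *next;
};

static struct node *head;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Prune the whole list, periodically yielding so other threads run. */
static void prune_all(void)
{
	int pruned = 0;

	pthread_mutex_lock(&list_lock);
	while (head) {
		struct node *n = head;

		head = n->next;		/* always prune from the head */
		free(n);

		/*
		 * No iterator state survives the unlock: each pass
		 * restarts from the head, so dropping the lock
		 * mid-loop is safe.
		 */
		if (++pruned % 64 == 0) {
			pthread_mutex_unlock(&list_lock);
			sched_yield();	/* stand-in for cond_resched() */
			pthread_mutex_lock(&list_lock);
		}
	}
	pthread_mutex_unlock(&list_lock);
}

int main(void)
{
	/* Build a long list, then prune it. */
	for (int i = 0; i < 1000; i++) {
		struct node *n = malloc(sizeof(*n));

		if (!n)
			break;
		n->next = head;
		head = n;
	}
	prune_all();
	return 0;
}

Restarting from the head after every unlock is what makes the mid-loop drop safe; an iterator held across the unlock could point at a node another thread had already removed.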
