Commit 371d795

rmurphy-arm authored and joergroedel committed
iommu/iova: Improve restart logic
When restarting after searching below the cached node fails, resetting the
start point to the anchor node is often overly pessimistic. If allocations
are made with mixed limits - particularly in the case of the opportunistic
32-bit allocation for PCI devices - this could mean significant time wasted
walking through the whole populated upper range just to reach the initial
limit. We can improve on that by implementing a proper tree traversal to
find the first node above the relevant limit, and set the exact start point.

Signed-off-by: Robin Murphy <[email protected]>
Link: https://lore.kernel.org/r/076b3484d1e5057b95d8c387c894bd6ad2514043.1614962123.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <[email protected]>
1 parent 7ae31ce commit 371d795

File tree

1 file changed: +38 -1 lines changed


drivers/iommu/iova.c

Lines changed: 38 additions & 1 deletion
@@ -154,6 +154,43 @@ __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
 		iovad->cached_node = rb_next(&free->node);
 }
 
+static struct rb_node *iova_find_limit(struct iova_domain *iovad, unsigned long limit_pfn)
+{
+	struct rb_node *node, *next;
+	/*
+	 * Ideally what we'd like to judge here is whether limit_pfn is close
+	 * enough to the highest-allocated IOVA that starting the allocation
+	 * walk from the anchor node will be quicker than this initial work to
+	 * find an exact starting point (especially if that ends up being the
+	 * anchor node anyway). This is an incredibly crude approximation which
+	 * only really helps the most likely case, but is at least trivially easy.
+	 */
+	if (limit_pfn > iovad->dma_32bit_pfn)
+		return &iovad->anchor.node;
+
+	node = iovad->rbroot.rb_node;
+	while (to_iova(node)->pfn_hi < limit_pfn)
+		node = node->rb_right;
+
+search_left:
+	while (node->rb_left && to_iova(node->rb_left)->pfn_lo >= limit_pfn)
+		node = node->rb_left;
+
+	if (!node->rb_left)
+		return node;
+
+	next = node->rb_left;
+	while (next->rb_right) {
+		next = next->rb_right;
+		if (to_iova(next)->pfn_lo >= limit_pfn) {
+			node = next;
+			goto search_left;
+		}
+	}
+
+	return node;
+}
+
 /* Insert the iova into domain rbtree by holding writer lock */
 static void
 iova_insert_rbtree(struct rb_root *root, struct iova *iova,
@@ -219,7 +256,7 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 	if (low_pfn == iovad->start_pfn && retry_pfn < limit_pfn) {
 		high_pfn = limit_pfn;
 		low_pfn = retry_pfn;
-		curr = &iovad->anchor.node;
+		curr = iova_find_limit(iovad, limit_pfn);
 		curr_iova = to_iova(curr);
 		goto retry;
 	}
