Commit 93927ca

drm/i915: Revert shrinker changes from "Track unbound pages"
This partially reverts

    commit 6c085a7
    Author: Chris Wilson <[email protected]>
    Date:   Mon Aug 20 11:40:46 2012 +0200

        drm/i915: Track unbound pages

Closer inspection of that patch revealed a bunch of unrelated changes in the shrinker:

- The shrinker count is now in pages instead of objects.

- For counting the shrinkable objects the old code only looked at the inactive list; the new code looks at all bound objects (including pinned ones), in addition to the new unbound list.

- The shrinker count is no longer scaled with sysctl_vfs_cache_pressure. Note though that with the default tuning value of vfs_cache_pressure = 100 this doesn't affect the shrinker behaviour.

- When actually shrinking objects, the old code first dropped purgeable objects, then normal (inactive) objects. Only then did it, in a last-ditch effort, idle the gpu and evict everything. The new code omits the intermediate step of evicting normal inactive objects.

Save for the first change, which seems benign, and the shrinker count scaling, which is a bit of a different story, the end result of all these changes is that the shrinker is _much_ more likely to fall back to the last-ditch resort of idling the gpu and evicting everything. The old code could only do that if something else evicted lots of objects meanwhile (since without any other changes nr_to_scan will be smaller than the object count).

Reverting the vfs_cache_pressure behaviour itself would be a bit bogus: only dentry/inode object caches should scale their shrinker counts with vfs_cache_pressure. Originally I had that change reverted, too, but Chris Wilson insisted that it's too bogus and shouldn't again see the light of day.

Hence revert all these other changes and restore the old shrinker behaviour, with the minor adjustment that we now first scan the unbound list, then the inactive list for each object category (purgeable or normal).
A similar patch has been tested by a few people affected by the gen4/5 hangs which started to appear in 3.7, which some people bisected to the "drm/i915: Track unbound pages" commit. But just disabling the unbound logic alone didn't change things at all.

Note that this patch doesn't fix the referenced bugs, it only hides the underlying bug(s) well enough to restore pre-3.7 behaviour. The key to achieving that is to massively reduce the likelihood of going into a full gpu stall and evicting everything.

v2: Reword commit message a bit, taking Chris Wilson's comment into account.

v3: On Chris Wilson's insistence, do not reinstate the rather bogus vfs_cache_pressure change.

Tested-by: Greg KH <[email protected]>
Tested-by: Dave Kleikamp <[email protected]>
References: https://bugs.freedesktop.org/show_bug.cgi?id=55984
References: https://bugs.freedesktop.org/show_bug.cgi?id=57122
References: https://bugs.freedesktop.org/show_bug.cgi?id=56916
References: https://bugs.freedesktop.org/show_bug.cgi?id=57136
Cc: Chris Wilson <[email protected]>
Cc: [email protected]
Acked-by: Chris Wilson <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
1 parent ca320ac commit 93927ca

1 file changed
drivers/gpu/drm/i915/i915_gem.c

Lines changed: 14 additions & 4 deletions
@@ -1717,15 +1717,16 @@ i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
 }
 
 static long
-i915_gem_purge(struct drm_i915_private *dev_priv, long target)
+__i915_gem_shrink(struct drm_i915_private *dev_priv, long target,
+		  bool purgeable_only)
 {
 	struct drm_i915_gem_object *obj, *next;
 	long count = 0;
 
 	list_for_each_entry_safe(obj, next,
 				 &dev_priv->mm.unbound_list,
 				 gtt_list) {
-		if (i915_gem_object_is_purgeable(obj) &&
+		if ((i915_gem_object_is_purgeable(obj) || !purgeable_only) &&
 		    i915_gem_object_put_pages(obj) == 0) {
 			count += obj->base.size >> PAGE_SHIFT;
 			if (count >= target)
@@ -1736,7 +1737,7 @@ i915_gem_purge(struct drm_i915_private *dev_priv, long target)
 	list_for_each_entry_safe(obj, next,
 				 &dev_priv->mm.inactive_list,
 				 mm_list) {
-		if (i915_gem_object_is_purgeable(obj) &&
+		if ((i915_gem_object_is_purgeable(obj) || !purgeable_only) &&
 		    i915_gem_object_unbind(obj) == 0 &&
 		    i915_gem_object_put_pages(obj) == 0) {
 			count += obj->base.size >> PAGE_SHIFT;
@@ -1748,6 +1749,12 @@ i915_gem_purge(struct drm_i915_private *dev_priv, long target)
 	return count;
 }
 
+static long
+i915_gem_purge(struct drm_i915_private *dev_priv, long target)
+{
+	return __i915_gem_shrink(dev_priv, target, true);
+}
+
 static void
 i915_gem_shrink_all(struct drm_i915_private *dev_priv)
 {
@@ -4395,6 +4402,9 @@ i915_gem_inactive_shrink(struct shrinker *shrinker, struct shrink_control *sc)
 
 	if (nr_to_scan) {
 		nr_to_scan -= i915_gem_purge(dev_priv, nr_to_scan);
+		if (nr_to_scan > 0)
+			nr_to_scan -= __i915_gem_shrink(dev_priv, nr_to_scan,
+							false);
 		if (nr_to_scan > 0)
 			i915_gem_shrink_all(dev_priv);
 	}
@@ -4403,7 +4413,7 @@ i915_gem_inactive_shrink(struct shrinker *shrinker, struct shrink_control *sc)
 	list_for_each_entry(obj, &dev_priv->mm.unbound_list, gtt_list)
 		if (obj->pages_pin_count == 0)
 			cnt += obj->base.size >> PAGE_SHIFT;
-	list_for_each_entry(obj, &dev_priv->mm.bound_list, gtt_list)
+	list_for_each_entry(obj, &dev_priv->mm.inactive_list, gtt_list)
 		if (obj->pin_count == 0 && obj->pages_pin_count == 0)
 			cnt += obj->base.size >> PAGE_SHIFT;
 