When an endpoint ring is freed, it is either cached in a per-device ring
cache, or simply freed if the ring cache is full. If the ring was added
to the cache, then virt_dev->num_rings_cached is incremented. The cache
is designed to hold up to 31 endpoint rings, in array indexes 0 to 30.
When the device is freed (when the slot is disabled),
xhci_free_virt_device() is called; it would free the cached rings in
array indexes 0 to virt_dev->num_rings_cached - 1.
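For reference, a minimal sketch of that free path (simplified from the
xhci_free_virt_device() behavior described above; the exact code in
xhci-mem.c may differ slightly):

	int i;

	/* Return every cached endpoint ring before the device is dropped. */
	if (virt_dev->ring_cache) {
		for (i = 0; i < virt_dev->num_rings_cached; i++)
			xhci_ring_free(xhci, virt_dev->ring_cache[i]);
		kfree(virt_dev->ring_cache);
	}

The loop condition is the key detail: only array indexes 0 to
virt_dev->num_rings_cached - 1 are visited, so a ring stored above that
range is never freed.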
Unfortunately, the original code in xhci_free_or_cache_endpoint_ring()
would put the first entry into the ring cache in array index 1, instead of
array index 0. This was caused by the second assignment to rings_cached:
	rings_cached = virt_dev->num_rings_cached;
	if (rings_cached < XHCI_MAX_RINGS_CACHED) {
		virt_dev->num_rings_cached++;
		rings_cached = virt_dev->num_rings_cached;
		virt_dev->ring_cache[rings_cached] =
			virt_dev->eps[ep_index].ring;
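A minimal sketch of the corrected logic (assuming the fix is simply to
store the ring at the current count before incrementing it, so the first
cached ring lands in array index 0 and the cache fills indexes 0 to 30):

	rings_cached = virt_dev->num_rings_cached;
	if (rings_cached < XHCI_MAX_RINGS_CACHED) {
		/* Store at the current count, then increment it. */
		virt_dev->ring_cache[rings_cached] =
			virt_dev->eps[ep_index].ring;
		virt_dev->num_rings_cached++;

With this ordering, num_rings_cached always equals the number of
occupied slots, so the free loop in xhci_free_virt_device() releases
every cached ring.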
This meant that when the device was freed, cached rings with indexes 0 to
N would be freed, and the last cached ring in index N+1 would not be
freed. When the driver was unloaded, this caused interesting messages
like:
xhci_hcd 0000:06:00.0: dma_pool_destroy xHCI ring segments, ffff880063040000 busy
This should be queued to stable kernels back to 2.6.33.
Signed-off-by: Sarah Sharp <[email protected]>
Cc: [email protected]