
Commit e4a454c

Maxim Levitsky authored and bonzini committed
KVM: add kvm_lock_all_vcpus and kvm_trylock_all_vcpus
In a few cases, usually in initialization code, KVM locks all vCPUs of a VM to ensure that userspace doesn't do funny things while KVM performs an operation that affects the whole VM.

Until now, all these operations were implemented using custom code, and all of them share the same problem: lockdep can't cope with the simultaneous locking of a large number of locks of the same class.

However, these locks are taken while another lock is already held, which is luckily the case, so it is possible to take advantage of the little-known _nest_lock feature of lockdep, which in this situation allows an unlimited number of locks of the same class to be taken.

To implement this, create two functions: kvm_lock_all_vcpus() and kvm_trylock_all_vcpus(). Both functions are needed because some of the code that will be replaced in subsequent patches uses mutex_trylock instead of regular mutex_lock.

Suggested-by: Paolo Bonzini <[email protected]>
Signed-off-by: Maxim Levitsky <[email protected]>
Acked-by: Marc Zyngier <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Message-ID: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
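The locking pattern the commit message describes can be made concrete with a short sketch of the intended calling convention. The caller kvm_do_whole_vm_op() below is hypothetical (not a real KVM function); kvm->lock, kvm_lock_all_vcpus(), and kvm_unlock_all_vcpus() are the ones introduced or used by this commit:

```c
/*
 * Hypothetical caller sketch: kvm_do_whole_vm_op() is illustrative only.
 * The per-vCPU mutexes are taken while kvm->lock is already held, which
 * is what lets lockdep track them through the _nest_lock annotation
 * instead of running out of held-lock slots.
 */
static int kvm_do_whole_vm_op(struct kvm *kvm)
{
	int r;

	mutex_lock(&kvm->lock);
	r = kvm_lock_all_vcpus(kvm);	/* each vcpu->mutex, nested under kvm->lock */
	if (r)
		goto out;

	/* ... operation that must not race with any running vCPU ... */

	kvm_unlock_all_vcpus(kvm);
out:
	mutex_unlock(&kvm->lock);
	return r;
}
```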
1 parent fb49f07 commit e4a454c

File tree: 2 files changed, +63 −0 lines changed

include/linux/kvm_host.h

Lines changed: 4 additions & 0 deletions

```diff
@@ -1015,6 +1015,10 @@ static inline struct kvm_vcpu *kvm_get_vcpu_by_id(struct kvm *kvm, int id)
 
 void kvm_destroy_vcpus(struct kvm *kvm);
 
+int kvm_trylock_all_vcpus(struct kvm *kvm);
+int kvm_lock_all_vcpus(struct kvm *kvm);
+void kvm_unlock_all_vcpus(struct kvm *kvm);
+
 void vcpu_load(struct kvm_vcpu *vcpu);
 void vcpu_put(struct kvm_vcpu *vcpu);
```

virt/kvm/kvm_main.c

Lines changed: 59 additions & 0 deletions

```diff
@@ -1368,6 +1368,65 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
 	return 0;
 }
 
+int kvm_trylock_all_vcpus(struct kvm *kvm)
+{
+	struct kvm_vcpu *vcpu;
+	unsigned long i, j;
+
+	lockdep_assert_held(&kvm->lock);
+
+	kvm_for_each_vcpu(i, vcpu, kvm)
+		if (!mutex_trylock_nest_lock(&vcpu->mutex, &kvm->lock))
+			goto out_unlock;
+	return 0;
+
+out_unlock:
+	kvm_for_each_vcpu(j, vcpu, kvm) {
+		if (i == j)
+			break;
+		mutex_unlock(&vcpu->mutex);
+	}
+	return -EINTR;
+}
+EXPORT_SYMBOL_GPL(kvm_trylock_all_vcpus);
+
+int kvm_lock_all_vcpus(struct kvm *kvm)
+{
+	struct kvm_vcpu *vcpu;
+	unsigned long i, j;
+	int r;
+
+	lockdep_assert_held(&kvm->lock);
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		r = mutex_lock_killable_nest_lock(&vcpu->mutex, &kvm->lock);
+		if (r)
+			goto out_unlock;
+	}
+	return 0;
+
+out_unlock:
+	kvm_for_each_vcpu(j, vcpu, kvm) {
+		if (i == j)
+			break;
+		mutex_unlock(&vcpu->mutex);
+	}
+	return r;
+}
+EXPORT_SYMBOL_GPL(kvm_lock_all_vcpus);
+
+void kvm_unlock_all_vcpus(struct kvm *kvm)
+{
+	struct kvm_vcpu *vcpu;
+	unsigned long i;
+
+	lockdep_assert_held(&kvm->lock);
+
+	kvm_for_each_vcpu(i, vcpu, kvm)
+		mutex_unlock(&vcpu->mutex);
+}
+EXPORT_SYMBOL_GPL(kvm_unlock_all_vcpus);
+
 /*
  * Allocation size is twice as large as the actual dirty bitmap size.
  * See kvm_vm_ioctl_get_dirty_log() why this is needed.
```