Commit f676958

shihweili authored and Christoffer Dall committed
KVM: arm/arm64: vgic: Avoid flushing vgic state when there's no pending IRQ
We do not need to flush vgic state on each world switch unless there is a pending IRQ queued to the vgic's ap list. We can thus reduce the overhead by not grabbing the spinlock and not making the extra function call to vgic_flush_lr_state.

Note: list_empty is a single atomic read (it uses READ_ONCE) and can therefore check whether a list is empty or not without needing to take the spinlock protecting the list.

Reviewed-by: Marc Zyngier <[email protected]>
Signed-off-by: Shih-Wei Li <[email protected]>
Signed-off-by: Christoffer Dall <[email protected]>
1 parent 328e566 commit f676958

File tree

1 file changed (+17, −0 lines)


virt/kvm/arm/vgic/vgic.c

@@ -637,12 +637,17 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
 /* Sync back the hardware VGIC state into our emulation after a guest's run. */
 void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 {
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+
 	if (unlikely(!vgic_initialized(vcpu->kvm)))
 		return;
 
 	vgic_process_maintenance_interrupt(vcpu);
 	vgic_fold_lr_state(vcpu);
 	vgic_prune_ap_list(vcpu);
+
+	/* Make sure we can fast-path in flush_hwstate */
+	vgic_cpu->used_lrs = 0;
 }
 
 /* Flush our emulation state into the GIC hardware before entering the guest. */
@@ -651,6 +656,18 @@ void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
 	if (unlikely(!vgic_initialized(vcpu->kvm)))
 		return;
 
+	/*
+	 * If there are no virtual interrupts active or pending for this
+	 * VCPU, then there is no work to do and we can bail out without
+	 * taking any lock.  There is a potential race with someone injecting
+	 * interrupts to the VCPU, but it is a benign race as the VCPU will
+	 * either observe the new interrupt before or after doing this check,
+	 * and introducing additional synchronization mechanism doesn't change
+	 * this.
+	 */
+	if (list_empty(&vcpu->arch.vgic_cpu.ap_list_head))
+		return;
+
 	spin_lock(&vcpu->arch.vgic_cpu.ap_list_lock);
 	vgic_flush_lr_state(vcpu);
 	spin_unlock(&vcpu->arch.vgic_cpu.ap_list_lock);
