KVM: Harden against unpaired kvm_mmu_notifier_invalidate_range_end() calls

When handling the end of an mmu_notifier invalidation, WARN if
mn_active_invalidate_count is already 0 and do not decrement it further,
i.e. avoid causing mn_active_invalidate_count to underflow/wrap.  In the
worst case scenario, effectively corrupting mn_active_invalidate_count
could cause kvm_swap_active_memslots() to hang indefinitely.

end() calls are *supposed* to be paired with start(), i.e. underflow can
only happen if there is a bug elsewhere in the kernel, but due to lack of
lockdep assertions in the mmu_notifier helpers, it's all too easy for a
bug to go unnoticed for some time, e.g. see the recently introduced
PAGEMAP_SCAN ioctl().

Ideally, mmu_notifiers would incorporate lockdep assertions, but users of
mmu_notifiers aren't required to hold any one specific lock, i.e. adding
the necessary annotations to make lockdep aware of all locks that are
mutually exclusive with mm_take_all_locks() isn't trivial.

Link: https://lore.kernel.org/all/[email protected]
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Sean Christopherson <[email protected]>
sean-jc committed Jan 29, 2024
1 parent 41bccc9 commit d489ec9
Showing 1 changed file with 3 additions and 1 deletion.
virt/kvm/kvm_main.c

@@ -890,7 +890,9 @@ static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
 
 	/* Pairs with the increment in range_start(). */
 	spin_lock(&kvm->mn_invalidate_lock);
-	wake = (--kvm->mn_active_invalidate_count == 0);
+	if (!WARN_ON_ONCE(!kvm->mn_active_invalidate_count))
+		--kvm->mn_active_invalidate_count;
+	wake = !kvm->mn_active_invalidate_count;
 	spin_unlock(&kvm->mn_invalidate_lock);
 
 	/*
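For illustration only, below is a minimal userspace C sketch of the hardening
pattern this change applies: an unsigned counter whose decrement is guarded by
a warn-once check so that an unpaired end() cannot wrap it past zero. The
warn_on_once() macro, the pthread mutex, and the invalidate_start()/
invalidate_end() helpers are hypothetical stand-ins for the kernel's
WARN_ON_ONCE(), kvm->mn_invalidate_lock, and the mmu_notifier start()/end()
pairing; this is not KVM code. It assumes GCC/Clang statement expressions.

/* Sketch: harden a paired start()/end() counter against underflow. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Rough analogue of WARN_ON_ONCE(): print at most once, return the condition. */
#define warn_on_once(cond) ({                                   \
	static bool __warned;                                   \
	bool __c = (cond);                                      \
	if (__c && !__warned) {                                 \
		__warned = true;                                \
		fprintf(stderr, "WARN: %s\n", #cond);           \
	}                                                       \
	__c;                                                    \
})

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long active_invalidate_count;

static void invalidate_start(void)
{
	pthread_mutex_lock(&lock);
	active_invalidate_count++;
	pthread_mutex_unlock(&lock);
}

static bool invalidate_end(void)
{
	bool wake;

	pthread_mutex_lock(&lock);
	/*
	 * Refuse to decrement past zero; an unpaired end() would otherwise
	 * wrap the unsigned counter and anything waiting for it to reach
	 * zero could hang indefinitely.
	 */
	if (!warn_on_once(!active_invalidate_count))
		--active_invalidate_count;
	wake = !active_invalidate_count;
	pthread_mutex_unlock(&lock);

	return wake;
}

int main(void)
{
	invalidate_start();
	printf("paired end():   wake=%d\n", invalidate_end()); /* wake=1 */
	printf("unpaired end(): wake=%d\n", invalidate_end()); /* warns, count stays 0 */
	return 0;
}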
