From: Michael Jeanson
Date: Mon, 26 Oct 2020 21:07:13 +0000 (-0400)
Subject: fix: KVM: x86/mmu: Return unique RET_PF_* values if the fault was fixed (v5.10)
X-Git-Tag: v2.13.0-rc1~118
X-Git-Url: https://git.lttng.org./?a=commitdiff_plain;h=5e3317501af1b5d3474369f1ea8186ec3ebc628c;p=lttng-modules.git

fix: KVM: x86/mmu: Return unique RET_PF_* values if the fault was fixed (v5.10)

See upstream commit :

  commit c4371c2a682e0da1ed2cd7e3c5496f055d873554
  Author: Sean Christopherson
  Date:   Wed Sep 23 15:04:24 2020 -0700

    KVM: x86/mmu: Return unique RET_PF_* values if the fault was fixed

    Introduce RET_PF_FIXED and RET_PF_SPURIOUS to provide unique return
    values instead of overloading RET_PF_RETRY.  In the short term, the
    unique values add clarity to the code and RET_PF_SPURIOUS will be used
    by set_spte() to avoid unnecessary work for spurious faults.

    In the long term, TDX will use RET_PF_FIXED to deterministically map
    memory during pre-boot.  The page fault flow may bail early for benign
    reasons, e.g. if the mmu_notifier fires for an unrelated address.  With
    only RET_PF_RETRY, it's impossible for the caller to distinguish
    between "cool, page is mapped" and "darn, need to try again", and thus
    cannot handle benign cases like the mmu_notifier retry.

Signed-off-by: Michael Jeanson
Signed-off-by: Mathieu Desnoyers
Change-Id: Ie0855c78852b45f588e131fe2463e15aae1bc023
---

diff --git a/include/instrumentation/events/arch/x86/kvm/mmutrace.h b/include/instrumentation/events/arch/x86/kvm/mmutrace.h
index 8e5bf1c1..f585d027 100644
--- a/include/instrumentation/events/arch/x86/kvm/mmutrace.h
+++ b/include/instrumentation/events/arch/x86/kvm/mmutrace.h
@@ -233,7 +233,27 @@ LTTNG_TRACEPOINT_EVENT_MAP(
 	)
 )
 
-#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,6,0) || \
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,10,0))
+LTTNG_TRACEPOINT_EVENT_MAP(
+	fast_page_fault,
+
+	kvm_mmu_fast_page_fault,
+
+	TP_PROTO(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u32 error_code,
+		u64 *sptep, u64 old_spte, int ret),
+	TP_ARGS(vcpu, cr2_or_gpa, error_code, sptep, old_spte, ret),
+
+	TP_FIELDS(
+		ctf_integer(int, vcpu_id, vcpu->vcpu_id)
+		ctf_integer(gpa_t, cr2_or_gpa, cr2_or_gpa)
+		ctf_integer(u32, error_code, error_code)
+		ctf_integer_hex(u64 *, sptep, sptep)
+		ctf_integer(u64, old_spte, old_spte)
+		ctf_integer(u64, new_spte, *sptep)
+		ctf_integer(int, ret, ret)
+	)
+)
+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(5,6,0) || \
 	LTTNG_KERNEL_RANGE(4,19,103, 4,20,0) || \
 	LTTNG_KERNEL_RANGE(5,4,19, 5,5,0) || \
 	LTTNG_KERNEL_RANGE(5,5,3, 5,6,0) || \
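
Background for readers unfamiliar with the upstream change: the sketch below illustrates, in plain C, the idea described in the commit message, namely giving the page-fault path distinct return values so a caller can tell a fixed or spurious fault apart from a benign bail-out that must be retried. Only the RET_PF_RETRY, RET_PF_FIXED and RET_PF_SPURIOUS names are taken from the commit message; the enum layout, the handle_fault() helper and the EAGAIN choice are illustrative assumptions, not the kernel's actual definitions.

#include <errno.h>

/*
 * Illustrative only -- not the kernel's definitions. Distinct values let a
 * caller distinguish "the fault was fixed" (RET_PF_FIXED), "the SPTE was
 * already correct" (RET_PF_SPURIOUS) and "bail out and retry" (RET_PF_RETRY),
 * which a single overloaded RET_PF_RETRY could not express.
 */
enum pf_ret {
	RET_PF_RETRY,		/* benign bail-out, e.g. mmu_notifier fired; retry */
	RET_PF_FIXED,		/* the faulting translation was installed */
	RET_PF_SPURIOUS,	/* mapping was already correct, nothing to do */
};

/* Hypothetical caller: only the retry case needs another round trip. */
static int handle_fault(enum pf_ret ret)
{
	switch (ret) {
	case RET_PF_FIXED:
	case RET_PF_SPURIOUS:
		return 0;		/* fault resolved, resume the guest */
	case RET_PF_RETRY:
	default:
		return -EAGAIN;		/* caller should retry the access */
	}
}

The LTTng side of the patch above does not interpret these values; it only records the tracepoint's new "int ret" argument (in place of the old boolean) for kernels >= 5.10.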