GHSA-m8v3-m369-c2xp
Vulnerability from GitHub
Published: 2025-03-07 09:30
Modified: 2025-05-09 09:33

Details

In the Linux kernel, the following vulnerability has been resolved:

KVM: x86: Load DR6 with guest value only before entering .vcpu_run() loop

Move the conditional loading of hardware DR6 with the guest's DR6 value out of the core .vcpu_run() loop to fix a bug where KVM can load hardware with a stale vcpu->arch.dr6.

When the guest accesses a DR and host userspace isn't debugging the guest, KVM disables DR interception and loads the guest's values into hardware on VM-Enter and saves them on VM-Exit. This allows the guest to access DRs at will, e.g. so that a sequence of DR accesses to configure a breakpoint only generates one VM-Exit.
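As a rough illustration of that contract, here is a standalone C model of "load on VM-Enter, save on VM-Exit" for the debug registers. It is purely a sketch; every name in it is hypothetical, and it only mimics the behavior described above, not the kernel's actual code:

    #include <stdint.h>
    #include <stdbool.h>

    /* Toy model of DR pass-through; all names are hypothetical, not kernel code. */
    struct vcpu_model {
        uint64_t shadow_dr[4];   /* KVM's copies of guest DR0-DR3 */
        uint64_t shadow_dr6;     /* analogue of vcpu->arch.dr6 */
        bool     drs_intercepted;
    };

    static uint64_t hw_dr[4], hw_dr6;   /* stand-ins for the physical registers */

    /* VM-Enter: with interception disabled, hardware gets the guest's values,
     * so subsequent guest DR accesses run natively with no VM-Exit. */
    static void model_vm_enter(struct vcpu_model *v)
    {
        if (!v->drs_intercepted) {
            for (int i = 0; i < 4; i++)
                hw_dr[i] = v->shadow_dr[i];
            hw_dr6 = v->shadow_dr6;
        }
    }

    /* VM-Exit: whatever the guest left in hardware is saved back into the
     * shadow copies, so KVM's view of the guest registers stays coherent. */
    static void model_vm_exit(struct vcpu_model *v)
    {
        if (!v->drs_intercepted) {
            for (int i = 0; i < 4; i++)
                v->shadow_dr[i] = hw_dr[i];
            v->shadow_dr6 = hw_dr6;
        }
    }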

For DR0-DR3, the logic/behavior is identical between VMX and SVM, and also identical between KVM_DEBUGREG_BP_ENABLED (userspace debugging the guest) and KVM_DEBUGREG_WONT_EXIT (guest using DRs), and so KVM handles loading DR0-DR3 in common code, _outside_ of the core kvm_x86_ops.vcpu_run() loop.
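In sketch form (hypothetical names again, not the kernel's actual functions), that placement looks like this: the common x86 entry path loads DR0-DR3 once, then calls down into the vendor run loop:

    #include <stdint.h>

    /* Sketch of the code placement only; function names are hypothetical. */

    static uint64_t hw_dr[4];
    static uint64_t guest_dr[4];        /* guest's DR0-DR3 shadow values */

    static void vendor_run_loop(void)   /* plays the role of {svm,vmx}_vcpu_run() */
    {
        /* VM-Enter/VM-Exit cycles, including fastpath re-entries, live here. */
    }

    /* Common x86 entry path, outside the vendor loop: because DR0-DR3
     * handling is identical across VMX/SVM and across both debug-reg modes,
     * the load happens once here rather than in vendor code. */
    static void common_vcpu_enter(void)
    {
        for (int i = 0; i < 4; i++)
            hw_dr[i] = guest_dr[i];

        vendor_run_loop();   /* DR6, by contrast, is handled inside (see below) */
    }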

But for DR6, the guest's value doesn't need to be loaded into hardware for KVM_DEBUGREG_BP_ENABLED, and SVM provides a dedicated VMCB field whereas VMX requires software to manually load the guest value, and so loading the guest's value into DR6 is handled by {svm,vmx}_vcpu_run(), i.e. is done _inside_ the core run loop.
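That vendor asymmetry can be sketched as follows. This is illustrative only (the struct and function names are invented), though the underlying facts match the text above: SVM's VMCB save area really does carry a DR6 field that hardware loads on VMRUN, while VMX has no equivalent guest-DR6 field and software must write the register itself:

    #include <stdint.h>

    /* Illustrative sketch of the DR6 vendor split; not the kernel's code. */

    struct vmcb_save_model { uint64_t dr6; };  /* toy stand-in for the VMCB save area */

    /* SVM-style: stash the guest value in the VMCB and let VMRUN load it. */
    static void svm_style_load_dr6(struct vmcb_save_model *save, uint64_t guest_dr6)
    {
        save->dr6 = guest_dr6;      /* hardware consumes this at VM-Enter */
    }

    static uint64_t hw_dr6;

    /* VMX-style: no VMCS field for guest DR6, so software writes the
     * register directly from its run function. */
    static void vmx_style_load_dr6(uint64_t guest_dr6)
    {
        hw_dr6 = guest_dr6;         /* stand-in for the actual MOV-to-DR6 */
    }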

Unfortunately, saving the guest values on VM-Exit is initiated by common x86, again outside of the core run loop. If the guest modifies DR6 (in hardware, when DR interception is disabled), and then the next VM-Exit is a fastpath VM-Exit, KVM will reload hardware DR6 with vcpu->arch.dr6 and clobber the guest's actual value.
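The clobber is easy to reproduce in a toy model. The standalone program below (all names hypothetical; it only mimics the ordering described above) runs the same sequence twice, once with the DR6 load inside the run loop and once hoisted out of it, and shows the guest's write surviving only in the second case, which mirrors the fix named in the commit title:

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    static uint64_t hw_dr6;       /* the physical DR6 */
    static uint64_t shadow_dr6;   /* analogue of vcpu->arch.dr6 */

    /* One pass through a two-iteration run loop: iteration 0 ends in a
     * fastpath VM-Exit (handled without leaving the loop, and without
     * saving DR6), iteration 1 ends in a full exit. */
    static void run(bool load_inside_loop)
    {
        hw_dr6 = shadow_dr6 = 0xffff0ff0;   /* architectural DR6 reset value */

        if (!load_inside_loop)
            hw_dr6 = shadow_dr6;            /* fixed: load once, before the loop */

        for (int iter = 0; iter < 2; iter++) {
            if (load_inside_loop)
                hw_dr6 = shadow_dr6;        /* buggy: stale reload on iteration 1 */

            if (iter == 0)
                hw_dr6 |= 1;    /* guest sets DR6.B0 in hardware (0xffff0ff1,
                                 * as in the trace below); interception is off */

            /* iteration 0's exit is a fastpath exit: loop again WITHOUT the
             * common-code save of hardware DR6 into shadow_dr6 */
        }

        shadow_dr6 = hw_dr6;    /* full exit: common x86 code saves DR6 */
        printf("%s: guest-visible DR6 = %#llx\n",
               load_inside_loop ? "load inside loop" : "load before loop",
               (unsigned long long)shadow_dr6);
    }

    int main(void)
    {
        run(true);      /* prints 0xffff0ff0: the guest's B0 write was clobbered */
        run(false);     /* prints 0xffff0ff1: the write survives */
        return 0;
    }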

The bug shows up primarily with nested VMX because KVM handles the VMX preemption timer in the fastpath, and the window between hardware DR6 being modified (in guest context) and DR6 being read by guest software is orders of magnitude larger in a nested setup. E.g. in non-nested, the VMX preemption timer would need to fire precisely between #DB injection and the #DB handler's read of DR6, whereas with a KVM-on-KVM setup, the window where hardware DR6 is "dirty" extends all the way from L1 writing DR6 to VMRESUME (in L1).

L1's view:
==========
<L1 disables DR interception>
       CPU 0/KVM-7289    [023] d....  2925.640961: kvm_entry: vcpu 0

A: L1 Writes DR6
       CPU 0/KVM-7289    [023] d....  2925.640963: <hack>: Set DRs, DR6 = 0xffff0ff1

B: CPU 0/KVM-7289 [023] d.... 2925.640967: kvm_exit: vcpu 0 reason EXTERNAL_INTERRUPT intr_info 0x800000ec

D: L1 reads DR6, arch.dr6 = 0
       CPU 0/KVM-7289    [023] d....  2925.640969: <hack>: Sync DRs, DR6 = 0xffff0ff0

       CPU 0/KVM-7289    [023] d....  2925.640976: kvm_entry: vcpu 0
L2 reads DR6, L1 disables DR interception
       CPU 0/KVM-7289    [023] d....  2925.640980: kvm_exit: vcpu 0 reason DR_ACCESS info1 0x0000000000000216
       CPU 0/KVM-7289    [023] d....  2925.640983: kvm_entry: vcpu 0

       CPU 0/KVM-7289    [023] d....  2925.640983: <hack>: Set DRs, DR6 = 0xffff0ff0

L2 detects failure
       CPU 0/KVM-7289    [023] d....  2925.640987: kvm_exit: vcpu 0 reason HLT
L1 reads DR6 (confirms failure)
       CPU 0/KVM-7289    [023] d....  2925.640990: <hack>: Sync DRs, DR6 = 0xffff0ff0

L0's view:
==========
L2 reads DR6, arch.dr6 = 0
      CPU 23/KVM-5046    [001] d....  3410.005610: kvm_exit: vcpu 23 reason DR_ACCESS info1 0x0000000000000216
      CPU 23/KVM-5046    [001] .....  3410.005610: kvm_nested_vmexit: vcpu 23 reason DR_ACCESS info1 0x0000000000000216

L2 => L1 nested VM-Exit
      CPU 23/KVM-5046    [001] .....  3410.005610: kvm_nested_vmexit_inject: reason: DR_ACCESS ext_inf1: 0x0000000000000216

      CPU 23/KVM-5046    [001] d....  3410.005610: kvm_entry: vcpu 23
      CPU 23/KVM-5046    [001] d....  3410.005611: kvm_exit: vcpu 23 reason VMREAD
      CPU 23/KVM-5046    [001] d....  3410.005611: kvm_entry: vcpu 23
      CPU 23/KVM-5046    [001] d....  3410.

---truncated---



{
  "affected": [],
  "aliases": [
    "CVE-2025-21839"
  ],
  "database_specific": {
    "cwe_ids": [],
    "github_reviewed": false,
    "github_reviewed_at": null,
    "nvd_published_at": "2025-03-07T09:15:16Z",
    "severity": null
  },
  "details": "In the Linux kernel, the following vulnerability has been resolved:\n\nKVM: x86: Load DR6 with guest value only before entering .vcpu_run() loop\n\nMove the conditional loading of hardware DR6 with the guest\u0027s DR6 value\nout of the core .vcpu_run() loop to fix a bug where KVM can load hardware\nwith a stale vcpu-\u003earch.dr6.\n\nWhen the guest accesses a DR and host userspace isn\u0027t debugging the guest,\nKVM disables DR interception and loads the guest\u0027s values into hardware on\nVM-Enter and saves them on VM-Exit.  This allows the guest to access DRs\nat will, e.g. so that a sequence of DR accesses to configure a breakpoint\nonly generates one VM-Exit.\n\nFor DR0-DR3, the logic/behavior is identical between VMX and SVM, and also\nidentical between KVM_DEBUGREG_BP_ENABLED (userspace debugging the guest)\nand KVM_DEBUGREG_WONT_EXIT (guest using DRs), and so KVM handles loading\nDR0-DR3 in common code, _outside_ of the core kvm_x86_ops.vcpu_run() loop.\n\nBut for DR6, the guest\u0027s value doesn\u0027t need to be loaded into hardware for\nKVM_DEBUGREG_BP_ENABLED, and SVM provides a dedicated VMCB field whereas\nVMX requires software to manually load the guest value, and so loading the\nguest\u0027s value into DR6 is handled by {svm,vmx}_vcpu_run(), i.e. is done\n_inside_ the core run loop.\n\nUnfortunately, saving the guest values on VM-Exit is initiated by common\nx86, again outside of the core run loop.  If the guest modifies DR6 (in\nhardware, when DR interception is disabled), and then the next VM-Exit is\na fastpath VM-Exit, KVM will reload hardware DR6 with vcpu-\u003earch.dr6 and\nclobber the guest\u0027s actual value.\n\nThe bug shows up primarily with nested VMX because KVM handles the VMX\npreemption timer in the fastpath, and the window between hardware DR6\nbeing modified (in guest context) and DR6 being read by guest software is\norders of magnitude larger in a nested setup.  E.g. in non-nested, the\nVMX preemption timer would need to fire precisely between #DB injection\nand the #DB handler\u0027s read of DR6, whereas with a KVM-on-KVM setup, the\nwindow where hardware DR6 is \"dirty\" extends all the way from L1 writing\nDR6 to VMRESUME (in L1).\n\n    L1\u0027s view:\n    ==========\n    \u003cL1 disables DR interception\u003e\n           CPU 0/KVM-7289    [023] d....  2925.640961: kvm_entry: vcpu 0\n A:  L1 Writes DR6\n           CPU 0/KVM-7289    [023] d....  2925.640963: \u003chack\u003e: Set DRs, DR6 = 0xffff0ff1\n\n B:        CPU 0/KVM-7289    [023] d....  2925.640967: kvm_exit: vcpu 0 reason EXTERNAL_INTERRUPT intr_info 0x800000ec\n\n D: L1 reads DR6, arch.dr6 = 0\n           CPU 0/KVM-7289    [023] d....  2925.640969: \u003chack\u003e: Sync DRs, DR6 = 0xffff0ff0\n\n           CPU 0/KVM-7289    [023] d....  2925.640976: kvm_entry: vcpu 0\n    L2 reads DR6, L1 disables DR interception\n           CPU 0/KVM-7289    [023] d....  2925.640980: kvm_exit: vcpu 0 reason DR_ACCESS info1 0x0000000000000216\n           CPU 0/KVM-7289    [023] d....  2925.640983: kvm_entry: vcpu 0\n\n           CPU 0/KVM-7289    [023] d....  2925.640983: \u003chack\u003e: Set DRs, DR6 = 0xffff0ff0\n\n    L2 detects failure\n           CPU 0/KVM-7289    [023] d....  2925.640987: kvm_exit: vcpu 0 reason HLT\n    L1 reads DR6 (confirms failure)\n           CPU 0/KVM-7289    [023] d....  2925.640990: \u003chack\u003e: Sync DRs, DR6 = 0xffff0ff0\n\n    L0\u0027s view:\n    ==========\n    L2 reads DR6, arch.dr6 = 0\n          CPU 23/KVM-5046    [001] d....  
3410.005610: kvm_exit: vcpu 23 reason DR_ACCESS info1 0x0000000000000216\n          CPU 23/KVM-5046    [001] .....  3410.005610: kvm_nested_vmexit: vcpu 23 reason DR_ACCESS info1 0x0000000000000216\n\n    L2 =\u003e L1 nested VM-Exit\n          CPU 23/KVM-5046    [001] .....  3410.005610: kvm_nested_vmexit_inject: reason: DR_ACCESS ext_inf1: 0x0000000000000216\n\n          CPU 23/KVM-5046    [001] d....  3410.005610: kvm_entry: vcpu 23\n          CPU 23/KVM-5046    [001] d....  3410.005611: kvm_exit: vcpu 23 reason VMREAD\n          CPU 23/KVM-5046    [001] d....  3410.005611: kvm_entry: vcpu 23\n          CPU 23/KVM-5046    [001] d....  3410.\n---truncated---",
  "id": "GHSA-m8v3-m369-c2xp",
  "modified": "2025-05-09T09:33:18Z",
  "published": "2025-03-07T09:30:35Z",
  "references": [
    {
      "type": "ADVISORY",
      "url": "https://nvd.nist.gov/vuln/detail/CVE-2025-21839"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/4eb063de686bfcdfd03a8c801d1bbe87d2d5eb55"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/93eeb6df1605b3a24f38afdba7ab903ba6b64133"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/9efb2b99b96c86664bbdbdd2cdb354ac9627eb20"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/a1723e9c53fe6431415be19302a56543daf503f5"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/c2fee09fc167c74a64adb08656cb993ea475197e"
    },
    {
      "type": "WEB",
      "url": "https://git.kernel.org/stable/c/d456de38d9eb753a4e9fde053c18d4ef8e485339"
    }
  ],
  "schema_version": "1.4.0",
  "severity": []
}

