fkie_cve-2025-22115
Vulnerability from fkie_nvd
Published: 2025-04-16 15:16
Modified: 2025-07-24 07:15
Severity: not yet assigned (the record is awaiting analysis and carries no metrics)
Summary
In the Linux kernel, the following vulnerability has been resolved:

btrfs: fix block group refcount race in btrfs_create_pending_block_groups()

Block group creation is done in two phases, which results in a slightly unintuitive property: a block group can be allocated/deallocated from after btrfs_make_block_group() adds it to the space_info with btrfs_add_bg_to_space_info(), but before creation is completely completed in btrfs_create_pending_block_groups(). As a result, it is possible for a block group to go unused and have 'btrfs_mark_bg_unused' called on it concurrently with 'btrfs_create_pending_block_groups'. This causes a number of issues, which were fixed with the block group flag 'BLOCK_GROUP_FLAG_NEW'.

However, this fix is not quite complete. Since it does not use the unused_bg_lock, it is possible for the following race to occur:

btrfs_create_pending_block_groups            btrfs_mark_bg_unused
                                           if list_empty // false
        list_del_init
        clear_bit
                                           else if (test_bit) // true
                                                list_move_tail

And we get into the exact same broken ref count and invalid new_bgs state for transaction cleanup that BLOCK_GROUP_FLAG_NEW was designed to prevent.

The broken refcount aspect will result in a warning like:

  [1272.943527] refcount_t: underflow; use-after-free.
  [1272.943967] WARNING: CPU: 1 PID: 61 at lib/refcount.c:28 refcount_warn_saturate+0xba/0x110
  [1272.944731] Modules linked in: btrfs virtio_net xor zstd_compress raid6_pq null_blk [last unloaded: btrfs]
  [1272.945550] CPU: 1 UID: 0 PID: 61 Comm: kworker/u32:1 Kdump: loaded Tainted: G        W          6.14.0-rc5+ #108
  [1272.946368] Tainted: [W]=WARN
  [1272.946585] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Arch Linux 1.16.3-1-1 04/01/2014
  [1272.947273] Workqueue: btrfs_discard btrfs_discard_workfn [btrfs]
  [1272.947788] RIP: 0010:refcount_warn_saturate+0xba/0x110
  [1272.949532] RSP: 0018:ffffbf1200247df0 EFLAGS: 00010282
  [1272.949901] RAX: 0000000000000000 RBX: ffffa14b00e3f800 RCX: 0000000000000000
  [1272.950437] RDX: 0000000000000000 RSI: ffffbf1200247c78 RDI: 00000000ffffdfff
  [1272.950986] RBP: ffffa14b00dc2860 R08: 00000000ffffdfff R09: ffffffff90526268
  [1272.951512] R10: ffffffff904762c0 R11: 0000000063666572 R12: ffffa14b00dc28c0
  [1272.952024] R13: 0000000000000000 R14: ffffa14b00dc2868 R15: 000001285dcd12c0
  [1272.952850] FS:  0000000000000000(0000) GS:ffffa14d33c40000(0000) knlGS:0000000000000000
  [1272.953458] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  [1272.953931] CR2: 00007f838cbda000 CR3: 000000010104e000 CR4: 00000000000006f0
  [1272.954474] Call Trace:
  [1272.954655]  <TASK>
  [1272.954812]  ? refcount_warn_saturate+0xba/0x110
  [1272.955173]  ? __warn.cold+0x93/0xd7
  [1272.955487]  ? refcount_warn_saturate+0xba/0x110
  [1272.955816]  ? report_bug+0xe7/0x120
  [1272.956103]  ? handle_bug+0x53/0x90
  [1272.956424]  ? exc_invalid_op+0x13/0x60
  [1272.956700]  ? asm_exc_invalid_op+0x16/0x20
  [1272.957011]  ? refcount_warn_saturate+0xba/0x110
  [1272.957399]  btrfs_discard_cancel_work.cold+0x26/0x2b [btrfs]
  [1272.957853]  btrfs_put_block_group.cold+0x5d/0x8e [btrfs]
  [1272.958289]  btrfs_discard_workfn+0x194/0x380 [btrfs]
  [1272.958729]  process_one_work+0x130/0x290
  [1272.959026]  worker_thread+0x2ea/0x420
  [1272.959335]  ? __pfx_worker_thread+0x10/0x10
  [1272.959644]  kthread+0xd7/0x1c0
  [1272.959872]  ? __pfx_kthread+0x10/0x10
  [1272.960172]  ret_from_fork+0x30/0x50
  [1272.960474]  ? __pfx_kthread+0x10/0x10
  [1272.960745]  ret_from_fork_asm+0x1a/0x30
  [1272.961035]  </TASK>
  [1272.961238] ---[ end trace 0000000000000000 ]---

Though we have seen them in the async discard workfn as well. It is most likely to happen after a relocation finishes which cancels discard, tears down the block group, etc.

Fix this fully by taking the lock arou ---truncated---
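For readers less familiar with the locking pattern, below is a minimal userspace sketch of the code shape described in the summary. It is not btrfs source: block_group, unused_bg_lock, create_pending() and mark_unused() are simplified stand-ins for btrfs_block_group, the fs_info unused-bg lock, btrfs_create_pending_block_groups() and btrfs_mark_bg_unused(), and list membership is modelled with an enum instead of a kernel list_head. The point it illustrates is that only one of the two paths holds the lock, so the other path's list_del_init()/clear_bit() pair can interleave with the locked path's two checks; the (truncated) fix description indicates the cure is to take the same lock around that pair.

/* race_sketch.c - simplified, hypothetical model of the race shape.
 * Build with: cc -pthread race_sketch.c -o race_sketch */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

enum bg_list { LIST_NONE, LIST_NEW_BGS, LIST_UNUSED };

struct block_group {
    int refs;          /* stands in for the block group refcount        */
    bool flag_new;     /* stands in for BLOCK_GROUP_FLAG_NEW            */
    enum bg_list list; /* stands in for which list bg_list is linked on */
};

static pthread_mutex_t unused_bg_lock = PTHREAD_MUTEX_INITIALIZER;

static struct block_group bg = {
    .refs = 1, .flag_new = true, .list = LIST_NEW_BGS
};

/* Stands in for btrfs_create_pending_block_groups() finishing phase two
 * of creation.  In the buggy version these two steps run WITHOUT taking
 * unused_bg_lock, so the other thread's checks can straddle them; the
 * truncated fix description says to take the lock around this region. */
static void *create_pending(void *arg)
{
    (void)arg;
    /* pthread_mutex_lock(&unused_bg_lock);   <-- what the fix adds      */
    bg.list = LIST_NONE;    /* models list_del_init(&bg->bg_list)        */
    bg.flag_new = false;    /* models clear_bit(BLOCK_GROUP_FLAG_NEW)    */
    /* pthread_mutex_unlock(&unused_bg_lock);                            */
    return NULL;
}

/* Stands in for btrfs_mark_bg_unused().  It holds the lock, but because
 * the other side does not, its two checks can interleave with the other
 * thread's two writes exactly as the race diagram in the summary shows,
 * leaving the refcount and the new_bgs bookkeeping unbalanced. */
static void *mark_unused(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&unused_bg_lock);
    if (bg.list == LIST_NONE) {       /* models if (list_empty(...))     */
        bg.refs++;                    /* take a reference and queue it   */
        bg.list = LIST_UNUSED;        /* models list_add_tail()          */
    } else if (bg.flag_new) {         /* models else if (test_bit(NEW))  */
        bg.list = LIST_UNUSED;        /* models list_move_tail(), no ref */
    }
    pthread_mutex_unlock(&unused_bg_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, create_pending, NULL);
    pthread_create(&b, NULL, mark_unused, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("refs=%d flag_new=%d list=%d\n", bg.refs, bg.flag_new, bg.list);
    return 0;
}

Being a toy model, this does not deterministically reproduce the kernel interleaving; it only shows where the unprotected window sits and why holding the same lock on both paths closes it.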
Impacted products
Vendor    Product    Version
No affected product entries yet; the record is awaiting analysis.

{
  "cveTags": [],
  "descriptions": [
    {
      "lang": "en",
      "value": "In the Linux kernel, the following vulnerability has been resolved:\n\nbtrfs: fix block group refcount race in btrfs_create_pending_block_groups()\n\nBlock group creation is done in two phases, which results in a slightly\nunintuitive property: a block group can be allocated/deallocated from\nafter btrfs_make_block_group() adds it to the space_info with\nbtrfs_add_bg_to_space_info(), but before creation is completely completed\nin btrfs_create_pending_block_groups(). As a result, it is possible for a\nblock group to go unused and have \u0027btrfs_mark_bg_unused\u0027 called on it\nconcurrently with \u0027btrfs_create_pending_block_groups\u0027. This causes a\nnumber of issues, which were fixed with the block group flag\n\u0027BLOCK_GROUP_FLAG_NEW\u0027.\n\nHowever, this fix is not quite complete. Since it does not use the\nunused_bg_lock, it is possible for the following race to occur:\n\nbtrfs_create_pending_block_groups            btrfs_mark_bg_unused\n                                           if list_empty // false\n        list_del_init\n        clear_bit\n                                           else if (test_bit) // true\n                                                list_move_tail\n\nAnd we get into the exact same broken ref count and invalid new_bgs\nstate for transaction cleanup that BLOCK_GROUP_FLAG_NEW was designed to\nprevent.\n\nThe broken refcount aspect will result in a warning like:\n\n  [1272.943527] refcount_t: underflow; use-after-free.\n  [1272.943967] WARNING: CPU: 1 PID: 61 at lib/refcount.c:28 refcount_warn_saturate+0xba/0x110\n  [1272.944731] Modules linked in: btrfs virtio_net xor zstd_compress raid6_pq null_blk [last unloaded: btrfs]\n  [1272.945550] CPU: 1 UID: 0 PID: 61 Comm: kworker/u32:1 Kdump: loaded Tainted: G        W          6.14.0-rc5+ #108\n  [1272.946368] Tainted: [W]=WARN\n  [1272.946585] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Arch Linux 1.16.3-1-1 04/01/2014\n  [1272.947273] Workqueue: btrfs_discard btrfs_discard_workfn [btrfs]\n  [1272.947788] RIP: 0010:refcount_warn_saturate+0xba/0x110\n  [1272.949532] RSP: 0018:ffffbf1200247df0 EFLAGS: 00010282\n  [1272.949901] RAX: 0000000000000000 RBX: ffffa14b00e3f800 RCX: 0000000000000000\n  [1272.950437] RDX: 0000000000000000 RSI: ffffbf1200247c78 RDI: 00000000ffffdfff\n  [1272.950986] RBP: ffffa14b00dc2860 R08: 00000000ffffdfff R09: ffffffff90526268\n  [1272.951512] R10: ffffffff904762c0 R11: 0000000063666572 R12: ffffa14b00dc28c0\n  [1272.952024] R13: 0000000000000000 R14: ffffa14b00dc2868 R15: 000001285dcd12c0\n  [1272.952850] FS:  0000000000000000(0000) GS:ffffa14d33c40000(0000) knlGS:0000000000000000\n  [1272.953458] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033\n  [1272.953931] CR2: 00007f838cbda000 CR3: 000000010104e000 CR4: 00000000000006f0\n  [1272.954474] Call Trace:\n  [1272.954655]  \u003cTASK\u003e\n  [1272.954812]  ? refcount_warn_saturate+0xba/0x110\n  [1272.955173]  ? __warn.cold+0x93/0xd7\n  [1272.955487]  ? refcount_warn_saturate+0xba/0x110\n  [1272.955816]  ? report_bug+0xe7/0x120\n  [1272.956103]  ? handle_bug+0x53/0x90\n  [1272.956424]  ? exc_invalid_op+0x13/0x60\n  [1272.956700]  ? asm_exc_invalid_op+0x16/0x20\n  [1272.957011]  ? 
refcount_warn_saturate+0xba/0x110\n  [1272.957399]  btrfs_discard_cancel_work.cold+0x26/0x2b [btrfs]\n  [1272.957853]  btrfs_put_block_group.cold+0x5d/0x8e [btrfs]\n  [1272.958289]  btrfs_discard_workfn+0x194/0x380 [btrfs]\n  [1272.958729]  process_one_work+0x130/0x290\n  [1272.959026]  worker_thread+0x2ea/0x420\n  [1272.959335]  ? __pfx_worker_thread+0x10/0x10\n  [1272.959644]  kthread+0xd7/0x1c0\n  [1272.959872]  ? __pfx_kthread+0x10/0x10\n  [1272.960172]  ret_from_fork+0x30/0x50\n  [1272.960474]  ? __pfx_kthread+0x10/0x10\n  [1272.960745]  ret_from_fork_asm+0x1a/0x30\n  [1272.961035]  \u003c/TASK\u003e\n  [1272.961238] ---[ end trace 0000000000000000 ]---\n\nThough we have seen them in the async discard workfn as well. It is\nmost likely to happen after a relocation finishes which cancels discard,\ntears down the block group, etc.\n\nFix this fully by taking the lock arou\n---truncated---"
    },
    {
      "lang": "es",
      "value": "En el kernel de Linux, se ha resuelto la siguiente vulnerabilidad: btrfs: correcci\u00f3n de la ejecuci\u00f3n de recuento de referencias de grupos de bloques en btrfs_create_pending_block_groups(). La creaci\u00f3n de grupos de bloques se realiza en dos fases, lo que resulta en una propiedad poco intuitiva: un grupo de bloques se puede asignar/desasignar despu\u00e9s de que btrfs_make_block_group() lo a\u00f1ada a space_info con btrfs_add_bg_to_space_info(), pero antes de que la creaci\u00f3n se complete por completo en btrfs_create_pending_block_groups(). Como resultado, es posible que un grupo de bloques quede sin usar y que se invoque \u0027btrfs_mark_bg_unused\u0027 simult\u00e1neamente con \u0027btrfs_create_pending_block_groups\u0027. Esto causa varios problemas, que se solucionaron con la bandera de grupo de bloques \u0027BLOCK_GROUP_FLAG_NEW\u0027. Sin embargo, esta correcci\u00f3n no est\u00e1 del todo completa. Dado que no utiliza el bloqueo de bloques no utilizados (unused_bg_lock), es posible que se produzca la siguiente ejecuci\u00f3n: btrfs_create_pending_block_groups btrfs_mark_bg_unused if list_empty // false list_del_init clear_bit else if (test_bit) // true list_move_tail. Y llegamos al mismo estado de recuento de referencias err\u00f3neo y new_bgs no v\u00e1lido para la limpieza de transacciones, que BLOCK_GROUP_FLAG_NEW pretend\u00eda evitar. El aspecto de recuento de referencias err\u00f3neo generar\u00e1 una advertencia como: [1272.943527] refcount_t: underflow; use-after-free. [1272.943967] ADVERTENCIA: CPU: 1 PID: 61 at lib/refcount.c:28 refcount_warn_saturate+0xba/0x110 [1272.944731] Modules linked in: btrfs virtio_net xor zstd_compress raid6_pq null_blk [last unloaded: btrfs] [1272.945550] CPU: 1 UID: 0 PID: 61 Comm: kworker/u32:1 Kdump: loaded Tainted: G W 6.14.0-rc5+ #108 [1272.946368] Tainted: [W]=WARN [1272.946585] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Arch Linux 1.16.3-1-1 04/01/2014 [1272.947273] Workqueue: btrfs_discard btrfs_discard_workfn [btrfs] [1272.947788] RIP: 0010:refcount_warn_saturate+0xba/0x110 [1272.949532] RSP: 0018:ffffbf1200247df0 EFLAGS: 00010282 [1272.949901] RAX: 0000000000000000 RBX: ffffa14b00e3f800 RCX: 0000000000000000 [1272.950437] RDX: 0000000000000000 RSI: ffffbf1200247c78 RDI: 00000000ffffdfff [1272.950986] RBP: ffffa14b00dc2860 R08: 00000000ffffdfff R09: ffffffff90526268 [1272.951512] R10: ffffffff904762c0 R11: 0000000063666572 R12: ffffa14b00dc28c0 [1272.952024] R13: 0000000000000000 R14: ffffa14b00dc2868 R15: 000001285dcd12c0 [1272.952850] FS: 0000000000000000(0000) GS:ffffa14d33c40000(0000) knlGS:0000000000000000 [1272.953458] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [1272.953931] CR2: 00007f838cbda000 CR3: 000000010104e000 CR4: 00000000000006f0 [1272.954474] Call Trace: [1272.954655]  [1272.954812] ? refcount_warn_saturate+0xba/0x110 [1272.955173] ? __warn.cold+0x93/0xd7 [1272.955487] ? refcount_warn_saturate+0xba/0x110 [1272.955816] ? report_bug+0xe7/0x120 [1272.956103] ? handle_bug+0x53/0x90 [1272.956424] ? exc_invalid_op+0x13/0x60 [1272.956700] ? asm_exc_invalid_op+0x16/0x20 [1272.957011] ? refcount_warn_saturate+0xba/0x110 [1272.957399] btrfs_discard_cancel_work.cold+0x26/0x2b [btrfs] [1272.957853] btrfs_put_block_group.cold+0x5d/0x8e [btrfs] [1272.958289] btrfs_discard_workfn+0x194/0x380 [btrfs] [1272.958729] process_one_work+0x130/0x290 [1272.959026] worker_thread+0x2ea/0x420 [1272.959335] ? __pfx_worker_thread+0x10/0x10 [1272.959644] kthread+0xd7/0x1c0 [1272.959872] ? 
__pfx_kthread+0x10/0x10 [1272.960172] ret_from_fork+0x30/0x50 [1272.960474] ? __pfx_kthread+0x10/0x10 [1272.960745] ret_from_fork_asm+0x1a/0x30 [1272.961035]  [1272.961238] ---[ end trace 0000000000000000 ]--- Aunque tambi\u00e9n los hemos visto en la funci\u00f3n de trabajo de descarte as\u00edncrono. Es m\u00e1s probable que ocurra despu\u00e9s de finalizar una reubicaci\u00f3n que cancela el descarte, desmantela el grupo de bloques, etc. Solucione esto completamente desmantelando el bloqueo de ---truncated---"
    }
  ],
  "id": "CVE-2025-22115",
  "lastModified": "2025-07-24T07:15:52.840",
  "metrics": {},
  "published": "2025-04-16T15:16:05.710",
  "references": [
    {
      "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
      "url": "https://git.kernel.org/stable/c/2d8e5168d48a91e7a802d3003e72afb4304bebfa"
    },
    {
      "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
      "url": "https://git.kernel.org/stable/c/9d383a6fc59271aaaf07a33b23b2eac5b9268b7a"
    },
    {
      "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
      "url": "https://git.kernel.org/stable/c/ee56da95f8962b86fec4ef93f866e64c8d025a58"
    }
  ],
  "sourceIdentifier": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
  "vulnStatus": "Awaiting Analysis"
}

